Safety vs speed: Artificial general intelligence divide at Davos



New Delhi: Leaders of two of the world’s most powerful artificial intelligence labs laid out contrasting visions for the future of the technology at the World Economic Forum (WEF) 2026, diverging on how fast it should be developed and how soon what is known as AGI can be achieved.

CEOs of DeepMind and Anthropic, Demis Hassabis (left) and Dario Amodei, at a talk during the WEF meeting on January 20. REUTERS

During ‘The Day After AGI’ session, Demis Hassabis, co-founder and CEO of Google DeepMind, and Dario Amodei, CEO and co-founder of Anthropic, presented divergent views on artificial general intelligence (AGI) timelines, geopolitical risks and societal impacts. The disagreement highlights a significant difference in approach between the developers of Gemini and Claude, two of the market’s most capable models.

In contrast to WEF 2025, which was dominated by discussions on the cost-efficiency of Chinese firm DeepSeek’s large language models, this year’s forum broadened its focus. The conversation shifted from who built the cheapest or fastest model to how the technology will be implemented and its societal risks.

In recent times, industry leaders such as Microsoft CEO Satya Nadella have struck a cautionary note, stating that AI must “do something useful that changes the outcomes of people and communities and countries and industries” to sustain investment and build supportive infrastructure.

A key portion of this conversation is AGI, or artificial general intelligence, which refers to AI systems capable of performing any intellectual task that a human being can do.

Hassabis acknowledged that the next couple of years would be complicated as the industry navigates geopolitical questions, but pointed to DeepMind’s AlphaFold and scientific work as evidence of AI solving real-world problems.

“I think the balance of what the industry is doing is not enough balance towards those types of activities. We should have a lot more examples of AlphaFold-like things that help with unequivocal good in the world,” Hassabis said. “It is incumbent on the industry and on us leading players to not just talk about it but demonstrate that.”

He called for a worldwide consensus to establish minimum safety standards, emphasising the “cross-border nature of the technology”.

“It’s vitally needed, because it’s going to affect all of humanity,” he said.

To be sure, models such as AlphaFold are specialised AI systems; AlphaFold was the first to solve the decades-old problem of predicting protein structures from their amino acid sequences. They are fundamentally different from AGI.

Amodei stood by his previous prediction that Anthropic would achieve a model capable of Nobel laureate-level performance across many fields by 2026-27.

“It’s hard to know exactly when something will happen,” Amodei said, but added: “I don’t think it’ll turn out to be that far off.”

He detailed Anthropic’s strategy of using AI proficient in coding to accelerate development. The company’s engineers no longer write code manually; they allow Claude models to generate it, stepping in only to edit.

“We might be six to 12 months away from when a model may be doing all of what software engineers do end to end. And then it’s a question of how fast that loop closes,” Amodei said.

The debate on slowing down

Hassabis suggested a need to decelerate the pace of AI evolution to create a window for solutions. “It would be good to have a slightly slower pace than what we are currently predicting. Even my timelines say we can get this right societally,” he said.

“I prefer your timelines, that I’ll concede,” Amodei replied, referring to Hassabis’s estimate that AGI has a 50% chance of arriving by 2030—a more conservative figure than Amodei’s two-year forecast.

However, when moderator Zanny Minton Beddoes, editor-in-chief of The Economist, suggested that companies “could just slow down”, Amodei argued it was impossible due to international competition.

“The reason we can’t do that is because we have geopolitical adversaries building the same technology, at a similar pace. It’s very hard to have an enforceable agreement where they slow down and we slow down,” Amodei said.

“We’re just one company, we are trying to do the best we can and operate in an environment that exists no matter how crazy it is,” he added.

The China factor

The discussion took place against the backdrop of intense scrutiny over Nvidia’s plans to export semiconductors to China. Google DeepMind and Anthropic worry that such sales would strengthen Chinese AI companies such as DeepSeek. Amodei did not address DeepSeek’s claim of training its V3 model for $5.5 million, but the contrast with US spending is stark: according to research by Epoch AI, Google and OpenAI each spent between $70 million and $100 million in 2023 to train their frontier models, Gemini 1.0 Ultra and GPT-4 respectively, and those costs have only risen since.

Amodei instead focused on hardware restrictions. Referring to Nvidia CEO Jensen Huang’s recent comments at CES in Las Vegas about “very high” demand for H200 chips in China, Amodei said: “If we can just not sell the chips, then this isn’t a question of competition between the US and China, this is a question of competition between me and Demis, which I’m very confident we can work out.”

Huang had previously stated that Nvidia had “fired up our supply chain, and H200s are flowing through the line”, implying the company was preparing stock for shipments to China.

Amodei compared the export of high-end chips to selling arms.

“It is more a decision like are we going to sell nuclear weapons to North Korea because that produces some profit for Boeing and we can say the cases were made by Boeing and the US is winning,” he said.

He concluded by citing the removal of Huawei equipment from US telecom networks as a precedent for prioritising security over commercial interests.
