Anthropic launches its latest, most powerful generative AI model

Claude 3.5 Sonnet - Copyright Anthropic
By Pascale Davies

Generative AI companies are racing to get one step ahead of each other.


Anthropic has launched a new and more powerful generative artificial intelligence (AI) model, which comes three months after its earlier version, and claims it outperforms competitors such as OpenAI's GPT-4o.

The company calls itself an AI safety research company and was founded by former OpenAI executives and researchers. Google and Amazon are also major investors in the firm. 

Anthropic said its new model, Claude 3.5 Sonnet, the first release in the forthcoming Claude 3.5 model family, is its "most powerful model yet".

In an internal agentic coding evaluation, Claude 3.5 Sonnet solved 64 per cent of problems, outperforming Claude 3 Opus, which solved 38 per cent, the company said.

It also has better nuance, humour, and complex instruction capabilities and is exceptional at writing high-quality content with a natural, relatable tone, the company added. 

Anthropic also found that on graduate-level reasoning it scored 59 per cent, compared to 53 per cent for GPT-4o.

On reasoning over text, it also outperformed rival models, scoring 87 per cent, compared to 83 per cent for GPT-4o, 74 per cent for Google's Gemini, and 83 per cent for Meta's Llama large language model.

Benchmark results - Anthropic data

However, on math problem solving, it was beaten by GPT-4o, which was five per cent more accurate than Claude 3.5 Sonnet.

Race to the top on AI

Generative AI companies are racing to get one step ahead of each other. In April, OpenAI, Google, and Mistral AI all released new versions of their frontier AI models within 12 hours of one another. 

This has sparked concern that tech companies are rolling out the technology faster than developers can intervene to stop the models from being used for harmful purposes, or to fix biases the models may present.

“Creating systems that are not only capable but also reliable, safe, and aligned with human values is a complex challenge,” said Dario Amodei, Anthropic’s co-founder and CEO.

“We don't have all the answers, but we're dedicated to working on these problems thoughtfully and responsibly,” he said in a press release. 

Anthropic also announced the launch of Artifacts on Claude.ai, which displays content such as code snippets or text documents in a dedicated window alongside the user's conversation with Claude.

“This creates a dynamic workspace,” Anthropic said, where users can see, edit, and build upon what they are working on in real-time. 

The company said it marks “Claude's evolution from a conversational AI to a collaborative work environment”. 


“It's just the beginning of a broader vision for Claude.ai, which will soon expand to support team collaboration,” the company said.

It said that in the near future, teams, and eventually organisations, will be able to securely centralise their knowledge, documents, and ongoing work in one shared space, with Claude serving as an “on-demand teammate”.

Anthropic said it plans to release further upgrades to its model family, including Claude 3.5 Haiku and Claude 3.5 Opus, later this year, “while also pursuing our safety research to ensure these systems remain safe”.

The company was formed in 2021 by brother and sister Dario and Daniela Amodei, after they left OpenAI.

In an interview with Fortune, Dario Amodei said he left the company due to the lack of attention OpenAI paid to safety.

The announcement comes as OpenAI co-founder and former chief scientist Ilya Sutskever announced his own AI company dedicated to safety.
