OpenAI rival Anthropic launches chatbot Claude in Europe to give users more choice

iOS app image. Copyright Anthropic
By Pascale Davies

The company's Claude chatbot supports multiple European languages.


Anthropic’s artificial intelligence (AI) assistant, which aims to make AI chatbots accessible to all, is now available in Europe, arriving on the continent just hours after Microsoft-backed rival OpenAI released the latest version of ChatGPT.

Anthropic bills itself as an AI safety research company and was founded by former OpenAI executives and researchers. Google and Amazon are also major Anthropic investors.

As of Tuesday, users in Europe can access the free web-based version of the AI assistant, the Claude iOS app, and the Claude Team plan for businesses. It supports French, Spanish, German, Italian, and many other European languages.

“We're incredibly excited to launch in Europe because the region is well-positioned to harness the benefits of AI,” an Anthropic spokesperson told Euronews Next.

“Claude has a number of specific features that uphold EU values and cater to the EU market; we designed Claude to avoid biases, discrimination, and hate speech, focusing on reliable, steerable, and accessible AI for all.


“Claude also offers multilingual capabilities; it understands and generates content in most EU languages while respecting cultural nuances,” the spokesperson added.

In March, Anthropic launched Claude 3, its fastest and most powerful large language model (LLM) family, which comprises three state-of-the-art models in ascending order of capability: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus.

How does it compare to ChatGPT?

Asked how it compares to ChatGPT, the Anthropic spokesperson said its “human tone, expanded context window and accessible user interface are among the reasons [users] choose Claude 3 over and above the model’s exceptional intelligence and horsepower”, as well as the “commitment to AI safety” that sets the company apart.

Anthropic founders, brother and sister, Daniela Amodei (R) and Dario Amodei (L). Copyright Anthropic

“It’s important that customers in Europe have choice and we’re excited to provide a suite of leading models to suit their different needs,” the spokesperson added.

Additionally, Anthropic claims the most capable of the models, Opus, outperformed OpenAI’s GPT-4 and Google’s Gemini Ultra on benchmarks covering reasoning, basic maths, and undergraduate- and graduate-level knowledge.

One of Claude's main selling points is the ability to upload and process long documents, a feature only available in ChatGPT’s paid version.

Opus can process approximately 200,000 tokens at once, while ChatGPT can handle about 3,000. A token is the unit a chatbot uses to measure the amount of text it processes; one token roughly corresponds to a short word or part of a longer word.
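To put those figures in perspective, a common back-of-the-envelope heuristic (an assumption for illustration, not Anthropic's or OpenAI's actual tokeniser) is that one token corresponds to about four characters of English text:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate assuming ~4 characters per token.

    This is a coarse heuristic for English text; real tokenisers
    (byte-pair encoding and similar) vary by model and language.
    """
    return max(1, len(text) // 4)


# By this heuristic, a 200,000-token context window fits on the
# order of 800,000 characters, roughly a few hundred pages of text.
print(estimate_tokens("Claude is now available in Europe."))
```

Under this rule of thumb, a 3,000-token window holds only around a dozen pages, which is why the larger context window matters for summarising long documents.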

On Monday, OpenAI announced its new GPT-4o model, which boasts more human-like interaction and is available in a free, non-subscription version.

Fighting misinformation

Anthropic says Claude is “as trustworthy as it is capable” and that dedicated teams track and mitigate a broad spectrum of risks, ranging from misinformation and bias to election interference and national security threats.

Instead of relying on humans to identify harmful output, Anthropic uses a method called Constitutional AI, which trains the model to moderate itself against a core set of principles. The Claude 3 models show fewer biases than their predecessors, according to the Bias Benchmark for Question Answering (BBQ).

The company says it has taken a range of measures to help protect against misinformation and disinformation in elections.

“The use of our technology in political campaigns and lobbying is banned, and we’ve built automated systems to detect and prevent misuse like misinformation or influence ops.

"We’re also conducting ‘red team’ tests to identify election-related vulnerabilities and risks and we’ve created quantitative tests to evaluate political parity, misuse resistance, disinformation,” the spokesperson said.


Privacy is also a concern for Anthropic, which does not use prompts to train its models unless users give permission. Data is retained for only 90 days.

How much does it cost?

The web-based version of Claude and the Claude iOS app are free. Claude Pro, which gives access to all models, including the most advanced Claude 3 Opus, costs €18 per month, slightly less than the paid version of ChatGPT.

The Claude Team plan costs €28 + VAT per user per month (or local currency equivalent), with a minimum of five seats.

The Anthropic spokesperson said there will be more updates in the coming weeks.

