OpenAI unveils new AI model ChatGPT-4o with eerily human voice assistant

The OpenAI logo appears on a mobile phone in front of a computer screen with random binary data, Thursday, March 9, 2023, in Boston. Copyright Michael Dwyer/2023 The AP.
By Pascale Davies

OpenAI has a new ChatGPT model with an upgraded voice assistant, and free users will get access to custom chatbots for the first time.


OpenAI is releasing a new model that brings the same intelligence to all users, including those on ChatGPT’s free tier, along with an impressive voice interface that could rival Amazon's Alexa.

In its so-called "spring update," OpenAI showcased its new ChatGPT-4o model in its first mainstream live event on Monday.

OpenAI chief technology officer Mira Murati told the audience that the model is much faster than the earlier ChatGPT-4 and that it improves capabilities across text, vision, and audio.

"It’s a huge step forward when it comes to ease of use," she said, demonstrating how it could instantaneously translate her Italian speech.

What else can ChatGPT-4o do?

The much-hyped event also unveiled a voice mode that can pick up on physical cues, such as how heavily a user is breathing, and can generate speech in different emotive styles on request, such as a robotic or singing voice.

It also replies to comments in a human-like way: when complimented for being "useful and amazing," it answered, "Oh stop it, you’re making me blush".

"Talking to a computer has never felt really natural for me; now it does. As we add (optional) personalisation, access to your information, the ability to take actions on your behalf, and more, I can really see an exciting future where we are able to use computers to do much more than ever before," OpenAI CEO and co-founder Sam Altman said in a blog.

The company said that, unlike earlier versions, users can interrupt the model and it replies in real time, cutting out the two-to-three-second lag.

ChatGPT is now also capable of detecting emotion by looking at a face through the camera. During the demo, presenters showed it a smiling face and the AI asked: "Want to share the reason for your good vibes?"

ChatGPT is also launching a desktop app with voice and vision capabilities.

Available to all and faster

Another big update is that the model is coming to OpenAI's application programming interface (API), meaning developers can start building with it at half the price and twice the speed of the previous model.
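For developers, a request to the new model through the API takes the same shape as an ordinary chat-completion call. The sketch below builds such a request body; the model identifier "gpt-4o" and the endpoint shown are assumptions based on OpenAI's publicly documented API conventions, not details confirmed at the event.

```python
import json

# Endpoint assumed from OpenAI's public API documentation.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str) -> dict:
    """Build a minimal chat-completion request body for the new model.

    The model name "gpt-4o" is an assumption based on public naming,
    not a detail given in the article.
    """
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }

payload = build_request("Translate 'buongiorno' into English.")
print(json.dumps(payload, indent=2))
```

Sending this body as a JSON POST to the endpoint (with an API key in the Authorization header) would return the model's reply; only the model name changes compared with earlier GPT-4 calls.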

ChatGPT-4o is also available in 50 languages, covering 97 per cent of the world’s population.

Data protection and ethics?

OpenAI did not say at the event whether it would strengthen user data protection. On previous versions, ChatGPT could use conversations for training unless users opted out.

Generative AI has come under scrutiny for bias and for AI hallucinations (incorrect or misleading responses). The company did not say how, or whether, the new model would improve on this.

OpenAI currently uses a method called reinforcement learning from human feedback (RLHF), in which human reviewers rate the chatbot’s responses and those ratings are used to fine-tune the model’s behaviour.

When will it be available?

OpenAI said ChatGPT-4o will be available to ChatGPT users in the next few weeks.

