Make some room, ChatGPT. Rival AI chatbot AutoGPT has landed. This is what we know about it so far

AutoGPT, unlike ChatGPT, barely requires any human input
By Sarah Palmer

Artificial Intelligence has levelled up again. But what is AutoGPT - and should we be worried about this latest development in the "AI race"?


Gentle reminder: it was only a few months ago that ChatGPT became a thing, which may seem hard to believe given the impact it has had on the world of artificial intelligence (AI) since.

Fast forward to now and, at the rate 2023 is going, we might well be on the brink of what could be coined the Artificial Intelligence Revolution in years to come.

Less than six months since ChatGPT launched, chatbots just got smarter again.

This time, it’s the new kid on the block, AutoGPT. And chances are, it will be coming to a device near you very soon.

What is AutoGPT?

GPT stands for generative pre-trained transformer. In its simplest terms, it’s technology that learns patterns from vast amounts of text and uses them to generate human-like responses.
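Stripped of the scale, the "generative" part boils down to repeatedly predicting the next word from patterns seen in training data. A toy sketch of that idea - the word frequencies below are made up for illustration and stand in for what a real model learns from billions of examples:

```python
import random

# Hypothetical word-follows-word counts, standing in for learned patterns.
next_word_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"down": 1},
}

def generate(start, steps=3):
    """Generate text by repeatedly predicting a likely next word."""
    words = [start]
    for _ in range(steps):
        options = next_word_counts.get(words[-1])
        if not options:
            break
        # Pick the next word in proportion to how often it followed the last one.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```

A real GPT model does this over sub-word tokens with a transformer network rather than a lookup table, but the generation loop has the same shape.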

Some would argue that this time round, it’s a little too maverick: able to take in and store information, learn and improve from it, and ultimately complete tasks on its own.

The key difference between AutoGPT and recently launched AI-driven chatbots is that it doesn’t need much human input.

Where tools like ChatGPT and Microsoft’s new and improved Bing require prompts from the human at the other end, AutoGPT reckons it can do the research itself, learn from its mistakes and adapt its workload accordingly. Needless to say, it promises to be considerably quicker and to operate more effectively than any human brain.

Sounds terrifying? Well, it actually is. If the technology is as good as it claims, there’s potentially a real threat to professionals working in areas like customer services or journalism, for example.

How does AutoGPT work?

There are four main elements to AutoGPT that make it as efficient and productive as it claims to be.

Firstly, the language model it employs. GPT-4 is the latest generation of GPT technology from OpenAI, released in mid-March this year. That’s the part that helps it “think”.

Secondly, it’s programmed to use autonomous iterations - the aspect that lets the chatbot correct its own mistakes and then learn from them.

Thirdly, its memory storage solution. AutoGPT is integrated with vector databases, which enable it to store information and improve its future decision-making.

And finally, its multifunctionality. It can browse the world wide web in a matter of moments, retrieve and store data and then edit data files if required.
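Put together, the four elements above amount to a loop: the language model proposes a step, the result is checked and retried if needed, and each outcome is stored for later decisions. A minimal sketch of that loop - every function and variable name here is an illustrative assumption, not AutoGPT's actual code, and the model call and memory are simple stand-ins:

```python
def call_language_model(goal, memory):
    # Stand-in for a GPT-4 API call: propose the next action towards the goal.
    return f"research step {len(memory) + 1} towards: {goal}"

def looks_like_a_mistake(action):
    # Stand-in for self-critique; a real agent would ask the model to review
    # its own output before acting on it.
    return False

def run_agent(goal, max_iterations=3):
    memory = []  # AutoGPT uses vector databases; a plain list stands in here
    for _ in range(max_iterations):
        action = call_language_model(goal, memory)
        if looks_like_a_mistake(action):
            continue  # autonomous iteration: discard the step and try again
        memory.append(action)  # store the outcome for future decisions
    return memory

steps = run_agent("create a simple app")
```

The real system adds web browsing, file editing and semantic (vector) memory lookups at each turn, but the prompt-free, self-correcting loop is the core of what distinguishes it from a chatbot.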

Computer engineer Varun Mayya put the process into action, explaining on Twitter how, "autogpt was trying to create an app for me, recognised I don't have Node, googled how to install Node, found a stackoverflow article with link, downloaded it, extracted it, and then spawned the server for me.

“My contribution? I watched.”

What are the main concerns around programmes like AutoGPT?

We’ve already seen the repercussions from ChatGPT emerging over the course of the past few months. But with the introduction of AutoGPT, suddenly high school kids cheating on their test papers feels like a relatively minimal concern.

When Elon Musk steps in to warn people of the risks of technology like this, you can probably gauge that some feel the "AI race" is getting a little out of hand.


An open letter was published at the end of last month, which has since been signed by thousands of people - including industry leaders like Musk and Steve Wozniak, a co-founder of Apple.

While it points out that "humanity can enjoy a flourishing future with AI", it also asks Big Tech companies hurtling through endless updates and increasingly intelligent software to "give society a chance to adapt".

The letter asks, "should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilisation?"

If anything, these existential questions are somewhat reminiscent of the breakdown of Bing’s chatbot integration earlier this year.

That said, not all users see it as all bad, as Twitter users and AI enthusiasts Rahul and Brian O'Connor have pointed out.


It’s undoubtedly going to be an interesting few weeks of people putting the platform’s capabilities to the test.

But the real concerns are there. If people thought four TV channels were brain-numbing in the '80s and '90s, the prospect of technology that can essentially do our thinking for us is everything 'The Matrix' warned us about.
