Open source vs closed source AI: What’s the difference and why does it matter?

Key to the debate is how the tech is democratised, but safety and profit take precedence in the dispute.
By Pascale Davies

Regulators, start-ups and Big Tech are split into two camps in the open versus closed source artificial intelligence debate, where safety and profit take precedence over democratisation.


The battle between generative artificial intelligence (AI) companies is underway with two competing camps: open source software versus closed source.

Key to the debate is how the tech is democratised, but safety and profit take precedence in the dispute.

Generally speaking, open-source software is software whose source code is publicly available for anyone to use, modify, and distribute. It encourages creativity and innovation, as developers can build on open AI algorithms and pre-trained models to create their own products and tools.

Closed source AI, on the other hand, means the source code is restricted to private use and cannot be altered or built upon by users; only the company that owns it can do so. But funding these closed source companies is easier, meaning they have greater capital to innovate.

The definition of what makes a company open-source is also not that clear-cut.

The definition of open source technology

The Open Source Initiative (OSI) is the steward of the definition of open source technology.

It states that “open source doesn’t just mean access to the source code,” and that it must comply with 10 criteria, including having a well-publicised means of obtaining the source code at a reasonable cost or for free, not being discriminatory, and the license not restricting other software.

But complying with all of the OSI's requirements is a rarity, and most open source companies are only partly open source. The French AI champion Mistral, for example, open sources its model weights - the numerical parameters that determine how an AI model behaves - but not its training data or training process.
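To illustrate what releasing only the weights means in practice, here is a minimal, hypothetical Python sketch using the Hugging Face transformers library (an illustrative assumption, not something Mistral prescribes, and it assumes enough memory to load the model). Anyone can download the published parameters and run the model locally, even though the data and code used to train it stay private.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Mistral's publicly released checkpoint: the weights are downloadable,
# but the training data and training pipeline behind them are not published.
model_id = "mistralai/Mistral-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)     # fetches tokenizer files
model = AutoModelForCausalLM.from_pretrained(model_id)  # fetches the weights

# Run the model locally; the same downloaded weights could also be fine-tuned.
inputs = tokenizer("Open source AI means", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))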

Technology for all

AI companies that say they are open source argue that they are making technology more accessible to all.

Open source helps democratise the tech economy, Alex Combessie, co-founder and CEO of French open source AI company Giskard, told Euronews Next.

“It levels the playing field because if it's open source it's free, at least in general. It means that smaller players can compete with bigger players because they have access to products for free,” he said.

“And that's very important, especially in a world that's so dominated by monopolies in tech”.

Open source also equalises the political field: because the code is visible, anyone, including regulators, can audit it, whereas with closed source the details are not transparent.

Safety concerns

But closed source AI companies, such as ChatGPT maker OpenAI (despite its name), argue that open source threatens our safety.

The OpenAI logo appears on a mobile phone in front of a computer screen with random binary data, Thursday, March 9, 2023. Michael Dwyer/Copyright 2023 The AP.

OpenAI was originally founded to produce open-source AI systems, but the company said in 2019 that it was too dangerous to keep releasing its GPT language models to the public: their output looked too similar to human writing and, in the wrong hands, could be used to generate high-quality fake news.

With closed source AI, there are also usage policies under which the model will politely refuse requests, such as instructions for making a bomb or designing a more deadly coronavirus.

These policies can technically be "jailbroken", meaning the AI can be hacked or prompted in ways that bypass its safeguards. But such vulnerabilities are often quickly fixed.

Open source AI systems will most likely have safeguards in place or a responsible use guide. But try as they might to tackle discrimination or danger, once a copy of the AI model is accessible, whoever has it can change the safeguards themselves.


A prime example of this was when Meta released its open source large language model (LLM) Llama 2 in July last year.

Just days later, people released their own uncensored versions of Llama 2 that would answer questions the original refused, such as how to build a nuclear bomb.

Once someone releases an “uncensored” version of an AI model, there is largely nothing that the original AI maker can do, especially as it will have already been downloaded by other users.

Big Tech business

In December, Meta and IBM launched a group called the AI Alliance, putting them at odds with the closed source giants OpenAI, Google, and Microsoft.

The AI Alliance advocates an "open science" approach to AI and counts 74 members, including start-ups, large companies, and non-profits.

Lawmakers vote on the Artificial Intelligence Act, Wednesday, June 14, 2023, at the European Parliament in Strasbourg. Jean-Francois Badias/Copyright 2023 The AP.

“An open, responsible, and transparent approach to AI benefits everyone – industry, academia, and society at large,” said Alessandro Curioni, IBM Fellow, vice president of Europe and Africa, and director of IBM Research.

Asked about safety, he said the idea behind open and transparent AI is to minimise fears.

“Open, transparent, and responsible AI will help advance AI safety, ensuring that the open community of developers and researchers addresses the right risks of AI and mitigates them with the most appropriate solutions,” he told Euronews Next.

“It doesn’t mean that all technology should be open – and crucially, open does not mean not governed. Governance is equally important whether it comes to open or proprietary AI,” he clarified.

The funding problem

While open source advocates argue that developers can freely build upon open source AI models and push boundaries, money is, of course, at the heart of the debate.


Some investors may be deterred from backing an open source model, as they may feel more at ease when a company's intellectual property (IP) is kept secure.

“A conversation with a large language model in real-time also uses a lot of computing power and links up with computational cloud solutions, which is not really cheap,” said Dr Theodore Chen, a data scientist at Deloitte.

“At this point in time, I can see that it is really based upon monetisation investments. So the more cutting edge technology does need funding for that,” he told Euronews Next.

“The monetisation is also used for development as well, which could be used to build more accurate models, which benefits the industry overall to move forward”.

But open source models can also move the technology forward, as developers are free to be creative and build new uses for AI.


“If more people have access to the raw materials where they can really do creative and very interesting things, we could see things spreading in directions that you probably or no one would guess,” Tiago Cardoso, product manager at Hyland, told Euronews Next.

Regulation

The open vs closed source debate has been a sticking point for regulators in Europe and the United States who have opposing views.

In Europe, the EU AI Act is expected to get final parliamentary approval in April. The text awaiting approval says the Act will not apply to open-source AI systems unless they are prohibited or classified as high-risk.

This is welcome news to France and Germany, which throughout negotiations pushed to protect their open source start-ups.

Meanwhile, in the United States, President Biden’s sweeping executive order at the end of last year said when open source models - which he called dual-use foundation models - are “publicly posted on the Internet, there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model”.


Biden has given Commerce Secretary Gina Raimondo until July to talk to experts and come back with recommendations.
