'No excuse' for AI developers to get data privacy wrong, warns UK data regulator

The UK’s data watchdog is warning AI developers there is “no excuse” for getting data privacy wrong
By Euronews

The UK's Information Commissioner's Office has called on AI developers to take a step back and ensure they are getting data privacy right.


AI developers have “no excuse” for getting data privacy wrong, one of the heads of the UK’s data regulator has said, warning that those who don’t follow data protection law will face consequences.

The Information Commissioner's Office (ICO) enforces data protection in the UK. Speaking amid the explosion of interest in generative AI, especially large language models (LLMs) like the one that powers OpenAI’s ChatGPT, Stephen Almond, the ICO’s executive director of regulatory risk, warned that LLMs pose a risk to data security.

Writing in a blog post, he argued it is time to "take a step back and reflect on how personal data is being used".

He noted that Sam Altman, the CEO of ChatGPT creator OpenAI, has himself expressed worries about AI advances and what they could mean.

And referencing a letter published last week, signed by tech leaders and AI experts, that called for an immediate pause to AI development, Almond recounted a conversation he himself had with ChatGPT.

"Generative AI, like any other technology, has the potential to pose risks to data privacy if not used responsibly," the chatbot wrote.

"And it doesn’t take too much imagination to see the potential for a company to quickly damage a hard-earned relationship with customers through poor use of generative AI," Almond said.

He added that while the technology might be new, the principles of data protection law are the same.

8 questions for AI developers

“Organisations developing or using generative AI should be considering their data protection obligations from the outset, taking a data protection by design and by default approach,” he said, adding that this “isn’t optional…it’s the law”.

He then listed eight questions that AI developers who use personal data need to ask themselves, adding that the ICO will be asking them too, and acting where organisations are not following the law.

The questions are:

  • What is your lawful basis for processing personal data?
  • Are you a controller, joint controller or a processor?
  • Have you prepared a Data Protection Impact Assessment (DPIA)?
  • How will you ensure transparency?
  • How will you mitigate security risks?
  • How will you limit unnecessary processing?
  • How will you comply with individual rights requests?
  • Will you use generative AI to make solely automated decisions?

"There really can be no excuse for getting the privacy implications of generative AI wrong," he said, warning that his organisation will be “working hard” to ensure organisations get things right.

