‘My concerns are more about people, institutions than the tech’: Signal’s Meredith Whittaker on AI

Meredith Whittaker, President, Signal, on Centre Stage during day one of Web Summit Rio 2023. Copyright Piaras O Midheach / Web Summit Rio / Sportsfile
By Pascale Davies

Signal president Meredith Whittaker spoke to Euronews Next about Europe’s regulation of artificial intelligence (AI), privacy, and mental health in this new age of AI.


Meredith Whittaker, the president of the not-for-profit Signal Foundation, which operates the encrypted Signal messenger app, is one of the early thinkers in AI, having led a research group on the technology at Google beginning in 2006.

She organised a walkout at Google over its handling of sexual harassment and was forced out of the company in 2019.

She also co-founded the AI Now Institute and served as an adviser to the US Federal Trade Commission, and she still has concerns about Big Tech and how it controls our data in this new age of AI.

She spoke to Euronews Next about AI in Europe and regulation at this year’s Web Summit in Lisbon.

Euronews Next: You’ve spoken a lot about your fears around AI and this AI hype we are going through. What is it in particular that you are concerned about?

Meredith Whittaker: I think my fears are less about the technology itself and more about the fact that it is developed and controlled by a handful of large corporations whose interests are, of course, the interests of the corporation and profit and growth and the pleasing of shareholders, not necessarily the social good.

And that the AI that they are selling, that they are licensing, that they are creating, that they are deploying requires huge amounts of data, requires huge amounts of computational power, and effectively reinforces and expands the surveillance business model, which is at the heart of so many of the harms that most of the world are concerned about when it comes to unaccountable technology.

I think my fears and my concerns are more about the people, institutions and incentives that are shaping AI than they are about the technology itself, or the idea that it could somehow become sentient or God-like.

Let's talk about the governance and the regulation of AI. Does regulation go far enough? And more specifically, as Europe prepares its own AI mandate, how is Europe faring?

[Europe’s] AI Act is not complete yet. We don't quite know what the final form will be. I think we need to suspend judgment for a minute. I think Europe has already done a lot that could be used to powerfully check some of the harms of AI.

One example I would bring forward is the fact that GDPR, interpreted strictly and enforced strictly, could easily be used to ban surveillance advertising, and banning surveillance advertising would have a huge and fairly radical restructuring effect on the incentives that are driving these corporations.

Meredith Whittaker, who led Google's Open Research Group, addresses hundreds of Google employees during a protest rally in 2018. AP Photo/Bebeto Matthews

It would mean that suddenly that huge amount of data that's been used to create demographic profiles, which are sold to advertisers to micro-target different types of people, would no longer be permissible.

And that would of course affect AI, because that data source, you know, everything we post on social media, our location data, our social graph, all of that would suddenly become much less useful.

And in a strict interpretation, perhaps impermissible for these centralised corporations to collect and maintain. So, that's one example of where Europe already has standing on this. The question is really political will.

Let’s dive into political will. Do you feel that, maybe more so in the US, there's this idea of "let's be more corporate about this, let's make more profit"?

Well, of course. A large number of these corporations are based in the US. That's their jurisdiction and the US is well aware of that. It's these corporations that partner with the US government.

Often they provide infrastructure for the government. And we're in a very messy political situation in the US, where it's been 20 years, 30 years, depending on where you count, and we still don't have a federal privacy law.

One thing that does give me hope is that it's very difficult to build these large-scale global-sized systems in a bespoke way for each jurisdiction.


So powerful regulation from Europe or from another large market has a shaping effect on these corporations, even if it doesn't come from the jurisdiction in which they're housed.

How much of a danger will AI “sweatshops” that collect data be?

I think the issue there is really the labour question. There was an estimate recently that 100 million people are employed in very precarious positions, effectively being the human stewards, providing the human intelligence that AI and AI companies take credit for.

So that is data labelling; that is data curation; that is calibration, telling the system what is acceptable and unacceptable; that is content moderation; that is error control and management of the issues that these AI systems create when they behave erratically.

This is a huge number of people.


And I think there are real issues around the quality of that work. There are real issues around the harm that that work does to people's mental health. This is not controversial.

The job of repeatedly viewing or being confronted with horrifying images or texts has a documented and very clear, harmful effect on people's mental health. We have testimony from people who've lost jobs, lost relationships, and lost their ability to be in the world because of how disturbing this was.

And I think there is a bigger question we need to step back and ask. These systems require that scale of harm; there is no other way to make them work and make them sort of acceptable for business and popular use.

Are we comfortable with that?

Do you believe that privacy will be among the more significant casualties as an increasing number of people use generative AI software such as ChatGPT?


Privacy has been a casualty along the road of the tech business model for a long time.

Signal will continue providing a truly private safe haven from the voracious data collection that defines the rest of the industry.

But certainly, this race to bigger-is-better AI poses a huge privacy risk. It calls for the creation and collection of increasingly intimate, increasingly pervasive, increasingly invasive data about us and about our communities.

And it is a force for further centralising that data in the hands of a handful of companies, which, again, is an unaccountable power that it's very difficult to find an analogue to in history.

You've criticised the UK's Online Safety Act with regard to ending end-to-end encryption. How do you feel about that and the EU's data rules?


[The UK's Online Safety Act] contained what I would actually consider a product, or at least partially a product, of this AI hype: a very misguided clause based on the belief that AI systems could somehow scan everyone's personal communication, matching it against a database of permissible expression, in a way that was private and secure.

And of course, the longstanding technological consensus, that entire expert community, knows this is not true.

And so we fought against that, because Signal relies on end-to-end encryption to provide a meaningful option for truly private communication in a world riddled with surveillance. We were very concerned about that law.

Now, Europe has a similar law moving forward. It looks like they have just adapted it to exempt end-to-end encrypted messaging from scanning, which is great. But of course, the trilogues are coming up and there's a complex political process. We're not done yet.

But we are happy that, by bringing the human rights community to the fore and bringing real technical expertise, not the marketing departments of AI companies, to the forefront, we were able to make clear that this was magical thinking and actually extraordinarily dangerous for rights and liberties.
