Euroviews. Cambridge Analytica scandal: Users shouldn't have to leave social networks to protect themselves | View

Facebook. Copyright REUTERS/Yves Herman/File Photo
By Euronews
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Digital rights activists from Tactical Tech discuss what Facebook users can do to safeguard their personal data, and what else you need to know about the controversy over Cambridge Analytica's use of the tech giant's data.

Facebook is embroiled in a controversy over the alleged abuse of personal data after a former contractor for the UK firm Cambridge Analytica claimed the tech giant's data was used to manipulate the results of the last US presidential election. Separately, the data firm is also being investigated over its role in the UK's Brexit referendum. Further revelations suggesting Cambridge Analytica engaged in apparently unethical tactics to sway political campaigns are continuing to emerge, as authorities in Europe and the US begin to take steps to investigate the allegations.

The data was harvested in 2014 through a Facebook quiz that invited users to discover their personality type. Developed by Aleksandr Kogan, an academic at the University of Cambridge, the Thisisyourdigitallife app gained access not only to the personal data of the people taking the quiz, but to that of their friends too. This practice was permitted by Facebook at the time but has since been banned.

Through these friend networks, the data of 50 million users - mostly in the US - was gathered without their knowledge, even though only 270,000 people took the quiz and had consented to their data being used for academic purposes, according to Christopher Wylie, who worked as a data analyst for Cambridge Analytica. The firm bought the data, he says, and used it to psychologically profile individuals and send them targeted content in support of Donald Trump during the 2016 presidential election campaign.

Cambridge Analytica denies it used the data during its work on the Trump campaign.

Facebook's terms at the time did not authorise app developers to share user data with third parties such as Cambridge Analytica. When it discovered the breach two years ago, Facebook says, it removed the app and told the developer to delete the data and certify that it had done so.

Whether Cambridge Analytica destroyed the data is disputed. The firm says it deleted the data when asked to do so by Facebook (and claims the information was never used for political purposes), while Wylie claims copies of the data still exist. And The New York Times says it recently saw some of the raw data that Kogan shared with Cambridge Analytica.

The Leave.EU campaign has also claimed it worked with Cambridge Analytica in the lead-up to the Brexit referendum, although the firm has denied this.

Euronews spoke to Varoon Bashyakarla and Stephanie Hankey from Tactical Tech, a Berlin-based technology activist organisation working on digital security, privacy and the ethics of data, about the scandal.

What makes this apparent attempt to influence voters different from targeted political marketing tactics of the past?

Facebook normally provides a matching service, enabling advertisers to find people to advertise to on the platform. Like Google and other platforms, it doesn't actually give away data; it sells access to it. In Facebook's case, this means political campaigns can upload voter lists to Facebook, which then matches those lists against its user base and displays ads to the users it finds.
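
Neither Facebook nor Cambridge Analytica has published the mechanics of this matching, but list matching of this kind is commonly described as hashing identifiers on both sides and intersecting them. The minimal Python sketch below is purely illustrative and rests on that assumption; every name and function in it is hypothetical, not Facebook's actual API.

```python
import hashlib


def hash_id(email: str) -> str:
    """Hash a normalised identifier so raw contact details are never compared directly."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()


def match_audience(uploaded_list: list[str], platform_hashes: set[str]) -> set[str]:
    """Return the hashed identifiers present on both sides.

    Only the intersection is used for ad targeting; neither side
    sees the other's raw list.
    """
    return {hash_id(e) for e in uploaded_list} & platform_hashes


# Hypothetical usage: the platform holds hashed records of its own users,
# and a campaign uploads its voter contact list for matching.
platform_hashes = {hash_id(e) for e in ["alice@example.com", "bob@example.com"]}
voter_list = ["Alice@Example.com", "carol@example.com"]
print(match_audience(voter_list, platform_hashes))  # only Alice's hash matches
```

The point of the hashing step, under this assumption, is that the campaign learns only which of its contacts can be reached on the platform; it never receives raw user data from the platform itself, which is what made the direct handover described next so different.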

In this case, instead of providing the usual matching service, Facebook gave away user data. Even if it was presumed to be for academic research, the volume of data handed over was unprecedented (50 million users). This, in turn, enabled the data to be shared with Cambridge Analytica.

This handing over of data matters for both privacy and legal reasons, but people are particularly concerned because of how invasive and personal the resulting profiling efforts were – attempts to predict individuals' personality traits (their levels of openness, conscientiousness, extraversion, agreeableness and neuroticism) in order to ultimately influence their votes.

Is this likely to be a one-off abuse of Facebook user data or something that happens often?

This is something that happens often with personal data, but it is one of the few instances where there has been detailed documentation. Usually this information is hard to come by because of the closed way Facebook and similar companies do business. One of the important aspects of this breach of trust was the way users of the Thisisyourdigitallife app unwittingly exposed their friends' data in the process. At the same time, these users opted in, and few people actually read the terms and conditions of tech services before clicking the "I agree" box – in truth, hardly any of us do.

Users need to take a closer look at what they're agreeing to. Every time you use a new service that lets you sign in with Facebook or sign in another way, choosing Facebook gives it access to your data in the process.

What can regular Facebook users do to safeguard their personal data other than deactivating their account?

The question for many people is really how they can exercise more control over what Facebook knows about them:

  • It’s important to remember, even deactivating Facebook doesn’t erase the history of data Facebook has already collected on you, and the more Facebook was used, the more data it collected.
  • One of the best ways to manage what data you're handing over is to review which apps you have connected to your Facebook account. Also think carefully about what information you post and how you use Facebook beyond looking at your timeline – for example, signing into other services through it.
  • Concerned users can use Facebook minimally. For many of us, leaving Facebook means leaving our social networks, but users can treat Facebook as a directory of contacts and choose to move more conversations offline. This is valuable because the most precious data Facebook harvests from you is behavioural: our social networks, our friends, our hobbies and so on, used to guess what our preferences are. We're generally okay with advertising firms doing this when selling us shoes, but not when political campaigns are trying to influence our votes.

This is a reversal of the way users’ relationship with Facebook should work. Why should users extract themselves from their social networks to protect themselves and their data, especially when the data users feed Facebook is Facebook’s lifeline?

Tactical Tech’s Data Detox Kit, and other similar resources, are useful for anyone trying to learn more about how to control their data on such platforms.

What can Facebook do to rebuild trust with its users?

Between ongoing investigations into Facebook's involvement in US and European elections, fake news, bots and other scandals from the last year, Facebook needs to do more than promise to hire more people, appear at formal hearings and offer apologies. In the light of these scandals, and of others around violent content, hate speech, radical content and online harassment that it has been unable to control, Facebook needs to fundamentally reconsider its business model, as well as the checks and balances it has in place to deal with the political complexity of what goes on within its platform.

For a company now connecting over a quarter of the planet, Facebook is no longer a neutral or irrelevant actor, and it has a great responsibility to recognise this. Small tweaks are no longer sufficient. It is one of the most profitable technology companies in the world, and it has to invest some of those profits in solving these problems. Even former investors in Facebook are now saying it has to reconsider its business model.

What measures should regulators be considering in response to this revelation?

Facebook should be transparent about what political actors (parties, associated businesses) are paying for in terms of political targeting and advertising and how much money they are spending in the process. Additionally, spending on digital ads can be regulated, as ads on TV, radio, and print already are.

There should also be transparency around how platforms handle political content, such as political advertising. In some parts of the world, including Europe, labelling a political ad as such is forbidden because it is protected under freedom of speech, but online it's quite difficult for most people to tell the difference.

There’s an urgent need for citizens to understand how technology is affecting the democratic process. We at Tactical Tech are conducting an international related research study and publishing a report on this in June with the Oxford Internet Institute exploring how personal information is being used for political influence. We have analysed two dozen companies around the world doing work similar to Cambridge Analytica and are working with researchers in ten different countries to understand how major issues in technology have affected their local elections.

_Varoon Bashyakarla is a data scientist and Stephanie Hankey is a co-founder of Tactical Tech, a Berlin-based organisation working at the intersection of technology, human rights and civil liberties._

Opinions expressed in View articles do not reflect those of Euronews.
