
10 April 2018 (updated 1 July 2021, 12:15pm)

These are the five key questions Facebook’s Mark Zuckerberg still needs to answer

As the site’s embattled CEO Mark Zuckerberg prepares to testify before US legislators, the biggest questions facing his company remain unaddressed.

By Nicky Woolf

For the first time ever, the Facebook CEO Mark Zuckerberg will testify before both houses of the US Congress.

On Tuesday, he will testify before a joint sitting of two Senate committees – the commerce, science, and transportation committee, and the judiciary committee. On Wednesday, he will then testify before the House energy and commerce committee. His opening testimony to the House committee has already been released in full.

His appearance comes in the wake of revelations surrounding the abuse of millions of users’ personal data by the political consulting firm Cambridge Analytica. It also comes as the ongoing investigation by US special prosecutor Robert Mueller continues to shed light on the vast scale of Russian government-backed operations to sow discord and fake news on social media platforms, especially Facebook, during the 2016 presidential election.

Zuckerberg has come a long way since the day when, shortly after Donald Trump won the presidential election in November 2016, he said: “the idea that fake news on Facebook … influenced the election in any way is a pretty crazy idea.”

What a difference a year makes. “We didn’t take a broad enough view of our responsibility, and that was a big mistake,” Zuckerberg will tell the committee on Wednesday. “It was my mistake, and I’m sorry.” That in itself doesn’t mean much. Congressional testimony is often an opportunity for political theatre and showboating, rather than an attempt to seek genuine insight.

But even if Zuckerberg’s testimony ends up being little more than a circus, here are the five core questions that Facebook really needs to answer, and why:

Can Facebook guarantee that companies like Cambridge Analytica can’t get hold of user data like this again?


In his testimony, Zuckerberg will explain that the data gathered by Cambridge Analytica originated from a single personality quiz app, created in 2013 by a researcher at Cambridge University named Aleksandr Kogan. That app, which was downloaded by around 300,000 people, could also access the data of the friends of those who installed it – giving Cambridge Analytica, with whom Kogan later shared his data, the personal information of 87m users.

Zuckerberg says that in 2014 Facebook made changes to “dramatically restrict the amount of data that developers can access”, which he said would “prevent any app like Kogan’s from being able to access as much Facebook data today”.

However, it is still unclear how many applications from Facebook’s early days might have had access to similarly vast data sets, let alone how many outside organisations like Cambridge Analytica might have been able to access them. Facebook appears to have spent some years operating what was effectively an honour system: Kogan’s app was banned only in 2015, when Facebook says it learned from journalists at The Guardian that data had been shared.

That implies that there was no internal system for tracking and protecting user data, outside of the steps individual users could take with their privacy settings. Worse, it suggests that Facebook then simply asked Kogan to “formally certify” that the data had been deleted, and, according to Zuckerberg’s testimony, only learned last month that it had been lied to.

Ultimately, while it is good that Zuckerberg is finally apologising for shutting the stable door far too late, we still don’t know how many horses bolted or where they went. It is possible that Facebook doesn’t even know that itself.

Why did Facebook take so long to respond after it learned of what Cambridge Analytica did?

Facebook is introducing changes that remove developers’ access to data if an app goes unused for three months, reduce the amount of data given to an app, and require developers to sign a “contract” imposing “strict requirements” if they want their app to request other user data.

But it is only imposing them now, three years after Zuckerberg says the company learned that the data had been shared with Cambridge Analytica. Why did the company not take firmer action at the time to prevent user data being used without its users’ consent?

At the moment, it looks as if Facebook was caught on the back foot by the response to the Cambridge Analytica story. It seems clear that it simply didn’t think it was a big deal at the time.

Will there be any meaningful changes to how Facebook operates in the long run?

Sorry definitely does not seem to be the hardest word in Silicon Valley. In fact, the unofficial catchphrase of the tech industry is “move fast and break things”. Facebook has long embodied this aphorism, and Zuckerberg has demonstrated time and again that he prefers to ask for forgiveness than permission.

He may be contrite now, in the face of massive public opprobrium, but he has been here many times before. “We’ve made a lot of mistakes building this feature, but we’ve made even more with how we’ve handled them. We simply did a bad job with this release, and I apologize for it,” Zuckerberg wrote after users objected to Beacon, Facebook’s first targeted-advertising system, which pulled in data from third-party sites and was launched without users’ permission in 2007.

There was more outrage in 2014, when 700,000 users discovered they had been used as part of a creepy psychological experiment into whether Facebook could use different content to manipulate their emotions. In this case, Zuckerberg didn’t even bother to respond himself; instead, Sheryl Sandberg, Facebook’s chief operating officer and Zuckerberg’s effective number two, made a breathtaking non-apology apology. “It was poorly communicated… and for that communication we apologise,” she said, adding: “We never meant to upset you.”

Those are just a few of many times Facebook has had to grovel to its users over the years, but its behaviour never seems to change. Given this history of responding to criticism only when it becomes too noisy to ignore, and then resuming business as usual once attention dies down, it is crucial that Zuckerberg is pressed on maintaining accountability in the long term.

How is it possible Facebook was unaware of the Russian activity on its platform?

This question, in various forms, is of course fairly likely to be asked, especially by the Democrats on the committee.

In October, then-senator Al Franken asked it of Colin Stretch, Facebook’s general counsel, during another committee hearing: “How did Facebook, which prides itself on being able to process billions of data points and instantly transform them into personal connections for its users, somehow not make the connection that electoral ads paid for in roubles were coming from Russia? Those are two data points! American political ads and Russian money: roubles. How could you not connect those two dots?”

The answer is likely that Facebook didn’t care about where its money was coming from, or the content of the advertisements.

A particularly bleak illustration of that: an investigation by ProPublica in September 2017 found that Facebook’s tool presented advertisers with specific options for targeting users with key-phrases including “Jew hater” or “How to burn Jews”.

Zuckerberg has pledged to increase the size of the team working on security and content review from 15,000 to 20,000 people, and has changed the policies around political advertising so that anyone running a political ad must now be “authorised” to do so. He will reiterate that pledge in his testimony.

But that itself won’t necessarily fix the problem. The actions of organisations like the notorious troll farm called the Internet Research Agency, which was singled out for indictment by Mueller in February, weren’t limited to just political advertising; they were able to use Facebook in a much broader way to sow random discord around issues like guns and race in America.

With Facebook, Zuckerberg built a tool for the manipulation of public opinion more powerful than perhaps any other in history. He knows this, because Facebook has built an advertising business worth $40bn a year off the back of it. If you build a tool like that, you don’t get to act surprised when people use it for nefarious ends.

Will Facebook take action to address the fact that the users aren’t the customers – they’re the product?

This is the greatest challenge for Facebook. Its executives, Zuckerberg included, talk a good game about the “responsibility” they face as the administrators of the “community”. But that is directly at odds with the company’s business model, for which all communities on the site are little more than a useful way of packaging and commodifying users for further targeting by advertisers. Selling the ability to target its users isn’t some small side-project; it’s the primary business model.

The most breathtaking moment in Zuckerberg’s testimony is the line: “I’ve directed our teams to invest so much in security – on top of the other investments we’re making – that it will significantly impact our profitability going forward.” That he would frame this as a complaint about the effect on Facebook’s profit margin is perhaps more illuminating than he intended it to be.

In order to be able to tell whether Zuckerberg is serious or not in this act of performative public contrition, it is crucial to know whether or not he is serious about protecting his users. That means an apology alone isn’t good enough; he needs to act to make clear that protecting users is a real priority for Facebook, not merely a nuisance to be occasionally mollified with an apology and then safely ignored.

That would mean not only acting to protect users from the publicly known dangers to their privacy and safety, but also taking clear steps to protect them from unknown and future threats. It would also mean working with lawmakers on building a safe regulatory environment for Facebook and platforms like it, no matter what that might mean for the profit margin. He has floated the idea of an independent “Supreme Court” to settle disputes over speech, which would be a good start, but he has given no details as to how such a structure would work.

It seems that the only language Zuckerberg understands is that of shame, so maybe the public ritual he is about to endure before Congress will finally convince him to change his cavalier attitude. But it seems unlikely.
