Facebook asks to be regulated kinda like a newspaper, kinda like a telco

The EU has been itching to regulate the internet, and that’s where Facebook has been this week: in Germany, asking to be regulated, but in a new, bespoke manner.

In fact, CEO Mark Zuckerberg is in Brussels just in time for the European Commission’s release of its manifesto on regulating AI – due to be published on Wednesday and likely to include risk-based rules for the technology.

Don’t regulate us under the telco-as-dumb-pipe model, Zuckerberg proposed on Saturday, even though that’s how he once wanted us all to view the platform: as just a technology platform that dished up trash without actually being responsible for creating it.

No, not like a telco, but not like the newspaper model, either, he said.

Nobody ever really swallowed what Facebook once offered as a magic pill to try to ward off culpability for what it publishes – as in, that “we’re just a technology platform” mantra. Facebook gave up trying to hide behind that one long ago, somewhere amongst the outrage sparked by extremist content, fake news and misleading political advertising.

So now, Facebook has taken a different tack. During a Q&A session at the Munich Security Conference on Saturday, Zuckerberg admitted that Facebook isn’t the passive set of telco pipes he once insisted it was, but nor is it like a regular media outlet that produces news. Rather, it’s a hybrid, he said, and should be treated as such.

Reuters quoted Zuckerberg’s remarks as he spoke to global leaders and security chiefs, in which he suggested that regulators treat Facebook as something between a newspaper and a telco:

I do think that there should be regulation on harmful content … there’s a question about which framework you use for this.

Right now there are two frameworks that I think people have for existing industries – there’s like newspapers and existing media, and then there’s the telco-type model, which is ‘the data just flows through you’, but you’re not going to hold a telco responsible if someone says something harmful on a phone line.

I actually think where we should be is somewhere in between.

Zuckerberg says that following the tampering with the 2016 US presidential election, Facebook has gotten “pretty successful” at sniffing out not just hacking, but coordinated information campaigns that are increasingly going to be a part of the landscape. One piece of that is building AI that can identify fake accounts and networks of accounts that aren’t behaving in the way that people would, he said.

In the past year, Facebook took down around 50 coordinated information operations, including some in the last couple of weeks, he said. In October 2019, it pulled fake news networks linked to Russia and Iran.

The CEO said that Facebook is now taking down more than one million fake accounts a day before they have a chance to sign up – including not just accounts devoted to disinformation, but also those of spammers.

As the internet giants – Facebook, Twitter and Google – come under increasing pressure to get better at keeping groups and governments from using their platforms to spread disinformation, Zuckerberg claims that Facebook is strenuously tackling the problem, having employed a veritable army of 35,000 people to review online content and implement security measures.

Nearly a year ago, Facebook put out a call for new internet regulation in four areas: harmful content, election integrity, privacy and data portability. What Zuckerberg said then:

It’s impossible to remove all harmful content from the internet, but when people use dozens of different sharing services – all with their own policies and processes – we need a more standardized approach.

What he called for on Tuesday, in an op-ed published by the Financial Times: “rules for the internet” and more regulation for his platform. On Monday, Facebook published a whitepaper laying out its recommendations for future regulation, including more accountability for companies that do content moderation – which, it argues, will be a strong incentive for firms to behave more responsibly.

Facebook suggests that regulations should “respect the global scale of the internet and the value of cross-border communications” and encourage coordination between different international regulators, as well as look to protect freedom of expression.

Facebook is also calling on regulators to allow tech firms to keep innovating, rather than issuing blanket bans on certain processes or tools. It also wants regulators to take into account the “severity and prevalence” of the harmful content in question, its status in law, and efforts already underway to address it.

Zuckerberg said in the op-ed that Facebook supports new regulation even though it’s initially going to hurt the company’s profits:

I believe good regulation may hurt Facebook’s business in the near term but it will be better for everyone, including us, over the long term.

These are problems that need to be fixed and that affect our industry as a whole. If we don’t create standards that people feel are legitimate, they won’t trust institutions or technology.

To be clear, this isn’t about passing off responsibility. Facebook is not waiting for regulation; we’re continuing to make progress on these issues ourselves.

But I believe clearer rules would be better for everyone. The internet is a powerful force for social and economic empowerment. Regulation that protects people and supports innovation can ensure it stays that way.

Monika Bickert, Facebook’s vice president of content policy, said that we can do regulation the right way, or we can do it the wrong way:

If designed well, new frameworks for regulating harmful content can contribute to the internet’s continued success by articulating clear ways for government, companies, and civil society to share responsibilities and work together. Designed poorly, these efforts risk unintended consequences that might make people less safe online, stifle expression and slow innovation.

