Report calls for web pre-screening to end UK’s child abuse ‘explosion’

A UK inquiry into child sexual abuse facilitated by the internet has recommended that the government require apps to pre-screen images before publishing them, in order to tackle “an explosion” in images of child sex abuse.

The No. 1 recommendation from the Independent Inquiry into Child Sexual Abuse (IICSA) report, which was published on Thursday:

The government should require industry to pre-screen material before it is uploaded to the internet to prevent access to known indecent images of children.

While most apps and platforms require users (of non-kid-specific services) to be at least 13, their lackluster age verification is also undermining children’s safety online, the inquiry says. Hence, recommendation No. 3:

The government should introduce legislation requiring providers of online services and social media platforms to implement more stringent age verification techniques on all relevant devices.

The report contained grim statistics. The inquiry found that there are multiple millions of indecent images of kids in circulation worldwide, with some of them reaching “unprecedented levels of depravity.”

The imagery isn’t only “depraved”; it’s also easy to get to, the inquiry said, referring to research from the National Crime Agency (NCA) which found that child exploitation images can be found within three clicks when using mainstream search engines. According to the report, the UK is the world’s third-greatest consumer of the live streaming of abuse.

The report describes one such case: that of siblings who were groomed online by a 57-year-old man who posed as a 22-year-old woman. He talked the two into performing sexual acts in front of a webcam and threatened to share graphic images of them online if they didn’t.

How do we stem the tide?

The NCA has previously proposed that internet companies scan images against its hash database before they are uploaded. If content is identified as a known indecent image, it can then be prevented from being uploaded.
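That pre-upload check amounts to a set-membership test against a database of known-bad hashes. The sketch below uses a plain cryptographic hash as a stand-in – real systems such as PhotoDNA rely on proprietary perceptual hashes that survive resizing and re-encoding – and the function names here are purely illustrative:

```python
import hashlib

# In practice this set would be populated from a hash-sharing database
# maintained by bodies like the NCA or NCMEC; it starts empty here.
KNOWN_BAD_HASHES: set = set()

def allow_upload(image_bytes: bytes) -> bool:
    """Return True if the image may be published, or False if it
    matches a known-bad hash and should be blocked before upload.

    Note: a cryptographic hash only catches byte-identical copies;
    perceptual hashing is what makes real deployments robust to
    trivial image edits.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest not in KNOWN_BAD_HASHES
```

The design point is that the match happens before publication, not after – which is exactly the step that current post-publication scanning pipelines skip.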

Apple, Facebook, Google, Dropbox and Microsoft, among others, automatically scan images (and sometimes video) uploaded to their servers. The NCA says that, as it understands it, they only screen content after it’s been published, thereby enabling abusive images to proliferate.

The thinking: why not stop the images dead in their tracks before the offense occurs?

One reason: it can’t be done without disabling the end-to-end encryption in WhatsApp, for example, or other privacy-minded services and apps, according to Matthew Green, cryptographer and professor at Johns Hopkins University. Green explains that the most famous scanning technology is based on PhotoDNA: an algorithm developed by Microsoft Research and Dr. Hany Farid.

PhotoDNA and Google’s machine-learning tool, which it freely released to address the problem, have a commonality, Green says:

They only work if providers […] have access to the plaintext of the images for scanning, typically at the platform’s servers. End-to-end encrypted [E2E] messaging throws a monkey wrench into these systems. If the provider can’t read the image file, then none of these systems will work.

Green says that some experts have proposed a way around the problem: providers can push the image scanning from the servers out to the client devices – i.e., your phone, which already has the cleartext data.

The client device can then perform the scan, and report only images that get flagged as CSAI [child sexual abuse imagery]. This approach removes the need for servers to see most of your data, at the cost of enlisting every client device into a distributed surveillance network.
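The client-side variant Green describes moves that same membership test onto the device, so the provider only ever learns about matches. A minimal sketch of the flow (names are illustrative; a deployed system would use a perceptual hash and a protected, non-readable database rather than a plain set):

```python
import hashlib
from typing import Iterable, List, Set

def scan_on_device(images: Iterable[bytes],
                   flagged: Set[str]) -> List[str]:
    """Hash each image locally and return only the digests that match
    the flagged database - the only thing the client would report
    upstream, leaving non-matching content end-to-end encrypted."""
    reports = []
    for data in images:
        digest = hashlib.sha256(data).hexdigest()
        if digest in flagged:
            reports.append(digest)
    return reports
```

This is what makes the approach a “distributed surveillance network”: every device runs the scan, and the server sees nothing unless something matches.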

The problem with that approach? The details of the scanning algorithms are private. Green suspects this could be because those algorithms are “very fragile” and could be used to bypass scanning if they fell into the wrong hands:

Presumably, the concern is that criminals who gain free access to these algorithms and databases might be able to subtly modify their CSAI content so that it looks the same to humans but no longer triggers detection algorithms. Alternatively, some criminals might just use this access to avoid transmitting flagged content altogether.

Cryptographers are working on this problem, but “the devil is in the [performance] details,” Green says.

Does that mean the fight against CSAI can’t be won without forfeiting E2E encryption? As it stands, the inquiry is recommending fast action, suggesting that some of its proposed steps be taken before the end of September – likely not enough time for cryptographers to figure out how to effectively pre-screen imagery before it’s published; that is, before it slips behind the privacy shroud of encryption.

The inquiry’s report is only the latest of a string of scathing assessments of social media’s role in the spread of abuse imagery. According to the report, social media companies appear motivated to “avoid reputational damage” rather than prioritizing protection of victims.

Prof Alexis Jay, the chair of the inquiry:

The serious threat of child sexual abuse facilitated by the internet is an urgent problem which cannot be overstated. Despite industry advances in technology to detect and combat online facilitated abuse, the risk of immeasurable harm to children and their families shows no sign of diminishing.

Internet companies, law enforcement and government [should] implement vital measures to prioritise the protection of children and prevent abuse facilitated online.

The UK and the US are on parallel paths to battle internet-facilitated child sexual abuse, though, at least in the US, privacy advocates view recent political moves as ill-disguised attacks on encryption and privacy. The EARN IT Act is a case in point: now making its way through Congress, the bill was introduced by legislators who’ve used the specter of online child exploitation to argue for the weakening of encryption.

One of the problems of the EARN IT bill: the proposed legislation “offers no meaningful solutions” to the problem of child exploitation, as the Electronic Frontier Foundation (EFF) says:

It doesn’t help organizations that support victims. It doesn’t equip law enforcement agencies with resources to investigate claims of child exploitation or training in how to use online platforms to catch perpetrators. Rather, the bill’s authors have shrewdly used defending children as the pretense for an attack on our free speech and security online.

You can’t directly compare British and US legal rights. But at least in the US, legal analysts say that the EARN IT Act – which would compel internet companies to follow “best practices” lest they be stripped of Section 230 protections against being sued for publishing illegal content – would violate the First and Fourth Amendments’ protections of free speech and against unreasonable searches, respectively.

Private companies like Facebook can voluntarily scan for violative content because they’re not state actors. If they’re forced to screen, they become state actors, and then they (generally; case law differs) legally need to secure warrants to search digital evidence.

Thus, as argued by Riana Pfefferkorn, Associate Director of Surveillance and Cybersecurity at The Center for Internet and Society at Stanford Law School, forcing scanning could actually lead, ironically, to court suppression of evidence of the child sexual exploitation crimes targeted by the bill.

How would it work in the UK? I’m not a lawyer, but if you’re familiar with British law, please do add your thoughts to the comments section.

Naked Security’s Mark Stockley saw another wrinkle in the inquiry’s recommendations about pre-screening content: it reminded him of Article 13 of the European Copyright Directive, also known as the Meme Killer. It’s yet another legal directive that critics say takes an “unprecedented step towards the transformation of the internet, from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users.”

The directive will force for-profit platforms like YouTube, Tumblr, and Twitter to proactively scan user-uploaded content for material that infringes copyright – scanning that has proved error-prone and prohibitively expensive for smaller platforms. It makes no exceptions, even for services run by individuals, small companies or non-profits.

EU member states have until 7 June 2021 to implement the new reforms, but the UK will have left the EU by then. As the BBC reported in January, Universities and Science Minister Chris Skidmore has said that the UK won’t implement the EU Copyright Directive after the country leaves the EU.

How about the inquiry’s call for web pre-screening? Will it make it into law?

If it does, we’ll let you know.


Latest Naked Security podcast

Open source bugs have soared in the past year

Open source bugs have skyrocketed in the last year, according to a report from open source licence management and security software vendor WhiteSource.

The number of open source bugs held steady at just over 4,000 in 2017 and 2018, the report said – more than double the pre-2017 figures, which had never broken the 2,000 mark.

Then, 2019’s numbers soared again, topping 6,000 for the first time, said WhiteSource, representing a rise of almost 50%.

By far the most common weakness enumeration (CWE – a broad classifier of different bug types) in the open source world is cross-site scripting (XSS). This kind of flaw accounted for almost one in four bugs and was the top for all languages except C. This was followed by improper input validation, buffer errors, out-of-bound reads, and information exposure. Use after free, another memory flaw, came in last with well under 5% of errors.
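XSS, the top CWE in the report, typically arises when user-supplied text is interpolated into HTML without escaping, so markup in the input executes in other users’ browsers. A minimal illustration of the flaw and the standard fix (the function names here are ours, not from any particular project):

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # Vulnerable: user input is interpolated straight into the page,
    # so an input like "<script>...</script>" would run in the
    # viewer's browser.
    return f"<p>{comment}</p>"

def render_comment_safe(comment: str) -> str:
    # html.escape converts <, >, &, and quotes to HTML entities,
    # so the input is displayed as text rather than parsed as markup.
    return f"<p>{html.escape(comment)}</p>"
```

The related CWEs in the list – improper input validation, buffer errors, out-of-bounds reads – follow the same broad pattern of trusting input that should have been checked or constrained first.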

WhiteSource had some harsh words for the National Vulnerability Database (NVD), which it said contains only 84% of the open source vulnerabilities that exist. It added that many of these vulnerabilities are reported elsewhere first and only make it into the NVD much later.

It also criticised the Common Vulnerability Scoring System (CVSS), which launched in 2005 and was recently upgraded to version 3.1. The system has changed the way it scores bugs over time, WhiteSource said, tending towards higher scores. It complained:

[…] how can we expect teams to prioritize vulnerabilities efficiently when over 55% are high-severity or critical?

FIRST, which organises CVSS, didn’t reply to our request for comment but we will update this article if they do.

Expect to see the number of bugs rise over time, predicted the report. It pointed to GitHub’s recently announced Security Lab as a key development in open source bug reporting. GitHub, which hosts many open source products, has an embedded disclosure process that will encourage project maintainers to report vulnerabilities, it said.

The 2017 bug spike isn’t specific to open source, which happens to be WhiteSource’s focus here. We saw a corresponding spike in general bugs as reported in the CVE database in 2017. However, the number of overall bugs reported as CVEs actually dipped below 2017 levels last year.



EARN IT Act threatens end-to-end encryption

While we’re all distracted by stockpiling latex gloves and toilet paper, there’s a bill tiptoeing through the US Congress that could inflict on encryption the backdoor that law enforcement agencies have been pushing for years.

At least, that’s the interpretation of digital rights advocates who say that the proposed EARN IT Act could harm free speech and data security.

Sophos is in that camp. For years, Naked Security and Sophos have said #nobackdoors, agreeing with the Information Technology Industry Council that “Weakening security with the aim of advancing security simply does not make sense.”

The first public hearing on the proposed legislation took place on Wednesday. You can view the 2+ hours of testimony here.

Called the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT Act), the bill would require tech companies to meet safety requirements for children online before obtaining immunity from lawsuits. You can read the discussion draft here.

To kill that immunity, the bill would strip Section 230 of the Communications Decency Act (CDA) protections from certain apps and companies so that they could be held responsible for user-uploaded content. Section 230, considered the most important law protecting free speech online, states that websites aren’t liable for user-submitted content.

Here’s how the Electronic Frontier Foundation (EFF) frames the importance of Section 230:

Section 230 enforces the common-sense principle that if you say something illegal online, you should be the one held responsible, not the website or platform where you said it (with some important exceptions).

EARN IT is a bipartisan effort, having been introduced by Republican Lindsey Graham, Democrat Richard Blumenthal and other legislators who’ve used the specter of online child exploitation to argue for the weakening of encryption. This comes as no surprise: in December 2019, while grilling Facebook and Apple, Graham and other senators threatened to regulate encryption unless the companies give law enforcement access to encrypted user data, pointing to child abuse as one reason.

What Graham threatened at the time:

You’re going to find a way to do this or we’re going to go do it for you. We’re not going to live in a world where a bunch of child abusers have a safe haven to practice their craft. Period. End of discussion.

One of the problems of the EARN IT bill: the proposed legislation “offers no meaningful solutions” to the problem of child exploitation, as the EFF says:

It doesn’t help organizations that support victims. It doesn’t equip law enforcement agencies with resources to investigate claims of child exploitation or training in how to use online platforms to catch perpetrators. Rather, the bill’s authors have shrewdly used defending children as the pretense for an attack on our free speech and security online.

If passed, the legislation would create a “National Commission on Online Child Sexual Exploitation Prevention” tasked with developing “best practices” for owners of internet platforms to “prevent, reduce, and respond” to child exploitation online. But, as the EFF maintains, “best practices” would essentially translate into legal requirements:

If a platform failed to adhere to them, it would lose essential legal protections for free speech.

The “best practices” approach came after pushback over the bill’s predicted effects on privacy and free speech – pushback that caused its authors to roll out the new structure. The best practices would be subject to approval or veto by the Attorney General (currently William Barr, who’s issued a public call for backdoors), the Secretary of Homeland Security (ditto), and the Chair of the Federal Trade Commission (FTC).

How would the bill end end-to-end encryption?

The bill doesn’t explicitly mention encryption. It doesn’t have to: policy experts say that the guidelines set up by the proposed legislation would require companies to provide “lawful access”: a phrase that could well encompass backdoors.

CNET talked to Lindsey Barrett, a staff attorney at Georgetown Law’s Institute for Public Representation Communications and Technology Clinic, who said that the way the bill is structured is a clear indication that it’s meant to target encryption:

When you’re talking about a bill that is structured for the attorney general to give his opinion and have decisive influence over what the best practices are, it does not take a rocket scientist to concur that this is designed to target encryption.

If the bill passes, the choice for tech companies comes down to either weakening their own encryption and endangering the privacy and security of all their users, or foregoing Section 230 protections and potentially facing liability in a wave of lawsuits.

Kate Ruane, a senior legislative counsel for the American Civil Liberties Union, had this to say to CNET:

The removal of Section 230 liability essentially makes the ‘best practices’ a requirement. The cost of doing business without those immunities is too high.

Tellingly, one of the bill’s lead sponsors, Sen. Richard Blumenthal, told the Washington Post that he’s unwilling to include a measure that would stipulate that encryption is off-limits in the proposed commission’s guidelines. This is what he told the newspaper:

I doubt I am the best qualified person to decide what best practices should be. Better-qualified people to make these decisions will be represented on the commission. So, to ban or require one best practice or another [beforehand] I just think leads us down a very perilous road.

The latest in an ongoing string of assaults on Section 230

The EARN IT Act joins an ongoing string of legal assaults against the CDA’s Section 230. Most recently, in January 2019, the US Supreme Court refused to consider a case against defamatory reviews on Yelp.

We’ve also seen actions taken against Section 230-protected sites, such as those dedicated to revenge porn.

In March 2018, we also saw the passage of H.R. 1865, the Fight Online Sex Trafficking Act (FOSTA) bill, which makes online prostitution ads a federal crime and which amended Section 230.

In response to the overwhelming vote to pass the bill – it sailed through on a 97-2 vote, over the protests of free-speech advocates, constitutional law experts and sex trafficking victims – Craigslist shut down its personals section.

But would it stop online child abuse?

Besides containing no tools to actually stop online child abuse, the proposed bill would make it much harder to prosecute pedophiles, according to an analysis from The Center for Internet and Society at Stanford Law School. As Riana Pfefferkorn, Associate Director of Surveillance and Cybersecurity, explains, online providers currently scan proactively, and voluntarily, for child abuse images by comparing their hash values to known abusive content.

Apple does it with iCloud content, Facebook has used hashing to block millions of images of child nudity, and Google has released a free artificial intelligence tool to help stamp out abusive material, among other voluntary efforts by major online platforms.

The key word is “voluntarily,” Pfefferkorn says. Those platforms are all private companies, as opposed to government agencies, which are required by Fourth Amendment protections against unreasonable search to get warrants before they search our digital content, including our email, chat discussions, and cloud storage.

The reason that private companies like Facebook can, and do, do exactly that is that they are not the government, they’re private actors, so the Fourth Amendment doesn’t apply to them.

Turning the private companies that provide those communications into “agents of the state” would, ironically, result in courts’ suppression of evidence of the child sexual exploitation crimes targeted by the bill, she said.

That means the EARN IT Act would backfire for its core purpose, while violating the constitutional rights of online service providers and users alike.

Besides the EFF, the EARN IT bill faces opposition from civil rights and industry groups including the American Civil Liberties Union, Americans for Prosperity, Access Now, Mozilla, the Center for Democracy & Technology, Fight for the Future, the Wikimedia Foundation, the Surveillance Technology Oversight Project, the Consumer Technology Association, the Internet Association, and the Computer & Communications Industry Association.

Earlier this month, Sen. Ron Wyden, who introduced the CDA’s Section 230, said in a statement that the “disastrous” legislation is a “Trojan horse” that will give President Trump and Attorney General Barr “the power to control online speech and require government access to every aspect of Americans’ lives.”

Wyden’s statement didn’t specifically mention encryption, but his office told Ars Technica that when “[the senator] discusses weakening security and requiring government access to every aspect of Americans’ lives, that is referring to encryption.”



Homeland Security sued over secretive use of face recognition

The American Civil Liberties Union (ACLU) is suing the Department of Homeland Security (DHS) over its failure to cough up details about its use of facial recognition at airports.

Along with the New York Civil Liberties Union, the powerful civil rights group filed the suit in New York on Thursday. Besides the DHS, the suit was also filed against US Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), and the Transportation Security Administration (TSA).

The ACLU says that the lawsuit challenges the secrecy that shrouds federal law enforcement’s use of face recognition surveillance technology.

Ashley Gorski, staff attorney with the ACLU’s National Security Project, said in a release that pervasive use of face surveillance “can enable persistent government surveillance on a massive scale.”

The public has a right to know when, where, and how the government is using face recognition, and what safeguards, if any, are in place to protect our rights. This unregulated surveillance technology threatens to fundamentally alter our free society and is in urgent need of democratic oversight.

The ACLU had filed Freedom of Information Act (FOIA) requests to find out how the agencies are using the surveillance technologies at airports – requests that the agencies ignored.

In its suit, the ACLU demands that the agencies turn over records concerning:

  • Plans for further implementation of face surveillance at airports;
  • Government contracts with airlines, airports, and other entities pertaining to the use of face recognition at the airport and other ports of entry;
  • Policies and procedures concerning the acquisition, processing, and retention of our biometric information; and
  • Analyses of the effectiveness of facial recognition technology.

As the ACLU’s complaint tells it, in 2017, CBP began a program called the Traveler Verification Service (TVS) that involves photographing travelers during entry or exit from the country.

The program involves the use of facial recognition technology to compare the photographs with faceprints that the government already has – a huge collection of biometrics that just keeps getting bigger. In June 2019, the Government Accountability Office (GAO) said that the FBI’s facial recognition office can now search databases containing more than 641 million photos, including 21 state databases (a number that’s ballooned from the 412 million images the FBI’s Face Services unit had access to at the time of a GAO report from three years prior).

CBP’s piece of that burgeoning pie: as of June 2019, the agency had processed more than 20 million travelers using facial recognition, the ACLU says.

Major airlines and airports have partnered with CBP on TVS. As of August 2019, 26 airlines and airports had committed to employing CBP’s face-matching technology, and several airlines have already incorporated it into boarding procedures for outbound international flights.

It’s being done behind closed doors, the ACLU says. The public knows little about the nature of these partnerships, or about the policies and privacy safeguards governing the processing, retention, and dissemination of data collected or generated through TVS.

It’s certainly not the first time that CBP has kept details about its images to itself. In June 2019, hackers managed to steal photos of travelers and license plates from a CBP database. In violation of CBP policies, the database had been copied by a subcontractor to its own network. Then, the subcontractor’s network had been hacked.

Initial reports indicated that the breach involved images of fewer than 100,000 people in vehicles coming and going through a few specific lanes at a single port of entry into the US over the previous one-and-a-half months.

While the image data wasn’t immediately put up for sale on the Dark Web, the breach showed that this type of data is of interest to hackers… and that government agencies are capable of losing control of it.

Separately, the TSA has outlined a plan to implement face surveillance for both international and domestic travelers, the ACLU’s lawsuit says. The complaint points to a document published by the TSA entitled “TSA Biometrics Roadmap” that describes how the TSA intends to partner with CBP on face recognition for international travelers; apply face recognition to TSA PreCheck travelers; and ultimately expand face recognition to domestic travelers more broadly.

Congress has authorized the DHS to collect biometrics from certain categories of noncitizens at border crossings. It hasn’t expressly given the go-ahead to collect faceprints from citizens, though. Citizens were given the right to opt out of the facial scans after the DHS faced intense backlash over a proposed regulation change that would have allowed the technology to be used on all people coming in or leaving the US.

Despite the backlash, however, the DHS hasn’t given up on those plans, the ACLU says.

Running tally of pushback

Opposition to the government’s pervasive use of the technology continues to strengthen. As of Thursday, Washington’s state Senate and House were still debating a bill to rein in facial recognition.

In October, the state of California outlawed facial recognition in police bodycams. Some of its biggest cities have gone further still in restricting the controversial technology, including San Francisco, Berkeley, and Oakland.

Outside of California, government use of facial recognition has also been banned in three Massachusetts municipalities: Somerville, Northampton and Brookline. New York City tenants also successfully stopped their landlord’s efforts to install facial recognition technology to open the front door to their buildings.

The ACLU’s Gorski said that when it comes to finding out how this technology is being used, it shouldn’t have to come to lawsuits:

That we even need to go to court to pry out this information further demonstrates why we should be wary of weak industry proposals and why lawmakers urgently need to halt law and immigration enforcement use of this technology. There can be no meaningful oversight or accountability with such excessive, undemocratic secrecy.



Confessions app Whisper spills almost a billion records

Researchers who uncovered a data exposure from mobile app Whisper earlier this week have released more details about the incident.

Whisper is an app from MediaLab, a mobile app company that owns a host of other apps including the popular messaging service Kik. It offers a kind of anonymous social network service that allows people to post their innermost fears and desires, supposedly without risk.

Its users post everything from dark family secrets to stories of infidelity. It gathers these up and uses them for articles on its website, including “Naughty Nannies Confess To Sleeping With The Fathers They Work For”, “Alcoholism Runs In My Family”, and “I Married The Wrong Person”.

The problem, according to researcher Dan Ehrlich of cybersecurity consultancy Twelve Security, is that Whisper didn’t steward that data very well. He says that he and his colleague Matthew Porter accessed 900m records in a 5 TB database spanning 75 different servers, logged between the app’s release in 2012 and the present day. The data was stored in plain text on ElasticSearch servers and included 90 metadata points per account.

The Washington Post broke the story about the app on Monday 10 March, having worked with the researchers.

The records didn’t include real names, but did divulge users’ stated age, gender, ethnicity, home town, and nickname, the story said. The exposure also divulged access to groups that included intimate confessions.

Ehrlich has since followed this up with the first two posts of a planned five-part blog series that goes into more depth, dropping more details about the alleged exposure. He said:

… one has the geocoordinates of nearly every place they’ve visited, and the ability to log into their account with their password/credentials. Depending on when the account was created and how much the user engaged with the app, dozens and dozens of fields of metadata can be reviewed.

These amounted to 90 data points including some bizarre ones, according to Ehrlich’s posts, such as predator_probability and banned_from_high_schools. He added:

Sexual fetish groups, suicide groups, and hate group membership of users can all be seen. Whether or not a user is a predator, if they are banned from posting near high schools, and their private messages can all be viewed.

Worst of all, perhaps, is the disclosure of the exact coordinates of a user’s most recent post. This not only affects children posting highly sensitive information from schools, but also service members on military bases and in US embassies around the world, the researchers warned.

A MediaLab spokesperson responded:

[…] no personally identifiable data was exposed as Whisper does not collect any PII such as names, phone numbers or email addresses. The referenced data is all accessible to users from public API’s [sic] exposed within the app. The data is a consumer-facing feature of the application which users can choose to share or not share depending on which features of the application they wish to utilize.

