
Update Firefox again – more RCEs and an Android “takeover” bug too

This weekend, we were urging you to check your Firefox version to make sure you were up to date…

…and now we’re urging you to check again.

The update that came out over the weekend was an emergency patch, issued for a security hole that was found because it was already in use by criminals in real life – what’s known in the trade as a zero day because there were zero days on which you could have patched in advance.

This one is a bit less dramatic, being a scheduled update of the sort you expect to see issued on a regular basis.

Regular readers will know that we used to call these Fortytwosdays, as an homage to HHGttG, because regular updates used to arrive every six weeks, and 6×7 = 42.

We’ll refer to this one as a Fourthytuesday instead, now that Firefox has reduced its update wavelength to four weeks to get important-but-not-zero-day-critical fixes out just that bit more frequently.

You should be checking that you have 75.0, or 68.7.0esr if you or your organisation uses the Extended Support Release.

Those versions are bumped up from the 74.0.1 and 68.6.1esr releases that arrived urgently over the weekend.

Screenshots of how to verify your version can be found in our weekend article about the zero-day patch. (Hamburger > Help > About Firefox.)

It’s handy to know how to do this verification at will, because merely checking that you’re up to date will give you a one-click option to get any patch that you might have missed out on.

Also, if your automatic update hasn’t happened yet, a manual check will let you “jump the queue” and get the update a bit sooner.

Perhaps the most interesting item on the list in this update is the bug denoted CVE-2020-6828, which is specific to Firefox for Android:

CVE-2020-6828: Preference overwrite via crafted Intent from malicious Android application

A malicious Android application could craft an Intent that would have been processed by Firefox for Android and potentially result in a file overwrite in the user's profile directory. One exploitation vector for this would be to supply a user.js file providing arbitrary malicious preference values. Control of arbitrary preferences can lead to sufficient compromise such that it is generally equivalent to arbitrary code execution.

Even though an exploit using this vulnerability wouldn’t strictly be a Remote Code Execution (RCE) attack, where program code is typically stuffed into memory by a crook and then executed right away, it’s a reminder that any bug in which a crook can remotely overwrite configuration files can be just as dangerous.
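For context, user.js is a plain text file in the Firefox profile directory that gets read at startup, with each line setting one preference. The preference names below are genuine Firefox settings, but the file itself is purely a hypothetical illustration of the sort of security-relevant options that an attacker able to plant such a file could quietly flip:

// user.js – read by Firefox at startup; one preference per line.
// Hypothetical illustration only, not taken from any real exploit.
user_pref("security.tls.version.min", 1);                  // permit long-deprecated TLS versions
user_pref("xpinstall.signatures.required", false);         // allow unsigned add-ons to install
user_pref("browser.safebrowsing.malware.enabled", false);  // switch off malware site blocking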

If I can reconfigure one of your apps to operate insecurely and then wait until the app restarts (or the device reboots) to exploit the hole I opened up, I might actually end up in a stronger position than crashing the app and running my malware at once.

An app that is suddenly provoked into misbehaving may draw attention to itself – and exploiting code execution vulnerabilities using what are essentially “controlled demolitions” is prone to failure, or might work reliably only on a few specific operating system builds or types of device.

But an app that starts up normally, just not in the security state you would choose for yourself, can be a gold mine for a crook who has the patience to wait for a restart or a reboot and then sneak in surreptitiously later on.

The good news is that this hole isn’t a zero day, so the crooks don’t seem to know about it yet.

In short: patch now!

Potential RCEs

For the non-Android versions of Firefox, Mozilla identified a number of memory mismanagement bugs that they assume could have been wrangled into exploitable RCE holes, given enough effort.

There are also some subtle bugs that give you some insight into why some security holes that are obvious when you know where to look never show up in testing, such as CVE-2020-6824:

CVE-2020-6824: Generated passwords may be identical on the same site between separate private browsing sessions

Initially, a user opens a Private Browsing Window and generates a password for a site, then closes the Private Browsing Window but leaves Firefox open. Subsequently, if the user had opened a new Private Browsing Window, revisited the same site, and generated a new password - the generated passwords would have been identical, rather than independent.

As you can imagine, this is not the sort of workflow you’d imagine programming into an automated test (well, not until after the bug was found!), and it’s not the sort of thing you’re likely to do in real life very often.

Going to a website to change your password – after a breach notification, for example – is likely enough, but changing it twice in a row to a “random” password without exiting Firefox in between isn’t likely at all.

We’re guessing that this one went unnoticed until someone did exactly that – perhaps even as a harmless mistake – and was surprised to see the website warn them that they’d chosen the same password as the time before, which ought to be as good as impossible with a correctly functioning random number generator.

(This is also a good reminder of why “randomness is hard“, because random numbers considered one-at-a-time don’t tell you anything about how good your randomiser is, and even a good randomiser is no good if you use the same “strong” random number twice in a row.)
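To make the class of bug concrete – this is a minimal Python sketch of the general pattern, not Firefox’s actual password generator – imagine two “sessions” that each rebuild their random generator from the same starting state, so the supposedly random passwords come out identical:

import random
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%"

def generate_password(rng, length=16):
    # Draw each character from the supplied random number generator.
    return "".join(rng.choice(ALPHABET) for _ in range(length))

# Buggy pattern: both "sessions" start from identical generator state,
# so the "random" passwords they produce are identical too.
session1 = random.Random(42)
session2 = random.Random(42)
print(generate_password(session1))
print(generate_password(session2))   # prints exactly the same string

# Safer pattern: draw every character from the operating system's CSPRNG.
print("".join(secrets.choice(ALPHABET) for _ in range(16)))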

What to do?

We said it above, we’ve said it before, and we’ll say it here: patch now!

The crooks don’t seem to have figured out these bugs for themselves yet, so get yourself an extra step ahead of them ASAP.



Microsoft project proposed to aid Linux IoT code integrity

Imagine a computer user from 2010 dreaming of a world in which Microsoft is not only an enthusiastic proponent of open source software but actively contributes to it with its own ideas.

It would have sounded fanciful, and yet a decade on this is exactly the world a growing number of Microsoft’s in-house developers find themselves working towards.

The latest twist in the romance arrived this week when the company published details of Integrity Policy Enforcement (IPE), a Linux Security Module (LSM) designed to check the authenticity of binaries at runtime.

The Linux kernel has long supported LSMs for different specialised purposes, but Microsoft has spotted a gap in the protections these offer in server environments, specifically its own Azure Sphere IoT platform.

Using IPE would allow admins to ensure that only authorised code is permitted to execute, by means of code signing and by checking software against its known properties.

While not aimed at general Linux computing, use cases for IPE would include embedded Internet of Things (IoT) systems and data center firewalls, where the admins have full control over what should be running and where binary code is “immutable”.
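IPE itself enforces its policy inside the kernel and, as noted below, doesn’t rely on filesystem metadata, but the underlying idea – only run code whose measured properties match an allow-list – can be sketched in userspace. The following Python snippet is a conceptual illustration under that assumption, not a description of how IPE actually works:

import hashlib
import subprocess
import sys

# Hypothetical allow-list of SHA-256 digests of binaries we trust.
ALLOWED_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder digest
}

def sha256_of(path):
    # Hash the file in chunks so large binaries needn't fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def run_if_trusted(path, *args):
    # Refuse to execute anything whose digest isn't on the allow-list.
    if sha256_of(path) not in ALLOWED_SHA256:
        sys.exit(f"refusing to run {path}: not on the allow-list")
    subprocess.run([path, *args], check=True)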

Ideally, for the highest level of security:

Platform firmware should verify the kernel and optionally the root filesystem (e.g. via U-Boot verified boot). This allows the entire system to be integrity verified.

Importantly, however, the verification carried out by IPE doesn’t rely on filesystem metadata, which could be unreliable.

Microsoft lists three categories of threat as the motivation for its interest in IPE:

  • Linker hijacking (DLL injection, a way of hijacking the memory space of software to run something else)
  • Binary rewriting (presumably, redirecting or hooking functions within code to do something malicious)
  • Malicious binary execution/loading (replacing a binary with something malicious)

But what might this amount to in less abstract terms?

A generic example is the Cloud Snooper malware identified by Sophos Labs, a rootkit which could be deployed on almost any Linux server, including those hosted in the cloud.

There is some complexity to Cloud Snooper, but the key point is that it hides in plain sight by deploying a kernel-level driver.

Although Cloud Snooper is a general-purpose rootkit, the same principle applies to embedded devices and firewalls. These should not be running rogue code, but knowing they aren’t isn’t as easy as it could be.

Other examples of Linux malware popping up where it shouldn’t include the GoLang cryptomining malware.

However, the biggest hazard is that there is a lot of Linux around that is all too easy to spin up without considering security, especially when it comes to IoT systems. These make inviting targets.

IPE is up for discussion as an RFC (Request for Comments), and Microsoft’s interest in proposing it underlines how much the software world has changed in a decade. Under the direction of CEO Satya Nadella, the initiative also shows how much Microsoft itself has changed.



As if the world couldn’t get any weirder, this AI toilet scans your anus to identify you

Yes, your continuous health monitoring Internet of Things (IoT) wrist wrapper may well track your sleep quality and how many calories you burn, but answer me this: does it stick artificial intelligence (AI) sensors up in your business to capture your urine flow and the Sistine Chapel-esque glory of the unique-as-a-fingerprint biometric that is your anus?

Doubtful. The world has never seen a smart toilet like this, which is described in a new study from Stanford University that was published in Nature Biomedical Engineering on Monday.

Sure, you can get a “smart” toilet that offers ambient colored lighting, wireless Bluetooth music sync capability, heated seat, foot warmer, and automatic lid opening and closing, but regular, not-all-THAT-smart-after-all toilets can’t diagnose disease.

This one can, Stanford scientists claim. It uses an array of sensors to measure your excreta, which means that yes, it can tell when your waterworks are on and will happily extend a dip stick into your babbling brook in order to conduct urinalysis. It will also use AI to scan and analyze images of your stools.

In fact, it will capture both your pee and your stools on video and process them with algorithms that Stanford News says “can distinguish normal ‘urodynamics’ (flow rate, stream time and total volume, among other parameters) and stool consistencies from those that are unhealthy.”

Isn’t Urodynamics the name of a boy band?

Your urine can reveal multiple disorders. The dip sticks can be used to analyze white blood cell count, consistent blood contamination, and certain levels of proteins, among other parameters that can signify a spectrum of diseases, from infection to bladder cancer to kidney failure. The study’s senior author, Dr. Sanjiv “Sam” Gambhir, says that at this stage of development, the toilet can measure 10 different biomarkers.

The research comes out of the lab of Dr. Gambhir – a PhD, Stanford professor and chair of radiology. He told Stanford News that the toilet is a perfect continuous health monitoring device – better than wearables because, unlike your smart watch, you can’t avoid it:

The thing about a smart toilet … is that unlike wearables, you can’t take it off. Everyone uses the bathroom—there’s really no avoiding it—and that enhances its value as a disease-detecting device.

You won’t have to go broke buying this smart toilet whenever it may become commercially available. Gambhir envisions it as part of an average home bathroom, with the sensors being an add-on that’s easily integrated into “any old porcelain bowl.”

It’s sort of like buying a bidet add-on that can be mounted right into your existing toilet. And like a bidet, it has little extensions that carry out different purposes.

Gambhir says that the upcoming number two version of the toilet will integrate molecular stool analysis and refine the technologies that are already working. His team is also working to customize the toilet’s tests so as to fit a user’s individual needs. For example, a diabetic’s smart toilet could monitor glucose in the urine. Another example: those with a family history of bladder or kidney cancer could benefit by having a smart toilet that monitors for blood.

The Stanford researchers tested the toilet with 21 participants over the course of several months. To gauge how well users may accept it, the team also surveyed 300 prospective users. About 37% said they were “somewhat comfortable” with the idea, and 15% said they were “very comfortable” with the idea of “baring it all in the name of precision health.”

We are unique snowflakes in so many ways

You can imagine many reasons why people might feel uncomfortable about test strips being automatically extended and inserted into their flows and images being taken of their nether regions. In fact, the toilet has a built-in identification system that scans your anus: a biometric that turns out to be like fingerprints or iris prints, Gambhir said:

We know it seems weird, but as it turns out, your anal print is unique.

The toilet uses both analprints and fingerprints to identify users, which the scientists say is done purely as a recognition system to match users to their specific data. The scans will not be silkscreened and mounted on a wall, Stanford says: No one, not you or your doctor, will see the images.

Why not just identify users via the fingerprint sensors embedded in its flush handle? Because fingerprints aren’t foolproof, the team realized. One user might use the toilet while someone else flushes, or else the toilet may be self-flushing.

You’re storing WHAT in the cloud???

Gambhir says that this toilet isn’t meant to replace a doctor or even a diagnosis. In fact, in many cases, individual users won’t even get to see the data. If the sensors detect something questionable, such as blood in the urine, an app would alert the user’s healthcare team.

… and the data would be stored with “privacy protections” in what the scientists say is a “secure, cloud-based system,” or what we prefer to call “somebody else’s computer.” Unfortunately, when it comes to cloud storage, you know next to nothing about the quality of that computer, or the ethics of the person operating it.

Let us pray that whoever develops a final version of the smart toilet doesn’t royally screw up the storage part, like so many other leaky-storage (no pun intended) providers have done.

One would assume that anus images wouldn’t lend themselves to identity theft or mass surveillance, nor that the FBI would amass an analprint database as massive as the one it has of facial recognition images.

But then, when it comes to biometrics collection, would it be all that surprising? As we recently learned, work’s being done to identify prisoners by their tattoo images, for example.

Gambhir says that data protection is crucial to the research, both in terms of identification and sample analyses:

We have taken rigorous steps to ensure that all the information is de-identified when it’s sent to the cloud and that the information – when sent to health care providers – is protected under [the Health Insurance Portability and Accountability Act, or HIPAA], which restricts the disclosure of health care records.

We would be remiss were we to not point out what has been demonstrated time and time again: that Big Data can be dissected, compared and contrasted to look for patterns from which to draw inferences about individuals. In other words, it’s not hard to re-identify people from anonymized records, be they records pertaining to location tracking, faceprints or, one imagines, anuses.



Twitter warns users – Firefox might hold on to private messages

A bit of a brouhaha erupted at the end of last week – it wasn’t quite an argument between Twitter and Firefox, but it did get confusing pretty quickly.

The issue had to do with how long your browser might hang on to local copies of private data such as direct messages, even after they’d actually been posted.

Twitter published a blog article tagged “Privacy” that stated:

We recently learned that the way Mozilla Firefox stores cached data may have resulted in non-public information being inadvertently stored in the browser’s cache. This means that if you accessed Twitter from a shared or public computer via Mozilla Firefox and took actions like downloading your Twitter data archive or sending or receiving media via Direct Message, this information may have been stored in the browser’s cache even after you logged out of Twitter.

We’re guessing that this problem was submitted to Twitter as a bug report, presumably by someone who just happened to look through their Firefox cache files and was surprised to see what showed up there.

In computer science jargon, the word cache means rather the opposite of what it means in the military or to pirates.

In the piratical context, a cache might be a secret store of important supplies that the pirates could get at if they were stranded or encountered an emergency, such as gold coins and weapons buried in an unlikely location in case armed attackers took them unawares.

In computing, a cache is a store for items you expect to need again soon, kept somewhere that’s easy and quick to access – precisely so that you don’t need to go and find your hiding place and dig out the data from last time you stashed it.

For example, CPUs (processors) have a small amount of super-high-speed internal data storage to cache the values in memory (RAM) that you’ve used most recently, because CPUs are faster than RAM chips; disk drives have RAM chips to cache recently used disk sectors, because RAM is faster than disk, even if it’s solid state flash storage…

…and browsers have a cache of local disk files to hold onto recently used web content, because reading from disk is generally much faster than going out across the internet all over again.
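The same trade-off is easy to show in code. Here’s a tiny Python sketch – our own illustration of the concept, nothing to do with Firefox’s cache implementation – in which a slow lookup is memoised so that repeat requests are answered from memory instead of being fetched all over again:

import time
from functools import lru_cache

@lru_cache(maxsize=128)
def fetch(url):
    # Pretend this is an expensive network round trip.
    time.sleep(2)
    return f"contents of {url}"

start = time.time()
fetch("https://example.com/")   # slow: goes "out to the network"
print(f"first fetch:  {time.time() - start:.1f}s")

start = time.time()
fetch("https://example.com/")   # fast: answered straight from the cache
print(f"second fetch: {time.time() - start:.1f}s")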

So you would expect your browser’s cache to contain plenty of hints about what you’ve been up to lately, but not everything you’ve seen or sent.

What gets cached?

We started Firefox with a totally empty cache, browsed to twitter.com, and then grabbed a copy of the files Firefox had chosen to keep for later in its cache directory. (We used Linux, where the files can generally be found in the directory /home/[yourname]/.cache/mozilla/firefox/[uniqueID]-default/cache2/entries)

After loading just one page, we already had 42 web items cached, including images, JavaScript files, web certificate checks and Twitter data.

A sample of filenames and types is shown here, where the filenames are computer-generated 20-byte numbers turned into hexadecimal characters:

782EEDFE807FB787884BC52467159DF43C35D8DF: image/jpeg
91CAAE3C18091A616AB712DEDE722A8DFB081C97: image/jpeg
[. . 13 more images . .]
7E5141AFAB454D5000F81591D6DE2FBE2BC278BF: text/javascript
7FF939CD17A49227D6AA500DC3AF0AEC0216A117: text/javascript
[. . 19 more JavaScript files . .]
F7F250DC5A7B70C65829D50FA26D6FF48336584E: text/json
991BE611B1027BD55542203536CCB91E8F5ACF60: binary/certificate-status
BB9B09D4FC25316CB73AC20F2A9622383F9402BA: binary/certificate-status
[. . 2 more certificate checks . .]
E715BD4129016B499FBD4666D453A665EBD3EBBC: binary/twitter-data

The JSON file looks like a list of more than 1700 Twitter ad campaigns, judging by the first few entries:

campaignName:"DisneyBlackWidow_Emoji_YelenaBelova", hashtag:""
campaignName:"100Thieves_2020", hashtag:"100T"
campaignName:"100Thieves_2020", hashtag:"100Thieves", campaignName:"100Thieves_2020", hashtag:"100WIN"
campaignName:"AoyamaOfficial_2020_Spring", hashtag:"17文字のありがとう"
[. . 1727 more items . .]
Images cached by Firefox when visiting main Twitter page

Sometimes, browsers know implicitly not to cache pages – for example, because they are one-offs and therefore won’t be needed again – but the only way that they can know explicitly not to cache content is if the website serving up the data says so.

A server can inform a browser what sort of caching it may do by including the HTTP header Cache-Control in its reply along with the data, for example like this:

$ curl -I https://nakedsecurity.sophos.com
HTTP/2 200
date: Mon, 06 Apr 2020 23:18:21 GMT
content-type: text/html; charset=UTF-8
content-length: 59419
cache-control: max-age=300, must-revalidate
strict-transport-security: max-age=31536000

Here, nakedsecurity.sophos.com is telling the program downloading our main page that for the next five minutes (max-age=300 seconds), it can re-use the content of this reply before checking back to see if it’s changed (must-revalidate).

But if a reply should not be cached, either because to do so would be a waste of time or a needless risk to privacy, the server should state:

cache-control: no-store

What went wrong with Twitter?

Why did Firefox cache data that Twitter surely didn’t want it to, as stated by Twitter in the blog post quoted above?

The answer is a curious one – according to Firefox, Twitter forgot to send the no-store instruction to forbid caching explicitly:

In this case, Twitter did not include a ‘no-store’ directive for direct messages. The content of direct messages is sensitive and so should not have been stored in the browser cache.

Apparently, having set an old-fashioned header to say Pragma: no-cache, which doesn’t quite mean the same thing, Twitter observed that other browsers didn’t save the replies anyway.

From this, Twitter seems to have inferred – understandably if incorrectly – that it had done enough to prevent any web client from holding onto data such as private messages.

This inference was not, however, correct for Firefox (and, we assume, derived products such as Tor Browser).

Inquisitive users might indeed trip over old copies of private messages in the cache that they’d reasonably have assumed wouldn’t be there.

As far as we can tell, the issue has been sorted out amicably, with Twitter now unambiguously telling Firefox to no-store the offending data, and Firefox accordingly not storing it.

What to do?

There’s not a big risk here, unless perhaps you share your computer account and your browser with someone else to manage two different Twitter accounts.

But if you’re sharing a computer account and your browser with someone else, what’s left in your Firefox cache by mistake is probably the least of your worries.

We recommend creating separate accounts with different passwords to keep your digital lives apart – not because you don’t trust the other person but simply to prevent accidents and misunderstandings.

There’s also a tiny risk that if you were infected by malware or hacked after using Twitter, a crook might, just might, be able to recover data from your Firefox cache that Twitter ought to have suppressed from being there.

But any data left in your browser cache increases the risk of malware figuring out what you’ve been up to even after you did it, so regularly clearing the data your browser keeps about you is a great idea.

If you don’t mind logging back into all your websites every time you exit and reload Firefox (it’s a small hassle for a big security reward), we recommend telling Firefox to clean up every time you exit.

Go to Preferences > Privacy & Security > History > Clear history when Firefox closes > [Settings…] to decide what to clear every time:

Autoclear data on close with all options turned on

Lastly, if you’re a web developer, and you are serving up data you know you don’t want the recipient to hang onto, don’t leave them to guess – use the Cache-Control: no-store header and make your requirements crystal clear.
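As a minimal sketch of that advice – assuming a Python web app built with Flask, which is our example stack and nothing to do with Twitter’s – adding the header to a sensitive reply looks like this:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/direct-messages")
def direct_messages():
    # Hypothetical endpoint serving sensitive, per-user data.
    resp = jsonify(messages=["..."])
    # Tell every compliant client not to cache this reply at all.
    resp.headers["Cache-Control"] = "no-store"
    return resp

if __name__ == "__main__":
    app.run()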



Two schoolkids sue Google for collecting biometrics

Two schoolchildren have sued Google, alleging that it’s illegally collecting their voiceprints, faceprints and other personally identifiable information (PII).

The students were identified only as HK and JC in the complaint, which was filed on Thursday in San Jose, CA, in the US District Court of Northern California. The children are suing through their father, Clinton Farwell.

The complaint notes that Google has infiltrated the country’s primary and secondary school systems by distributing its Chromebook laptops, which come pre-installed with its G Suite for Education platform. That suite includes student versions of Gmail, Calendar, Drive, Docs, Sheets, and other Google apps.

In order to use those apps, the kids had to speak into the laptop’s audio recording device so Google could record their voices, and they had to look into the laptop’s camera so Google could scan their faces.

According to the lawsuit, over half of the nation’s school children use Google’s education products, including those in Illinois, most of whom are under the age of 13.

Illinois comes into play because it’s got the strictest biometrics privacy law in the land: the Biometric Information Privacy Act (BIPA). BIPA requires private entities – like Google – to first get our informed consent before collecting our biometrics, including faceprints and voiceprints.

The complaint alleges that Google’s violating both BIPA and the nation’s strictest federal online children’s privacy law, the Children’s Online Privacy Protection Act (COPPA). COPPA requires websites and online services to disclose their data collection, use, and disclosure practices fully and clearly, and to obtain verifiable parental consent before collecting, using, or disclosing data from children younger than 13.

“Incredibly,” the complaint says, Google’s violating both of those privacy protection laws at the same time. The lawsuit says that besides faceprints and voiceprints, Google’s also illegally creating, collecting, storing and using students’ PII, including:

  • their physical locations;
  • the websites they visit;
  • every search term they use in Google’s search engine (and the results they click on);
  • the videos they watch on YouTube;
  • personal contact lists;
  • voice recordings;
  • saved passwords; and
  • other behavioral information.

…all without verifiable parental consent. From the complaint:

Google has complete control over the data collection, use, and retention practices of the ‘G Suite for Education’ service, including the biometric data and other personally identifying information collected through the use of the service, and uses this control not only to secretly and unlawfully monitor and profile children, but to do so without the knowledge or consent of those children’s parents.

The plaintiffs are requesting a jury trial. They want Google to stop collecting the data and to destroy whatever data it has. The suit is also seeking $5,000 per student for each of Google’s alleged “intentional or reckless” violations, and $1,000 for each “negligent” violation.

Not the first time

Even before COVID-19 sent schools reeling into a crash course on remote learning and an embrace of the tools companies offer to make it happen, Google was looking at legal action over the privacy implications of students using its free G Suite for Education-loaded Chromebooks.

In February, New Mexico Attorney General Hector Balderas sued Google over alleged data slurping with the laptops. Like the BIPA lawsuit filed last week, Balderas accused Google of secretly collecting information including students’ geolocation information, internet history, terms that students have searched for on Google, videos they’ve watched on YouTube, personal contact lists, saved passwords, voice recordings, and more, in violation of COPPA.

Google had already been fined over blowing kids’ privacy a few months prior to New Mexico’s suit. In September 2019, the Federal Trade Commission (FTC) fined the company $170 million for illegally sucking up kids’ data so it could target them with ads.

In response to the FTC fine, Google’s YouTube subsidiary decided to sit out the thorny task of verifying age, instead passing the burden on to content creators, leaving them liable for being sued over COPPA violations, even if the creators themselves think that their content is meant for viewers over the age of 13.

According to the New Mexico lawsuit, Google Education is now used by more than 80 million educators and students in the US, including more than 25 million who use Chromebooks in school. To drive up adoption in schools, Google has publicly promised that it takes students’ privacy seriously and that it will never mine student data for its own commercial purposes, the New Mexico lawsuit says.

It’s broken those promises, the lawsuit says, pointing to Google’s response to a Congressional inquiry into the privacy practices associated with Google Education, in which it admitted to using students’ data – extracted and stored in profiles – for “product improvement and product development.”

Add that all up and multiply it by COVID-19

As state after state has issued stay-at-home orders, usage of Google’s tools to help manage classrooms has exploded.

The free service wasn’t particularly popular before the world woke up to the pandemic. According to AppBrain, which tracks app popularity over time, Google’s Classroom app wasn’t even in the top 100 in early March. As of 28 March, it had sailed past 50 million downloads.

In reporting about the surge of Google’s education tools, Android Police expressed gratitude – as many of us have – that there are free platforms, readily available, to keep kids’ education from completely going off the rails.

But it’s worth noting that the spotlight now shining on online collaboration tools is illuminating some warts – for example, the ZoomBombing wart that taught us all what happens when hosts neglect to disallow screen-sharing by default.

What happens, you might ask, if you haven’t already read about that one? Nasty happens. Please do check out our article on how to make Zooming safer, lest you wind up apologizing to your boss, your colleagues, your fellow Zumba dancers or your parents for something both avoidable and regrettable.

We’ll keep you updated on the lawsuits against Google over its education apps as they progress.


