
Firefox 88 patches bugs and kills off a sneaky JavaScript tracking trick

Over the past two months or so, Mozilla’s Firefox browser has had a lot less media attention than Google’s Chrome and Chromium projects…

…but Mozilla probably isn’t complaining this time, given that the last three mainstream releases of Chrome have included security patches for zero-day security holes.

A zero-day is where the crooks find an exploitable security hole before the good guys do, and start abusing that bug to do bad stuff before a patch exists.

The name reflects the annoying fact that there were zero days that you could possibly have been ahead of the crooks, even if you are the sort of accept-no-delays user who always patches on the very same day that software updates first come out.

To be fair to the Chromium team, the most recent zero-day hole, patched in version 90 of the Chrome and Chromium projects, is best described as half-a-hole. You have to go out of your way to run the browser with its protective sandbox turned off, something that you will probably not do by choice, and are unlikely to do by mistake.

What’s in a name?

Happily, this month’s Firefox update (actually, Mozilla’s updates come out every four weeks, always on a Tuesday, rather than once a calendar month) has attracted attention more for a new privacy feature it has included than for the security holes it has removed.

The “problem child” that Firefox just addressed is a lesser-known JavaScript variable called window.name.

When a browser page opens a new window or tab, it can give that new page a name (a tag or a moniker, if you like), by which to refer to the new tab later on as a target for opening additional content.

Here’s an example of a legitimate use for the window.name property.

In our first attempt, we’ve referred to the target tab NEWTAB in the link on our page, and we’ve created a new tab using window.open(), but we haven’t set a window.name value for the new tab:
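The original listing isn’t reproduced here, but a minimal page along these lines shows the idea (the URLs and the NEWTAB moniker come from the text; the rest is our own reconstruction):

```html
<!-- First attempt: the link targets "NEWTAB", but the window.open()
     call never assigns that name to the tab it creates. -->
<a href="https://example.com" target="NEWTAB">Open example.com</a>
<script>
  /* Opens Naked Security in a second tab, without naming it. */
  window.open('https://nakedsecurity.sophos.com');
</script>
```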

We get a page with the Naked Security site in a second tab, together with a link in the first tab to open a third site, namely example.com:

However, because we haven’t set a window name for either of the two tabs already open, our link just opens in a third tab of its own, sandwiched between the previous two:

Let’s make a small change and try again.

This time, we’ve included a line of JavaScript to set the name property of the Naked Security tab when we open it, so we can explicitly reference that second tab in the future, using the moniker NEWTAB:
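Again, the listing isn’t reproduced here, but the change amounts to one extra line of JavaScript (our reconstruction, matching the moniker used in the text):

```html
<!-- Second attempt: the new tab is explicitly named "NEWTAB",
     so the link below can re-use it as a target. -->
<a href="https://example.com" target="NEWTAB">Open example.com</a>
<script>
  var tab = window.open('https://nakedsecurity.sophos.com');
  tab.name = 'NEWTAB';  /* set window.name on the new tab */
</script>
```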

The main tab looks similar to last time:

Specifying an existing tab name in the target of the link means that we can re-use the second tab for our new content, so that the example.com page opens up in the same NEWTAB tab, replacing the Naked Security content and avoiding the creation of a third tab.

We end up with just two tabs, not three like last time:

This sort of behaviour can be useful in content management systems where you want a single “preview” page that keeps getting updated as you edit your content, rather than leaving you with a new open tab for every page you preview.

Window names considered harmful

Unfortunately, the window.name property doesn’t follow the so-called Same-Origin Policy (SOP), where only cookies and JavaScript variables set by website X can be read back in by website X.

The SOP is a fundamental part of web security, because it stops site Y, which might be an unscrupulous marketing page or a phishing site run by crooks, from getting at personal data stored by site X.

After all, data commonly stored in site-specific JavaScript variables or cookies can include details such as your username, your login secret (effectively the password for the current session), your profile and preferences, the current contents of your shopping cart, and much more.

So the SOP exists not only to stop personal web data from leaking inadvertently between different websites, but also to stop companies from sneakily tracking you by sharing data via innocent-looking JavaScript variables that you wouldn’t otherwise worry about.
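For everything except window.name, that separation really does hold. Here’s a quick sketch you can try in a browser console (the domain names are hypothetical; we picked them for illustration):

```javascript
// On https://site-x.example, store something origin-specific:
localStorage.setItem('session', 'secret123');

// On https://site-y.example, the same lookup comes back empty -
// the Same-Origin Policy keeps each origin's storage separate:
localStorage.getItem('session');   // -> null
```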

And the window.name value was, at least until Firefox 88, one of those innocent-looking but open-to-abuse JavaScript settings.

The window.name property could surreptitiously be misused to bypass the SOP because it didn’t get cleared between different sites.

We can see that behaviour for ourselves, using the handy developer tools in the current [2021-04-20T13:00Z] version of Edge (based on Chromium).

Here, we’ve opened the special web page about:blank, which is simply an empty HTML page with a domain name that won’t match any other website, and we’ve used the JavaScript console to set the window.name variable to the value pass-it-on-to-the-next-site:

Now, we’ve opened up a page from a completely different domain, namely example.com, yet we can see that the old value of window.name has been carried through to the new page, even though you might expect the Same-Origin Policy to prevent that from happening:
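The screenshots aren’t reproduced here, but the console session looks something like this (our own transcript sketch of the old, pre-fix behaviour, using the value from the text):

```javascript
// Developer-console transcript. First, on about:blank:
window.name = 'pass-it-on-to-the-next-site';

// Then browse to https://example.com and ask again:
window.name;   // -> "pass-it-on-to-the-next-site" (the value survived!)
```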

In other words, the unassuming window.name variable can be used as a sneaky way of passing messages between different domains, bypassing the SOP, and therefore sharing tracking codes from site to site when you would not expect it.

Exploited for years

According to Mozilla, web tracking companies have been exploiting this loophole for years:

Since the late 1990s, web browsers have made the window.name property available to web pages as a place to store data. Unfortunately, data stored in window.name has been allowed by standard browser rules to leak between websites, enabling trackers to identify users or snoop on their browsing history. […]

Tracking companies have been abusing this property to leak information, and have effectively turned it into a communication channel for transporting data between websites. Worse, malicious sites have been able to observe the content of window.name to gather private user data that was inadvertently leaked by another website.

From Firefox 88 onwards, things have changed:

To close this leak, Firefox now confines the window.name property to the website that created it.

Here’s the difference – we repeated the above activity in the developer console, this time using the new Firefox 88.

Like before, we set the window.name property when our domain name was about:blank:

But when we switched to example.com, the value from before had been wiped out, and the window.name variable came back as an empty string:

In even better news, Mozilla reports that the other mainstream browser platforms are making the same sort of change, thus removing this tracking trick across the board:

Firefox isn’t alone in making this change: web developers relying on window.name should note that Safari is also clearing the window.name property, and Chromium-based browsers are planning to do so. Going forward, developers should expect clearing to be the new standard way that browsers handle window.name.

It’s a small change, to be sure, but it’s nice to see the browser makers agreeing to chip away in unison at “features” of this sort that are easily abused by websites that don’t care about privacy.

Lots of bug fixes

As you’d expect from a four-weekly Firefox release, there are also numerous security fixes in the 88.0 version.

None of them are rated critical, presumably because no one has yet figured out how to turn the more dangerous looking bugs into actual, working exploits.

Nevertheless, several of the bugs deal with potentially dangerous and exploitable mismanagement of memory, including a buffer overflow (where you write to the wrong part of memory) and two use-after-free bugs (where you write to memory that has already been turned over for use elsewhere).

Following Mozilla’s usual terminology, the Firefox developers have documented all these bugs with an admission that “we presume that with enough effort some of these could have been exploited to run arbitrary code.”

Rather than wait until someone – hopefully a cybersecurity researcher willing to disclose new exploits responsibly, rather than simply to sell them on the open market – proved that the bugs really were dangerous, the team patched them anyway.

Other bugs patched included so-called “presentation” bugs, where a user might think they were on site X when they weren’t.

As you can imagine, phishers love this sort of bug because it helps them to pass off fake content as real, even to users who are keeping an eye out to ensure they are on the website they expect.

What to do?

If you’re on Windows or Mac, go to Help > About Firefox or to Firefox > About and check if you are up to date.

If you aren’t, doing the version check will offer to do the update for you right away.

If you’re on Linux, your Firefox version may be managed as part of your distro, so Help > About may simply show you the version you are on, without doing an explicit update check. (As at 2021-04-20T13:00Z, you are looking for Firefox 88.0.)

Check back with your distro’s package manager to get the latest version.

On iOS and Android, you can update from the App Store or Google Play respectively, but note that on an iPhone, Firefox uses Apple’s browser core (which won’t yet have the window.name fix), and on Android, the latest version number may vary from device to device.

Naked Security Live – To hack or not to hack?

We investigate the controversy that was stirred up recently when the FBI in the US used malware to fight malware.

The Feds accessed remote access webshells left behind after the recent Hafnium attacks to remove the webshells themselves, after a court order said they could.

As helpful and as community-minded as this sounds, not everyone agreed that it was a good idea:


Watch directly on YouTube if the video won’t play here.
Click the on-screen Settings cog to speed up playback or show subtitles.

Why not join us live next time?

Don’t forget that these talks are streamed weekly on our Facebook page, where you can catch us live every Friday.

We’re normally on air some time between 18:00 and 19:00 in the UK (late morning/early afternoon in North America).

Just keep an eye on the @NakedSecurity Twitter feed or check our Facebook page on Fridays to find out the time we’ll be live.

Serious Security: Rowhammer is back, but now it’s called SMASH

Remember Rowhammer?

Well, it’s back, and this time it’s called SMASH.

Rowhammering is a reliability problem that besets many computer memory chips, notably including the sort of RAM in your laptop or mobile phone.

Simply put, rowhammering means that if you read the same memory addresses over and over and over again, millions of times…

…the repeated nanoscopic electrical activity in the part of the chip where your data is actually stored may cause enough interference to affect the values in neighbouring memory cells.

Typically, each data bit in RAM is stored physically in a tiny silicon capacitor (an electronic component that can hold electrical charge), where a charged-up capacitor denotes a binary 1, and a capacitor without any charge signals 0.

The faster and more aggressively you charge and discharge the capacitors in one part of a RAM chip, the more likely it is that electrons will leak across into, or leak away from, next-door cells.

This can cause unexpected “bitflips”, where memory cells that haven’t been accessed nevertheless leak out enough electrons to flip from 1 to 0, or pick up enough stray charge to flip from 0 to 1.

Bluntly put: using a rowhammer attack, you can make modifications, albeit haphazardly, to memory that has nothing to do with you, just by reading repetitively from memory that’s allocated to your program.

Illegal writes simply by performing legal reads!

Why the “row” in rowhammer?

You’d need an enormous number of internal control connections on the chip to construct RAM where you could read exactly one bit (or even one byte) at a time.

So, electrically at least, that’s not how most RAM chips work.

Instead, the cells storing the individual bits are arranged in a series of rows that can only be read out one full row at a time, like a string of fairy lights that are controlled by a single switch:

Turning on the transistors in row 3 causes all the capacitors in that row to discharge.
This means their values can be read out on the column lines.

To read cell C3 above, for example, you would tell the row-selection chip to apply power along row wire 3, which would discharge the capacitors A3, B3, C3 and D3 down column wires A, B, C and D, allowing their values to be determined.

Bits without any charge will read out as 0; bits that were storing a charge as 1.

You’ll therefore get the value of all four bits in the row, even if you only wanted to know one of them.

(The above diagram is enormously simplified: in real life, contemporary laptop RAM chips typically have rows from 16kbits to 256kbits long.)

Incidentally, reading a row wipes out the value of all its bits by discharging it, so immediately after any read, the row is refreshed by saving the extracted data back into it, so it’s ready to be accessed again.

In other words, reading even a single byte of your program’s memory causes a whole row of RAM to be discharged and then recharged by writing back the same data to it.

And it’s these repeated row-by-row rewrites that may occasionally trigger bitflips in adjacent rows on the physical chip.
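The read-then-refresh cycle described above can be modelled in a few lines of JavaScript (a toy sketch of the 4×4 grid in the diagram, ignoring all real-world electrical and timing detail):

```javascript
// Toy model of the 4x4 cell grid: each row is an array of bits, and a read
// empties the whole row's capacitors, so the controller must write the data
// straight back before the row can be used again.
class ToyDram {
  constructor(rows) {
    this.rows = rows.map(row => [...row]);
  }
  // What the electronics actually do: discharge every capacitor in the row.
  readRowDestructive(n) {
    const bits = [...this.rows[n]];
    this.rows[n].fill(0); // the row has now lost its charge
    return bits;
  }
  // What the memory controller does: read, then immediately refresh.
  readRow(n) {
    const bits = this.readRowDestructive(n);
    this.rows[n] = [...bits]; // write-back recharges the row
    return bits;
  }
}

const ram = new ToyDram([
  [0, 0, 0, 0],
  [0, 1, 1, 0],
  [0, 0, 0, 0],
  [1, 0, 1, 1], // row 3: A3=1, B3=0, C3=1, D3=1
]);

// Even though we only want cell C3, the whole of row 3 comes out...
const row3 = ram.readRow(3);
console.log(row3[2]); // C3 -> 1

// ...and the data survives, because readRow() wrote it straight back.
console.log(ram.rows[3]); // -> [ 1, 0, 1, 1 ]
```

The destructive read is the key detail: every read of even a single cell forces a full-row discharge and recharge, and it’s exactly those recharges that hammer the neighbouring rows.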

What about caching and memory refresh?

In day-to-day use of your computer, several factors combine to make bitflips caused by rowhammering an unusual and unlikely problem.

The first mitigating factor is that modern CPUs automatically keep local copies of the contents of memory addresses that you access frequently.

Reading data out of special internal storage called a cache, located physically on the CPU itself, is much faster than reading from RAM.

In other words, reading the same memory address over and over doesn’t automatically cause the RAM circuitry to be activated over and over again, because the cached values are used for the second and subsequent accesses instead.

The second mitigating factor is that almost all computer RAM today is what’s known as DRAM, where the D stands for dynamic.

This means that the capacitors used as memory cells need recharging regularly whether they’ve been accessed or not, otherwise their charge leaks away, causing them to “go flat” and lose their value.

This cycle, called DRAM refresh, happens about 16 times a second, and involves redundantly reading every memory row, thus immediately and automatically rewriting its data to refresh its charge.

Freshly re-written memory capacitors are much less likely to bitflip, because each bit has a charge that is either close enough to full voltage or close enough to zero that its charge level can unambiguously be detected as 0 or 1.

So, the CPU cache reduces the number of times that any row is typically impinged upon by its neighbouring rows between refreshes, reducing the likelihood of bitflips caused by overzealous memory reads between each DRAM refresh.

In other words, rowhammering is not much of a problem in an ideal world.

Could this ever be exploited?

Of course, we don’t live in an ideal world, and if you provide cybercrooks with any trick that might deliberately cause your computer hardware to misbehave, you can be sure that they’ll try it out.

Nevertheless, even if attackers deliberately set out to provoke memory access patterns to cause bitflips on purpose, you might imagine that actively exploiting those bitflips to run malware or steal data would be enormously complicated.

The attackers would need not only to bypass the CPU cache in order to force fast and repetitive access to the RAM chip itself, but also to trick the operating system into allocating memory in a predictable way to ensure that the RAM assigned to their code landed up in a suitable place on the physical chip.

Additionally, modern DRAM chips include built-in hardware known as TRR, short for target row refresh, which automatically refreshes DRAM rows that are next to rows that have been accessed repeatedly.

At a small cost in inefficiency (a few extra row refreshes that aren’t strictly needed), TRR helps to prevent at-risk memory capacitors from reaching an ambiguous charge level where their data can’t be trusted.

What about browser attacks?

As for exploiting the rowhammer issue in a browser, where you have to rely on code written in JavaScript and therefore have no direct control over allocating memory at all, you might think that it would be impossible.

Browser code can’t directly control the CPU cache, and isn’t even able to measure elapsed time accurately these days, because all major browsers have now deliberately and synthetically reduced both the precision and the accuracy of the timing functions available to JavaScript programs.

Even the authors of the SMASH paper admit:

[Existing … rowhammer] attacks require frequent cache flushes, large physically contiguous regions, and certain access patterns to bypass in-DRAM TRR, all challenging in JavaScript.

Timing plays a part in rowhammer attacks not only because of the 64-millisecond “DRAM refresh clock” (about 16 times a second) that is always ticking in the background, but also because timing memory accesses lets you differentiate cached memory access from uncached access, which leaks information about what data lives where in RAM, helping you to organise your data layout for the attack.

Never say never

Of course, when it comes to cybersecurity, you should never say never.

If nothing else, confidently saying that a cybersecurity problem “can’t happen” – unless you have a formal mathematical proof – is an invitation both to crooks and to hackers to prove you wrong.

Indeed, having come up last year with an attack that bypassed the protection afforded by TRR, researchers at the Vrije Universiteit (VU) Amsterdam and ETH Zurich have done it again.

Last time, they called their attack TRRespass (like many hackers, they seem to enjoy puns and speaking like pirates).

This time they have dubbed their attack SMASH, short for Synchronized Many-sided Rowhammer Attacks from JavaScript.

(We’d have gone the whole nine yards and called it SMASHAFROJ, but perhaps they thought that would be OTT, even for a BWAIN.)

You can read about SMASH in an overview article on the VU website, or delve into the (note: long and jargon-rich) full academic paper, which will be presented at a Usenix conference later in 2021.

Greatly simplified, when using Firefox 81.0.1 (admittedly now six months old) on a Linux 4.15 kernel (no longer officially supported by the Linux team), they were able to:

  • Allocate suitably-aligned blocks of RAM by using specific JavaScript array functions inside the browser, thus allocating RAM in such a way that they could reliably predict where bitflips were likely to happen.
  • Bypass the mitigating effects of CPU caching by using memory access sequences that forced the CPU to keep running out of cache space, thus forcing it to reload data from RAM and thereby provoking the rowhammering effect that caching usually prevents.
  • Bypass the TRR hardware in the RAM chip by using techniques from their TRRespass research to access rows of RAM in a special pattern, thus causing the TRR hardware to lose track of which memory rows needed refreshing.
  • Modify write-protected JavaScript data via bitflipping in such a way as to provoke exploitable changes inside the browser itself, thus avoiding the need to escape from the JavaScript sandbox to identify and attack other processes in the system.

What to do?

As we said when we wrote about rowhammering in 2020:

Fortunately, rowhammering doesn’t seem to have become a practical problem in real-life attacks, even though it’s widely known and has been extensively researched.

The SMASH research is a masterpiece of hard-core hacking, but each attack would probably need to be tailored for the type of CPU you have, the vendor of the RAM chips you’re using, the browser and operating system you’re using, and then might not work reliably or even at all…

…so we’re not surprised that cybercriminals have stuck to attack vectors that they know can be exploited reliably.

However, the SMASH researchers did find a useful mitigation for their new attack.

In their research, they relied on a Linux computer configured to use what are known as Transparent Huge Pages (THP).

Linux THP means that when a program asks for memory, the operating system can choose to allocate it either in chunks of 4KB each (“small” memory pages) or of 2MB (“huge” pages).

The SMASH attack relies on a 2MB JavaScript buffer allocated all in one “huge” memory page, so that the attackers can be sure in advance that it will be assigned to one contiguous block of memory cells on the RAM chip itself, and will therefore span multiple adjacent DRAM rows.

So, if you turn off THP on your Linux laptop, you might notice or be able to measure a tiny loss in performance (we didn’t and couldn’t)…

…but you will neutralise the currently documented SMASH attacks altogether.

To turn off THP, run this command as root:

# echo never > /sys/kernel/mm/transparent_hugepage/enabled

To see the current setting of THP, print out the abovementioned THP “file” from /sys:

$ cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]

The square brackets show you which of the three valid options is currently selected. (Most Linux distros are set to [always] or [madvise] by default.)

Always means that the feature is enabled for every app; madvise means it’s off by default but apps can opt in; and never means that all kernel memory allocation will be done in 4KB “small” pages.

Don’t forget, however, that turning off THP isn’t a generic and future-proof defence against rowhammering attacks, merely a defence that seems to protect your browser against the current state of the art.

Small pages are efficient for programs that do lots of small allocations, but have a much higher memory management overhead when a program needs a big chunk of memory for a single purpose, because each 4KB block in the chunk has to be accounted for separately. Huge pages are efficient for large allocations, but waste space whenever a block less than 2MB is needed. Linux THP therefore aims to provide a “best of both worlds” approach to memory management.


S3 Ep28.5: Hacking back – is attack an acceptable form of defence? [Podcast]

Sophos cybersecurity expert Chester Wisniewski provides excellent, topical and timely commentary on the FBI’s recent use of a malware-like method to forcibly clean up hundreds of servers still infected in the Hafnium aftermath.

With Paul Ducklin and Chester Wisniewski

Intro and outro music by Edith Mudge.

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point in the podcast. You can also listen directly on Soundcloud.


WHERE TO FIND THE PODCAST ONLINE

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher, Overcast and anywhere that good podcasts are found.

Or just drop the URL of our RSS feed into your favourite podcatcher software.

If you have any questions that you’d like us to answer on the podcast, you can contact us at tips@sophos.com, or simply leave us a comment below.

S3 Ep28: Pwn2Own hacks, dark web hitmen and COVID-19 privacy [Podcast]

We look at the big-money hacks from the 2021 Pwn2Own competition. We investigate the difficulties of hiring an assassin via the dark web. We wrestle with some of the privacy issues relating to COVID-19 infection tracking apps.

With Kimberly Truong, Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point in the podcast. You can also listen directly on Soundcloud.


WHERE TO FIND THE PODCAST ONLINE

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher, Overcast and anywhere that good podcasts are found.

Or just drop the URL of our RSS feed into your favourite podcatcher software.

If you have any questions that you’d like us to answer on the podcast, you can contact us at tips@sophos.com, or simply leave us a comment below.
