More MOVEit mitigations: new patches published for further protection

Even if you’re not a MOVEit customer, and even if you’d never heard of the MOVEit file sharing software before the end of last month…

…we suspect you’ve heard of it now.

That’s because the MOVEit brand name has been all over the IT and mainstream media for the last week or so, due to an unfortunate security hole dubbed CVE-2023-34362, which turned out to be what’s known in the jargon as a zero-day bug.

A zero-day hole is one that cybercriminals found and figured out before any security updates were available, with the outcome that even the most avid and fast-acting sysadmins in the world had zero days during which they could have patched ahead of the Bad Guys.

Regrettably, in the case of CVE-2023-34362, the crooks who got there first were apparently members of the infamous Clop ransomware crew, a gang of cyberextortionists who variously steal victims’ data or scramble their files, and then menace those victims by demanding protection money in return for suppressing the stolen data, decrypting the ruined files, or both.

Trophy data plundered

As you can imagine, because this security hole existed in the web front-end to the MOVEit software, and because MOVEit is all about uploading, sharing and downloading corporate files with ease, these criminals abused the bug to grab hold of trophy data to give themselves blackmail leverage over their victims.

Even companies that are not themselves MOVEit users have apparently ended up with private employee data exposed by this bug, thanks to outsourced payroll providers that were MOVEit customers, and whose databases of customer staff data seem to have been plundered by the attackers.

(We’ve seen reports of breaches affecting tens or hundreds of thousands of staff at a range of operations in Europe and North America, including organisations in the healthcare, news, and travel sectors.)


SQL INJECTION AND WEBSHELLS EXPLAINED


Patches published quickly

The creators of the MOVEit software, Progress Software Corporation, were quick to publish patches once they knew about the existence of the vulnerability.

The company also helpfully shared an extensive list of so-called IoCs (indicators of compromise), to help customers look for known signs of attack even after they’d patched.

After all, whenever a bug surfaces that a notorious cybercrime crew has already been exploiting for evil purposes, patching alone is never enough.

What if you were one of the unlucky users who had already been breached before you applied the update?

Proactive patches too

Well, here’s a spot of good but urgent news from the no-doubt beleaguered developers at Progress Software: they’ve just published yet more patches for the MOVEit Transfer product.

As far as we know, the vulnerabilities fixed this time aren’t zero-days.

In fact, these bugs are so new that at the time of writing [2023-06-09T21:30:00Z] they still hadn’t received a CVE number.

They’re apparently similar bugs to CVE-2023-34362, but this time found proactively:

[Progress has] partnered with third-party cybersecurity experts to conduct further detailed code reviews as an added layer of protection for our customers. [… We have found] additional vulnerabilities that could potentially be used by a bad actor to stage an exploit. These newly discovered vulnerabilities are distinct from the previously reported vulnerability shared on May 31, 2023.

As Progress notes:

All MOVEit Transfer customers must apply the new patch, released on June 9, 2023.

For official information about these additional fixes, we urge you to visit the Progress Overview document, as well as the company’s specific advice about the new patch.

When good news follows bad

By the way, finding one bug in your code and then very quickly finding a bunch of related bugs isn’t unusual, because flaws are easier to find (and you’re more inclined to want to hunt them down) once you know what to look for.

So, even though this means more work for MOVEit customers (who may feel that they have enough on their plate already), we’ll say again that we consider this good news, because latent bugs that might otherwise have turned into yet more zero-day holes have now been closed off proactively.

By the way, if you’re a programmer and you ever find yourself chasing down a dangerous bug like CVE-2023-34362…

…take a leaf out of Progress Software’s book, and search vigorously for other potentially related bugs at the same time.


THREAT HUNTING FOR SOPHOS CUSTOMERS


MORE ABOUT THE MOVEIT SAGA

Learn more about this issue, including advice for programmers, in the latest Naked Security podcast. (The MOVEit section starts at 2’55” if you want to skip to it.)

Thoughts on scheduled password changes (don’t call them rotations!)

We’re all still using passwords on many, perhaps most, of our accounts, because we’re all still using plenty of online services that don’t offer any other sort of login system.

Just today, for instance, I paid membership fees to a cycling-related group that asked for my postal address so it could send me my membership card, which I thought was a delightfully simple and old-school way of letting me retrieve my membership number in future while out on the road.

In the sort of cold and soggy weather you get for much of the year in England, digging out a mobile phone, waiting for a signal, taking off your gloves (they’re not much fun to put back on when you’re winter-waterlogged), and fiddling around with apps, websites, passwords, 2FA codes and more…

…well, it’s just not as easy as finding a waterproof, crash-proof, no-batteries-required, plastic card with your basic details on it.

But along with my payment confirmation, informing me that my membership card was on its way, was a reminder that if ever I wanted to renew my membership, or to request a replacement waterproof, crash-proof, no-batteries-required, plastic card (sadly, they aren’t loss-proof), I’d need to create an account on the group website, so why not choose a password right now?

Simply put, to avoid the need for a password in the first place, I’d need to create one in the second place.

And whenever passwords come up, a long-running question comes up too:

Should you change all your passwords all the time to make them fast-moving targets for cybercriminals, or lock in really complex ones to start with, and then leave well alone?

Indeed, that was the issue facing a long-term Naked Security reader this very morning, whose own IT team were on the horns of this very dilemma, possibly because of a cyberinsecurity near-miss that they’d just experienced first hand.

Which is better?

Complex passwords or passphrases that may not get changed often, or poorly chosen passwords that are changed regularly?

Thoughts and cogitations

Our thoughts on the matter are as follows:

  • Changing passwords regularly isn’t an alternative to choosing and using strong ones. If you want to change your password every month, that’s your choice, but it’s not an excuse for starting with your cat’s name and using minor variants of it every few weeks.
  • Forcing people to change their passwords routinely may lull them into bad habits. Many users simply adopt a predictable mechanism, such as adding -01, -02, -03 and so on to satisfy the letter (but not the spirit) of your password replacement rules. Attackers can figure out that sort of behaviour.
  • Scheduling password changes may delay emergency responses. If you always change your password every few weeks, there’s less incentive to change it right away if you think you might have been phished. After all, you’ll be changing it “soon” anyway.

Regularly changing your password doesn’t magically make it a better password.

Only choosing a better password in the first place makes it a better password! (This is where password managers can help.)
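If you're curious what "choosing a better password in the first place" looks like in code, here's a minimal sketch in Python using only the standard library's cryptographically secure random generator (the alphabet and length below are arbitrary illustrative choices, not a policy recommendation):

```python
import secrets
import string

def make_password(length: int = 20) -> str:
    """Pick every character uniformly at random with a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())
```

This is essentially what a password manager does for you: it removes the human temptation to pick your cat's name, or to add -01, -02, -03 on the end each month.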

In other words, we suggest that you first address the problem of helping your users to choose decent passwords, then encourage them to recognise cases where they should change their passwords right away, without needing a timetable to tell them to do so…

…and only then should you worry about whether you really need a “regular changes regardless” password policy as well.

The risks of rote behaviour

Demanding password changes every month when you simply don't need to is just inviting people to save their new passwords insecurely, to choose new passwords sloppily, to rotate through a repeating sequence of N related passwords, or to update their passwords only every 30 days, even in emergencies.

Having said that, locking out users who haven’t accessed specific company accounts for a certain time is a good idea. (This also guards modestly against forgotten accounts, because they eventually expire automatically.)

Locking users out for inactivity is more intrusive than simply forcing them to reset their passwords regularly, and therefore unpopular.

But if someone has a company account login that they aren’t using, why not push them to justify in person why they still need it after they haven’t used it for, say, six months or a year?

After all, if it’s a login for a product or service that charges a per-user fee… you may even be able to save the cost of their subscription.

And if they genuinely don’t need the account any more, you’re helping them to stay out of trouble by preventing rogues and cybercrooks from doing bad things in their name.


S3 Ep138: I like to MOVEit, MOVEit

BACKDOORS, EXPLOITS, AND LITTLE BOBBY TABLES


With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Backdoors, exploits, and the triumphant return of Little Bobby Tables.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth, and he is Paul Ducklin.

Paul, how do you do?


DUCK.  I think he’s probably “Mr. Robert Tables” now, Douglas. [LAUGHTER]

But you’re right, he has made an infamous return.


DOUG.  Great, we will talk all about that.

But first, This Week in Tech History.

On 7 June 1983, Michael Eaton was granted a patent for the AT command set for modems.

To this day, it’s still a widely used communication protocol for controlling modems.

It stands for ATTENTION, and is named after the command prefix used to initiate modem communication.

The AT command set was originally developed for Hayes modems, but has become a de facto standard and is supported by most modems available today.

Paul, how many technology things do we have that have survived since 1983 and are still in use?


DUCK.  Errr…

MS-DOS?

Oh, no, sorry! [LAUGHTER]

ATDT for “Attention, Dial, Tone”.

ATDP [P FOR PULSE] if you didn’t have a tone-dialling exchange…

…and you’d hear the modem.

It had a little relay going click-click-click-click-click, click-click-click, click-click.

You could count your way through to check the number it was dialling.

And you’re right: still used to this day.

So, for example, on Bluetooth modems, you can still say things like AT+NAME= and then the Bluetooth name you want to display.

Amazingly long-lived.
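Those dialling commands are simple enough to build programmatically. Here's a tiny hypothetical Python helper (the command strings ATDT and ATDP are the real Hayes commands; the function itself is just for illustration):

```python
def at_dial(number: str, pulse: bool = False) -> str:
    """Build a Hayes-style dial command: ATDT (tone) or ATDP (pulse)."""
    # AT commands are terminated with a carriage return before being
    # written to whatever serial port or Bluetooth channel the modem is on
    return f"ATD{'P' if pulse else 'T'}{number}\r"

print(at_dial("5551234"))              # tone dialling
print(at_dial("5551234", pulse=True))  # pulse dialling, click-click-click
```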


DOUG.  Let’s get into our stories.

First, we kept an eye on this update… what’s going on with KeePass, Paul?

Serious Security: That KeePass “master password crack”, and what we can learn from it


DUCK.  If you remember, Doug, we spoke about a bug (that was CVE-2023-32784).

That bug was where, as you typed in your password, the strings of blobs that indicated the number of password characters already entered inadvertently acted as sort of flags in memory that said, “Hey, those five blob characters that show you’ve already typed five characters of the password? Right near them in memory is the single character (that would otherwise be lost in time and space) that is the sixth character of your password.”

So the master password was never collected together in one place – the characters were littered all over memory.

How would you ever put them together?

And the secret was that you looked for the markers, the blob-blob-blob-blob, etc.

And the good news is that the author of KeePass promised that he would fix this, and he has.

So if you’re a KeePass user, go and get KeePass 2.54.
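The leak pattern described above can be simulated in a few lines of Python. This is a deliberately simplified sketch of the technique, not the actual KeePass internals or exploit code, and we're assuming the masking glyph is the bullet character ● purely for illustration:

```python
import re

BLOB = "\u25cf"  # the ● masking glyph (an assumption for this sketch)

def recover(memory: str) -> str:
    """Rebuild a typed password from 'N blobs, then character N+1' leftovers."""
    hits = {}
    for m in re.finditer(re.escape(BLOB) + "+", memory):
        n = len(m.group())                   # N blobs => next char is char N+1
        nxt = memory[m.end():m.end() + 1]
        if nxt:
            hits[n] = nxt
    # The first character never leaves a marker, so it stays unknown
    return "?" + "".join(hits[n] for n in sorted(hits))

# Simulated memory dump left behind while the user typed "secret"
dump = f"..{BLOB}e..{BLOB * 2}c..{BLOB * 3}r..{BLOB * 4}e..{BLOB * 5}t.."
print(recover(dump))  # ?ecret
```

Note that, as in the real bug, the very first character can't be recovered this way, which is cold comfort if the rest of your master password is lying around in memory.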


DOUG.  Yessir!

Alright, we will cease to keep an eye on this.

Unless it crops up again, in which case we will cast a new eye on it. [LAUGHTER]

Let’s get into our list of stories.

Paul, we’ve got a good old-fashioned SQL injection attack that heralds the return of our friend Little Bobby Tables.

What’s going on here?

MOVEit zero-day exploit used by data breach gangs: The how, the why, and what to do…


DUCK.  To quote the Original Mad Stuntman [dance artist Mark Quashie], “I like to move it, move it!”

It’s a surprisingly widely used file sharing-and-management product/service.

There are two flavours of it.

There’s MOVEit Transfer and MOVEit Cloud; they come from a company called Progress Software Corporation.

It’s a file sharing tool that includes, amongst other things, a web front end that makes it easy for you to access files that are shared in your team, department, company, maybe even in your supply chain.

Problem… in the web front-end part, as you say, there was a SQL injection bug (dubbed CVE-2023-34362, if you want to track this one down).

And what that meant is somebody who could access your web interface without logging in could trick the server, the back-end server, into running some commands of their choice.

And amongst the things that they could do would be: finding out the structure of your internal databases, so they know what’s stored where; perhaps downloading and messing with your data; and, optionally for the crooks, injecting what’s known as a webshell.

That’s basically a rogue file that you stick in the web server part so that when you go back to it later, it doesn’t serve up a web page to you, the visitor with an innocent-looking browser.

Instead, it actually triggers arbitrary commands on the server.

And unfortunately, because this was a zero-day, it has apparently been fairly widely used to steal data from some very large organisations, and then blackmail them into paying money to have the data suppressed.

In the UK, we’re talking about hundreds of thousands of employees affected who were essentially hacked because of this MOVEit bug, because that was the software that their common payroll provider had chosen to use.

And you imagine, if you can’t break into XYZ Corp directly, but you can break into XYZ Corp’s outsourced payroll provider, you’ll probably end up with amazing amounts of personally identifiable information about all the staff in those businesses.

The kind of information that is, unfortunately, really easy to abuse for identity theft.

So you’re talking things like Social Security numbers, National Insurance numbers, tax file numbers, home addresses, phone numbers, maybe bank account numbers, pension plan upload information, all of that stuff.

So, apparently, that seems to be the harm that was done in this case: companies that use companies that use this MOVEit software have been deliberately, purposefully targeted by these crooks.

And, according to reports from Microsoft, it appears that they either are, or are connected to, the notorious Clop ransomware gang.


DOUG.  OK.

It was patched quickly, including the cloud-based version, so you don’t have to do anything there… but if you’re running an on-premises version, you should patch.

But we’ve got some advice about what to do, and one of my favourites is: Sanitise thine inputs if you’re a programmer.

Which leads us to the Little Bobby Tables cartoon.

If you’ve ever seen the XKCD cartoon (https://xkcd.com/327), the school calls a mom and says, “We’re having some computer trouble.”

And she says, “Is my son involved?”

And they say, “Well, kind-of, not really. But did you name your son Robert Drop Table Students?”

And she says, “Oh, yes, we call him Little Bobby Tables.”

And of course, inputting that command into an improperly sanitised database will delete the table of students.

Did I get that right?


DUCK.  You did, Douglas.

And, in fact, as one of our commenters pointed out, a few years ago (I think it was back in 2016) there was the famous case of somebody who deliberately registered a company with Companies House in the UK called SEMICOLON (which is a command separator in SQL) [LAUGHTER] DROP TABLE COMPANIES SEMICOLON COMMENT SIGN LIMITED.

Obviously, that was a joke, and to be fair to His Majesty’s Government’s website, you can actually go to that page and display the name of the company correctly.

So it doesn’t seem to have worked in that case… it looks like they were sanitising their inputs!

But the problem comes when you have web URLs or web forms that you can send to a server that include data *that the submitter gets to choose*, that then gets injected into a system command that is sent to some other server on your network.

So it’s rather an old-school mistake, but it’s rather easy to make, and it’s kind of quite hard to test for, because there are so many possibilities.

Characters in URLs and in command lines… things like single quote marks, double quote marks, backslash characters, semicolons (if they’re statement separators), and in SQL, if you can sneak a dash-dash (--) character sequence in there, then that says, “Whatever comes next is a comment.”

Which means, if you can inject that into your now malformed data, you can make all the stuff that would be a syntax error at the end of the command disappear, because the command processor says, “Oh, I’ve seen dash-dash, so let me disregard it.”

So, sanitising thine inputs?

You absolutely must do it, and you really have to test for it…

…but beware: it’s really hard to cover all the bases, but you have to, otherwise one day someone will find out the base you forgot.
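In fact, sanitising inputs by hand is so easy to get wrong that the usual advice is to use parameterised queries (also known as prepared statements), so the database engine never confuses attacker-supplied data with SQL commands. Here's a minimal sketch using Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

name = "Robert'); DROP TABLE students;--"  # Little Bobby Tables strikes again

# Dangerous: string concatenation splices attacker data into the command,
# so the quote, semicolon and dash-dash characters get treated as SQL syntax.
# conn.executescript("INSERT INTO students VALUES ('" + name + "')")

# Safe: the ? placeholder keeps the whole string as pure data, never syntax
conn.execute("INSERT INTO students VALUES (?)", (name,))

rows = conn.execute("SELECT name FROM students").fetchall()
print(rows)  # the entire hostile string is stored harmlessly as text
```

With the placeholder version, the students table survives intact, because the database never "sees" the embedded DROP TABLE as a command at all.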


DOUG.  Alright, and as we mentioned…

Good news, it’s been patched.

Bad news, it was a zero-day.

So, if you’re a MOVEit user, make sure that this has been updated if you’re running anything other than the cloud version.

And if you can’t patch right now, what can you do, Paul?


DUCK.  You can just turn off the web-based part of the MOVEit front end.

Now, that may break some of the things that you’ve come to rely on in your system, and it means that people for whom the web UI is the only way they know to interact with the system… they will get cut off.

But it does seem that if you use the numerous other mechanisms, such as SFTP (Secure File Transfer Protocol) for interacting with the MOVEit service, you won’t be able to trigger this bug, so it’s specific to the web service.

But patching is really what you need to do if you have an on-premises version of this.

Importantly, as with so many attacks these days, it’s not just that the bug existed and you’ve now patched it.

What if the crooks did get in?

What if they did something nasty?

As we’ve said, where the alleged Clop ransomware gang people have been in, it seems there are some telltale signs that you can look for, and Progress Software has a list of those on its website (what we call Indicators of Compromise [IoCs] that you can go and search for).

But, as we’ve said so many times before, absence of evidence is not evidence of absence.

So, you need to do your usual post-attack threat hunting.

For example, looking for things like newly created user accounts (are they really supposed to be there?), unexpected data downloads, and all sorts of other changes that you might not expect and now need to reverse.
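As a trivial illustration of one small part of IoC-style hunting, here's a hedged Python sketch that sweeps a directory tree for filenames on a watchlist. The filenames below are placeholders we made up, not real MOVEit IoCs; use the vendor's published list in real life:

```python
import os

# Hypothetical watchlist (lowercase); real hunts use vendor-published IoC names
WATCHLIST = {"suspicious-webshell.aspx", "rogue-helper.dll"}

def hunt(root: str) -> list[str]:
    """Return the paths of any files whose names appear on the watchlist."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for fname in files:
            if fname.lower() in WATCHLIST:
                hits.append(os.path.join(dirpath, fname))
    return hits
```

And remember: an empty result from a scan like this proves nothing on its own, because absence of evidence is not evidence of absence.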

And, as we’ve also said many times, if you don’t have the time and/or the expertise to do that by yourself, please don’t be afraid to ask for help.

(Just go to https://sophos.com/mdr, where MDR, as you probably know, is short for Managed Detection and Response.)

It’s not just knowing what to look for, it’s knowing what it implies, and what you should do urgently if you find that it’s happened…

…even though what happened might be unique in your attack, and other people’s attacks might have unfolded slightly differently.


DOUG.  I think we will keep an eye on this!

Let’s stick with exploits, and talk next about an in-the-wild zero-day affecting Chromium based browsers, Paul.

Chrome and Edge zero-day: “This exploit is in the wild”, so check your versions now


DUCK.  Yes, all we know about this one… it’s one of those times where Google, which normally likes to tell big stories about interesting exploits, is keeping its cards very close to its chest, because of the fact that this is a zero-day.

And the Google update notice to Chrome says simply, “Google is aware that an exploit for CVE-2023-3079 exists in the wild.”

That’s a step above what I call the two degrees of separation that companies like Google and Apple often like to trot out, that we’ve spoken about before, where they say, “We’re aware of reports that suggest that other people claim that they may have seen it.” [LAUGHTER]

They’re just saying, “There’s an exploit; we’ve seen it.”

And that’s not surprising, because apparently this was investigated and uncovered by Google’s own threat analysis team.

That’s all we know…

…that, and the fact that it’s what’s known as a type confusion in V8, which is the JavaScript engine, the part of Chromium that processes and executes JavaScript inside your browser.


DOUG.  I sure wish I knew more about type confusion.

I’m confused about type confusion.

Maybe someone could explain it to me?


DUCK.  Ooooh, Doug, that’s just kind of segue I like! [LAUGHS]

Simply explained, it’s where you provide data to a program and you say, “Here’s a chunk of data; I want you to treat it as if it were, let’s say, a date.”

A well written server will go, “You know what? I’m not going to blindly trust the data that you’re sending to me. I’m going to make sure that you’ve sent me something realistic”…

…thus avoiding the Little Bobby Tables problem.

But imagine if, at some future moment in the execution of the server, you can trick the server into saying, “Hey, remember that data that I sent you that I told you was a date? And you’ve verified that the number of days was not greater than 31, and that the month was not greater than 12, and that the year was between, say, 1920 and 2099, all of those error checks you’ve done? Well, actually, forget that! Now, what I want you to do is to take that data that I supplied, that was a legal date, but *I want you to treat it as if it were a memory address*. And I want you to start executing the program that runs there, because you’ve already accepted the data and you’ve already decided you trust it.”

So we don’t know exactly what form this type confusion in V8 took, but as you can imagine, inside a JavaScript engine, there are lots of different sorts of data that JavaScript engines need to deal with and process at different times.

Sometimes there’ll be integers, sometimes there’ll be character strings, sometimes there’ll be memory addresses, sometimes there’ll be functions to execute, and so on.

So, when the JavaScript engine gets confused about what it’s supposed to do with the data it’s looking at right now, bad things can happen!
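To make the idea concrete, here's a tiny Python sketch showing how the very same bytes mean wildly different things depending on the type you read them as. (This simulates the general effect only; real V8 type confusion involves the engine's internal object representations, not a simple byte reinterpretation like this.)

```python
import struct

# Pack a "date" as day (16-bit), month (16-bit) and year (32-bit) fields
date_bytes = struct.pack("<HHI", 25, 12, 2023)

# Reinterpret the identical 8 bytes as one 64-bit value, i.e. a plausible
# "memory address" if code mistakenly trusts the wrong type for the data
(as_address,) = struct.unpack("<Q", date_bytes)

print(hex(as_address))  # 0x7e7000c0019
```

The date fields passed every validity check when they went in, but once the program treats those bytes as an address, all that careful checking counts for nothing.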


DOUG.  The fix is simple.

You just need to update your Chromium-based browser.

We have instructions about how to do that for Google Chrome and Microsoft Edge.

And last, but certainly not least, we’ve got a so-called Windows “backdoor” that’s affecting Gigabyte motherboard owners.

The devil, as you like to say, is in the details, however, Paul.

Researchers claim Windows “backdoor” affects hundreds of Gigabyte motherboards


DUCK.  [SIGH] Oh dear, yes!

Now, let’s start at the end: the good news is that I’ve just seen Gigabyte has put out a patch for this.

The problem was that it is quite a handy feature, if you think about it.

It was a program called GigabyteUpdateService.

Well, guess what that did, Douglas?

Exactly what it said on the tin – the feature is called APP Center (that’s Gigabyte’s name for this).

Great.

Except that the process of doing the updates was not cryptographically sound.

There was still some old-time code in there… this was a C# program, a .NET program.

It had, apparently, three different URLs it could try to do the download.

One of them was plain old HTTP, Doug.

And the problem, as we’ve known since the days of Firesheep, is that HTTP downloads are [A] trivial to intercept and [B] trivial to modify along the way such that the recipient can’t detect you tampered with them.

The other two URLs did use HTTPS, so the download couldn’t easily be tampered with.

But there was no attempt on the other end to do even the most basic HTTPS certificate verification, which means that anybody could set up a server claiming that it had a Gigabyte certificate.

And because the certificate did not need to be signed by a recognised CA (certificate authority), like GoDaddy or Let’s Encrypt, or someone like that, it means that anybody who wanted to, at a moment’s notice, could just mint their own certificate that would pass muster.

And the third problem was that after downloading the programs, Gigabyte could have, but didn’t, check that they were signed not only with a validated digital certificate, but with a certificate that was definitely one of theirs.
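In Python terms, the difference between doing TLS verification properly and doing it the way the old updater apparently did comes down to a few lines with the standard ssl module. This is a sketch of the principle, not Gigabyte's actual code (which was C#):

```python
import ssl

# What you want: verify the certificate chain AND the server hostname
safe = ssl.create_default_context()
# (verify_mode defaults to CERT_REQUIRED and check_hostname to True)

# What the flawed updater effectively did: accept any certificate at all,
# so anybody can mint their own on a moment's notice and pass muster
unsafe = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
unsafe.check_hostname = False      # must be disabled before verify_mode
unsafe.verify_mode = ssl.CERT_NONE

print(safe.verify_mode == ssl.CERT_REQUIRED)   # True
print(unsafe.verify_mode == ssl.CERT_NONE)     # True
```

Notice how the insecure version takes *extra* code to set up: modern TLS libraries verify certificates by default, so you generally have to go out of your way to turn that protection off.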


DOUG.  OK, so those three things are bad, and that’s the end of the bad things, right?

There’s no more to it.

That’s all we have to worry about? [LAUGHTER]


DUCK.  Well, unfortunately, there’s another level to this which makes it even worse.

The Gigabyte BIOS, their firmware, has a super-cool special feature in it.

(We’re not sure whether it’s on by default or not – some people are suggesting it’s off for some motherboards by default, and other commenters have said, “No, I bought a motherboard recently and this feature was on by default.”)

This is a feature in the firmware itself that activates the APP Center automatic update process.

So you may have this software installed, and activated, and running, even though you didn’t install it yourself.

And worse, Doug, because it’s orchestrated by the firmware itself, that means if you go into Windows and say, “So, I’ll just rip this thing out”…

…the next time you boot your computer, the firmware itself essentially injects the update thing back into your Windows folder!


DOUG.  If we welcome in a bit early our Comment of the Week… we had an anonymous commenter on this article tell us:

I just built a system with a Gigabyte ITX board a few weeks ago, and the Gigabyte APP Center was on out of the box (i.e. on by default).

I even deleted it a few times before I found out it was hidden in the BIOS settings. I’m not a fan of those shenanigans.

So this person’s deleting this APP Center, but it just keeps coming back, and coming back, and coming back.


DUCK.  It’s a little bit more complicated than I may have suggested.

You imagine, “Oh, well, the firmware just goes online, downloads a file, and sticks it into your Windows folder.”

But don’t most computers have BitLocker these days, or at least on corporate computers, don’t people have full disk encryption?

How on earth does your firmware, which runs before it even knows whether you’re going to run Windows or not…

…how does the firmware inject a new file into a Windows C: drive that’s encrypted?

How on earth does that work?

And for better or for worse, Microsoft Windows actually has… I think it’s a feature, though when you hear how it works, you might change your mind. [LAUGHTER]

It’s called WPBT.

And it stands for… [CAN’T REMEMBER]


DOUG.  Windows Platform Binary Table.


DUCK.  Ah, you remembered better than I did!

I almost can’t believe that it works like this…

Basically, the firmware goes, “Hey, I’ve got an executable; I’ve got a program buried in my firmware.”

It’s a Windows program, so the firmware can’t run it because you can’t run Windows programs during the UEFI firmware period.

But what the firmware does is that it reads the program into memory, and tells Windows, “Hey, there’s a program lying around in memory at address 0xABCDEF36C0, or whatever it is. Kindly implant this program into yourself when you’ve unlocked the drive and you’ve actually gone through the Secure Boot process.”


DOUG.  What could possibly go wrong? [LAUGHTER]


DUCK.  Well, to be fair to Microsoft, its own guidelines say the following:

The primary purpose of WPBT is to allow critical software to persist even when the operating system has changed or been reinstalled clean. One use case is to enable anti-theft software, which is required to persist in case a device has been stolen, formatted or reinstalled.

So you kind of see where they’re coming from, but then they notice that:

Because this feature provides the ability to persistently execute system software in the context of Windows, it is critical that these solutions are as secure as possible…

(It’s not boldfaced; I’m speaking like it’s boldfaced.)

…and do not expose Windows users to exploitable conditions. In particular, these solutions must not include malware, i.e. malicious software, or unwanted software installed without adequate user consent.

And the consent, in this case, as our commenter said, is that there is a firmware option, a BIOS option on Gigabyte motherboards.

And if you dig around in the options long enough, you should find it; it’s called APP Center Download and Install.

If you turn that option off, then you get to decide whether you want this thing installed, and then you can update it yourself if you want.


DOUG.  OK, so the big question here…

…is this really a backdoor?


DUCK.  My own opinion is that the word “backdoor” really ought to be reserved for a very particular class of IT shenanigans, namely, more nefarious cybersecurity behaviours.

Things like: deliberately weakening encryption algorithms so they can be broken by people in the know; deliberately building in hidden passwords so people can log in even if you change your password; and opening up undocumented pathways for command-and-control.

Although you might not realise that this APP Center command-and-control pathway existed, it’s not exactly undocumented.

And there is an option, right there in the BIOS, that lets you turn it on and off.

Take yourself over to the Gigabyte website, to their news site, and you will find out about the latest version.


DOUG.  I want to thank that anonymous commenter.

That was very helpful information that helped round out the story.


DUCK.  Indeed!


DOUG.  And I want to remind everyone: if you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure!

[MUSICAL MODEM]


Firefox 114 is out: No 0-days, but one fascinating “teachable moment” bug

Firefox’s latest major update is out, following Mozilla’s usual every-fourth-Tuesday release cycle.

The list of security fixes this month (like full moons, there are sometimes two Firefox releases in a calendar month, but most months only have one) is splendidly short, and there aren’t any critical bugs or zero-days in the list.

But there’s a fascinating bug that acts as a reminder that it’s hard to write responsive, user-friendly browser code that’s also strong against deliberate trickery.

That bug, designated CVE-2023-34414, is rated High, and is described with the somewhat mysterious words: Clickjacking certificate exceptions through rendering lag.

Deconstructing the jargon

Let’s deconstruct the jargon in this bug report.

Clickjacking, very simply put, is where an attacker lures you to a part of the screen that looks safe (or even desirable) to click on, and tricks you into clicking your mouse or tapping your finger on the spot marked X…

…only to have your click sent to a component in the web page that you definitely wouldn’t have clicked on if only you’d known where your click was really going.

For example, a rogue online ad-seller might try mashing up clickable ads with unrelated images that look like harmless [OK] buttons, but that actually allow the click to activate the ad, thus co-opting you into ad fraud.

Another popular abuse of clickjacking, back when it was a big thing in the early 2010s, was to hover an invisible social media “Like” button over some entirely unrelated content (which could even be a fake [Cancel] button that well-informed users would be keen to click).

In this way, you could end up getting tricked into endorsing even outrageous content under the misapprehension that you were rejecting or refusing it instead.

Fortunately, browser makers quickly started detecting and avoiding this sort of clickjacking treachery, making it less and less useful to cybercriminals.

The technical name user interface redress attack appeared in the jargon for a while. But the ambiguity of the word “redress”, which can mean both RE-dress in the sense of dress again by draping in new clothing, and re-DRESS in the sense of set right a wrong, made this fancy-sounding expression hard to understand. The word clickjacking was not only much shorter, but also much clearer and cooler to use, so that’s the word that stuck.

Certificate exceptions relate to those warnings that your browser shows you when you visit a website that might not be what it seems, such as a server called example.com that identifies itself as unknown.invalid; a server with a web certificate that hasn’t been renewed for ages; or a certificate that hasn’t been vouched for by a known certificate authority.

For example, like this:

And rendering lag is the delay between the moment that your browser receives instructions to present new content, and the point at which it has done the necessary HTML, CSS, graphics and JavaScript processing to have the content ready for display.

According to Mozilla, the CVE-2023-34414 bug could be triggered by an attacker who got the balance (or perhaps we mean the imbalance) just right (or wrong) in the following sequence:

  • Serve up content as a lure, showing a button or something of that sort that you’d probably want to click on.
  • Introduce just enough, but not too much, extra CPU load on the browser by supplying new content designed to eat up rendering resources.
  • Hope that your click arrives just late enough to end up on the Potential Security Risk page instead of on the fake content, but just soon enough for you not to have seen the warning page popping up first.

We’ve all done this sort of thing by mistake in other contexts: moving the mouse cursor to the button we wanted to press, for example, such as confirming that we wanted to answer an important incoming voice call right this moment…

…then looking away when we shouldn’t have, and accidentally clicking on the very location where some other urgent dialog had popped up that we hadn’t noticed, such as approving an immediate and lengthy reboot to apply updates instead.

With the right timing…

In the CVE-2023-34414 case, an attacker could orchestrate the timing of the subterfuge so that you could be tricked even if you didn’t let your attention wander, and even if you carefully didn’t click without looking:

If a malicious page elicited user clicks in precise locations immediately before navigating to a site with a certificate error, and made the renderer extremely busy at the same time, it could create a gap between when the error page was loaded and when the display actually refreshed.

With the right timing the elicited clicks could land in that gap and activate the button that overrides the certificate error for that site.

Mozilla says it has redressed this bug (in the latter sense of redress we gave above!) by controlling the timing more carefully, thus ensuring the correct activation delay that Firefox “uses to protect prompts and permission dialogs from attacks that exploit human response time delays.”

In other words, clicks from a previous, innocent-looking page no longer get delayed or left over for long enough to activate an all-important security dialog that needs genuine attention before accepting your input.

What to do?

  • If you’re a Firefox user, head to the About Firefox menu option to check what version you have. If your browser hasn’t yet updated automatically, you should be asked if you want to fetch the latest version right away. You should end up with 114.0 or later if you’re using the regular flavour of Firefox, or ESR 102.12 if you’re using the Extended Support Release (the ESR includes all needed security fixes, but delays the addition of new features, in case any of them inadvertently add new bugs).
  • If you’re a programmer, try to design and regulate your user interface so that critical decisions can’t be triggered by mouse clicks or keystrokes that were buffered up earlier by a user who didn’t (or couldn’t) anticipate popups that might appear in the near future, but hadn’t shown up yet.

Chrome zero-day: “This exploit is in the wild”, so check your version now

Google’s latest Chrome update is out, and this time the company hasn’t minced its words about one of the two security patches it includes:

Google is aware that an exploit for CVE-2023-3079 exists in the wild.

There’s no two-degrees-of-separation verbiage, as we’ve often seen from Google before, to say that the company “is aware of reports” of an exploit.

This time, it’s “we are aware of it all by ourselves”, which translates even more bluntly into “we know that crooks are abusing this as we speak”, given that the bug report came directly from Google’s own Threat Analysis Group.

As usual, this implies that Google was investigating an active attack (whether against Google itself, or some external organisation, we don’t know) in which Chrome had been pwned by a previously unknown security hole.

The bug is described simply as: Type Confusion in V8. (Understandably, Google’s not saying more than that at this stage.)

As we’ve explained before, a type confusion bug happens when you supply a program with a chunk of data that it’s supposed to parse, validate, process and act upon in one way…

…but you later manage to trick the program into interpreting the data in a different, unauthorised, unvalidated, and potentially dangerous way.

Type confusion dangers explained

Imagine that you’re writing a program in C. (It doesn’t matter whether you know C or not, you can just follow along anyway.)

In C, you usually declare variables individually, thus not only reserving memory where they can be stored, but also signalling to the program how those variables are supposed to be used.

For example:

 long long int JulianDayNumber;
 signed char*  CustomerName;

The first variable declaration reserves 64 bits for storing a plain old integer value representing the astronomical day number. (In case you’re wondering, this afternoon is JDN 2460102; Julian Days start at noon, not midnight, because astronomers often work at night, with midnight being the middle of their working day.)

The second reserves 64 bits for storing a memory address where the text string of a customer’s name can be found.

As you can imagine, you’d better not mix up these two values, because a number that makes sense, and is safe, to use as a day number, such as 2460102, would almost certainly be unsafe to use as a memory address.

As you can see from this memory dump of a running Windows program, the lowest memory address that’s allocated for use starts at 0x00370000, which is 3,604,480 in decimal, way larger than any sensible day number.

The actual memory addresses used by Windows vary randomly over time, to make your memory layout harder for crooks to guess, so if you were to run the same program yourself, you’d get different values, but they’d nevertheless be of a similar magnitude.

And (although it’s off the bottom of the image above) the memory addresses of the runtime user data section when this program ran went from 0x01130000 to 0x01134FFF, representing the unlikely date range of 22 July 44631 to 16 August 44687.

Indeed, if you try to mix those two variables up, the compiler should try to warn you, for example like this:

 JulianDayNumber = CustomerName;
    warning: assignment makes integer from pointer without a cast

 CustomerName = JulianDayNumber;
    warning: assignment makes pointer from integer without a cast

Now, if you’ve ever programmed in C, you’ll know that for convenience, you can declare variables with multiple different interpretations using the union keyword, like this:

 union {
    long long int JulianDayNumber;
    signed char*  CustomerName;
 } data;

You can now reference exactly the same variable in memory in two different ways.

If you write data.JulianDayNumber, you magically interpret the stored data as an integer, but writing data.CustomerName tells the compiler you’re referencing a memory address, even though you’re accessing the same stored data.

What you’re doing, more or less, is admitting to the compiler that you’ll sometimes be treating the data you’ve got as a date, and at other times as a memory address, and that you’re taking responsibility for remembering which interpretation applies at what moment in the code.

You might decide to have a second variable, known as a tag (typically an integer) to go along with your union to keep track of what sort of data you’re working with right now, for example:

 struct {
    int tag;
    union {
       long long int JulianDayNumber;
       signed char*  CustomerName;
    } data;
 } value;

You might decide that when value.tag is set to 0, the data isn’t initialised for use yet, 1 means you’re storing a date, 2 means it’s a memory address, and anything else denotes an error.

Well, you’d better not let anyone else mess with that value.tag setting, or your program could end up misbehaving dramatically.

A more worrying example might be something like this:

 struct {
    int tag;                      // 1 = hash, 2 = function pointers
    union {
       unsigned char hash[16];    // either store a random hash
       struct {
          void* openfunc;         // or two carefully-validated
          void* closefunc;        // code pointers to execute later
       } validate;
    };
 } value;

Now, we’re overloading the same block of memory so we can sometimes use it to store a 16-byte hash, and sometimes to store two 8-byte pointers to functions that our program will call upon later.

Clearly, when value.tag == 1, we’d be happy to let our software store any 16-byte string at all into the memory allocated for the union, because hashes are pseudorandom, and any collection of bytes is equally likely.

But when value.tag == 2, our code would need to be extra-super careful not to allow the user to provide unvalidated, untrusted, unknown function addresses to execute later.

Now imagine that you could submit a value to this code while tag was set to 1, so it didn’t get checked and validated…

…but later, just before the program actually used the stored value, you were able to trick the code into switching the tag to 2.

The code would then accept your unvalidated function addresses as “known and already verified safe” (even though they weren’t), and would trustingly dispatch program execution to a rogue location in memory that you’d sneakily chosen in advance.

And that is what happens in a type confusion bug, albeit using a contrived and simplified example.

Memory that would be safe to consume if it were handled one way is maliciously delivered to the program to process in an alternative, unsafe way.

What to do?

Make sure you have the latest version of Chrome or Chromium.

You want Chrome 114.0.5735.106 or later on Mac and Linux, and 114.0.5735.110 or later on Windows.

Microsoft Edge, which is based on Chromium, is also affected by this bug.

Microsoft has so far [2023-06-06T16:25:00Z] noted that

Microsoft is aware of the recent exploits existing in the wild. We are actively working on releasing a security patch.

Edge is currently at version 114.0.1823.37, so anything numbered later than that should include Microsoft’s CVE-2023-3079 patches.

To check your version and force an update if there is one that you haven’t received yet:

  • Google Chrome. Three-dot menu (⋮) > Help > About Chrome.
  • Microsoft Edge. Settings and more (…) > Help and feedback > About Microsoft Edge.

You’re welcome.

