S3 Ep132: Proof-of-concept lets anyone hack at will

2FA, HACKING, AND PATCHING


With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Remote code execution, remote code execution, and 2FA codes in the cloud.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

[IRONIC] Paul, happy Remote Code Execution Day to you, my friend.


DUCK.  Day, week, month, year, it seems, Doug.

Quite a cluster of RCE stories this week, anyway.


DOUG.  Of course…

But before we get into that, let us delve into our Tech History segment.

This week, on 26 April 1999, the computing world was ravaged by the CIH virus, also known as SpaceFiller.

That SpaceFiller name is probably most apt.

Instead of writing extra code to the end of a file, which is a tell-tale signature of virulent activity, this virus, which clocked in at about 1KB, instead filled in gaps in existing code.

The virus was a Windows executable that would fill the first megabyte of hard disk space with zeros, effectively wiping out the partition table.

A second payload would then try to write to the BIOS in order to destroy it.

Seems malevolent, Paul!

20 years ago today! What we can learn from the CIH virus…


DUCK.  It certainly does.

And the fascinating thing is that 26 April was the one day when it actually *wasn’t* just a virus – the rest of the year it merely spread.

And, indeed, not only, as you say, did it try and wipe out the first chunk of your hard disk…

…you could probably or possibly recover, but it took out your partition table and typically a big chunk of your file allocation table, so certainly your computer was unbootable without serious help.

But if it managed to overwrite your BIOS, it deliberately wrote garbage right near the start of the firmware, so that when you turned your computer on next time, the second machine code instruction that it tried to execute on power-up would cause it to hang.

So you couldn’t boot your computer at all to recover the firmware, or to reflash it.

And that was just about the beginning of the era that BIOS chips stopped being in sockets, where you could pull them out of your motherboard if you knew what you were doing, reflash them, and put them back.

They were soldered onto the motherboard.

If you like, “No user serviceable parts inside.”

So quite a few unlucky souls who got hit not only had their data wiped out and their computer made physically unbootable, but they couldn’t fix it and basically had to go and buy a new motherboard, Doug.


DOUG.  And how advanced was this type of virus?

This seems like a lot of stuff that maybe either people hadn’t seen before, or that was really extreme.


DUCK.  The space-filling idea was not new…

…because people learned to memorise the sizes of certain key system files.

So you might memorise, if you were a DOS user, the size of COMMAND.COM, just in case it increased.

Or you might memorise the size of, say, NOTEPAD.EXE, and then you could look back at it every now and then and go, “It hasn’t changed; it must be OK.”

Because, obviously, as a human anti-virus scanner, you weren’t digging into the file, you were just glancing at it.

So this trick was quite well known.

What we hadn’t seen before was this deliberate, calculated attempt not just to wipe out the contents of your hard disk (that was surprisingly, and sadly, very common in those days as a side effect), but actually to zap your whole computer, and make the computer itself unusable.

Unrecoverable.

And to force you to go to the hardware shop and replace one of the components.


DOUG.  Not fun.

Not fun at all!

So, let’s talk about something a little bit happier.

I would like to back up my Google Authenticator 2FA code sequences to Google’s Cloud…

…and I’ve got nothing to worry about because they’re encrypted in transit, right, Paul?

Google leaking 2FA secrets – researchers advise against new “account sync” feature for now


DUCK.  This is a fascinating story, because Google Authenticator is very widely used.

The one feature it’s never had is the ability to back up your 2FA accounts and their so-called starting seeds (the things that help you generate the six-digit codes) into the cloud so that if you lose your phone, or you buy a new phone, you can sync them back to the new device without having to go and set up everything all over again.

And Google recently announced, “We’re finally going to provide this feature.”

I saw one story online where the headline was Google Authenticator adds a critical, long-awaited feature after 13 years.

So everyone was terribly excited about this!

[LAUGHTER]

And it is quite handy.

What people do is…

…you know, those QR codes that come up that let you establish the seed in the first place for an account?


DOUG.  [LAUGHS] Of course, I take pictures of mine all the time.


DUCK.  [GROANS] Yessss, you point your camera at it, it scans it in, then you think, “What if I need it again? Before I leave that screen, I’m going to snap a photo of it, then I’ve got a backup.”

Well, don’t do that!

Because it means that somewhere in amongst your emails, in amongst your photos, in amongst your cloud account, is essentially an unencrypted copy of that seed.

And that is the absolute key to your account.

So it would be a little bit like writing your password down on a piece of paper and taking a photo of it – probably not a great idea.

So for Google to build this feature (you’d hope securely) into their Authenticator program at last was seen by many as a triumph.

[DRAMATIC PAUSE]

Enter @mysk_co (our good friend Tommy Mysk, whom we’ve spoken about several times before on the podcast).

They figured, “Surely there’s some kind of encryption that’s unique to you, like a passphrase… yet when I did the sync, the app didn’t ask me for a passcode; it didn’t offer me the choice to put one in, like the Chrome browser does when you sync things like passwords and account details.”

And, lo and behold, @mysk_co found that when they took the app’s TLS traffic and decrypted it, as would happen when it arrived at Google…

…there were the seeds inside!

It is surprising to me that Google didn’t build in that feature of, “Would you like to encrypt this with a password of your choice so even we can’t get at your seeds?”

Because, otherwise, if those seeds get leaked or stolen, or if they get seized under a lawful search warrant, whoever gets the data from your cloud will be able to have the starting seeds for all your accounts.

And normally that’s not the way things work.

You don’t have to be a lawless scoundrel to want to keep things like your passwords and your 2FA seeds secret from everybody and anybody.

So their advice, @mysk_co’s advice (and I would second this) is, “Don’t use that feature until Google comes to the party with a passphrase that you can add if you wish.”

That means that the stuff gets encrypted by you *before* it gets encrypted to be put into the HTTPS connection to send it to Google.

And that means that Google can’t read your starting seeds, even if they want to.


DOUG.  Alright, my favourite thing in the world to say on this podcast: we will keep an eye on that.

Our next story is about a company called PaperCut.

It is also about a remote code execution.

But it’s really more a tip-of-the-cap to this company for being so transparent.

A lot going on in this story. Paul… let’s dig in, and see what we can find.

PaperCut security vulnerabilities under active attack – vendor urges customers to patch


DUCK.  Let me do a mea culpa to PaperCut-the-company.

When I saw the words PaperCut, and then I saw people talking, “Ooohh, vulnerability; remote code execution; attacks; cyberdrama”…


DOUG.  [LAUGHS] I know where this is going!


DUCK.  … I thought PaperCut was a BWAIN, a Bug With An Impressive Name.

I thought, “That’s a cool name; I bet you it has to do with printers, and it’s going to be like a Heartbleed, or a LogJam, or a ShellShock, or a PrintNightmare – it’s a PaperCut!”

In fact, that is just the name of the company.

I think the idea is that it’s meant to help you cut down on waste, and unnecessary expense, and ungreenness in your paper usage, by providing printer administration in your network.

The “cut” is meant to be that you’re cutting your expenses.

Unfortunately, in this case, it meant that attackers could cut their way into the network, because there were a pair of vulnerabilities discovered recently in the admin tools in their server.

And one of those bugs (if you want to track it down, it is CVE-2023-27350) allows for remote code execution:

This vulnerability potentially allows for an unauthenticated attacker to get remote code execution on a PaperCut Application Server. This could be done remotely and without the need to log in.

Basically, tell it the command you would like to run and it will run it for you.

Good news: they patched both of these bugs, including this super-dangerous one.

The remote code execution bug… they patched at the end of March 2023.

Of course, not everybody has applied the patches.

And, lo and behold, around the middle of April 2023, they got reports that somebody was onto this.

I’m assuming that the crooks looked at the patches, figured out what had changed, and thought, “Oooh, that’s easier to exploit than we thought, let’s use it! What a convenient way in!”

And attacks started.

I believe the earliest one they found so far was 13 April 2023.

And so the company has gone out of its way, and even put a banner on the top of its website saying, “Urgent message for our customers: please apply the patch.”

The crooks have already landed on it, and it’s not going well.

And according to threat researchers in the Sophos X-Ops team, we already have evidence of different gangs of crooks using it.

So I believe we’re aware of one attack that looks like it was the Clop ransomware crew; another one that I believe was down to the LockBit ransomware gang; and a third attack where the exploit was being abused by crooks for cryptojacking – where they burn your electricity but they take the cryptocoins.

And even worse, I got notification from one of our threat researchers just this morning [2023-04-26] that somebody, bless their hearts, has decided that “for defensive purposes and for academic research”, it’s really important that we all have access to a 97-line Python script…

…that lets you exploit this at will, [IRONIC] just so you can understand how it works.


DOUG.  [GROAN] Aaaaargh.


DUCK.  So if you haven’t patched…


DOUG.  Please hurry!

That sounds bad!


DUCK.  “Please hurry”… I think that’s the calmest way of putting it, Doug.


DOUG.  We’ll stay on the remote code execution train, and the next stop is Chromium Junction.

A double zero-day, one involving images, and one involving JavaScript, Paul.

Double zero-day in Chrome and Edge – check your versions now!


DUCK.  Indeed, Doug.

I’ll read these out in case you want to track them down.

We’ve got CVE-2023-2033, and that is, in the jargon, Type confusion in V8 in Google Chrome.

And we have CVE-2023-2136, Integer overflow in Skia in Google Chrome.

To explain, V8 is the name of the open-source JavaScript “engine”, if you like, at the core of the Chromium browser, and Skia is a graphics handling library that is used by the Chromium project for rendering HTML and graphics content.

You can imagine that the problem with triggerable bugs in either the graphics rendering part or the JavaScript processing part of your browser…

…is that those are the very parts that are designed to consume, process and present stuff that *comes in remotely from untrusted websites*, even when you just look at them.

And so, just by the browser preparing it for you to see, you could tickle not one, but both of these bugs.

My understanding is that one of them, the JavaScript one, essentially gives remote code execution, where you can get the browser to run code it’s not supposed to.

And the other one allows what’s generally known as a sandbox escape.

So, you get your code to run, and then you jump outside the strictures that are supposed to constrain code running inside a browser.

Although these bugs were discovered separately, and they were patched separately on 14 April 2023 and 18 April 2023 respectively, you can’t help but wonder (because they’re zero-days) if they were actually being used in combination by somebody.

Because you can imagine: one lets you break *into* the browser, and the other lets you break *out* of the browser.

So you’re in the same sort of situation that you were when we were talking recently about those Apple zero-days, where one was in WebKit, the browser renderer, so that meant that your browser could get pwned while you were looking at a page…

…and the other was in the kernel, where code in the browser could suddenly leap out of the browser and bury itself right in the main control part of the system.

Apple zero-day spyware patches extended to cover older Macs, iPhones and iPads

Now, we don’t know, in the Chrome and Edge bug cases, whether these were used together, but it certainly means that it is very, very well worth checking that your automatic updates really did go through!


DOUG.  Yes, I would note that I checked my Microsoft Edge and it updated automatically.

But it could be that updates are turned off by default over metered connections – that is, if your ISP has a data cap, or if you’re using a mobile network – such that you won’t get the updates automatically unless you proactively toggle that setting on.

And the toggle doesn’t take effect until you restart your browser.

So if you’re one of those people that just keeps your browser open constantly, and never shuts it down or restarts it, then…

…yes, it is worth checking!

Those browsers do a good job with automatic updates, but it’s not a given.


DUCK.  That’s a very good point, Doug.

I hadn’t thought about that.

If you’ve got that metered connections setting off, you might not be getting the updates after all.


DOUG.  OK, so the CVEs from Google are a little vague, as they often are from any company.

So, Phil (one of our readers) wrote in… he notes that part of the CVE says that something can come “via a crafted HTML page.”

He’s saying this is still too vague.

So, in part, he says:

I guess I should assume, since V8 is where the weakness lies, JavaScript-plus-HTML, and not just some corrupted HTML by itself, can get hold of the CPU instruction pointer? Right or wrong?

And then he goes on to say the CVEs are “useless to me, so far, in getting a clue on this.”

So Phil is a little confused, as are probably many of the rest of us here.

Paul?


DUCK.  Yes, I think that’s a great question.

I understand in this case why Google doesn’t want to say too much about the bugs.

They are in the wild; they are zero days; crooks already know about them; let’s try and keep it under our hat for a while.

Now, I presume the reason they just said a “crafted HTML page” was not to suggest that HTML alone (pure-play “angle bracket/tag/angle bracket” HTML code, if you like) could trigger the bug.

I think what Google is trying to warn you about is that simply looking – “read-only” browsing – can nevertheless get you into trouble.

The idea of a bug like this, because it’s remote code execution, is: you look; the browser attempts to present something in its controlled way; it should be 100% safe.

But in this case, it could be 100% *dangerous*.

And I think that’s what they’re trying to say.

And unfortunately, that idea of the CVEs being “useless to me”… sadly, I find that is often the case.


DOUG.  [LAUGHS] You are not alone, Phil!


DUCK.  They’re just a couple of sentences of cybersecurity babble and jargon.

I mean, sometimes, with CVEs, you go to the page and it just says, “This bug Identifier has been reserved and details will follow later,” which is almost worse than useless. [LAUGHTER]

So what this is really trying to tell you, in a jargonistic way, is that *simply looking*, simply viewing a web page, which is supposed to be safe (you haven’t chosen to download anything; you haven’t chosen to execute anything; you haven’t authorised the browser to save a file)… just the process of preparing the page before you see it could be enough to put you in harm’s way.

That’s, I think, what they mean by “crafted HTML content.”


DOUG.  All right, thank you very much, Paul, for clearing that up.

And thank you very much, Phil, for sending that in.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure!

[MUSICAL MODEM]


Google leaking 2FA secrets – researchers advise against new “account sync” feature for now

The Google Authenticator 2FA app has featured strongly in cybersecurity news stories lately, with Google adding a feature to let you back up your 2FA data into the cloud and then restore it onto other devices.

To explain, a 2FA (two-factor authentication) app is one of those programs that you run on your mobile phone or tablet to generate one-time login codes that help to secure your online accounts with more than just a password.

The problem with conventional passwords is that there are numerous ways that crooks can beg, steal, or borrow them.

There’s shoulder-surfing, where a rogue in your midst peeks over your shoulder while you’re typing it in; there’s inspired guesswork, where you’ve used a phrase that a crook can predict based on your personal interests; there’s phishing, where you are lured into handing over your password to an imposter; and there’s keylogging, where malware already implanted on your computer keeps track of what you type and secretly starts recording whenever you visit a website that looks interesting.

And because conventional passwords typically stay the same from login to login, crooks who figure out a password today can often simply use it over and over at their leisure, often for weeks, perhaps for months, and sometimes even for years.

So 2FA apps, with their one-time login codes, augment your regular password with an additional secret, usually a six-digit number, that changes every time.

Your phone as a second factor

The six-digit codes commonly generated by 2FA apps get calculated right on your phone, not on your laptop; they’re based on a “seed” or “starting key” that’s stored on your phone; and they’re protected by the lock code on your phone, not by any passwords you routinely type in on your laptop.

That way, crooks who beg, borrow or steal your regular password can’t simply jump straight in to your account.

Those attackers also need access to your phone, and they need to be able to unlock your phone to run the app and get the one-time code. (The codes are usually based on the date and time to the nearest half-minute, so they change every 30 seconds.)
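The calculation itself is standardised as TOTP (RFC 6238, which builds on HOTP in RFC 4226), and it’s simple enough to sketch in a few lines of Python – this is a minimal illustration of the algorithm, not the code of any particular authenticator app:

```python
import hmac, struct
from hashlib import sha1

def totp(seed: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over a 30-second time counter,
    then 'dynamic truncation' down to a short decimal code."""
    counter = struct.pack(">Q", timestamp // step)     # 8-byte big-endian counter
    mac = hmac.new(seed, counter, sha1).digest()
    offset = mac[-1] & 0x0F                            # low nibble picks a 4-byte window
    number = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII seed "12345678901234567890" at time T=59
print(totp(b"12345678901234567890", 59))    # -> 287082
```

In real use the timestamp comes from the phone’s clock; the server runs exactly the same calculation with its own copy of the seed, which is precisely why a leaked seed is as good as the codes themselves.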

Better yet, modern phones include tamper-proof secure storage chips (Apple calls theirs Secure Enclave; Google’s is known as Titan) that keep their secrets even if you manage to detach the chip and try to dig data out of it offline via miniature electrical probes, or by chemical etching combined with electron microscopy.

Of course, this “solution” brings with it a problem of its own, namely: how do you back up those all-important 2FA seeds in case you lose your phone, or buy a new one and want to switch over to it?

The dangerous way to back up seeds

Most online services require you to set up a 2FA code sequence for a new account by entering a 20-byte string of random data, which means laboriously typing in either 40 hexadecimal (base-16) characters, one for every half-byte, or 32 characters in base-32 encoding, which uses the letters A to Z and the six digits 234567 (zero and one are unused because they look like O-for-Oscar and I-for-India).
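The sizes quoted above are easy to verify with Python’s standard library (the seed here is freshly generated for illustration, not anyone’s real secret):

```python
import base64, secrets

seed = secrets.token_bytes(20)                  # a random 20-byte starting seed
hex_form = seed.hex()                           # base-16: two characters per byte
b32_form = base64.b32encode(seed).decode()      # base-32: 5 bits per character

print(len(hex_form))    # -> 40
print(len(b32_form))    # -> 32 (160 bits / 5; no '=' padding needed)
print(set(b32_form) <= set("ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"))  # -> True
```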

Except that you usually get the chance to avoid the hassle of manually tapping in your starting secret by scanning in a special sort of URL via a QR code instead.

These special 2FA URLs have the account name and the starting seed encoded into them, like this (we limited the seed here to 10 bytes, or 16 base-32 characters, to keep the URL short):

 otpauth://totp/Twitter@Apple?secret=6QYW4P6KWAFGCUWM&issuer=Amazon

You can probably guess where this is going.

When you fire up your mobile phone camera to scan in 2FA codes of this sort, it’s tempting to snap a photo of the codes first, to use as a backup…

…but we urge you not to do that, because anyone who gets hold of those pictures later (for example from your cloud account, or because you forward them by mistake) will know your secret seed, and will trivially be able to generate the right sequence of six-digit codes.

How, therefore, to back up your 2FA data reliably without keeping plaintext copies of those pesky multi-byte secrets?

Google Authenticator on the case

Well, Google Authenticator recently, if belatedly, decided to start offering a 2FA “account sync” service so that you can back your 2FA code sequences up into the cloud, and later restore them to a new device, for example if you lose or replace your phone.

As one media outlet described it, “Google Authenticator adds a critical long-awaited feature after 13 years.”

But just how safely does this account sync data transfer take place?

Is your secret seed data encrypted in transit to Google’s cloud?

As you can imagine, the cloud upload part of transferring your 2FA secrets is indeed encrypted, because Google, like every security-conscious company out there, has used HTTPS-and-only-HTTPS for all its web-based traffic for several years now.

But can your 2FA accounts be encrypted with a passphrase that’s uniquely yours before they even leave your device?

That way, they can’t be intercepted (whether lawfully or not), subpoenaed, leaked, or stolen while they’re in cloud storage.

After all, another way of saying “in the cloud” is simply “saved onto someone else’s computer”.
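A client-side passphrase scheme of the sort described here might start along these lines – a minimal sketch, not Google’s actual design, showing only the key-derivation step (a real implementation would encrypt the seeds with an AEAD cipher such as AES-GCM under this key before uploading):

```python
import hashlib, os

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte key from a passphrase with PBKDF2-HMAC-SHA256.
    The salt is stored alongside the ciphertext and need not be secret;
    without the passphrase, the key (and hence the seeds) stay unreadable."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
print(len(key))                                 # -> 32
print(key == derive_key("wrong guess", salt))   # -> False
```

Because the key is derived on your device and never uploaded, whoever stores the ciphertext – Google, a backup service, anyone – can’t recover the seeds without your passphrase.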

Guess what?

Our indie-coder and cybersecurity-wrangling friends at @mysk_co, whom we have written about several times before on Naked Security, decided to find out.

What they reported doesn’t sound terribly encouraging.

In their report, @mysk_co claimed the following:

  • Your 2FA account details, including seeds, were unencrypted inside their HTTPS network packets. In other words, once the transport-level encryption is stripped off after the upload arrives, your seeds are available to Google, and thus, by implication, to anyone with a search warrant for your data.
  • There’s no passphrase option to encrypt your upload before it leaves your device. As the @mysk_co team point out, this feature is available when syncing information from Google Chrome, so it seems strange that the 2FA sync process doesn’t offer a similar user experience.

Here’s the concocted URL that they generated to set up a new 2FA account in the Google Authenticator app:

 otpauth://totp/Twitter@Apple?secret=6QYW4P6KWAFGCUWM&issuer=Amazon

And here’s a packet grab of the network traffic that Google Authenticator synced with the cloud, with the transport level security (TLS) encryption stripped off:

The hexadecimal characters in that capture matched the raw 10 bytes of data that correspond to the base-32 “secret” in the URL above, which you can verify for yourself:

 $ luax
 Lua 5.4.5  Copyright (C) 1994-2023 Lua.org, PUC-Rio
    __
 ___( o)>
 \ <_. )
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Added Duck's favourite modules in package.preload{}
 > b32seed = '6QYW4P6KWAFGCUWM'
 > rawseed = base.unb32(b32seed)
 > rawseed:len()
 10
 > base.b16(rawseed)
 F4316E3FCAB00A6152CC

What to do?

We agree with @mysk_co’s suggestion, which is, “We recommend using the app without the new syncing feature for now.”

We’re pretty sure that Google will add a passphrase feature to the 2FA syncing feature soon, given that this feature already exists in the Chrome browser, as explained in Chrome’s own help pages:

Keep your info private

With a passphrase, you can use Google’s cloud to store and sync your Chrome data without letting Google read it. […] Passphrases are optional. Your synced data is always protected by encryption when it’s in transit.

If you’ve already synced your seeds, don’t panic (they weren’t shared with Google in a way that makes it easy for anyone else to snoop them out), but you will need to reset the 2FA sequences for any accounts you now decide you probably should have kept to yourself.

After all, you may have 2FA set up for online services such as bank accounts where the terms and conditions require you to keep all login credentials to yourself, including passwords and seeds, and never to share them with anyone, not even Google.

If you’re in the habit of snapping photos of the QR codes for your 2FA seeds anyway, without thinking too much about it, we recommend that you don’t.

As we like to say on Naked Security: If in doubt / Don’t give it out.

Data that you keep to yourself can’t leak, or get stolen, or subpoenaed, or shared onwards with third parties of any sort, whether deliberately or by mistake.


Update. Google has responded on Twitter to @mysk_co’s report by admitting that it intentionally released the 2FA account sync feature without so-called end-to-end encryption (E2EE), but claimed that the company has “plans to offer E2EE for Google Authenticator down the line.” The company also stated that “the option to use the app offline will remain an alternative for those who prefer to manage their backup strategy themselves”. [2023-04-26T18:37Z]


PaperCut security vulnerabilities under active attack – vendor urges customers to patch

We’ll be honest, and admit that we hadn’t heard of the printer management software PaperCut until this week.

In fact, the first time we heard the name was in the context of cybercriminality and malware attacks, and we naively assumed that “PaperCut” was what we like to call a BWAIN.

A BWAIN is our satirical term for any Bug With An Impressive (and media-savvy) Name, like Heartbleed or Shellshock back in the day, and we thought that this one referred to a vulnerability or an exploit of some sort.

We’ll apologise, therefore, to the company PaperCut – the name is meant to be a metaphor for cutting back on your paper usage by helping you to manage, control and charge fairly for the printing resources in your business.

We’ll further point out that PaperCut itself is not putting out this vulnerability alert for PR reasons, because actively seeking media coverage for bugs in your own products is not something that companies usually go out of their way to do.

But hats off to PaperCut in this case, because the company really is trying to make sure that all its customers know about the importance of two vulnerabilities in its products that it patched last month, to the point that it’s put a green-striped shield at the top of its main web page that says, “Urgent security message for all NG/MF customers.”

We’ve seen companies that have admitted to unpatched zero-day vulnerabilities and data breaches in a far less obvious fashion than this, which is why we’re saying “Good job” to the PaperCut team for what cybersecurity jargon would probably praise with the orotund phrase an abundance of caution.

Patched, but not necessarily updated

The problem, it seems, is a pair of bugs dubbed CVE-2023-27350 and CVE-2023-27351 that were patched by PaperCut at the end of March 2023.

The first bug is described by PaperCut as follows:

The [CVE-2023-27350 vulnerability potentially] allows for an unauthenticated attacker to get Remote Code Execution (RCE) on a PaperCut Application Server. This could be done remotely and without the need to log in.

So, even if your PaperCut application server isn’t directly reachable over the internet, an attacker who already had a basic foothold in your network, for example as a guest user on someone’s infected laptop, could exploit this bug to pivot, or move laterally (which are fancy jargon words for “make the jump”), to a more privileged and powerful position inside your business.

The second bug doesn’t hand over remote code execution powers, but it does allow attackers to scrape out personally identifiable information that could be useful for subsequent social engineering attacks against both your company as a whole, and your staff as individuals:

The [CVE-2023-27351 vulnerability] allows for an unauthenticated attacker to potentially pull information about a user stored within PaperCut MF or NG – including usernames, full names, email addresses, office/department info and any card numbers associated with the user. The attacker can also retrieve the hashed passwords for internal PaperCut-created users only […]. This could be done remotely and without the need to log in.

Although patches have been out for almost a month already, it seems that not all customers have applied these patches, and cybercrooks have apparently started using the first of these bugs in real-life attacks.

PaperCut says that it was first alerted to an attack against an unpatched server at 2023-04-17T17:30Z, and has now worked through its logs and suggests that the earliest attack so far known happened four days before that, at 2023-04-13T15:29Z.

In other words, if you patched before 2023-04-13 (the Thursday before last at the time of writing), you’d almost certainly have been ahead of the criminals, but if you haven’t patched yet, you really need to.

PaperCut notes that it is trying hard “to compile a list of unpatched PaperCut MF/NG servers that have ports open on the public internet”, and then going out of its way to try to contact those obviously-at-risk customers.

But PaperCut can’t scan your internal networks in order to warn you about unpatched servers that aren’t visible across the internet.

You will need to do that yourself, in order to ensure that you haven’t left loopholes through which attackers who have already hacked into your network “just a bit” can extend their rogue access to “quite a lot”.

What to do?

  • Read PaperCut’s detailed summary of which products are affected, and how to update them.
  • If you have PaperCut MF or PaperCut NG, you need to make sure you have one of the following versions installed: 20.1.7, 21.2.11, or 22.0.9.
  • If you think you might be at risk, because you use these products and you hadn’t patched before 2023-04-13, when the first so-far-known exploits showed up, check out PaperCut’s FAQs to help you look for known Indicators of Compromise (IoCs).

Remember, of course, that the IoCs shared by PaperCut are, of necessity, limited to those they’ve already seen in attacks they already know about, so absence of evidence isn’t evidence of absence.

If you’re unsure of what to look for, or how to look for it, consider getting a Managed Detection and Response (MDR) team in to help you.


Short of time or expertise to take care of cybersecurity threat response? Worried that cybersecurity will end up distracting you from all the other things you need to do?

Learn more about Sophos Managed Detection and Response:
24/7 threat hunting, detection, and response  ▶


Double zero-day in Chrome and Edge – check your versions now!

If you’re a Google Chrome or Microsoft Edge browser fan, you’re probably getting updates automatically and you’re probably up to date already.

However…

…just in case you’ve missed any updates recently, we suggest you go and check right now, because the Chromium browser core, on which both Edge and Chrome are based, has patched not one but two zero-day remote code execution (RCE) bugs recently.

Google is keeping the details of these bugs quiet for the time being, presumably because they’re easy to exploit if you know exactly where to look.

After all, a needle is easy to find even in a giant haystack if someone tells you which bale it’s in before you start.

Browser-based security vulnerabilities that lead to remote code execution are always worth taking seriously, especially if they’re already known to, and in use by, cybercriminals.

And zero-days, by definition, are bugs that the Bad Guys found first, so that there were zero days on which you could have patched proactively.

RCE considered harmful

RCE means just what it says: someone outside your network, outside your household, outside your company – perhaps even on the other side of the world – can tell your device, “Run this program of my choosing, in the way I tell you to, without giving anything away to any users who are currently logged in.”

Usually, when you’re browsing and a remote website tries to foist potentially risky content on you, you will at least receive some sort of warning, such as a Do you want to download this file? dialog or a popup asking you Are you really sure (Yes/No)?

Sometimes, depending on the browser settings that you’ve chosen, or based on restrictions that have been applied for you by your IT sysadmins, you might even get a notification along the lines of, Sorry, that option/file/download isn't allowed.

But a browser RCE bug generally means that simply by looking at a web page, without clicking any buttons or seeing any warnings, you might provide attackers with a security loophole through which they could trick your browser into running rogue program code without so much as a by-your-leave.

Common ways that this sort of security hole can be triggered include: booby-trapped HTML content; deliberately malconstructed JavaScript code; and malformed images or other multimedia files that the browser chokes on while trying to prepare the content for display.

For example, if an image appeared to need only a few kilobytes of memory, but later turned out to include megabytes of pixel data, you’d hope your browser would reliably detect this anomaly, and not try to stuff those megabytes of pixels into kilobytes of memory space.

That would cause what’s known as a buffer overflow, corrupting system memory in a way that a well-prepared attacker might be able to predict and exploit for harm.
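The decoder’s job, in other words, is to distrust the file’s own claims. Here’s a minimal C sketch of that kind of check (the `img_header` struct and its field name are hypothetical, purely to illustrate the pattern):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical image header: the file itself declares how many
 * bytes of pixel data follow. (Names here are illustrative only.) */
struct img_header { size_t claimed_size; };

/* Safer decode sketch: allocate what the header claims, but refuse
 * to copy more data than that allocation can hold.
 * Returns 0 on success, -1 if the file's claim doesn't add up. */
int decode_pixels(const struct img_header *hdr,
                  const unsigned char *pixels, size_t actual_len) {
    if (actual_len > hdr->claimed_size) {
        return -1;  /* header lied: reject rather than overflow */
    }
    unsigned char *buf = malloc(hdr->claimed_size);
    if (buf == NULL) {
        return -1;
    }
    memcpy(buf, pixels, actual_len);  /* bounded by the check above */
    /* ... render buf ... */
    free(buf);
    return 0;
}
```

The unsafe variant would simply `memcpy()` the full `actual_len` into the buffer without that up-front comparison, and that mismatch is exactly what corrupts heap memory.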

Likewise, if JavaScript code arrived that told your browser, “Here’s a string representing a time and date that I want you to remember for later,” you’d hope that your browser would only ever allow that data to be treated as a block of text.

But if the JavaScript system could later be tricked into using that very same data block as if it were a memory address (in C or C++ terminology, a pointer) that denoted where the program should go next, a well-prepared attacker might be able to trick the browser into treating what arrived as harmless data as a remotely-supplied mini-program to be executed.

In the jargon, that’s known as shellcode, from time-honoured Unix terminology in which code refers to a sequence of program instructions, and shell is the general name for a control prompt where you can run a sequence of commands of your choice.

Imagine opening the Terminal app on a Mac, or a PowerShell prompt on Windows – that’s the sort of power that a cybercriminal typically gets over you and your network if they’re able to use an RCE hole to pop a shell, as it’s jocularly called in the trade, on your device.

Worse still, a “popped” remote shell of this sort generally runs entirely in the background, invisible to anyone currently sitting in front of the computer, so there are few or no tell-tale signs that a rogue operator is poking around and exploiting your device behind your back.

A two-pack of zero-days

When we gave our RCE examples above, we didn’t choose booby-trapped image files and rogue JavaScript code by chance.

We highlighted those as examples because the two zero-day Chrome bugs fixed in the past few days are as follows:

  • CVE-2023-2033: Type confusion in V8 in Google Chrome prior to 112.0.5615.121. A remote attacker could potentially exploit heap corruption via a crafted HTML page. Chromium security severity: High.
  • CVE-2023-2136: Integer overflow in Skia in Google Chrome prior to 112.0.5615.137. A remote attacker who had compromised the renderer process could potentially perform a sandbox escape via a crafted HTML page. Chromium security severity: High.

In case you’re wondering, V8 is the name of Chromium’s open-source JavaScript engine, where JavaScript embedded into web pages gets processed.

And Skia is an open-source graphics library created by Google and used in Chromium to turn HTML commands and any embedded graphical content into the on-screen pixels that represent the visual form of the page. (The process of turning HTML into on-screen graphics is known in the jargon as rendering a page.)

A type confusion bug is one that works similarly to the text-treated-as-a-pointer example we presented above: a chunk of data that ought to be handled under one set of security rules inside the JavaScript process ends up being used in an unsafe way.

That’s a bit like getting a guest pass at the reception desk of a building, then finding that if you hold up the pass with your thumb in just the right place to obscure the “I am only a guest” label, you can trick the security guards inside the building into letting you go where you shouldn’t, and doing things you’re not supposed to.
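In C terms, you can see the mechanics of type confusion with a union: the very same bytes viewed under two different “types”. This is an illustrative sketch of the concept, not V8’s actual internals:

```c
#include <string.h>

/* Illustrative only: the same bytes reinterpreted under two "types".
 * A union lets us view text bytes as a pointer-sized number, which
 * is the heart of a type-confusion bug: data the program meant as
 * harmless text ends up being used as if it were an address. */
union confused {
    char text[8];             /* what actually arrived: a string */
    unsigned long long addr;  /* what the buggy code treats it as */
};

unsigned long long reinterpret_as_address(const char *s) {
    union confused u = {0};
    strncpy(u.text, s, sizeof u.text - 1);
    return u.addr;  /* the "time and date string" is now a number
                       that buggy code might use as a pointer */
}
```

In a real exploit, of course, the attacker chooses the text so that the resulting “address” points somewhere useful to them.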

And an integer overflow is where an arithmetic calculation goes awry because the numbers got too big, in the same sort of way that the time wraps round once or twice a day on your clock.

When you put an analog clock forward an hour from, say, 10-past-12 o’clock, the time wraps around to 10-past-1 o’clock, because the clock face is only marked from 1 to 12; similarly, when a digital clock gets to midnight, it flips back from 23:59 to 00:00, because it can’t count as far as 24.
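In code, the classic rendering-library version of this is a buffer-size calculation that wraps around. This C sketch (the sizes and the 4-bytes-per-pixel assumption are illustrative, not Skia’s actual logic) shows both the wrap and a simple guard:

```c
#include <stdint.h>

/* A pixel-buffer size calculation can wrap around just like a
 * clock: in 32-bit arithmetic, 65536 * 65536 * 4 "equals" 0. */
uint32_t unsafe_buffer_size(uint32_t width, uint32_t height) {
    return width * height * 4;  /* BUG: may wrap past UINT32_MAX */
}

/* Safer: do the sum in 64 bits and detect the wrap before it
 * happens. Returns 0 to signal "too big to fit". */
uint64_t checked_buffer_size(uint32_t width, uint32_t height) {
    uint64_t total = (uint64_t)width * height * 4;
    if (total > UINT32_MAX) {
        return 0;  /* would not fit in 32 bits: reject */
    }
    return total;
}
```

If the unchecked value is used to allocate memory, an attacker gets a tiny buffer that the code then fills with a huge image – a buffer overflow by way of arithmetic.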

What to do?

Wouldn’t it be handy if there were a single version number you could check for in every Chromium-based browser, and on every supported platform?

Sadly, there isn’t, so we’ve reported what we found below.

At the time of writing [2023-04-24T16:00Z], the official laptop versions of Chrome seem to be: 112.0.5615.137 or 112.0.5615.138 for Windows, 112.0.5615.137 for Mac, and 112.0.5615.165 for Linux.

Anything at or later than those numbers will include patches for the two zero-days above.

Edge on your laptop should be 112.0.1722.58 or later.

Unfortunately, Chrome and Edge on Android (we just updated ours) still seem to be 112.0.5615.136 and 111.0.1661.59 respectively, so we can only advise you to keep your eye out for updates over the next few days.

Likewise, on iOS, our just-updated versions of Chrome and Edge show up respectively as 112.0.5615.70 and 112.0.1722.49, so we assume those versions will soon get updated to ensure both these zero-days are patched.

  • Chrome on your laptop. Visiting the URL chrome://settings/help should show you the current version, check for any missed updates, and attempt to get you up-to-date if you weren’t already.
  • Chrome on iOS. The URL chrome://version will show your current version. Go to the App Store app and tap on your account picture at the top right to see if any updates are available that still need to be installed. You can use Update all to do them all at once, or update apps individually from the list below if you prefer.
  • Chrome on Android. The URL chrome://version will show your current version. The three-dots menu should show an up-arrow if there is a Chrome update you don’t have yet. You will need to sign into your Google Play account to get the update.
  • Edge on your laptop. Visiting the URL edge://settings/help should show you the current version, check for any missed updates, and attempt to get you up-to-date if you weren’t already.
  • Edge on iOS. The URL edge://version will show your current version. Go to the App Store app and tap on your account picture at the top right to see if any updates are available that still need to be installed. You can use Update all to do them all at once, or update apps individually if you prefer.
  • Edge on Android. The URL edge://version will show your current version. Open the Google Play app and tap on your account blob at the top right. Go into the Manage apps & device screen to look for any pending updates. You can use Update all to do them all at once, or tap through into See details to update them individually.
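When comparing the version you’ve got against the numbers above, remember that dotted version strings compare numerically, component by component, not as plain text. Here’s a small C sketch of that comparison (it assumes well-formed, purely numeric dotted versions):

```c
#include <stdlib.h>

/* Compare two dotted version strings component by component,
 * numerically, so "112.0.5615.138" >= "112.0.5615.137" comes out
 * right. Plain string comparison happens to work for those two,
 * but fails for e.g. "112.0.10.0" vs "112.0.9.0".
 * Returns <0, 0 or >0, strcmp-style.
 * Assumes well-formed dotted numeric versions. */
int cmp_versions(const char *a, const char *b) {
    while (*a || *b) {
        long na = strtol(a, (char **)&a, 10);  /* missing part = 0 */
        long nb = strtol(b, (char **)&b, 10);
        if (na != nb) {
            return (na > nb) - (na < nb);
        }
        if (*a == '.') a++;
        if (*b == '.') b++;
    }
    return 0;
}
```

Anything that compares at-or-above the patched version number for your platform includes the fixes.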

VMware patches break-and-enter hole in logging tools: update now!

Logging software has made cyberinsecurity headlines many times before, notably in the case of the Apache Log4J bug known as Log4Shell that ruined Christmas for many sysadmins at the end of 2021.

The Log4Shell hole was a security flaw in the logging process itself, and boiled down to the fact that many logfile systems allow you to write what almost amount to “mini-programs” right in the middle of the text that you want to log, in order to make your logfiles “smarter” and easier to read.

For example, if you asked Log4J to log the text I AM DUCK, Log4J would do just that.

But if you included the special markup characters ${...}, then by choosing carefully what you inserted between the squiggly brackets, you could as good as tell the logging server, “Don’t log these actual characters; instead, treat them as a mini-program to run for me, and insert the answer that comes back.”

So by choosing just the right sort of booby-trapped data for a server to log, such as a sneakily constructed email address or a fake surname, you could maybe, just maybe, send program commands to the logger disguised as plain old text.

Because flexibility! Because convenience! But not because security!
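To see how “smart” logging blurs the line between data and code, here’s a toy C logger – emphatically not Log4J’s real code – that expands a hypothetical ${env:NAME} lookup embedded in the message it’s asked to log:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy logger sketch (not Log4J's actual behaviour): if the message
 * contains ${env:NAME}, substitute that environment variable's
 * value into the logged text. This shows how "helpful" markup turns
 * attacker-supplied data into behaviour: whoever controls the
 * message now controls which lookups the logger performs. */
void toy_log(const char *msg, char *out, size_t outlen) {
    const char *start = strstr(msg, "${env:");
    if (start == NULL) {
        snprintf(out, outlen, "%s", msg);  /* no markup: log as-is */
        return;
    }
    const char *end = strchr(start, '}');
    if (end == NULL) {
        snprintf(out, outlen, "%s", msg);  /* unterminated: log as-is */
        return;
    }
    char name[64] = {0};
    size_t n = (size_t)(end - (start + 6));  /* skip "${env:" */
    if (n >= sizeof name) n = sizeof name - 1;
    memcpy(name, start + 6, n);
    const char *val = getenv(name);
    snprintf(out, outlen, "%.*s%s%s",
             (int)(start - msg), msg,      /* text before markup */
             val ? val : "",               /* substituted value   */
             end + 1);                     /* text after markup   */
}
```

An environment-variable lookup is the mild end of the scale; Log4Shell’s ${jndi:...} lookups could reach across the network and even fetch remote code.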

This time round

This time round, the logging-related bug we’re warning you about is CVE-2023-20864, a security hole in VMware’s Aria Operations for Logs product (AOfL, which used to be known as vRealize Log Insight).

The bad news is that VMware has given this bug a CVSS “security danger” score of 9.8/10, presumably because the flaw can be abused for what’s known as remote code execution (RCE), even by network users who haven’t yet logged into (or who don’t have an account on) the AOfL system.

RCE refers to the type of security hole we described in the Log4Shell example above, and it means exactly what it says: a remote attacker can send over a chunk of what’s supposed to be plain old data, but that ends up being handled by the system as one or more programmatic commands.

Simply put, the attacker gets to run a program of their own choice, in a fashion of their own choosing, almost as though they’d phoned up a sysadmin and said, “Please log in using your own account, open a terminal window, and then run the following sequence of commands for me, without question.”

The good news in this case, as far as we can tell, is that the bug can’t be triggered simply by abusing the logging process via booby-trapped data sent to any server that just happens to keep logs (which is pretty much every server ever).

Instead, the bug is in the AOfL “log insight” service itself, so the attacker would need access to the part of your network where the AOfL services actually run.

We’re assuming that most networks where AOfL is used don’t have their AOfL services opened up to anyone and everyone on the internet, so this bug is unlikely to be directly accessible and triggerable by the world at large.

That’s less dramatic than Log4Shell, where the bug could, in theory at least, be triggered by network traffic sent to almost any server on the network that happened to make use of the Log4J logging code, including systems such as web servers that were supposed to be publicly accessible.

What to do?

  • Patch as soon as you can. Affected versions apparently include VMware Aria Operations for Logs 8.10.2, which needs to be updated to 8.12; and an older product flavour known as VMware Cloud Foundation version 4.x, which needs updating to version 4.5 first, and then upgrading to VMware Aria Operations for Logs 8.12.
  • If you can’t patch, cut down access to your AOfL services as much as you can. Even if this is slightly inconvenient to your IT operations team, it can greatly reduce the risk that a crook who already has a foothold somewhere in your network can reach and abuse your AOfL services, and thereby increase and extend their unauthorised access.
