
S3 Ep134: It’s a PRIVATE key – the hint is in the name!

“PRIVATE KEY”: THE HINT IS IN THE NAME

No audio player below? Listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Bluetooth trackers, bothersome bootkits, and how not to get a job.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I’m Doug Aamoth.

He’s Paul Ducklin…

Enjoy this titbit from Tech History.

This week, on 11 May 1979, the world got its first look at VisiCalc, or Visible Calculator, a program that automated the recalculation of spreadsheets.

The brainchild of Harvard MBA candidate Daniel Bricklin and programmer Robert Frankston, VisiCalc effectively turned the Apple II into a viable business machine, and went on to sell north of 100,000 copies in the first year.


DUCK.  Incredible, Doug.

I remember the first time I saw a computerised spreadsheet.

I wasn’t at work… I was just a kid, and from what I’d read about it, it sounded to me like just a glorified, full-screen calculator.

But when I realised that it was a calculator that could redo everything, including all these dependencies, it was, to use a perhaps more contemporary term, “Mind blown”, Doug.


DOUG.  A very important application back in the early days of computing.

Let’s stick with applications as we get into our first story.

Paul, if I’m looking for a job in application security, I think the best thing I can do is to poison a popular application supply chain.

Is that right?

PHP Packagist supply chain poisoned by hacker “looking for a job”


DUCK.  Yes, because then you could modify the JSON file that describes the package, and instead of saying, “This is a package to help you create QR codes”, for example, you can say, “Pwned by me. I am looking for a job in Application Security.”

[LAUGHTER]

And who wouldn’t rush to employ you, Doug?


DOUG.  Yes!


DUCK.  But it is, sadly, yet another reminder that the supply chain is only as strong as its weakest link.

And if you’re allowing those links to be decided, and satisfied, entirely automatically, you can easily get stitched up by something like this.

The attacker… let’s call him that.

(Was it really a hack? I suppose it was.)

They simply created new repositories on GitHub, copied legitimate projects in, and put in the “Hey, I want a job, guys” message.

Then they went to PHP Packagist and switched the links to say, “Oh, no, don’t go to the real place on GitHub. Go to the fake place.”

So it could have been a lot worse.

Because, of course, anyone doing that… if they can modify the JSON file that describes the package, then they can modify the code that’s in the package to include things like keyloggers, backdoors, data stealers, malware-installing malware, and so on.


DOUG.  OK, so it sounds like the hackiest part of this is that he guessed some usernames and passwords for some old inactive accounts, and then redirected the traffic to these packages that he’d cloned, right?


DUCK.  Correct.

He didn’t need to hack into GitHub accounts.

He just went for packages that people seem to like and use, but where the developers either haven’t needed or wanted to bother with them in a while, haven’t logged in, probably haven’t changed their password or added any kind of 2FA in the last few years.

And that is, indeed, how he got in.

And I think I know where you’re going, Doug, because that leads nicely to the kind of tips that you like.


DOUG.  Exactly!

There are several tips… you can head over to the article to read all of them, but we’ll highlight a couple of them, starting with my favourite: Don’t do this.


DUCK.  Yes, I think we’ve gone through why it is not going to get you a job.

[LAUGHTER]

This case… it might not be quite enough to land you in prison, but certainly I would say, in the US and in the UK, it would be an offence under our respective Computer Fraud and Misuse Acts, wouldn’t it?

Logging into somebody else’s account without permission, and fiddling with things.


DOUG.  And then perhaps a slightly more tangible piece of advice: Don’t blindly accept supply chain updates without reviewing them for correctness.

That’s a good one.


DUCK.  Yes.

It’s one of those things, isn’t it, like, “Hey, guys, use a password manager; turn on 2FA”?

Like we went through on Password Day… we have to say those things because they do work: they are useful; they are important.

No matter where the future is taking us, we have to live in the present.

And it’s one of those things that everybody knows… but sometimes we all just need to be reminded, in big, bold letters, like we did in the Naked Security article.


DOUG.  Alright, very good.

Our next story… I do believe the last time we talked about this, I said, and I quote, “We’ll keep an eye on this.”

And we have an update.

This is about the MSI motherboard breach; those security keys that were leaked.

What’s going on here, Paul?

Low-level motherboard security keys leaked in MSI breach, claim researchers


DUCK.  Well, you may remember this, if you’re a regular listener.

It was just over a month ago, wasn’t it, that a ransomware crew going by the street name of Money Message put, on their dark web site, a note to say, “We’ve breached Micro-Star International”, better known as MSI, the well-known motherboard manufacturer, very popular with gamers for their tweakable motherboards.

“We’ve hacked their stuff, including source code, development tools, and private keys. We will publish stolen data when timer expires,” they said.

I went back a couple of days ago, and the timer expired more than a month ago, but it still says, “We will publish stolen data when timer expires.”

So they haven’t quite got round to publishing it yet.

But researchers at a company called Binarly claimed that they actually have copies of the data; that it has been leaked.

And when they went through it, they found a whole load of private keys buried in that data.

Unfortunately, if what they found is correct, it’s quite an eclectic mix of stuff.

Apparently, there are four keys for what’s called Intel Boot Guard.

Now, those are not Intel’s keys, just to be clear: they’re OEM, or motherboard manufacturers’, keys that are used to try and lock down the motherboard at runtime against unauthorised firmware updates.

27 firmware image signing keys.

So those are the private keys that a motherboard maker might use to sign a new firmware image that they give you for download, so you can make sure it’s the right one, and really came from them.

And one key that they referred to as an Intel OEM debugging key.

Now, again, that’s not a key from Intel… it’s a key that is used for a feature that Intel provides in its motherboard control hardware that decides whether or not you are allowed to break into the system while it’s booting, with a debugger.

And, obviously, if you can get right in with a debugger at the lowest possible level, then you can do things like reading out data that’s supposed to be only ever in secure storage and fiddling with code that normally would need signing.

It is, if you like, an Access All Areas card that you have to hold up that says, “I don’t want to sign new firmware. I want to run the existing firmware, but I want to be able to freeze it; fiddle with it; snoop on memory.”

And, as Intel wryly states, almost satirically, in its own documentation for these debugging authorisation keys: “It is assumed that the motherboard manufacturer will not share their private keys with any other people.”

In short, it’s a PRIVATE key, folks… the hint is in the name.

[LAUGHTER]

Unfortunately, in this case, it seems that at least one of those leaked out, along with a bunch of other signing keys that could be used to do a little bit of an end run around the protections that are supposed to be there in your motherboard for those who want to take advantage of them.

And, as I said in the article, the only advice we can really give is: Be careful out there, folks.


DOUG.  It’s bolded!


DUCK.  It is indeed, Doug.

Try and be as careful as you can about where you get firmware updates from.

So, indeed, as we said, “Be careful out there, folks.”

And that, of course, applies to MSI motherboard customers: just be careful of where you get those updates from, which I hope you’re doing anyway.

And if you’re someone who has to look after cryptographic keys, whether you are a motherboard manufacturer or not, be careful out there because, as Intel has reminded us all, it’s a PRIVATE key.


DOUG.  Alright, great.

I’m going to say, “Let’s keep an eye on that”… I have a feeling this isn’t quite over yet.

Microsoft, in a semi-related story, is taking a cautious approach to a bootkit zero-day fix.

This was kind of interesting to see, because updates are, by-and-large, automatic, and you don’t have to really worry about it.

This one, they’re taking their time with.

Bootkit zero-day fix – is this Microsoft’s most cautious patch ever?


DUCK.  They are, Douglas.

Now, this is not as serious or as severe as a motherboard firmware update key revocation problem, because we’re talking about Secure Boot – the process that Microsoft has in place, when Secure Boot is turned on, for preventing rogue software from running out of what’s called the EFI, the Extensible Firmware Interface startup partition on your hard disk.

So, if you tell your system, “Hey, I want to blocklist this particular module, because it’s got a security bug in it”, or, “I want to retire this security key”, and then something bad happens and your computer won’t boot…

…with the Microsoft situation, the worst that can happen is you’ll go, “I know. I’ll reach for that recovery CD I made three months ago, and I’ll plug it in. Oh dear, that won’t boot!”

Because that probably contains the old code that’s now been revoked.

So, it’s not as bad as having firmware burned into the motherboard that won’t run, but it is jolly inconvenient, particularly if you’ve only got one computer, or you’re working from home.

You do the update, “Oh, I’ve installed a new bootloader; I’ve revoked permission for the old one to run. Now my computer’s got into problems three or four weeks down the line, so I’ll grab that USB stick I made a few months ago.”

You plug it in… “Oh no, I can’t do anything! Well, I know, I’ll go online and I’ll download a recovery image from Microsoft. Hopefully they’ve updated their recovery images. Oh dear, how am I going to get online, because my computer won’t boot?”

So, it’s not the end of the world: you can still recover even if it all goes horribly wrong.

But I think what Microsoft has done here is that they’ve decided to take a very softly-softly, slow-and-gentle approach, so that nobody gets into that situation…

…where they’ve done the update, but they haven’t quite got round to updating their recovery disks, their ISOs, their bootable USBs yet, and then they get into trouble.

Unfortunately, that means forcing people into a very clumsy and complicated way of doing the update.


DOUG.  OK, it’s a three-step process.

Step One is to fetch the update and install it, at which point your computer will use the new boot up code but will still accept the old exploitable code.


DUCK.  So, to be clear, you’re still essentially vulnerable.


DOUG.  Yes.


DUCK.  You’ve got the patch, but you can also be “unpatched” by someone with your worst interests at heart.

But you’re ready for Step Two.


DOUG.  Yes.

So the first part is reasonably straightforward.

Step Two, you then go and patch all your ISOs, and USB keys, and all the DVDs that you burned with your recovery images.


DUCK.  Unfortunately, I wish we could have put instructions in the Naked Security article, but you need to go to Microsoft’s official instructions, because there are 17 different ways of doing it for each sort of recovery system you want.

It’s not a trivial exercise to replenish all of those.


DOUG.  So, at this point, your computer is updated, yet will still accept the old buggy code, and your recovery devices and images are updated.

Now, Step Three: you want to revoke the buggy code, which you need to do manually.


DUCK.  Yes, there’s a bit of registry messing about, and command line stuff involved in doing that.

Now, in theory, you could just do Step One and Step Three in one go, and Microsoft could have automated that.

They could have installed the new boot up code; they could have told the system, “We don’t want the old code to run anymore”, and then said to you, “At some time (don’t leave it too long), go and do Step Two.”

But we all know what happens [LAUGHS] when there isn’t a clear and pressing need to do something like a backup, where you put it off, and you put it off, and you put it off…

So, what they’re trying to do is to get you to do these things in what is perhaps the least convenient order, but the one that is least likely to put your nose out of joint if something goes wrong with your computer three days, three weeks, three months after you’ve applied this patch.

Although that means that Microsoft has kind of made a bit of a rod for their own back, I think it’s quite a good way to do it, because people who really want to get this locked down now have a well defined way of doing it.


DOUG.  To Microsoft’s credit, they’re saying, “OK, you could do this now (it’s kind of a cumbersome process), but we are working on a much more streamlined process that we hope to get out in the July time frame. And then early next year, in 2024, if you haven’t done this, we’re going to forcibly update, automatically update all the machines that are susceptible to this.”


DUCK.  They’re saying, “At the moment we’re thinking of giving you at least six months before we say, for the greater good of all, ‘You’re getting this revocation installed permanently, come what may’.”


DOUG.  OK.

And now our final story: Apple and Google are joining forces to set standards for Bluetooth trackers.

Tracked by hidden tags? Apple and Google unite to propose safety and security standards…


DUCK.  Yes.

We’ve talked about AirTags quite a few times, haven’t we, on Naked Security and in the podcast.

Whether you love them or hate them, they seem to be pretty popular, and Apple is not the only company that makes them.

If you have an Apple phone or a Google phone, the tags can kind of “borrow” the network as a whole, if you like, with volunteers going, “Well, I saw this tag. I have no idea who it belongs to, but I’m just calling it home to the database so the genuine owner can look up and see if it’s been sighted since they lost track of it.”

Tags are very convenient… so wouldn’t it be nice if there were some standards that everybody could follow that would let us continue to make use of these admittedly very useful products, but not have them be quite the stalker’s paradise that some of the naysayers seem to claim?

It’s an interesting dilemma, isn’t it?

In one part of their life, they need to be absolutely careful about not showing up as obviously the same device all the time.

But when they move away from you (and maybe someone snuck one into your car or stuck it in your rucksack), it actually needs to make it fairly clear to you that, “Yes, I’m the same tag that *isn’t* yours, that’s been with you for the last couple of hours.”

So sometimes they have to be quite secretive, and at other times they have to be a lot more open, to implement these so-called anti-stalking protections.


DOUG.  OK, it’s important to bring up that this is just a draft, and it came out in early May.

There are six months of comment and feedback, so this could change tremendously over time, but it’s a good first start.

We have plenty of comments on the article, including this one from Wilbur, who writes:

I don’t use any Bluetooth gadgets, so I keep Bluetooth turned off on my iDevices to save battery. Plus, I don’t want to be discovered by people two tables away in a restaurant. All of these tracking prevention schemes rely on victims having active, proprietary Bluetooth devices in their possession. I consider that a major flaw. It requires people to purchase devices they may not otherwise need or want, or it forces them to operate existing devices in a way they may not desire.

What say you, Paul?


DUCK.  Well, you can’t really disagree with that.

As Wilbur goes on to say in a subsequent comment, he’s actually not terribly worried about being tracked; he’s just conscious of the fact that there is this almost crushing irony that because these products are really popular, and they rely on Bluetooth in order to know that you are being followed by one of these tags that doesn’t belong to you…

…you kind of have to opt into the system in the first place.


DOUG.  Exactly! [LAUGHS]


DUCK.  And you have to have Bluetooth on and go, “Right, I’m going to run the app.”

So Wilbur is right.

There is a sort of irony that says if you want to catch these trackers that rely on Bluetooth, you have to have a Bluetooth receiver yourself.

My response was, “Well, maybe it’s an opportunity, if you like having a bit of technical fun…”

Get a Raspberry Pi Zero ([LAUGHS] if you can actually find one for sale), and you could build your own tag-tracking device as a project.

Because, although the systems are proprietary, it is fairly clear how they work, and how you can determine that the same tracker is sticking with you.

But that would only work if the tracker follows these rules.

That’s a difficult irony, and I suppose you could argue, “Well, Pandora’s Jar has been opened.”

These tracking tags are popular; they’re not going to go away; they are quite handy; they do provide a useful service.

But if these standards didn’t exist, then they wouldn’t be trackable anyway, whether you had Bluetooth turned on or not.

So, maybe that’s the way to look at Wilbur’s comment?


DOUG.  Thank you, Wilbur, for sending that in.

And if you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH.  Stay secure.

[MUSICAL MODEM]


Bootkit zero-day fix – is this Microsoft’s most cautious patch ever?

Microsoft’s May 2023 Patch Tuesday updates comprise just the sort of mixture you probably expected.

If you go by numbers, there are 38 vulnerabilities, of which seven are considered critical: six in Windows itself, and one in SharePoint.

Apparently, three of the 38 holes are zero-days, because they’re already publicly known, and at least one of them has already been actively exploited by cybercriminals.

Unfortunately, those criminals seem to include the gang behind the notorious BlackLotus bootkit, so it’s good to see a patch delivered for this in-the-wild security hole, dubbed CVE-2023-24932: Secure Boot Security Feature Bypass Vulnerability.

However, although you’ll get the patch if you perform a full Patch Tuesday download and let the update complete…

…it won’t automatically be applied.

To activate the necessary security fixes, you’ll need to read and absorb a 500-word post entitled Guidance related to Secure Boot Manager changes associated with CVE-2023-24932.

Then, you’ll need to work through an instructional reference that runs to nearly 3000 words.

That one is called KB5025885: How to manage the Windows Boot Manager revocations for Secure Boot changes associated with CVE-2023-24932.

The trouble with revocation

If you’ve followed our recent coverage of the MSI data breach, you’ll know that it involves cryptographic keys relevant to firmware security that were allegedly stolen from motherboard giant MSI by a different gang of cyberextortionists going by the street name Money Message.

You’ll also know that commenters on the articles we’ve written about the MSI incident have asked, “Why don’t MSI immediately revoke the stolen keys, stop using them, and then push out new firmware signed with new keys?”

As we’ve explained in the context of that story, disowning compromised firmware keys to block possible rogue firmware code can very easily provoke a bad case of what’s known as “the law of unintended consequences”.

For example, you might decide that the first and most important step is to tell me not to trust anything that’s signed by key XYZ any more, because that’s the one that’s been compromised.

After all, revoking the stolen key is the fastest and surest way to make it useless to the crooks, and if you’re quick enough, you might even get the lock changed before they have a chance to try the key at all.

But you can see where this is going.

If my computer revokes the stolen key in preparation for receiving a fresh key and updated firmware, but my computer reboots (accidentally or otherwise) at the wrong moment…

…then the firmware I’ve already got will no longer be trusted, and I won’t be able to boot – not off hard disk, not off USB, not off the network, probably not at all, because I won’t get as far as the point in the firmware code where I could load anything from an external device.

An abundance of caution

In Microsoft’s CVE-2023-24932 case, the problem isn’t quite as severe as that, because the full patch doesn’t invalidate the existing firmware on the motherboard itself.

The full patch involves updating Microsoft’s bootup code in your hard disk’s startup partition, and then telling your motherboard not to trust the old, insecure bootup code any more.

In theory, if something goes wrong, you should still be able to recover from an operating system boot failure simply by starting up from a recovery disk you prepared earlier.

Except that none of your existing recovery disks will be trusted at that point, assuming that they include boot-time components that have now been revoked and thus won’t be accepted by your computer.

Again, you can still probably recover your data, if not your entire operating system installation, by using a computer that has been fully patched to create a fully-up-to-date recovery image with the new bootup code on it, assuming you have a spare computer handy to do that.

Or you could download a Microsoft installation image that’s already been updated, assuming that you have some way to fetch the download, and assuming that Microsoft has a fresh image available that matches your hardware and operating system.

(As an experiment, we just fetched [2023-05-09T23:55:00Z] the latest Windows 11 Enterprise Evaluation 64-bit ISO image, which can be used for recovery as well as installation, but it hadn’t been updated recently.)

And even if you or your IT department do have the time and the spare equipment to create recovery images retrospectively, it’s still going to be a time-consuming hassle that you could all do without, especially if you’re working from home and dozens of other people in your company have been stymied at the same time and need to be sent new recovery media.

Download, prepare, revoke

So, Microsoft has built the raw materials you need for this patch into the files you’ll get when you download your May 2023 Patch Tuesday update, but has quite deliberately decided against activating all the steps needed to apply the patch automatically.

Instead, Microsoft urges you to follow a three-step manual process like this:

  • STEP 1. Fetch the update so that all the files you need are installed on your local hard disk. Your computer will be using the new bootup code, but will still accept the old, exploitable code for the time being. Importantly, this step of the update doesn’t automatically tell your computer to revoke (i.e. no longer to trust) the old bootup code yet.
  • STEP 2. Manually patch all your bootable devices (recovery images) so they have the new bootup code on them. This means your recovery images will work correctly with your computer even after you complete step 3 below, but while you’re preparing new recovery disks, your old ones will still work, just in case. (We’re not going to give step-by-step instructions here because there are many different variants; consult Microsoft’s reference instead.)
  • STEP 3. Manually tell your computer to revoke the buggy bootup code. This step adds a cryptographic identifier (a file hash) to your motherboard’s firmware blocklist to prevent the old, buggy bootup code from being used in the future, thus preventing CVE-2023-24932 from being exploited again. By delaying this step until after step 2, you avoid the risk of getting stuck with a computer that won’t boot and can therefore no longer be used to complete step 2.

As you can see, if you perform steps 1 and 3 together straight away, but leave step 2 until later, and something goes wrong…

…none of your existing recovery images will work any more because they’ll contain bootup code that’s already been disowned and banned by your already-fully-updated computer.

If you like analogies, saving step 3 until last of all helps to prevent you from locking your keys inside the car.

Reformatting your local hard disk won’t help if you do lock yourself out, because step 3 transfers the cryptographic hashes of the revoked bootup code from temporary storage on your hard disk into a “never trust again” list that’s locked into secure storage on the motherboard itself.
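
If you’re wondering what “identifying code by hash” looks like in practice, here’s a minimal Python sketch of hashing a boot component. It’s purely illustrative: real Secure Boot revocation entries are Authenticode-style digests of the signed PE image rather than flat file hashes, and the file path below is just an example:

```python
import hashlib

def file_sha256(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# A blocklist identifies a binary by digest, not by name or location,
# so renaming or moving a revoked file doesn't make it trusted again.
# (Path is illustrative; Windows keeps a copy of its boot manager at
# this location on most installs.)
print(file_sha256(r"C:\Windows\Boot\EFI\bootmgfw.efi"))
```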

In Microsoft’s understandably more dramatic and repetitive official words:

CAUTION

Once the mitigation for this issue is enabled on a device, meaning the revocations have been applied, it cannot be reverted if you continue to use Secure Boot on that device. Even reformatting of the disk will not remove the revocations if they have already been applied.

You have been warned!

If you or your IT team are worried

Microsoft has provided a three-stage schedule for this particular update:

  • 2023-05-09 (now). The full-but-clumsy manual process described above can be used to complete the patch today. If you’re worried, you can simply install the patch (step 1 above) but do nothing else right now, which leaves your computer running the new bootup code and therefore ready to accept the revocation described above, but still able to boot with your existing recovery disks. (Note, of course, that this leaves it still exploitable, because the old bootup code can still be loaded.)
  • 2023-07-11 (two months’ time). Safer automatic deployment tools are promised. Presumably, all official Microsoft installation downloads will be patched by then, so even if something does go wrong you will have an official way to fetch a reliable recovery image. At this point, we assume you will be able to complete the patch safely and easily, without wrangling command lines or hacking the registry by hand.
  • Early in 2024 (next year). Unpatched systems will be forcibly updated, including automatically applying the cryptographic revocations that will prevent old recovery media from working on your computer, thus hopefully closing off the CVE-2023-24932 hole permanently for everyone.

By the way, if your computer doesn’t have Secure Boot turned on, then you can simply wait for the three-stage process above to be completed automatically.

After all, without Secure Boot, anyone with access to your computer could hack the bootup code anyway, given that there is no active cryptographic protection to lock down the startup process.


DO I HAVE SECURE BOOT TURNED ON?

You can find out if your computer has Secure Boot turned on by running the command MSINFO32 and checking the Secure Boot State entry under System Summary.
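
If you’d rather check programmatically, here’s a minimal Python sketch that reads the registry value into which Windows mirrors the firmware’s Secure Boot state; the key path and value name are the documented locations on Windows 8 and later, and PowerShell’s Confirm-SecureBootUEFI cmdlet (run elevated) is another option:

```python
import winreg

# Windows mirrors the firmware's Secure Boot state into this key.
STATE_KEY = r"SYSTEM\CurrentControlSet\Control\SecureBoot\State"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, STATE_KEY) as key:
        value, _ = winreg.QueryValueEx(key, "UEFISecureBootEnabled")
    print("Secure Boot is", "ON" if value == 1 else "OFF")
except FileNotFoundError:
    # No state key usually means a legacy BIOS boot, where Secure
    # Boot doesn't apply at all.
    print("No Secure Boot state recorded (legacy BIOS boot?)")
```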


Low-level motherboard security keys leaked in MSI breach, claim researchers

About a month ago, we wrote about a data breach notification issued by major motherboard manufacturer MSI.

The company said:

MSI recently suffered a cyberattack on part of its information systems. […] Currently, the affected systems have gradually resumed normal operations, with no significant impact on financial business. […] MSI urges users to obtain firmware/BIOS updates only from its official website, and not to use files from sources other than the official website.

The company’s mea culpa came two days after a cyberextortion gang going by the name Money Message claimed to have stolen MSI source code, BIOS development tools, and private keys.

At the time, the criminals were still in countdown mode, and claimed they would “publish stolen data when timer expires”:

Screenshot three hours before the breach timer expired [2023-04-07].

Clock stopped

The “reveal timer” in the screenshot above expired on 2023-04-07, just over a month ago, but the Money Message site on the dark web is otherwise unchanged since the gang’s initial posting:

One month later [2023-05-09].

Nevertheless, researchers at vulnerability research company Binarly claim not only to have got hold of the data stolen in the breach, but also to have searched through it for embedded cryptographic keys and come up with numerous hits.

So far, Binarly is claiming on GitHub and Twitter to have extracted numerous signing keys from the data in its possession, including what it describes [2023-05-09T14:00Z] as:

  • 1 Intel OEM key. Apparently, this key can be used to control firmware debugging on 11 different motherboards.
  • 27 image signing keys. Binarly claims that these keys can be used to sign firmware updates for 57 different MSI motherboards.
  • 4 Intel Boot Guard keys. These leaked keys apparently control run-time verification of firmware code for 116 different MSI motherboards.

Hardware-based BIOS protection

According to Intel’s own documentation, modern Intel-based motherboards can be protected by multiple layers of cryptographic safety.

First comes BIOS Guard, which only allows code that’s signed with a manufacturer-specified cryptographic key to get write access to the flash memory used to store the so-called Initial Boot Block, or IBB.

As the name suggests, the IBB is where the first component of the motherboard vendor’s startup code lives.

Subverting it would give an attacker control over an infected computer not only at a level below any operating system that later loads, but also below the level of any firmware utilities installed in the official EFI (Extensible Firmware Interface) disk partition, potentially even if that partition is protected by the firmware’s own Secure Boot digital signature system.

After BIOS Guard comes Boot Guard, which verifies the code that’s loaded from the IBB.

The idea here seems to be that although BIOS Guard should prevent any unofficial firmware updates from being flashed in the first place, by denying write access to rogue firmware updating tools…

…it can’t detect when firmware that was “officially” signed by the motherboard vendor shouldn’t be trusted, due to a leaked firmware image signing key.

That’s where Boot Guard steps in, providing a second level of attestation that aims to detect, at run-time during every bootup, that the system is running firmware that’s not approved for your motherboard.

Write-once key storage

To strengthen the level of cryptographic verification provided by both BIOS Guard and Boot Guard, and to tie the process to a specific motherboard or motherboard family, the cryptographic keys they use aren’t themselves stored in rewritable flash memory.

They’re saved, or blown, in the jargon, into write-once memory embedded on the motherboard itself.

The word blown derives from the fact that the storage circuitry is constructed as a series of nanoscopic “connecting wires” implemented as tiny electrical fuses.

Those connections can be left intact, which means they’ll read out as binary 1s (or 0s, depending on how they’re interpreted), or “blown” – fused in other words – in a one-shot modification that flips them permanently into binary 0s (or 1s).

Triggering the bit-burning process is itself protected by a fuse, so the motherboard vendor gets a one-time chance to set the value of these so-called Field Programmable Fuses.
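
To make that one-shot behaviour concrete, here’s a toy Python model of fuse storage; it’s our own illustrative invention, not Intel’s interface, but it captures the two key rules, namely that bits only ever go from 1 to 0, and that the programming step itself can be used exactly once:

```python
class FuseBank:
    """Toy model of write-once fuse storage.

    Bits start out intact (1) and can only ever be blown to 0; once
    the bank's own programming fuse is spent, nothing can change.
    """

    def __init__(self, nbits):
        self.bits = [1] * nbits
        self.programmed = False   # the fuse that guards programming

    def blow(self, positions):
        if self.programmed:
            raise PermissionError("programming fuse already blown")
        for p in positions:
            self.bits[p] = 0      # 1 -> 0 is the only change possible
        self.programmed = True    # the one-time chance is now spent

bank = FuseBank(8)
bank.blow([0, 3, 5])              # vendor burns in its key bits: OK
print(bank.bits)                  # [0, 1, 1, 0, 1, 0, 1, 1]

try:
    bank.blow([1])                # any later attempt is refused
except PermissionError as err:
    print("blocked:", err)
```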

That’s the good news.

Once the BIOS Guard and Boot Guard cryptographic verification keys are written to the fusible memory, they’re locked in forever, and can never be subverted.

But the corresponding bad news, of course, is that if the private keys that correspond to these safe-until-the-end-of-the-universe public keys are ever compromised, the burned-in public keys can never be updated.

Similarly, a debug-level OEM key, as mentioned above, provides a motherboard vendor with a way to take control over the firmware as it’s booting up, including watching it instruction-by-instruction, tweaking its behaviour, spying on and modifying the data it’s holding in memory, and much more.

As you can imagine, this sort of access to, and control over, the bootup process is intended to help developers get the code right in the lab, before it’s burned into motherboards that will go to customers.

Intel’s documentation lists three debugging levels.

Green denotes debug access allowed to anyone, which isn’t supposed to expose any low-level secrets or to allow the bootup process to be modified.

Orange denotes full, read-write debugging access allowed to someone who has the corresponding vendor’s private key.

Red denotes the same as orange, but refers to a master private key belonging to Intel that can unlock any vendor’s motherboard.

As Intel rather obviously, and bluntly, states in its documentation:

It is assumed that the Platform Manufacturer will not share their [Orange Mode] authentication key with any other set of debuggers.

Unfortunately, Binarly claims the crooks have now leaked an Orange Mode key that can enable low-level boot-time debugging on 11 different motherboards supplied by HP, Lenovo, Star Labs, AOPEN and CompuLab.

Beware of the bootkit

Binarly’s claims therefore seem to suggest that with a firmware signing key and a Boot Guard signing key, an attacker might not only be able to trick you and your firmware updating tools into installing what looks like a genuine firmware update in the first place…

…but also be able to trick a motherboard that’s been hardware-locked via Boot Guard protection into allowing that rogue firmware to load, even if the update patches the Initial Boot Block itself.

Likewise, being able to boot up a stolen computer in firmware debugging mode could allow an attacker to run or implant rogue code, extract secrets, or otherwise manipulate the low-level startup process to leave a victim’s computer in an untrusted, unsafe, and insecure state.

Simply put, you could, in theory at least, end up not just with a rootkit, but a bootkit.

A rootkit, in the jargon, is code that manipulates the operating system kernel in order to prevent even the operating system itself from detecting, reporting or preventing certain types of malware later on.

Some rootkits can be activated after the operating system has loaded, typically by exploiting a kernel-level vulnerability to make unauthorised internal changes to the operating system code itself.

Other rootkits sidestep the need for a kernel-level security hole by subverting part of the firmware-based startup sequence, aiming to have a security backdoor activated before the operating system starts to load, thus compromising some of the underlying code on which the operating system’s own security relies.

And a bootkit, loosely speaking, takes that approach further still, so that the low-level backdoor gets loaded as early and as undetectably as possible in the firmware bootstrap process, perhaps even before the computer examines and reads anything from the hard disk at all.

A bootkit down at that level means that even wiping or replacing your entire hard disk (including the so-called Extensible Firmware Interface System Partition, abbreviated EFI or ESP) is not enough to disinfect the system.

Typical Mac disk setup: the EFI partition is labelled accordingly.
Typical Windows 11 disk setup: partition type c12a7...ec93b denotes an EFI partition.

As an analogy, you could think of a rootkit that loads after the operating system as being a bit like trying to bribe a jury to acquit a guilty defendant in a criminal trial. (The risk of this happening is one reason why criminal juries typically have 12, 15 or more members.)

A rootkit that loads late in the firmware process is a bit like trying to bribe the prosecutor or the chief investigator to do a bad job and leave at least some evidential loopholes for the guilty parties to wriggle through.

But a bootkit is more like getting the legislature itself to repeal the very law under which the defendant is being charged, so that the case, no matter how carefully the evidence was collected and presented, can’t proceed at all.

What to do?

Boot Guard public keys, once burned into your motherboard, can’t be updated, so if their corresponding private keys are compromised, there’s nothing you can do to correct the problem.

Compromised firmware signing keys can be retired and replaced, which gives firmware downloaders and updating tools a chance of warning you in the future about firmware that was signed with a now-untrusted key, but this doesn’t actively prevent the stolen signing keys being used.

Losing signing keys is a bit like losing the physical master key to every floor and every suite in an office building.

Every time you change one of the compromised locks, you’ve reduced the usefulness of the stolen key, but unless and until you have changed every single lock, you haven’t properly solved your security problem.

But if you immediately replace every single lock in the building overnight, you’ll lock out everyone, so you won’t be able to let genuine tenants and workers keep on using their offices for a grace period during which they can swap their old keys for new ones.

Your best bet in this case, therefore, is to stick closely to MSI’s original advice:

[O]btain firmware/BIOS updates only from [MSI’s] official website, and [do not] use files from sources other than the official website.

Unfortunately, that advice probably boils down to five not entirely helpful words and an exclamation point.

Be careful out there, folks!


PHP Packagist supply chain poisoned by hacker “looking for a job”

We’ve written about PHP’s Packagist ecosystem before.

Like PyPI for Pythonistas, Gems for Ruby fans, NPM for JavaScript programmers, or LuaRocks for Luaphiles, Packagist is a repository where community contributors can publish details of PHP packages they’ve created.

This makes it easy for fellow PHP coders to get hold of library code they want to use in their own projects, and to keep that code up to date automatically if they wish.

Unlike PyPI, which provides its own servers where the actual library code is stored (or LuaRocks, which sometimes stores project source code itself and sometimes links to other repositories), Packagist links to, but doesn’t itself keep copies of, the code you need to download.

There’s an upside to doing it this way, notably that projects that are managed via well-known source code services such as GitHub don’t need to maintain two copies of their official releases, which helps avoid the problem of “version drift” between the source code control system and the packaging system.

And there’s a downside, notably that there are inevitably two different ways that packages could be booby-trapped.

The package manager itself could get hacked, where changing a single URL could be enough to misdirect users of the package.

Or the source code repository that’s linked to could get hacked, so that users who followed what looked like the right URL would end up with rogue content anyway.
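
In fact, you can see the “links, not copies” design for yourself by pulling a package’s metadata straight from Packagist. Here’s a rough Python sketch; the p2 endpoint and field names reflect our reading of Composer 2’s public metadata format, so treat those details as assumptions:

```python
import json
import urllib.request

pkg = "monolog/monolog"   # any popular PHP package will do
url = f"https://repo.packagist.org/p2/{pkg}.json"

with urllib.request.urlopen(url) as resp:
    meta = json.load(resp)

# Packagist stores no code itself: each release simply points at an
# external repository ("source") and a downloadable archive ("dist").
latest = meta["packages"][pkg][0]
print("version:", latest.get("version"))
print("source: ", latest.get("source", {}).get("url"))
print("dist:   ", latest.get("dist", {}).get("url"))
```

Redirect that source link, as the attacker did here, and downstream users quietly start fetching someone else’s code.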

Old accounts considered harmful

This attack (we’ll call it that, even though no booby-trapped code was published by the hacker concerned) used what you might call a hybrid approach.

The attacker found four old and inactive Packagist accounts for which they’d somehow acquired the login passwords.

They then identified 14 GitHub projects that were linked to by these inactive accounts and copied them to a newly-created GitHub account.

Finally, they tweaked the packages in the Packagist system to point to the new GitHub repositories.

Cloning GitHub projects is incredibly common. Sometimes, developers want to create a genuine fork (alternative version) of the project under new management, or offering different features; at other times, forked projects seem to be copied for what might unflatteringly be called “volumetric reasons”, making GitHub accounts look bigger, better, busier and more committed to the community (if you will pardon the pun) than they really are.

Although the hacker could have inserted rogue code into the cloned GitHub PHP source, such as adding trackers, keyloggers, backdoors or other malware, it seems that all they changed was a single item in each project: a file called composer.json.

This file includes an entry entitled description, which usually contains exactly what you’d expect to see: a text string describing what the source code is for.

And that’s all our hacker modified, changing the text from something informative, like Project PPP implements the QQQ protocol so you can RRR, so that their projects instead reported:

 Pwned by XXX@XXXX.com. Ищу работу на позиции Application Security, Penetration Tester, Cyber Security Specialist.

The second sentence, written half in Russian, half in English, means:

 I'm looking for a job in Application Security... etc.
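
For context, here’s roughly where that string sits in a minimal composer.json; the package name and other fields below are invented for illustration, with only the description entry standing in for the tampered text:

```json
{
    "name": "example/qr-code-helper",
    "description": "Pwned by XXX@XXXX.com. I'm looking for a job in Application Security...",
    "type": "library",
    "license": "MIT",
    "require": {
        "php": ">=7.4"
    }
}
```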

We can’t speak for everyone, but as CVs (résumés) go, we didn’t find this one terribly convincing.

Also, the Packagist team says that all unauthorised changes have now been reverted, and that the 14 cloned GitHub projects hadn’t been modified in any other way than to include the pwner’s solicitation of employment.

For what it’s worth, the would-be Application Security expert’s GitHub account is still live, and still has those “forked” projects in it.

We don’t know whether GitHub hasn’t yet got round to expunging the account or the projects, or whether the site has decided not to remove them.

After all, forking projects is commonplace and permissible (where licensing terms allow, at least), and although describing a non-malicious code project with the text Pwned by XXXX@XXXX.com is unhelpful, it’s hardly illegal.

What to do?

  • Don’t do this. You’re definitely not going to attract the interest of any legitimate employers, and (if we are honest) you’re not even going to impress any cybercrooks out there, either.
  • Don’t leave unused accounts active if you can help it. As we said yesterday on World Password Day, consider closing down accounts you don’t need any more, on the grounds that the fewer passwords you have in use, the fewer there are to get stolen.
  • Don’t re-use passwords on more than one account. Packagist’s assumption is that the passwords abused in this case were lying around in data breach records from other accounts where the victims had used the same password as on their Packagist account.
  • Don’t forget your 2FA. Packagist urges all its own users to turn 2FA on, so a password alone is not enough for an attacker to log into your account, and recommends doing the same on your GitHub account, too.
  • Don’t blindly accept supply-chain updates without reviewing them for correctness. If you have a complicated web of package dependencies, it’s tempting to toss your responsibilities aside and to let the system fetch all your updates automatically, but that just puts you and your downstream users at additional risk. (See the version-pinning sketch below.)
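
As one concrete way to act on that last tip, Composer lets you pin exact versions in composer.json instead of floating ranges; a hypothetical sketch (package names invented for illustration):

```json
{
    "require": {
        "example/qr-code-helper": "1.4.2",
        "monolog/monolog": "2.9.1"
    }
}
```

An exact constraint such as 1.4.2 means an upgrade happens only when you deliberately change it, unlike a caret range such as ^1.4, which will happily pull in any future 1.x release. Committing your composer.lock file and deploying with composer install (which obeys the lock file) extends the same discipline to your whole dependency tree.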

HERE’S THAT ADVICE FROM WORLD PASSWORD DAY


S3 Ep133: Apple takes “tight-lipped” to a whole new level

SILENT SECURITY! (IS THAT A GOOD THING?)

No audio player below? Listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Passwords, botnets, and malware on the Mac.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, how are you doing?


DUCK.  [SCEPTICAL/SQUEAKY VOICE] Malware on Macs??!?!?!!?

Surely some mistake, Doug?

[LAUGHTER]


DOUG.  What?

This must be a typo. [LAUGHS]

Alright, let’s get right to it.

Of course, our first segment of the show is always the This Week in Tech History segment.

And this week – exciting! – BASIC.

If you’ve ever used one of the many flavours of the popular programming language, you may know that it stands for Beginner’s All-purpose Symbolic Instruction Code.

The first version was released at Dartmouth College on 01 May 1964, with the goal of being easy enough for non-math and non-science majors to use, Paul.

I take it you’ve dabbled with BASIC in your life?


DUCK.  I might have done just that, Doug. [LAUGHTER]

But even more important than Dartmouth BASIC, of course, was that this was when the DTSS, the Dartmouth Time-Sharing System, went online, so that people could use Dartmouth BASIC and their ALGOL compiler.

Lots of different people on teletypes could share the system at the same time, entering their own BASIC programs, and running them in real time as they sat there.

Wow, 59 years ago, Doug!


DOUG.  A lot has changed…


DUCK.  …and a lot has stayed the same!

This could be said to be where it all began – The Cloud. [LAUGHTER]

The “New England cloud”… it really was.

The network became quite significant.

It went all the way up into Maine, all the way through New Hampshire, right down into New York, I believe, and Long Island.

Schools, and colleges, and universities, all connected together so that they could enjoy coding for themselves.

So there *is* a sense of plus ça change, plus c’est la même chose, Doug. [The more things change, the more they stay the same.]


DOUG.  Excellent.

Alright, well, we are going to talk about Google… and this sounds a little bit more nefarious than it actually is.

Google can now legally force ISPs to filter traffic, but it’s not quite as bad as it sounds.

This is botnet traffic, and it’s because there’s a botnet using a bunch of Google stuff to trick people.

Google wins court order to force ISPs to filter botnet traffic


DUCK.  Yes, I think you do have to say “hats off” to Google for doing this obviously huge exercise.

They’ve had to put together a complex, well-reasoned legal argument why they should be given the right to go to ISPs and say, “Look, you have to stop traffic coming from this IP number or from that domain.”

So it’s not just a takedown of the domain, it’s actually knocking their traffic out.

And Google’s argument was, “If it takes trademark law to get them for this, well, we want to do it because our evidence shows that more than 670,000 people in the US have been infected by this zombie malware, CryptBot”.

CryptBot essentially allows these guys to run a malware-as-a-service or a data-theft-as-a-service service…

…where they can take screenshots, riffle through your passwords, grab all your stuff.

670,000 victims in the US – and it’s not just that they’re victims themselves, so that their data can be stolen.

Their computers can be sold on to help other crooks use them in committing further crimes.

Sounds rather a lot, Doug.

Anyway, it’s not a “snooper’s charter”.

They’ve not got the right to say, “Oh, Google can now force ISPs to look at the traffic and analyse what’s going on.”

It’s just saying, “We think that we can isolate that network as an obvious, overt purveyor of badness.”

The operators seem to be located outside the US; they’re obviously not going to show up in the US to defend themselves…

…so Google asked the court to make a judgment based on its evidence.

And the court said, “Yes, as far as we can see, we think that if this did go to trial, if the defendants did show up, we think Google has a very, very strong chance of prevailing.”

So the court issued an order that says, “Let’s try and interfere with this operation.”


DOUG.  And I think the key word there is “try”.

Will something like this actually work?

Or how much heavy lifting does it take to reroute 670,000 zombie computers on to somewhere else that can’t be blocked?


DUCK.  I think that’s usually what happens, isn’t it?


DOUG.  Yes.


DUCK.  That’s what we see with cybercrime: you cut off one head, and another grows back.

But that’s not something the crooks can do instantaneously.

They have to go and find another provider who’s prepared to take the risk, knowing that they’ve now got the US Department of Justice looking at them from a distance, knowing that maybe the US has now aroused some interest, perhaps, in the Justice Department in their own country.

So I think the idea is to say to the crooks, “You can disappear from one site and come up in some other so called bulletproof hosting company, but we are watching you and we are going to make it difficult.”

And if I read correctly, Doug, the court order also allows, for this limited period, Google to almost unilaterally add new locations themselves to the blocklist.

So they’re now in this trusted position that if they see the crooks moving, and their evidence is strong enough, they can just say, “Yes, add this one, add this one, add that one.”

Whilst it might not *stop* the dissemination of the malware, it might at least give the crooks some hassle.

It might help their business to stagnate a little bit.

Like I said, it might draw some interest from law enforcement in their own country to go and have a look around.

And it might very well protect a few people who would otherwise fall for the ruse.


DOUG.  And there are some things that those of us at home can do, starting with: Stay away from sites offering unofficial downloads of popular software.


DUCK.  Indeed, Doug.

Now, I’m not saying that all unofficial downloads will contain malware.

But it’s usually possible, at least if it’s a mainstream product, say it’s a free and open-source one, to find the one true site, and go and get the thing straight from there.

Because we have seen cases in the past where even so-called legitimate downloader sites that are marketing driven can’t resist offering downloads of free software that they wrap in an installer that adds extra stuff, like adware or pop-ups that you don’t want, and so on.


DOUG.  [IRONIC] And a handy browser toolbar, of course.


DUCK.  [LAUGHS] I’d forgotten about the browser toolbars, Doug!

[MORE LAUGHTER]

Find the right place, and don’t just go to a search engine and type in the name of a product and then take the top link.

You may well end up on an imposter site… that’s *not* enough for due diligence.


DOUG.  And along those lines, taking things a step further: Never be tempted to go for a pirated or cracked program.


DUCK.  That’s the dark side of the previous tip.

It’s easy to make a case for yourself, isn’t it?

“Oh, a little old me. Just this once, I need to use super-expensive this-that-and-the-other. I just need to do it this one time and then I’ll be good afterwards, honest.”

And you think, “What harm will it do? I wasn’t going to pay them anyway.”

Don’t do it because:

(A) It is illegal.

(B) You inevitably end up consorting with exactly the kind of people behind this CryptBot scam – they’re hoping you’re desperate and therefore you’ll be much more inclined to trust them, where normally you would go, “You look like a bunch of charlatans.”

(C) And of course, lastly, there’s almost always going to be a free or an open source alternative that you could use.

It might not be as good; it might be harder to use; you might need to invest a little bit of time learning to use it.

But if you really don’t like paying for the big product because you think they’re rich enough already, don’t steal their stuff to prove a point!

Go and put your energy, and your impetus, and your visible support legally behind someone who *does* want to provide you the product for free.

That’s my feeling, Doug.


DOUG.  Yes.

Stick it to the man *legally*.

And then finally, last but not least: Consider running real-time malware blocking tools.

These are things that scan downloads and they can tell you, “Hey, this looks bad.”

But also, if you try to run something bad, at run-time they’ll say, “No!”


DUCK.  Yes.

So that rather than just saying, “Oh, well, I can scan files I’ve already got: are they good, bad or indifferent?”…

…you have a lower chance of putting yourself in harm’s way *in the first place*.

And of course it would be cheesy for me to mention that Sophos Home (https://sophos.com/home) is one way that you can do that.

Free for up to three Mac and Windows users on your account, I believe… Doug?


DOUG.  Correct.


DUCK.  And a modest fee for up to 10 users.

And the nice thing is that you can put friends and family into your account, even if they live remotely.

But I won’t mention that, because that would be overly commercial, wouldn’t it?


DOUG.  [VERBAL SMILE] Of course, so let’s not do that.

Let us talk about Apple.

This is a surprise… they surprised us all with the new Rapid Security Response initiative.

What happened here, Paul?

Apple delivers first-ever Rapid Security Response “cyberattack” patch – leaves some users confused


DUCK.  Well, Doug, I got this Rapid Security Response!

The download was a few tens of megabytes, as far as I remember; the verification a couple of seconds… and then my phone went black.

Then it rebooted and next thing I knew, I was right back where I started, and I had the update: iOS 16.4.1 (a).

(So there’s a weird new version number to go with it as well.)

The only downside I can see, Doug, is that you have no idea what it’s for.

None at all.

Not even a little bit like, “Oh, sorry, we found a zero-day in WebKit, we thought we’d better fix it”, which would be nice to know.

Just nothing!

But… small and fast.

My phone was out of service for seconds rather than tens of minutes.

Same experience on my Mac.

Instead of 35 minutes of grinding away, “Please wait, please wait, please wait,” then rebooting three or four times and “Ohhh, is it going to come back?”…

…basically, the screen went black; seconds later, I’m typing in my password and I’m running again.

So there you are, Doug.

Rapid Security Response.

But no one knows why. [LAUGHTER]


DOUG.  It’s perhaps unsurprising, but it’s still cool nonetheless that they’ve got this kind of programme in place.

So let’s stay on the Apple train and talk about how, for the low, low price of $1,000 a month, you too can get into the Mac malware game, Paul.

Mac malware-for-hire steals passwords and cryptocoins, sends “crime logs” via Telegram


DUCK.  Yes, this is certainly a good reminder that if you are still convinced that Macs don’t get malware, think again.

These are researchers at a company called Cyble, and they have, essentially, a sort-of dark web monitoring team.

If you like, they deliberately try and lie down with dogs to see what fleas they attract [LAUGHS] so that they can find things that are going on before the malware gets out… while it’s being offered for sale, for example.

And that’s exactly what they found here.

And just to make it clear: this isn’t malware that just happens to include a Mac variant.

It is absolutely targeted at helping other cybercriminals who want to target Mac fanbuoys-and-girls directly.

It is called AMOS, Doug: Atomic macOS Stealer.

It does not support Windows; it does not support Linux; it does not run in your browser. [LAUGHTER]

And the crooks are even offering, via a secret channel on Telegram, this “full service” that includes what they call a “beautifully prepared DMG” [Apple Disk Image, commonly used for delivering Mac installers].

So they recognise, I suppose, that Mac users expect software to look right, and to look good, and to install in a certain Mac-like way.

And they’ve tried to follow all those guidelines, and produce a program that is as believable as it can be, particularly since it needs to ask for your admin password so that it can do its dirtiest stuff… stealing all your keychain passwords, but it tries to do it in a way that’s believable.

But in addition to that, not only do you (as a cybercrook who wants to go after Mac users) get access to their online portal, so you don’t need to worry about collating the data yourself… Doug, they even have an app-for-that.

So, if you’ve mounted an attack and you couldn’t be bothered to wake up in the morning, actually log in to your portal, and check whether you’ve been successful, they will send you real-time messages via Telegram to tell you where your attack succeeded, and even to give you access to stolen data.

Right there in the app.

On your phone.

No need to log in, Doug.


DOUG.  [IRONIC] Well, that’s helpful.


DUCK.  As you say, it’s $1,000 a month.

Is that a lot or a little for what you get?

I don’t know… but at least we know about it now, Doug.

And, as I said, for anyone who’s got a Mac, it is a reminder that there is no magic security that immunises you from malware on a Mac.

You are much less likely to experience malware, but having *less* malware on Macs than you get on Windows is not the same as having *zero* malware and being at no risk from cybercriminals.


DOUG.  Well said!

Let’s talk about passwords.

World Password Day is coming up, and I will cut to the chase, because you have heard us, on this very programme, say, time and time again…

…use a password manager if you can; use 2FA when you can.

Those we’re calling Timeless Tips.

World Password Day: 2 + 2 = 4

But then two other tips to think about.

Number 1: Get rid of accounts you aren’t using.

I had to do this when LastPass was breached.

It’s not a fun process, but it felt very cathartic.

And now I’m down, I believe, to only the accounts I’m still actively using.


DUCK.  Yes, it was interesting to hear you talking about that.

That definitely minimises what’s called, in the jargon, your “attack surface area”.

Fewer passwords, fewer to lose.


DOUG.  And then another one to think about: Revisit your account recovery settings.


DUCK.  I thought it was worth reminding people about that, because it’s easy to forget that you may have an account that you are still using, that you do know how to log into, but that you’ve forgotten where that recovery email goes, or (if there’s an SMS code) what phone number you put in.

You haven’t needed to use it for seven-and-a-half years; you’ve forgotten all about it.

And you may have put in, say, a phone number that you’re not using anymore.

Which means that: (A) if you need to recover the account in the future, you’re not going to be able to, and (B) for all you know, that phone number could have been issued to someone else in the interim.

Exactly the same with an email account.

If you’ve got a recovery email going to an email account that you’ve lost track of… what if someone else has already got into that account?

Now, they might not realise which services you’ve tied it to, but they might just be sitting there watching it.

And the day when you *do* press [Recover my password], *they’ll* get the message and they’ll go, “Hello, that looks interesting,” and then they can go in and basically take over your account.

So those recovery details really do matter.

If those have got out of date, they are almost more important than the password you have on your account right now, because they are equal keys to your castle.


DOUG.  Alright, very good.

So this year, a Very Happy World Password Day to everyone… take some time to get your ducks in a row.

As the sun begins to set on our show, it’s time to hear from one of our readers – an interesting comment on last week’s podcast.

As a reminder, the podcast is available both in audio mode and in written form.

Paul sweats over a transcript every week, and does a great job – it’s a very readable podcast.

So, we had a reader, Forrest, write about the last podcast.

We were talking about the PaperCut hack, and that a researcher had released a proof-of-concept script [PoC] that people could use very easily…


DUCK.  [EXCITED] To become hackers instantly!


DOUG.  Exactly.


DUCK.  Let’s put not too fine a point upon it. [LAUGHTER]


DOUG.  So Forrest writes:

For the whole disgruntlement over the PaperCut PoC script. I think it’s important to also understand that PoCs allow both good and bad actors to demonstrate risk.

While it can be damaging to an organisation, demonstrating risk or witnessing someone get owned over it is what drives remediation and patching.

I can’t count the number of times I’ve seen vulnerability management teams light fires under their IT resources only after I’ve weaponised the 10-year-old CVE they have refused to patch.

Good point.

Paul, what are your thoughts on that?

PaperCut security vulnerabilities under active attack – vendor urges customers to patch


DUCK.  I get the point.

I understand what full disclosure is all about.

But I think there is quite a big difference between publishing a proof-of-concept that absolutely anybody who knows how to download a text file and save it on their desktop can use to become an instant abuser of the vulnerability, *while we know that this is a vulnerability currently being exploited by people like ransomware criminals and cryptojackers*.

There’s a difference between blurting that out while the thing is still a clear and present danger, and trying to shake up your management to fix something that is 10 years old.

I think in a balanced world, maybe this researcher could simply have explained how they did it.

They could have shown you the Java methods that they used, and reminded you of the ways that this has been exploited before.

They could have made a little video showing that their attack worked, if they wanted to go on the record as being one of the first people to come up with a PoC.

Because I recognise that that’s important: you’re proving your worth to prospective future employers who might employ you for threat hunting.

But in this case…

…I’m not against the PoC being released.

I just shared your opinion in the podcast.


DOUG.  It was more a *grunting* than *disgruntled*.


DUCK.  Yes, I transcribed that as A-A-A-A-A-R-G-H. [LAUGHS]


DOUG.  I probably would have gone with N-N-N-N-N-G-H, but, yes.


DUCK.  Transcribing is as much art as science, Doug. [LAUGHTER]

I see what our commenter is saying there, and I get the point that knowledge is power.

And I *did* find looking at that PoC useful, but it didn’t need to be released as a working Python script that *everybody* could run *anytime* they felt like it.


DOUG.  Alright, thank you very much, Forrest, for sending that in.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure!

[MUSICAL MODEM]

