S3 Ep131: Can you really have fun with FORTRAN?

LOOPING THE LOOP

No audio player below? Listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Juicejacking, public psychotherapy, and Fun with FORTRAN.

All that and more on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, how do you do today, Sir?


DUCK.  I’m very well, Douglas.

I’m intrigued by your phrase “Fun with FORTRAN”.

Now, I do know FORTRAN myself, and fun is not the first adjective that springs to mind to describe it. [LAUGHS]


DOUG.  Well, you might say, “You can’t spell ‘FORTRAN’ without ‘fun’.”

That’s not quite accurate, but…


DUCK.  It’s actually astonishingly *inaccurate*, Doug! [LAUGHS]


DOUG.  [LAUGHING] Keep that in mind, because this has to do with inaccuracies.

This week, on 19 April 1957, the first FORTRAN program ran.

FORTRAN simplified programming, though that first program, run at Westinghouse, threw an error on its first attempt – it produced a “missing comma” diagnostic.

But the second attempt was successful.

How do you like that?


DUCK.  That’s fascinating, Doug, because my own – what I always thought was ‘knowledge’, but turns out may well be an urban legend…

…my own story about FORTRAN comes from about five years after that: the launch of the Mariner 1 space probe.

Spacecraft don’t always go exactly where they’re supposed to, and they’re supposed to correct themselves.

Now, you imagine the kind of calculations involved – that was quite hard in the 1960s.

And I was told this semi-officially (meaning, “I heard it from a lecturer at university when I was studying computer science, but it wasn’t part of the syllabus”)…

…apparently, that bug was down to a line in FORTRAN that was supposed to say DO 51 I = 1,100, which is a “for loop”.

It says, “Do 100 loops, up to and including line 51.”

But the person typed DO 51 I = 1.100, with a dot, not a comma.

FORTRAN ignores spaces, so it interpreted DO51I = as a variable assignment, gave that variable the value 1.100, and then went through the would-be loop body just once… because it had never been told to loop as far as line 51, so the statements down to line 51 simply executed once.
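
(If you fancy seeing that parsing quirk for yourself, here’s a tiny illustrative sketch in Python, not real FORTRAN tooling: a toy classifier that squeezes out blanks the way classic FORTRAN compilers did, and shows how one dot turns a counted loop into an innocent-looking assignment.)

```python
import re

def classify_fortran_statement(stmt: str) -> str:
    """Toy sketch only: classic FORTRAN ignored blanks, so the compiler
    effectively saw each statement with the spaces squeezed out."""
    squeezed = stmt.replace(" ", "").upper()

    # DO <label> <var> = <start>,<end> ... the comma is what makes it a loop
    if re.fullmatch(r"DO\d+[A-Z][A-Z0-9]*=\d+,\d+", squeezed):
        return "counted DO loop"

    # Otherwise <identifier> = <number> is just an assignment, so
    # DO 51 I = 1.100 quietly becomes DO51I = 1.100
    if re.fullmatch(r"[A-Z][A-Z0-9]*=[0-9.]+", squeezed):
        return "variable assignment"

    return "something else"

print(classify_fortran_statement("DO 51 I = 1,100"))  # counted DO loop
print(classify_fortran_statement("DO 51 I = 1.100"))  # variable assignment
```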

I always assumed that that was the correction loop – it was supposed to have a hundred goes to get the spacecraft back on target, and it only had one go, and therefore it didn’t work.

[LAUGHS]

And it seems it may not actually be true… may be a bit of an urban legend.

Because there’s another story that says that actually the bug was down to a problem in the specifications, where someone wrote out the equations that needed to be coded.

And for one of the variables, they said, “Use the current value of this variable”, when in fact, you were supposed to smooth the value of that variable by averaging it over previous readings.

You can imagine why that would throw something off course if it had to do with course correction.

So I don’t know which is true, but I like the DO 51 I = 1,100 story, and I plan to keep dining out on it for as long as I can, Doug.


DOUG.  [LAUGHS] Like I said, “Fun with FORTRAN”.


DUCK.  OK, I take your point, Doug.


DUCK.  Both those stories are fun…

Something not so fun – an update to an update to an update.

I believe this is at least the third time we’ve talked about this story, but this is the psychotherapy clinic in Finland that housed all its patient data, including notes from sessions, online in the cloud under a default password, which was leveraged by evildoers.

Those evildoers tried to get some money out of the company.

And when the company said no, they went after the patients.

Ex-CEO of breached psychotherapy clinic gets prison sentence for bad data security


DUCK.  How awful must that have been, eh?

Because it wasn’t just that they had the patients’ ID numbers and financial details for how they paid for their treatment.

And it wasn’t just that they had some notes… apparently, the sessions were recorded and transcribed, and *those* were uploaded.

So they basically had everything you’d said to your therapist…

…and one wonders whether you had any idea that your words would be preserved forever.

Might have been in the small print somewhere.

Anyway, as you say, that’s what happened.

The blackmailer went after the company for, what, €450,000 (which was about half a million US dollars at the time), and they weren’t inclined to pay up.

So they thought, “Hey, why don’t I just contact all the patients? Because I’ve got all their contact details, *and* I’ve got all their deepest, darkest secrets and fears.”

The crook figured, “I can contact them and say, ‘You’ve got 24 hours to pay me €200; then I’ll give you 48 hours to pay me €500; and then I’m going to doxx you – I’m going to dump your data for everybody to see’.”

And I did read one article that suggested that when the patients didn’t come up with the money, he actually found people who’d been mentioned in their conversations.


DOUG.  Didn’t someone’s mother get roped into this, or something like that?


DUCK.  Yes!

They said, “Hey, we have conversations with your son; we’re going to dump everything that he said about you, from a private session.”

Anyway, the good news is that the victims decided they were definitely not going to take this lying down.

And loads of them did report it to the Finnish police, which gave the police the impetus to treat this as a serious case.

And the investigations have been ongoing ever since.

There’s somebody… I believe he’s still in custody in Finland; he hasn’t finished his trial yet for the extortion side.

But they also decided, “You know what, the CEO of the company that was so shabby with the data should bear some personal liability.”

He can’t just go, “Oh, it was the company; we’ll pay a fine” (which the company did, before ultimately going bankrupt).

That’s not enough – he’s supposed to be the boss of this company; he’s supposed to set the standards and determine how they operate.

So he went to trial as well.

And he’s just been found guilty and given a three-month prison sentence, albeit a suspended one.

So if he keeps his nose clean, he can stay out of prison… but he did get taken to task for this in court, and given a criminal conviction.

As light as the sentence might sound, that does sound like a good start, doesn’t it?


DOUG.  A lot of comments on this post are saying they should force him to go to jail; he should actually spend time in jail.

But one of the commenters, I think rightly, points out that this is common for first-time offenders for non-violent crimes…

…and he does now have a criminal record, so he may never work in this town again, as it were.


DUCK.  Yes, and perhaps more importantly, it will give anybody pause before allowing him the authority to make this kind of poor decision in future.

Because it seems that it wasn’t just that he allowed his IT team to do shabby work or to cut corners.

It seems that they did know they’d been breached on two occasions, I think in 2018 and 2019, and decided, “Well, if we don’t say anything, we’ll get away with it.”

And then in 2020, obviously, a crook got hold of the data and abused it in a way that left little doubt about where it had come from.

It wasn’t just, “Oh, I wonder where they got my email address and national identity number?”

You can only get your Clinic X private psychotherapy transcript from Clinic X, you would expect!


DOUG.  Yes.


DUCK.  So there’s also the aspect that if they’d come clean in 2018; if they’d disclosed the breach as they were supposed to, then…

(A) They would have done the right thing by the law.

(B) They would have done the right thing by their patients, who could have started taking precautions in advance.

And (C), they would have had some compulsion upon them to go and fix the holes instead of going, “Oh, let’s just keep quiet about it, because if we claim we didn’t know, then we don’t have to do anything and we could just carry on in the shabby way that we have already.”

It was definitely not considered an innocent mistake.

And therefore, when it comes to cybercrime and data breaches, it is possible to be both a victim and a perpetrator at the same time.


DOUG.  A good point well put!

Let’s move on.

Back in February 2023, we talked about rogue 2FA apps in the app stores, and how sometimes they just kind of linger.

And linger they have.

Paul, you’re going to be doing a live demo of how one of these popular apps works, so everyone can see… and it’s still there, right?

Beware rogue 2FA apps in App Store and Google Play – don’t get hacked!


DUCK.  It is.

Unfortunately, the podcast will come out just after the demo has been done, but this is some research that was done by a pair of independent Apple developers, Tommy Mysk and Talal Haj Bakry.

On Twitter, you can find them as @mysk_co.

They regularly look into cybersecurity stuff so that they can get cybersecurity right in their specialist coding.

They’re programmers after my own heart, because they don’t just do enough to get the job done, they do more than enough to get the job done well.

And this was around the time, if you remember, that Twitter had said, “Hey, we’re going to be discontinuing SMS-based two-factor authentication. Therefore, if you’re relying on that, you will need to go and get a 2FA app. We’ll leave it to you to find one; there are loads.”

Twitter tells users: Pay up if you want to keep using insecure 2FA

Now, if you just went to the App Store or to Google Play and typed in Authenticator App, you got so many hits, how would you know which one to choose?

And on both stores, I believe, the top ones turned out to be rogues.

In the case of the top search hit (at least on Apple’s App Store), and some of the top-ish apps on Google Play, it turns out that the app developers had decided that, in order to monitor their apps, they’d use Google Analytics to record how people use the apps – telemetry, as it’s called.

Lots of apps do this.

But these developers were either sneakily malicious, or so ignorant or careless, that in amongst the stuff they collected about how the app was behaving, they also took a copy of the two-factor authentication seed that is used to generate all the codes for that account!

Basically, they had the keys to everybody’s 2FA castles… all, apparently innocently, through program analytics.

But there it was.

They’re collecting data that absolutely should never leave the phone.

The master key to every six-digit code that comes every 30 seconds, for evermore, for every account on your phone.
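
(To make that concrete, here’s a minimal sketch of how standard TOTP codes are computed, following RFC 6238, using nothing but Python’s standard library. The Base32 seed below is a made-up example, not a real account; the point is that anyone who gets hold of your seed can compute exactly the same six-digit codes as your phone, for as long as the account exists.)

```python
import base64, hashlib, hmac, struct, time

def totp(seed_base32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: whoever holds the seed can compute the same rolling codes."""
    # Decode the shared seed (pad to a multiple of 8 Base32 characters).
    key = base64.b32decode(seed_base32.upper() + "=" * (-len(seed_base32) % 8))
    counter = int(time.time()) // step                   # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# A stolen seed really is the "master key": this prints the current code for it.
print(totp("JBSWY3DPEHPK3PXP"))   # illustrative seed only, not a real account
```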

How about that, Doug?


DOUG.  Sounds bad.

Well, we will be looking forward to the presentation.

We will dig up the recording, and get it out to people on next week’s podcast… I’m excited!

Alright, moving right along to our final topic, we’re talking about juicejacking.

It’s been a while… more than ten years since we first heard this term.

And I have to admit, Paul, when I started reading this, I began to roll my eyes, and then I stopped, because, “Why are the FBI and the FCC issuing a warning about juicejacking? This must be something big.”

But their advice is not making a whole lot of sense.

Something must be going on, but it doesn’t seem that big a deal at the same time.

FBI and FCC warn about “Juicejacking” – but just how useful is their advice?


DUCK.  I think I’d agree with that, Doug, and that’s why I was minded to write this up.

The FCC… for those who aren’t in the United States, that’s the Federal Communications Commission, so when it comes to things like mobile networks, you’d think they know their oats.

And the FBI, of course, are essentially the federal police.

So, as you say, this became a massive story.

It got traction all over the world.

It was certainly repeated in many media outlets in the UK: [DRAMATIC VOICE] “Beware charging stations at airports.”

As you say, it did seem like a little bit of a blast from the past.

I wasn’t aware why it would be a clear and present “massive consumer-level danger” right now.

I think the term was coined back in 2011 to describe the idea that a rogue charging station might not just provide power.

It might have a hidden computer at the other end of the cable, or at the other side of the socket, that tried to mount your phone as a device (for example, as a media device), and suck files off it without you realising, all under the guise of just providing you with 5 volts DC.

And it does seem as though this was just a warning, because sometimes it pays to repeat old warnings.

My own tests suggested that the mitigation Apple put in place right back in 2011, when juicejacking was first demonstrated at the Black Hat conference, still works.

When you plug in a device for the first time, you’re offered the choice Trust/Don't Trust.

So there are two things here.

Firstly, you do have to intervene.

And secondly, if your phone’s locked, somebody can’t get at the Trust/Don't Trust button secretly by just reaching over and tapping the button for you.

On Android, I found something similar.

When you plug in a device, it starts charging, but you have to go into the Settings menu, enter the USB connection section, and switch from No Data mode into either “share my pictures” or “share all my files” mode.

There is a slight warning for iPhone users when you plug it into a Mac.

If you do hit Trust by mistake, you do have the problem that in future, when you plug it in, even if the phone is locked, your Mac will interact with your phone behind your back, so it doesn’t require you to unlock the phone.

And the flip side to that, that I think listeners should be aware of is, on an iPhone, and I consider this a bug (others might just say, “Oh no, that’s an opinion. It’s subjective. Bugs can only be objective errors”)…

…there is no way to review the list of devices you have trusted before, and delete individual devices from the list.

Somehow, Apple expects you to remember all the devices you’ve trusted, and if you want to distrust *one* of them, you have to go in and basically reset the privacy settings on your phone and distrust *all* of them.

And, also, that option is buried, Doug, and I’ll read it out here because you probably won’t find it by yourself. [LAUGHS]

It’s under Settings > General > Transfer or Reset iPhone > Reset Location and Privacy.

And the heading says “Prepare for New iPhone”.

So the implication is you’ll only ever need to use this when you’re moving from one iPhone to the next.

But it does seem, indeed, as you said at the outset, Doug, with juicejacking, that there is a possibility that someone has a zero-day that means plugging into an untrusted or unknown computer could put you at risk.


DOUG.  I’m trying to imagine what it would entail to usurp one of these machines.

It’s this big, garbage-can-sized machine; you’d have to crack into the housing.

This isn’t like an ATM skimmer where you can just fit something over.

I don’t know what’s going on here that we’re getting this warning, but it seems like it would be so hard to actually get something like this to work.

But, that being said, we do have some advice: Avoid unknown charging connectors or cables if you can.

That’s a good one.


DUCK.  Even a charging station that was set up in perfectly good faith might not have the decency of voltage regulation that you would like.

And, as a flip side to that, I would suggest that if you are on the road and you realize, “Oh, I suddenly need a charger, I don’t have my own charger with me”, be very wary of pound-shop or dollar-shop super-cheap chargers.

If you want to know why, go to YouTube and search for a fellow called Big Clive.

He buys cheap electronic devices like this, takes them apart, analyses the circuitry and makes a video.

He’s got a fantastic video about a knockoff Apple charger

…[a counterfeit] that looks like an Apple USB charger, that he bought for £1 in a pound-shop in Scotland.

And when he takes it apart, be prepared to be shocked.

He also prints out the manufacturer’s circuit diagram, puts it under his camera, and actually goes through it with a Sharpie.

“There’s a fuse resistor; they didn’t include that; they left that out [crosses out missing component].”

“Here’s a protective circuit; they left out all those components [crosses more out].”

And eventually he’s down to about half the components that the manufacturer claimed were in the device.

There’s a point where there’s a gap between the mains voltage (which in the UK would be 230 volts AC at 50 Hz) and a trace on the circuit board that would be at the delivery voltage (which for USB is 5 volts)…

…and that gap, Doug, is probably a fraction of a millimetre.

How about that?

So, yes, avoid unknown connectors.


DOUG.  Great advice.


DUCK.  Carry your own connectors!


DOUG.  This is a good one, especially if you’re on the go and you need to charge quickly, aside from the security implications: Lock or turn off your phone before connecting it to a charger or computer.

If you turn off your phone, it’ll charge much faster, so that’s something right there!


DUCK.  It also ensures that if your phone does get stolen… which you could argue is a bit more likely at one of these multi-user charging stations, isn’t it?


DOUG.  Yes!


DUCK.  It also means that if you do plug it in and a Trust prompt does pop up, it’s not just sitting there for someone else to go, “Ha, that looks like fun,”and clicking the button you did not expect.


DOUG.  Alright, and then we’ve got: Consider untrusting all devices on your iPhone before risking an unknown computer or charger.

That’s the setting you just walked through earlier under Settings > General > Transfer or Reset iPhone…


DUCK.  Walked *down* into; way down into the pit of darkness. [LAUGHS]

You don’t *need* to do that (and it’s a bit of a pain), but it does mean that you aren’t risking compounding a trust error that you may have made before.

Some people might consider that overkill, but it’s not, “You must do this”, merely a good idea, because it gets you back to square one.


DOUG.  And last but not least: Consider acquiring a power-only USB cable or adapter socket.

Those are available, and they just charge, they don’t transfer data.


DUCK.  Yes, I’m not sure whether such a cable is available in the USB-C format, but it’s easy to get them in USB-A.

You can actually peer into the socket, and if it’s missing the two middle connectors… I put a picture in the article on Naked Security of a bike light I have that only has the outer connectors.

If you can only see power connectors, then there’s no way for data to be transferred.


DOUG.  Alright, very good.

And let us hear from one of our readers… something of a counterpoint on the juicejacking piece.

Naked Security Reader NotConcerned writes, in part:

This article comes off a bit naive. Of course, juicejacking isn’t some widespread problem, but to discount any warning based on a very basic test of connecting phones to a Windows and Mac PC and getting a prompt is kind of silly. That doesn’t prove there aren’t methods with zero clicks or taps needed.

What say you, Paul?


DUCK.  [SLIGHT SIGH] I get the point.

There could be an 0-day that means when you plug it in at a charging station, there might be a way for some models of phone, some versions of operating system, some configurations… where it could somehow magically bypass the Trust prompt or automatically set your Android into PTP mode or File Transfer mode instead of No Data mode.

It’s not impossible.

But if you’re going to include probably esoteric million-dollar zero-days in the list of things that organisations like the FCC and the FBI make blanket warnings about, then they should be warning, day after day after day: “Don’t use your phone; don’t use your browser; don’t use your laptop; don’t use your Wi-Fi; don’t press anything at all”, in my opinion.

So I think what worries me about this warning is not that you should ignore it.

(I think that the detail that we put in the article and the tips that we just went through suggest that we do take it more than seriously enough – we’ve got some decent advice in there that you can follow if you want.)

What worries me about this kind of warning is that it was presented as such a clear and present danger, and picked up all around the world so that it sort-of implies to people, “Oh, well, that means that when I’m on the road, all I need to do is don’t plug my phone into funny places and I’ll be OK.”

Whereas, in fact, there are probably 99 other things that would give you a lot more safety and security if you were to do those.

And you’re probably not at a significant risk, if you are short of juice, and you really *do* need to recharge your phone because you think, “What if I can’t make an emergency call?”


DOUG.  Alright, excellent.

Well, thank you, NotConcerned, for writing that in.


DUCK.  [DEADPAN] I presume that name was an irony?


DOUG.  [LAUGHS] I think so.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH.  Stay secure!

[MUSICAL MODEM]


Featured image of punched computer card by Arnold Reinhold via Wikipedia under CC BY-SA 2.5


Ex-CEO of breached psychotherapy clinic gets prison sentence for bad data security

We’ve said this before, but we’ll say it again here:

Imagine that you’d spoken in what you thought was total confidence to a psychotherapist, but the contents of your sessions had been saved for posterity, along with precise personal identification details such as your unique national ID number, and perhaps including additional information such as notes about your relationship with your family…

…and then, as if that were not bad enough, imagine that the words you’d never expected to be typed in and saved at all, let alone indefinitely, had been made accessible over the internet, allegedly “protected” by little more than a default password giving anyone access to everything.

That’s what happened to tens of thousands of trusting patients of the now-bankrupt Psychotherapy Centre Vastaamo in Finland.

Crooks found the insecure data

Ultimately, at least one cybercriminal found his way into the ill-protected buckets of information.

After stealing the data, he decided to blackmail the clinic for €450,000 (then about $0.5M); when that didn’t work he stooped lower still and tried blackmailing the patients for €200 each, with a warning that the “fee” would increase to €500 after 24 hours.

Patients who didn’t pay up after a further 48 hours, the blackmailer said, would be doxxed, a jargon term meaning to have your personal data exposed publicly on purpose.

The extortionist apparently threatened not only to leak the sort of information that could cost the victims money due to identity theft, such as contact details and IDs, but also to spill those saved transcripts of their intimate conversations with therapists at the clinic.

Although a suspect in the blackmail part of this case was arrested in France in February 2022, following the issuing of an international arrest warrant, that wasn’t the only interest taken by Finnish law enforcement.

Victim as perpetrator

Even though the clinic was itself the victim of an odious cybercrime, the ex-CEO of the clinic, Ville Tapio, faced criminal charges, too.

As well as failing to take the sort of data security precautions that any medical patient would reasonably assume were in place, and that the law would expect…

…it seems that Tapio knew about his company’s sloppy cybersecurity for up to two years before the blackmail took place in 2020.

Worse still, he allegedly knew about the problems because the clinic suffered breaches in 2018 and 2019, and failed to report them, presumably hoping that no traceable cybercrimes would arise as a result, and thus that the company would therefore never get caught out.

But modern breach disclosure and data protection regulations, such as the GDPR in Europe, make it clear that data breaches can’t simply be “swept under the carpet” any more, and must be promptly disclosed for the greater good of all.

Well, news from Finland is that Tapio has now been convicted and given a prison sentence, reminding business leaders that merely promising to look after other people’s personal data is not enough.

Paying lip service alone to cybersecurity is insufficient, to the point that you can end up being treated as both a cybercrime victim and a perpetrator at the same time.

Have your say

Tapio received a three-month prison sentence, but the sentence was suspended, so he isn’t heading directly to jail.

Did he get off lightly, particularly considering the sensitivity of the data that his company’s patients thought they could trust him with?

Have your say in the comments below…


FBI and FCC warn about “Juicejacking” – but just how useful is their advice?

If you’d never heard the cybersecurity jargon word “juicejacking” until the last few days (or, indeed, if you’d never heard it at all until you opened this article), don’t get into a panic about it.

You’re not out of touch.

Here at Naked Security, we knew what it meant, not so much because it’s a clear and present danger, but because we remembered the word from a while ago… close to 12 years ago, in fact, when we first wrote up a series of tips about it:

Back in 2011, the term was (as far as we can tell) brand new, written variously as juice jacking, juice-jacking, and, correctly, in our opinion, simply as juicejacking, and was coined to describe a cyberattack technique that had just been demonstrated at the Black Hat 2011 conference in Las Vegas.

Juicejacking explained

The idea is simple: people on the road, especially at airports, where their own phone charger is either squashed away deep in their carry-on luggage and too troublesome to extract, or packed into the cargo hold of a plane where it can’t be accessed, often get struck by charge anxiety.

Phone charge anxiety, which first became a thing in the 1990s and 2000s, is the equivalent of electric vehicle range anxiety today, where you can’t resist squeezing in just a bit more juice right now, even if you’ve only got a few minutes to spare, in case you hit a snag later on in your journey.

But phones charge over USB cables, which are specifically designed so they can carry both power and data.

So, if you plug your phone into a USB outlet that’s provided by someone else, how can you be sure that it’s only providing charging power, and not secretly trying to negotiate a data connection with your device at the same time?

What if there’s a computer at the other end that’s not only supplying 5 volts DC, but also sneakily trying to interact with your phone behind your back?

The simple answer is that you can’t be sure, especially if it’s 2011, and you’re at the Black Hat conference attending a talk entitled Mactans: Injecting malware into iOS devices via malicious chargers.

The word Mactans was meant to be a BWAIN, or Bug With An Impressive Name (it’s derived from Latrodectus mactans, the small but toxic black widow spider), but “juicejacking” was the nickname that stuck.

Interestingly, Apple responded to the juicejacking demo with a simple but effective change in iOS, which is pretty close to how iOS reacts today when it’s hooked up over USB to an as-yet-unknown device:

“Trust-or-not” popup introduced in iOS 7, following a public demo of juicejacking.

Android, too, doesn’t allow previously unseen computers to exchange files with your phone until you have tapped in your approval on your own phone, after unlocking it.

Is juicejacking still a thing?

In theory, then, you can’t easily get juicejacked any more, because both Apple and Google have adopted defaults that take the element of surprise out of the equation.

You could get tricked, or suckered, or cajoled, or whatever, into agreeing to trust a device you later wish you hadn’t…

…but, in theory at least, data grabbing can’t happen behind your back without you first seeing a visible request, and then replying to it yourself by tapping a button or choosing a menu option to enable it.

We were therefore a bit surprised to see both the US FCC (the Federal Communications Commission) and the FBI (the Federal Bureau of Investigation) publicly warning people in the last few days about the risks of juicejacking.

In the words of the FCC:

If your battery is running low, be aware that juicing up your electronic device at free USB port charging stations, such as those found in airports and hotel lobbies, might have unfortunate consequences. You could become a victim of “juice jacking,” yet another cyber-theft tactic.

Cybersecurity experts warn that bad actors can load malware onto public USB charging stations to maliciously access electronic devices while they are being charged. Malware installed through a corrupted USB port can lock a device or export personal data and passwords directly to the perpetrator. Criminals can then use that information to access online accounts or sell it to other bad actors.

And according to the FBI in Denver, Colorado:

Bad actors have figured out ways to use public USB ports to introduce malware and monitoring software onto devices.

How safe is the power supply?

Make no mistake, we’d advise you to use your own charger whenever you can, and not to rely on unknown USB connectors or cables, not least because you have no idea how safe or reliable the voltage converter in the charging circuit might be.

You don’t know whether you are going to get a well-regulated 5V DC, or a voltage spike that harms your device.

A destructive voltage could arrive by accident, for example due to a cheap-and-cheerful, non-safety-compliant charging circuit that saved a few cents on manufacturing costs by illegally failing to follow proper standards for keeping the mains parts and the low-voltage parts of the circuitry apart.

Or a rogue voltage spike could arrive on purpose: long-term Naked Security readers will remember a device that looked like a USB storage stick but was dubbed the USB Killer, which we wrote about back in 2017:

By using the modest USB voltage and current to charge a bank of capacitors hidden inside the device, it quickly reached the point at which it could release a 240V spike back into your laptop or phone, probably frying it (and perhaps giving you a nasty shock if you were holding or touching it at the time).

How safe is your data?

But what about the risks of getting your data slurped surreptitiously by a charger that also acted as a host computer and tried to take over control of your device without permission?

Do the security improvements introduced in the wake of the Mactans juicejacking tool back in 2011 still hold up?

We think they do, based on plugging an iPhone (iOS 16) and a Google Pixel (Android 13) into a Mac (macOS 13 Ventura) and a Windows 11 laptop (2022H2 build).

Firstly, neither phone would connect automatically to macOS or Windows when plugged in for the first time, whether locked or unlocked.

When plugging the iPhone into Windows 11, we were asked to approve the connection every time before we could view content via the laptop, which required the phone to be unlocked to get at the approval popup:

Popup whenever we plugged the iPhone into a Windows 11 laptop.

Plugging the iPhone into our Mac for the first time required us to agree to trust the computer at the other end, which obviously required unlocking the phone (though once we’d agreed to trust the Mac, the phone would immediately show up in the Mac’s Finder app when connected in future, even if it was locked at the time):

Modern “trust” popup when our Mac first met our iPhone.

Our Google phone needed to be told to switch its USB connection out of No data mode every time we plugged it in, which meant opening the Settings app, which required the device to be unlocked first:

Google Android phone after connection to Windows 11 or macOS 13.

The host computers could see that the phones were connected whenever they were plugged in, thus giving them access to the name of the device and various hardware identifiers, which is a small amount of data leakage you should be aware of, but the data on the phone itself was apparently off limits.

Our Google phone behaved the same way when plugged in for the second, third or subsequent time, identifying that there was a device connected, but automatically setting it into No data mode as shown above, making your files invisible by default both to macOS and to Windows.

Untrusting computers on your iPhone

By the way, one annoying misfeature of iOS (we consider it a bug, but that is an opinion rather than a fact) is there is no menu in the iOS Settings app where you can view a list of computers you’ve previously trusted, and revoke trust for individual devices.

You’re expected to remember which computers you’ve trusted, and you can only revoke that trust in an all-or-nothing way.

To untrust any individual computer, you have to untrust them all, via the not-in-any-way-obvious and deeply nested Settings > General > Transfer or Reset iPhone > Reset Location & Privacy screen, under a misleading heading that suggests these options are only useful when you buy a new iPhone:

Hard-to-find iOS option for untrusting computers you’ve connected to before.

What to do?

  • Avoid unknown charging connectors or cables if you can. Even a charging station set up in good faith might not have the electrical quality and voltage regulation you would like. Avoid cheap mains chargers, too, if you can. Bring a brand you trust along with you, or charge from your own laptop.
  • Lock or turn off your phone before connecting it to a charger or computer. This minimises the risk of accidentally opening up files to a rogue charging station, and ensures that the device is locked if it gets grabbed and stolen at a multi-user charging unit.
  • Consider untrusting all devices on your iPhone before risking an unknown computer or charger. This ensures there are no forgotten trusted devices you may have set up by mistake on a previous trip.
  • Consider acquiring a power-only USB cable or adapter socket. “Dataless” USB-A plugs are easy to spot because they have only two metallic electrical connectors in their housing, at the outer edges of the socket, rather than four connectors across the width. Note that the inner connectors aren’t always immediately obvious because they don’t come right to the edge of the socket – that’s so the power connectors make contact first.

Power-only bicycle light USB-A connector with outside metallic connectors only.
The pink rectangles indicate roughly where the data connectors would be.

S3 Ep130: Open the garage bay doors, HAL [Audio + Text]

I’M SORRY, DAVE, I’M AFRAID… SORRY, MY MISTAKE, I CAN DO THAT EASILY

No audio player below? Listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG  Patches aplenty, connected garage doors, and motherboard malfeasance.

All that and more on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, how do you do?


DUCK  I am still trying to make sense of when you said “connected garage doors”, Doug.

Because this is connectivity on a whole new scale!


DOUG  Oh, yes!

What could possibly go wrong?

We’ll get into that…

We like to start the show with the This Week in Tech History segment.

We have many options… today we will spin the wheel.

What happened this week?

The first man in space, Yuri Gagarin, in 1961; Ronald Wayne leaves Apple and sells his stock for $800 in 1976 – probably a bit of regret there; the germination of COBOL in 1959; the first Space Shuttle launch in 1981; the Apollo 13 rescue mission in 1970; Metallica sues Napster in 2000; and the first West Coast Computer Faire in 1977.

Let’s go ahead and spin the wheel here, and see where we land.

[FX: WHEEL OF FORTUNE]


DUCK  [CHEERING THE WHEEL] COBOL, COBOL, COBOL!


[FX: WHEEL SLOWS AND STOPS]

DOUG  And we got COBOL!

Congratulations, Paul – good job.

This week, in 1959, there was a meeting, and at the meeting were some very important and influential computing pioneers who discussed the creation of a common, business-friendly programming language.

The one-and-only Grace Hopper suggested that the US Department of Defense fund such a language.

And, luckily enough, a DOD computing director was at the same meeting, liked the idea, and agreed to fund it.

And with that, COBOL was born, Paul.


DUCK  Yes!

COBOL: COmmon Business-Oriented Language.

And it came out of a thing called CODASYL.

[LAUGHS] That’s the acronym to begin/end all acronyms: The Conference/Committee on Data Systems Languages.

But it was an intriguing idea that, of course, has come full circle several times, not least with JavaScript in the browser.

A language like FORTRAN (FORmula TRANslation) was very popular for scientific computing at the time.

But every company, every compiler, every little group of programmers had their own version of FORTRAN, which was better than everybody else’s.

And the idea of COBOL was, “Wouldn’t it be nice if you could write the code, and then you could take it to any compliant compiler on any system, and the code would, within the limits of the system, behave the same?”

So it was a way of providing a common, business-oriented language… exactly as the name suggests.


DOUG  Exactly!

Well-named!

Alright, we’ve come a long way (good job, everybody), including up to the most recent Patch Tuesday.

We’ve got a zero-day; we’ve got two curious bugs; and we’ve got about 90-some other bugs.

But let’s get to the good stuff, Paul…

Patch Tuesday: Microsoft fixes a zero-day, and two curious bugs that take the Secure out of Secure Boot


DUCK  Yes, let’s just knock the zero-day on the head first: that’s CVE-2023-28252, if you want to track that one down.

Because that’s one that crooks obviously already know how to exploit.

It’s a bug in a part of Windows that we’ve seen bugs in before, namely the Common Log File System driver.

And that’s a system driver that allows any service or app on your device to do system logging in (supposedly) a controlled, secure way.

You write your logs… they don’t get lost; not everyone invents their own way of doing it; they get properly timestamped; they get recorded, even if there’s heavy load; etc.

Unfortunately, the driver that processes these logs… it’s basically doing its stuff under the SYSTEM account.

So if there’s a bug in it, and you log something in a way that’s not supposed to happen, what you usually end up with is what’s called an Elevation of Privilege, or EoP.

And somebody who a moment ago might have just been a GUEST user suddenly is running under the SYSTEM account, which basically gives them as-good-as total control over the system.

They can load and unload other drivers; they can access pretty much all the files; they can spy on other programs; they can start and stop processes; and so on.

That’s the 0-day.

It only got rated Important by Microsoft… I presume because it’s not remote code execution, so it can’t be used by a crook to hack into your system in the first place.

But once they’re in, this bug could, in theory (and in practice, given that it’s a 0-day), be used to get what are effectively superpowers on your computer.


DOUG  And then, if you take the Secure out of Secure Boot, what does it become, Paul?

Just…


DUCK  “Boot”, I suppose?

Yes, these are two bugs that just intrigued me enough to want to focus on them in the article on Naked Security. (If you want to know everything about all the patches, go to news.sophos.com and read the SophosLabs report on these bugs.)

I won’t read out the numbers, they’re in the article… they both are headlined with the following words: Windows Boot Manager Security Feature Bypass Vulnerability.

And I’ll read out how Microsoft describes it:

An attacker who successfully exploited these vulnerabilities could bypass Secure Boot to run unauthorised code.

To be successful, the attacker would need either physical access or administrator privileges…

…which I imagine they might be able to get through the bug we spoke about at the start. [LAUGHS]


DOUG  Exactly, I was just thinking that!


DUCK  But the thing about, “Hey, guys, don’t worry, they’d need physical access to your computer” is, in my opinion, a little bit of a red herring, Doug.

Because the whole idea of Secure Boot is it’s meant to protect you even against people who do get physical access to your computer, because it stops things like the so-called “evil cleaner” attack…

…which is where you’ve just left your laptop in your hotel room for 20 minutes while you nip down to breakfast.

Cleaners come into hotel rooms every day; they’re supposed to be there.

Your laptop’s there; it’s closed; you think, “They don’t know the password, so they can’t log in.”

But what if they could just pop the lid open, stick in a USB key, and power it up while they complete the cleaning of your room…

…so they don’t need to spend any time actually doing the hacking, because that’s all automated.

Close the laptop; remove the USB key.

What if they’ve implanted some malware?

That’s what’s known in the jargon as a bootkit.

Not a rootkit, even lower than that: a BOOT kit.

Something that actually influences your computer between the time that the firmware is run and Windows itself actually starts.

In other words, it completely subverts the underpinnings on which Windows itself bases the security that’s coming next.

For example, what if it had logged your BitLocker keystrokes, so it now knew the password to unlock your whole computer for next time?

And the whole idea of Secure Boot is it says, “Well, anything that isn’t digitally signed by a key that’s been preloaded into your computer (into what’s called the Trusted Platform Module), any code that somebody introduces, whether they’re an evil cleaner or a well-intentioned IT manager, simply won’t run.”

Although Microsoft only rates these bugs Important because they’re not your traditional remote code execution exploits, if I were a daily-driver Windows user, I think I’d patch, if only for those alone.


DOUG  So, get patched up now!

You can read about these specific items on Naked Security, and a broader article on Sophos News that details the 97 CVEs in total that have been patched.

And let’s stay on the patch train, and talk about Apple, including some zero-days, Paul.

Apple issues emergency patches for spyware-style 0-day exploits – update now!


DUCK  These were indeed zero-days that were the only things patched in this particular update released by Apple.

As ever, Apple doesn’t say in advance what it’s going to do, and it doesn’t give you any warning, and it doesn’t say who’s going to get what when…

…just at the beginning of the Easter weekend, we got these patches that covered a WebKit zero-day.

So, in other words, merely looking at a booby-trapped website could get remote code execution, *and* there was a bug in the kernel that meant that once you had pwned an app, you could then pwn the kernel and essentially take over the whole device.

Which basically smells of, “Hey, browse to my lovely website. Oh, dear. Now I’ve got spyware all over your phone. And I haven’t just taken over your browser, I’ve taken over everything.”

And in true Apple fashion… at first, there were updates against both of those bugs for macOS 13 Ventura (the latest version of macOS), and for iOS and iPadOS 16.

There were partial fixes – there were WebKit fixes – for the two older versions of macOS, but no patches for the kernel-level vulnerability.

And there was nothing at all for iOS and iPadOS 15.

Does this mean that the older versions of macOS don’t have the kernel bug?

That they do have the kernel bug, but they just haven’t been patched yet?

Is iOS 15 immune, or is it needing a patch but they’re just not saying?

And then, lo and behold, in the aftermath of the Easter weekend, [LAUGHS] suddenly three more updates came out that filled in all the missing pieces.

Apple zero-day spyware patches extended to cover older Macs, iPhones and iPads

It indeed turned out that all supported iOSes and iPadOSes (which is versions 15 and 16), and all supported macOSes (that is versions 11, 12 and 13) contained both of these bugs.

And now they all have patches against both of them.

Given that this bug was apparently found by a combination of the Amnesty International Security Lab and the Google Threat Response Team…

…well, you can probably guess that it has been used for spyware in real life.

Therefore, even if you don’t think that you’re the kind of person who’s likely to be at risk of that kind of attacker, what it means is that these bugs not only exist, they clearly seem to work pretty well in the wild.

So if you haven’t done an update check on your Mac or your iDevice lately, please do so.

Just in case you missed out.


DOUG  OK!

As we know, connected garage door companies code these garage doors with cybersecurity in mind.

So it’s shocking that something like this has happened, Paul…

Hack and enter! The “secure” garage doors that anyone can open from anywhere – what you need to know


DUCK  Yes.

In this case, Doug (and I feel we’d better say the brand name: it’s Nexx), they seem to have introduced a special form of cybersecurity.

Zero-factor authentication, Doug!

That’s where you take something that is not intended to be made public (unlike an email address or a Twitter handle, where you want people to know it), but that is not actually a secret.

So, an example might be the MAC address of your wireless card.

In this case, they’d given each of their devices a presumably unique device ID…

…and if you knew what any device’s ID was, that counted as basically username, password and login code all in one go.


DOUG  [GROAN] That’s convenient…


DUCK  Even more convenient, Doug: there’s a hard coded password in the firmware of every device.


DOUG  Oh, there we go! [LAUGHS]


DUCK  [LAUGHS] Once someone knows what that magic password is, it allows them to log into the cloud messaging system that these devices use around the globe.

What the researcher who did this found, because he had one of these devices…

…he found that while he was watching for his own traffic, which he would maybe expect to see, he got everyone else’s as well, including their device IDs.


DOUG  [BIGGER GROAN] Oh, my goodness!


DUCK  Just in case the device ID wasn’t enough, they also happen to include your email address, your initial, and your family name in the JSON data as well.

Just in case you didn’t already know how to stalk the person back to where they lived.

So, you could either go round to their house and open their garage and then steal their stuff. (Oh, by the way, this also seems to apply to their home alarm systems as well, so you could turn off the alarm before you opened the garage door.)

Or, if you were of sufficiently evil intent, you could just randomly open people’s garage doors wherever they lived, because apparently that’s terribly amusing, Doug.


DOUG  [IRONIC] The least that this researcher could have done would have been to alert the company, say, three-plus months ago, and give them time to fix this.


DUCK  Yes, that is about the least he could have done.

Which is exactly what he did do.

And that’s eventually why, several months later (I think it was in January he first contacted them, and he just couldn’t get them moving on this)…

…eventually he said, “I’m just going to go public with this.”

To back him up, the US CISA [Cybersecurity and Infrastructure Security Agency] actually put out a sort of APB on this saying, “By the way, just so you know, this company isn’t being responsive, and we don’t really know what to advise you.”

Well, my advice was… consider using good old-fashioned physical keys; don’t use the app.

To be fair, although the researcher described the nature of the bugs, as I have described them to you here, he didn’t actually put out a proof-of-concept.

It wasn’t like he made it super-easy for everybody.

But I think he felt that he almost had a duty of care to people who had this product, so they’d know that maybe they, too, needed to lean on the vendor.


DOUG  Alright, this is a classic “we’ll keep an eye on that” type of story.

And a great reminder at the end of the article… you write, as the old joke puts it, “The S in IoT stands for Security”, which is very much the case.


DUCK  Yes, it is time that we put the S in IoT, isn’t it?

I don’t know how many times we’re going to be telling stories like this about IoT devices… every time we do it, we hope it’s the last time, don’t we?

Hard coded passwords.

Replay attacks being possible, because there’s no cryptographic uniqueness in each request.

Leaking other people’s data.

Including unnecessary stuff in requests and replies… if you’ve got the device ID and you’re trying to identify the device, you don’t need to tell the device its owner’s email address every time you want the door to open!

It’s just not necessary, and if you don’t give it out, then it can’t leak!
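
(For illustration only, here’s a minimal sketch of the sort of per-device challenge-response check that avoids both problems at once: no shared hard-coded password, and a fresh random nonce in every request so that captured traffic can’t simply be replayed. It’s a hypothetical design sketch in Python, not Nexx’s actual protocol or a drop-in fix.)

```python
import hashlib, hmac, os

# Hypothetical per-device secret, provisioned at manufacture: different for
# every unit, never hard-coded into shared firmware, never sent over the wire.
DEVICE_SECRET = os.urandom(32)

def server_make_challenge() -> bytes:
    """Fresh random nonce per request, so a captured reply can't be replayed."""
    return os.urandom(16)

def device_sign(challenge: bytes, command: bytes, secret: bytes) -> bytes:
    """Device proves it knows the secret without ever revealing it."""
    return hmac.new(secret, challenge + command, hashlib.sha256).digest()

def server_verify(challenge: bytes, command: bytes, tag: bytes, secret: bytes) -> bool:
    expected = hmac.new(secret, challenge + command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# One round trip: server issues a nonce, device signs nonce+command, server checks.
nonce = server_make_challenge()
tag = device_sign(nonce, b"OPEN_DOOR", DEVICE_SECRET)
print(server_verify(nonce, b"OPEN_DOOR", tag, DEVICE_SECRET))                     # True
print(server_verify(server_make_challenge(), b"OPEN_DOOR", tag, DEVICE_SECRET))  # False: an old tag fails
```

Note that nothing in that exchange needs the owner’s name or email address, either.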

[IRONIC] But other than that, Doug, I don’t feel strongly about it.


DOUG  [LAUGHS] OK, very good.

Our last story of the day, but certainly not the least.

Motherboard manufacturer MSI is having some certificate-based firmware headaches lately.

Attention gamers! Motherboard maker MSI admits to breach, issues “rogue firmware” alert


DUCK  Yes, this is a rather terrible story.

Allegedly, a ransomware crew going by the name Money Message have breached MSI, the motherboard makers. (They’re very popular with gamers, because their motherboards are very tweakable.)

The criminals claim to have vast quantities of data that they’re going to leak unless they get the money.

They haven’t got the actual data on their leak site (at least they hadn’t when I looked last night, which was just before the deadline expired), but they’re claiming that they have MSI source code.

They’re claiming that they have the framework that MSI uses to develop BIOS or firmware files, so in other words they’re implying that they’ve already got the insider knowledge they need to be able to build firmware that will be in the right format.

And they say, “Also, we have private keys.”

They’re inviting us to infer that those private keys would allow them to sign any rogue firmware that they build, which is quite a worrying thing for MSI, who’ve kind of gone down the middle on this.

They admitted the breach; they’ve disclosed it to the regulator; they’ve disclosed it to law enforcement; and that’s pretty much all they’ve said.

What they *have* done is give advice that we strongly recommend you follow anyway, namely telling their customers:

Obtain firmware or BIOS updates only from MSI’s official website, and do not use files from sources other than the official website.

Now, we’d hope that you wouldn’t go off-piste to go and get yourself potentially rogue firmware BLOBs anyway… as some of our commenters have said, “What do people think when they do that?”

But in the past, if you couldn’t get them from MSI’s site, you could at least perhaps rely on validating the digital certificate by yourself if you liked.

So I think you should say what you usually do about watching this space, Doug…


DOUG  Let’s keep an eye on this one then, too!

And it begs the question from one of our readers (I couldn’t have said it better myself) on the MSI story… Peter asks:

Could MSI not revoke the certificate that was used to sign the files?

So even if someone did download a file that had been compromised, it would then fail the certificate check?

Or does it not work like that?


DUCK  Well, it does work like that in *theory*, Doug.

But if you just blindly start refusing anybody who’s already got firmware that was signed with the now deprecated certificate, you do run the risk, essentially, of having people who have as good as “locked their keys in the car”, if you know what I mean.

For example, imagine that you just go, “Right! On every computer in the world from tomorrow, any MSI firmware signed with this key that has been compromised (if the crooks are telling the truth) just won’t work. You’ll have to get a new one.”

Well, how are you going to boot up your computer to get online to get the new one? [LAUGHS]


DOUG  [LAUGHS] A slight problem!


DUCK  There is that chicken-and-egg problem.

And this does not just apply to firmware… if you’re too quick in blocking everybody’s access to files that are trustworthy but were signed with a certificate that has now become untrustworthy, you do risk potentially doing more harm than good.

You need to leave a bit of an overlap period.
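
(Here’s a small sketch of what such an overlap policy could look like in code. The key IDs, cutoff date and function name are invented for illustration; this isn’t MSI’s, or anyone’s, real update verifier, just one way of expressing “old key only for pre-breach builds, new key for everything after”.)

```python
from datetime import datetime, timezone

# Invented identifiers and dates, purely for illustration.
OLD_KEY_ID = "vendor-key-2021"      # key assumed compromised in the breach
NEW_KEY_ID = "vendor-key-2023"      # replacement signing key
BREACH_CUTOFF = datetime(2023, 4, 1, tzinfo=timezone.utc)

def accept_firmware(signing_key_id: str, built_at: datetime, signature_valid: bool) -> bool:
    """Overlap policy: old-key images stay bootable only if built before the breach."""
    if not signature_valid:
        return False
    if signing_key_id == NEW_KEY_ID:
        return True
    if signing_key_id == OLD_KEY_ID:
        # Don't strand existing machines: trust pre-breach builds during the transition.
        return built_at < BREACH_CUTOFF
    return False

# Pre-breach firmware signed with the old key is still accepted...
print(accept_firmware(OLD_KEY_ID, datetime(2022, 11, 5, tzinfo=timezone.utc), True))   # True
# ...but anything signed with the old key after the cutoff is refused.
print(accept_firmware(OLD_KEY_ID, datetime(2023, 6, 1, tzinfo=timezone.utc), True))    # False
```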


DOUG  Alright, excellent question, and excellent answer.

Thank you very much, Peter, for sending that in.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH  Stay secure!

[MUSICAL MODEM]


Microsoft fixes a zero-day – and two curious bugs that take the Secure out of Secure Boot

It’s Patch Tuesday Week (if you will allow us our daily pleonasm), and Microsoft’s updates include fixes for a number of security holes that the company has dubbed Critical, along with a zero-day fix, although the 0-day only gets a rating of Important.

The 0-day probably got away with not being Critical because it’s not an outright remote code execution (RCE) hole, meaning that it can’t be exploited by someone who hasn’t already hacked into your computer.

That one is CVE-2023-28252, an elevation of privilege (EoP) bug in the Windows Common Log File System Driver.

The problem with Windows EoP bugs, especially in drivers that are installed by default on every Windows computer, is that they almost always allow attackers with few or no significant access privileges to promote themselves directly to the SYSTEM account, giving them as-good-as total control over your computer.

Programs running as SYSTEM can typically: load and unload kernel drivers; install, stop and start system services; read and write most files on the computer; change existing access privileges; run or kill off other programs; spy on other programs; mess with secure parts of the registry; and much more.

Ironically, the Common Log File System (CLFS) is designed to accept and manage official logging requests on behalf of any service or app on the computer, in an effort to ensure order, precision, consistency and security in official system-level record keeping.

Two high-scoring Critical holes

Two Critical bugs in particular grabbed our interest.

The first one is CVE-2023-21554, an RCE hole in the Microsoft Message Queue system, or MSMQ, a component that is supposed to provide a failsafe way for programs to communicate reliably, regardless of what sort of network connections exist between them.

The MSMQ service isn’t turned on by default, but in high-reliability back-end systems where regular TCP or UDP network messages are not considered robust enough, you might have MSMQ enabled.

(Microsoft’s own examples of applications that might benefit from MSMQ include financial processing services on e-commerce platforms, and airport baggage handling systems.)

Unfortunately, even though this bug isn’t known to have been exploited in the wild, it received a rating of Critical and a CVSS “danger score” of 9.8/10.

Microsoft’s two-sentence bug description says simply:

To exploit this vulnerability, an attacker would need to send a specially crafted malicious MSMQ packet to a MSMQ server. This could result in remote code execution on the server side.

Based on the high CVSS score and what Microsoft didn’t mention in the above description, we’re assuming that attackers exploiting this hole wouldn’t need to be logged on, or to have gone through any authentication process first.

DHCP danger

The second Critical bug that caught our eye is CVE-2023-28231, an RCE hole in the Microsoft DHCP Server Service.

DHCP is short for dynamic host configuration protocol, and it’s used in almost all Windows networks to hand out network addresses (IP numbers) to computers that connect to the network.

This helps prevent two users from accidentally trying to use the same IP number (which would cause their network packets to clash with each other), as well as keeping track of which devices are connected at any time.

Usually, remote code execution bugs in DHCP servers are ultra-dangerous, even though DHCP servers generally only work on the local network, and not across the internet.

That’s because DHCP is designed to exchange network packets, as part of its “configuration dance”, not merely before you’ve put in a password or before you’ve provided a username, but as the very first step of getting your computer online at the network level.

In other words, DHCP servers have to be robust enough to accept and reply to packets from unknown and untrusted devices, just to get your network to the point that it can start deciding how much trust to put in them.

Fortunately, however, this particular bug gets a slightly lower score than the aforementioned MSMQ bug (its CVSS danger level is 8.8/10) because it’s in a part of the DHCP service that’s only accessible from your computer after you’ve logged on.

In Microsoft’s words:

An authenticated attacker could leverage a specially crafted RPC call to the DHCP service to exploit this vulnerability.

Successful exploitation of this vulnerability requires that an attacker will need to first gain access to the restricted network before running an attack.

When Secure Boot is just Boot

The last two bugs that intrigued us were CVE-2023-28249 and CVE-2023-28269, both listed under the headline Windows Boot Manager Security Feature Bypass Vulnerability.

According to Microsoft:

An attacker who successfully exploited [these vulnerabilities] could bypass Secure Boot to run unauthorized code. To be successful the attacker would need either physical access or administrator privileges.

Ironically, the main purpose of the much-vaunted Secure Boot system is that it’s supposed to help you keep your computer on a strict and unwavering path from the time you turn it on to the point that Windows takes control.

Indeed, Secure Boot is supposed to stop attackers who steal your computer from injecting any booby-trapped code that could modify or subvert the initial startup process itself, a trick that’s known in the jargon as a bootkit.

Examples include secretly logging the keystrokes you type in when entering your BitLocker disk encryption unlock code (without which booting Windows is impossible), or sneakily feeding modified disk sectors into the bootloader code that reads in the Windows kernel so it starts up insecurely.

This sort of treachery is often referred to as an “evil cleaner” attack, based on the scenario that anyone with official access to your hotel room while you’re out, such as a traitorous cleaner, might be able to inject a bootkit unobtrusively, for example by starting up your laptop briefly from a USB drive and letting an automatic script do the dirty work…

…and then use a similarly quick and hands-off trick the next day to retrieve stolen data such as keystrokes, and remove any evidence that the bootkit was ever there.

In other words, Secure Boot is meant to keep a properly-encrypted laptop safe from being subverted – even, or perhaps especially, by a cybercriminal who has physical access to it.

So if we had a Windows computer for day-to-day use, we’d be patching these bugs as if they were Critical, even though Microsoft’s own rating is only Important.

What to do?

  • Patch now. With one zero-day already being exploited by criminals, two high-CVSS-score Critical bugs that could lead to remote malware implantation, and two bugs that could remove the Secure from Secure Boot, why delay? Just do it today!
  • Read the SophosLabs report that looks at this month’s patches more broadly. With 97 CVEs patched altogether in Windows itself, Visual Studio Code, SQL Server, Sharepoint and many other components, there are plenty more bugs that sysadmins need to know about.
