
Dangerous Android phone 0-day bugs revealed – patch or work around them now!

Google has just revealed a fourfecta of critical zero-day bugs affecting a wide range of Android phones, including some of its own Pixel models.

These bugs are a bit different from your usual Android vulnerabilities, which typically affect the Android operating system (which is Linux-based) or the applications that come along with it, such as Google Play, Messages or the Chrome browser.

The four bugs we’re talking about here are known as baseband vulnerabilities, meaning that they exist in the special mobile phone networking firmware that runs on the phone’s so-called baseband chip.

Strictly speaking, baseband is a term used to describe the primary, or lowest-frequency, parts of an individual radio signal, in contrast to a broadband signal, which (very loosely) consists of multiple baseband signals adjusted into numerous adjacent frequency ranges and transmitted at the same time in order to increase data rates, reduce interference, share frequency spectrum more widely, complicate surveillance, or all of the above.

The word baseband is also used metaphorically to describe the hardware chip and the associated firmware that is used to handle the actual sending and receiving of radio signals in devices that can communicate wirelessly.

(Somewhat confusingly, the word baseband typically refers to the subsystem in a phone that handles connecting to the mobile telephone network, but not to the chips and software that handle Wi-Fi or Bluetooth connections.)

Your mobile phone’s modem

Baseband chips typically operate independently of the “non-telephone” parts of your mobile phone.

They essentially run a miniature operating system of their own, on a processor of their own, and work alongside your device’s main operating system to provide mobile network connectivity for making and answering calls, sending and receiving data, roaming on the network, and so on.

If you’re old enough to have used dialup internet, you’ll remember that you had to buy a modem (short for modulator-and-demodulator), which you plugged either into a serial port on the back of your PC or into an expansion slot inside it; the modem would connect to the phone network, and your PC would connect to the modem.

Well, your mobile phone’s baseband hardware and software is, very simply, a built-in modem, usually implemented as a sub-component of what’s known as the phone’s SoC, short for system-on-chip.

(You can think of an SoC as a sort of “integrated integrated circuit”, where separate electronic components that used to be interconnected by mounting them in close proximity on a motherboard have been integrated still further by combining them into a single chip package.)

In fact, you’ll still see baseband processors referred to as baseband modems, because they still handle the business of modulating and demodulating the sending and receiving of data to and from the network.

As you can imagine, this means that your mobile device isn’t just at risk from cybercriminals via bugs in the main operating system or one of the apps you use…

…but also at risk from security vulnerabilities in the baseband subsystem.

Sometimes, baseband flaws allow an attacker not only to break into the modem itself from the internet or the phone network, but also to break into the main operating system (moving laterally, or pivoting, as the jargon calls it) from the modem.

But even if the crooks can’t get past the modem and onwards into your apps, they can almost certainly do you an enormous amount of cyberharm just by implanting malware in the baseband, such as sniffing out or diverting your network data, snooping on your text messages, tracking your phone calls, and more.

Worse still, you can’t just look at your Android version number or the version numbers of your apps to check whether you’re vulnerable or patched, because the baseband hardware you’ve got, and the firmware and patches you need for it, depend on your physical device, not on the operating system you’re running on it.

Even devices that are in all obvious respects “the same” – sold under the same brand, using the same product name, with the same model number and outward appearance – might turn out to have different baseband chips, depending on which factory assembled them or which market they were sold into.

The new zero-days

Google’s recently discovered bugs are described as follows:

CVE-2023-24033 (and three other vulnerabilities that have yet to be assigned CVE identities) allowed for internet-to-baseband remote code execution. Tests conducted by [Google] Project Zero confirm that those four vulnerabilities allow an attacker to remotely compromise a phone at the baseband level with no user interaction, and require only that the attacker know the victim’s phone number.

With limited additional research and development, we believe that skilled attackers would be able to quickly create an operational exploit to compromise affected devices silently and remotely.

In plain English, an internet-to-baseband remote code execution hole means that criminals could inject malware or spyware over the internet into the part of your phone that sends and receives network data…

…without getting their hands on your actual device, luring you to a rogue website, persuading you to install a dubious app, waiting for you to click the wrong button in a pop-up warning, giving themselves away with a suspicious notification, or tricking you in any other way.

18 bugs, four kept semi-secret

There were 18 bugs in this latest batch, reported by Google in late 2022 and early 2023.

Google says that it is disclosing their existence now because the agreed time has passed since they were reported (Google’s timeframe is usually 90 days, or close to it), but for the four bugs above, the company is not disclosing any details, noting that:

Due to a very rare combination of level of access these vulnerabilities provide and the speed with which we believe a reliable operational exploit could be crafted, we have decided to make a policy exception to delay disclosure for the four vulnerabilities that allow for internet-to-baseband remote code execution

In plain English: if we were to tell you how these bugs worked, we’d make it far too easy for cybercriminals to start doing really bad things to lots of people by sneakily implanting malware on their phones.

In other words, even Google, which has attracted controversy in the past for refusing to extend its disclosure deadlines and for openly publishing proof-of-concept code for still-unpatched zero-days, has decided to follow the spirit of its Project Zero responsible disclosure process, rather than sticking to the letter of it.

Google’s argument for generally sticking to the letter and not the spirit of its disclosure rules isn’t entirely unreasonable. By using an inflexible algorithm to decide when to reveal details of unpatched bugs, even if those details could be used for evil, the company argues that complaints of favouritism and subjectivity can be avoided, such as, “Why did company X get an extra three weeks to fix their bug, while company Y did not?”

What to do?

The problem with bugs that are announced but not fully disclosed is that it’s difficult to answer the questions, “Am I affected? And if so, what should I do?”

Apparently, Google’s research focused on devices that used a Samsung Exynos-branded baseband modem component, but that doesn’t necessarily mean that the system-on-chip would identify or brand itself as an Exynos.

For example, Google’s recent Pixel devices use Google’s own system-on-chip, branded Tensor, but both the Pixel 6 and Pixel 7 are vulnerable to these still-semi-secret baseband bugs.

As a result, we can’t give you a definitive list of potentially affected devices, but Google reports (our emphasis):

Based on information from public websites that map chipsets to devices, affected products likely include:

  • Mobile devices from Samsung, including those in the S22, M33, M13, M12, A71, A53, A33, A21s, A13, A12 and A04 series;
  • Mobile devices from Vivo, including those in the S16, S15, S6, X70, X60 and X30 series;
  • The Pixel 6 and Pixel 7 series of devices from Google; and
  • any vehicles that use the Exynos Auto T5123 chipset.

Google says that the baseband firmware in both the Pixel 6 and Pixel 7 was patched as part of the March 2023 Android security updates, so Pixel users should ensure they have the latest patches for their devices.

For other devices, different vendors may take different lengths of time to ship their updates, so check with your vendor or mobile provider for details.

In the meantime, these bugs can apparently be sidestepped in your device settings, if you:

  • Turn off Wi-Fi calling.
  • Turn off Voice-over-LTE (VoLTE).

In Google’s words, “turning off these settings will remove the exploitation risk of these vulnerabilities.”

If you don’t need or use these features, you may as well turn them off anyway until you know for sure what modem chip is in your phone and if it needs an update.

After all, even if your device turns out to be invulnerable or already patched, there’s no downside to not having things you don’t need.


Featured image from Wikipedia, by user Köf3, under a CC BY-SA 3.0 licence.


S3 Ep 126: The price of fast fashion (and feature creep) [Audio + Text]

THE PRICE OF FAST FASHION

Lucky Thirteen! The price of fast fashion. Firefox fixes. Feature creep fail curtailed in Patch Tuesday.


With Paul Ducklin and Chester Wisniewski. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

[MUSICAL MODEM]

DUCK.  Hello, everybody.

Welcome to the Sophos Naked Security podcast.

As you can hear, I am Duck; I am not Doug (Doug is on vacation).

So, I am joined by my friend and colleague Chester Wisniewski once again.

Welcome back, Chester.

It’s great to have you!


CHET.  Thanks, Duck.

I was just thinking… actually, I’m looking at my screen as you’re introducing the podcast, and realised that today is the 13th anniversary of when I started the ChetChat podcast, before it retired and eventually became this podcast.

So you and I have been at this for 13 years!


DUCK.  Lucky 13, eh?


CHET.  Yes!


DUCK.  Well, how time flies when you’re having fun.


CHET.  Yes, and it *is* fun.

And I feel really honoured to be in the seat of Andy Greenberg.

You’ve really stepped up the game since I was last on the podcast [LAUGHS].


DUCK.  [LAUGHS] He was a very fun chap to talk to.

I don’t know if you’ve read that book that we featured on the podcast with him: Tracers in the Dark?

Tracers in the Dark: The Global Hunt for the Crime Lords of Crypto


CHET.  Absolutely, yes.


DUCK.  It’s just a fascinating tale, very well told.


CHET.  Yes, I mean, it was certainly the best book on this subject I’ve read…

…probably since Countdown to Zero Day, and that’s pretty high praise from me.


DUCK.  Chester, let us start with our first topic for today, which is… I’ll just read the title of the article off Naked Security: SHEIN shopping app goes rogue, grabs price and URL data from your clipboard.

A reminder that even apps that aren’t overtly malicious can do dangerous stuff that collects data that was a good idea at the time…

…but they jolly well shouldn’t have.

SHEIN shopping app goes rogue, grabs price and URL data from your clipboard


CHET.  Yes – anything touching my clipboard immediately sets all kinds of alarm bells off in my head about the terrible things I’m imagining they’re doing.

And it does kind of beg the question: if I were a developer, even if I was doing something innocent… which I guess we’ll get to in a second.

It’s hard to say how innocent what they were trying to do was.


DUCK.  Exactly.


CHET.  When you ask for that kind of permission, all kinds of alarm bells go off in my head.

It’s sort of like on an Android phone, for a long time, in order to use Bluetooth to find an IoT device, the permission you needed was “Access devices nearby”, which required Bluetooth.

And you get this hairy warning on the screen, “This wants to know your location.”

And you’re going, “Why does this smart light bulb need to know my location?”

When you say you’re accessing my clipboard, my mind goes to, “Why is this app trying to steal my passwords?”

Maybe it’s something that we should clarify for people…

…because I think when you say, “Put the contents of the clipboard into the app,” there are times when *you’re* doing it (you may choose to copy your password, or maybe that SMS two factor code from the Messages app and then paste it into the app that you’re authenticating in)…


DUCK.  Yes.


CHET.  That’s *not* what we’re talking about when we’re talking about this permission, right?

This permission is the app itself just peeping in on your existing clipboard content any time it chooses…

…not when you’re actively interacting with the app and long-tapping and saying, “Paste.”


DUCK.  Exactly.

Basically, it’s doing a paste when you didn’t intend it.

No matter how innocent the data that you’ve chosen to copy into the clipboard might be, it really shouldn’t be up to some random app to decide, “Hey, I’m just going to paste it because I feel like it.”

And it particularly rankles that it was essentially pasting it into a web request that it sent off to some RESTful marketing API back at head office!


CHET.  It’s not even an expected behaviour, right, Duck?

I mean, if I am in my banking app and it’s asking for the code from the text message…

…I might see how it would ask the text message app to copy it into the clipboard and paste it in automatically, to make that flow simple.

But I would never expect anything from my clipboard to end up in a fashion app!

Well, don’t use apps if you don’t need them.

That is, I think, a big issue here.

I see constantly, when I go to any kind of a shopping site now, I get some horrifying pop up in my Firefox on my phone saying, “Do I want to install the app? Why am I not accessing the site through the app? Would I prefer to use the app?”

And the answer is NO, NO, and NO, because this is the kind of thing that happens when you have untrusted code.

I can’t trust the code just because Google says it’s OK.

We know that Google doesn’t have any actual humans screening apps… Google’s being run by some Google Chat-GPT monstrosity or something.

So things just get screened in whatever way Google sees fit to screen them, and then they end up in the Play Store.

So I just don’t like any of that code.

I mean, there are apps I have to load on my device, or things that I feel have more trust based on the publishers…

…but in general, just go to the website!


DUCK.  Anyone who listens to the Naked Security podcast knows, from when we’re talking about things like browser zero-days, just how much effort the browser makers put into finding and removing bugs from their code.


CHET.  And folks can remember, as well, that you can make almost any website behave like an app these days as well.

There’s what’s called Progressive Web Apps, or PWA.


DUCK.  Chester, let’s move on to the next story of the last week, a story that I thought was interesting.

I wrote this up just because I liked the number, and there were some interesting issues in it, and that is: Firefox version 111 fixed 11 CVE holes, but there was not 1 zero-day.

(And that’s my excuse for having a headline with the digit 1 repeated six times.) [LAUGHS]

Firefox 111 patches 11 holes, but not 1 zero-day among them…


CHET.  [LAUGHS] I’m a fan of Firefox and it’s nice to see that there was nothing discovered to be actively being exploited.

But the best part about this is that they include those memory safety issues that were preventatively discovered, right?

They’re not crediting them to an outside person or party who discovered something and reported it to them.

They’re just actively hunting, and letting us know that they’re working on memory safety issues…

…which I think is really good.


DUCK.  What I like with Mozilla is that every four weeks, when they do the big update, they take all the memory safety bugs, put them in one little basket and say, “You know what? We didn’t actually try and figure out whether these were exploitable, but we’re still going to give them a CVE number…

…and admit that although these may not actually be exploitable, it’s worth assuming that if someone tried hard enough, or had the will, or had the money behind them, or just wanted badly enough to do so (and there are people in all those categories), you have to assume that they’d find a way to exploit one of these in a way which would be to your detriment.”

And you’ve got a little story about something that you liked, out of the Firefox, or Mozilla, stable…


CHET.  Absolutely – I was just thinking about that.

We were talking, before the podcast, about a project called Servo that Firefox (or the Mozilla Foundation, ultimately) created.

And, as you say, it’s a browser rendering engine (the current one in Mozilla Firefox is called Gecko)… the idea was to write the rendering engine entirely in Rust, and in fact this was the inspiration for creating the Rust programming language.

The important point here is that Rust is a memory-safe language.

You can’t make the mistakes that are being fixed in these CVEs.

So, in a dream world, you would be doing this Firefox update blog without the memory safety CVEs.

And I was pretty excited to see some funding went to the Linux Foundation to continue developing Servo.

Maybe that, in the future, will be a new Firefox engine that’ll make us even safer?


DUCK.  Yes!

Let’s be clear – just because you write code in Rust doesn’t make it right, and it doesn’t make it immune to vulnerabilities.

But, like you say, there are all sorts of issues, particularly relating to memory management, that are, as you say, much, much harder to do.

And in well-written code, even at compile time, the compiler should be able to see that “this is not right”.

And if that can be done automatically, without all the overhead that you need in a scripting language that does something like garbage collection, so you still get good performance, that will be interesting.

I just wonder how long it’ll take?


CHET.  It sounds like they’re taking it in small bites.

The first goal is to get CSS2 rendering to work, and it’s like you’ve got to take each thing as a little block of work, and break it off from the giant monstrosity that is a modern rendering engine… and take some small bites.

And funding for those projects is really important, right?

A lot of things embed browser engines; lots of products are based off the Gecko engine, as well as Google’s Blink, and Apple’s Webkit.

And so more competition, more performance, more memory safety… it’s all good!


DUCK.  So, let’s get to the final topic of the week, that I guess is the big story…

…but the nice thing about it, as big stories go, is that although it has some fascinating bugs in it, and although both of the bugs that we’ll probably end up talking about were technically zero-days, they’re not catastrophic.

They’re just a good reminder of the kind of problems that bugs can cause.

And that topic, of course, is Patch Tuesday.

Microsoft fixes two 0-days on Patch Tuesday – update now!


CHET.  Well, I’m going to be controversial and talk about the Mark of the Web bug first.


DUCK.  [LAUGHS] It’s such a catchy name, isn’t it?

We all know it’s “Internet Zones”, like in the good old Internet Explorer days.

But “Mark of the Web”… it sounds so much grander, and more exciting, and more important!


CHET.  Well, for you Internet Explorer (IE) admin people, you probably remember the you could set this to be in the Trusted Zone; that in the Intranet Zone; the other in the Internet Zone.

That setting is what we’re talking about.

But that not only lives in Internet Explorer, it’s also observed by many other Microsoft processes, to give the provenance of where a file came from…

…on the concept that outside files are far more dangerous than inside files.

And so this very premise I disagree with.

I think it’s a stupid thing!

All files are dangerous!

It doesn’t matter where you found them: in the parking lot on a thumb drive; on the LAN; or on a website.

Why wouldn’t we just treat all of them as if they’re untrusted, and not do terrible things?


DUCK.  I think I can see where Microsoft is coming from here, and I know that Apple has a similar thing… you download a file, you leave it lying around in a directory somewhere, and then you come back to it three weeks later.

But I think I’m inclined to agree with you that when you start going, “Oh well, that file came from inside the firewall, so it must be trusted”…

…that’s good old fashioned “soft chewy interior” all over again!


CHET.  Yes.

So that’s why these types of bugs that allow you to bypass Mark of the Web are problematic, right?

A lot of admins will have a group policy that says, “Microsoft Office cannot execute macros on files with Mark of the Web, but without Mark of the Web we allow you to run macros, because the finance department uses them in Excel spreadsheets and all the managers have to access them.”

This kind of situation… it’s dependent on knowing that that file is from inside or outside, unfortunately.

And so I guess what I was getting at, what I was complaining about, is to say: this vulnerability was allowing people to send you files from the outside, and not have them marked as if they were from the outside.

And because this kind of thing can happen, and does happen, and because there are other ways that this can happen as well, which you kindly point out in your Naked Security article…

…that means your policy should be: if you think macros may be dangerous, you should be blocking them, or forcing the prompt to enable them, *no matter where they originate*.

You shouldn’t have a policy that differentiates between the inside and the outside, because it just puts you at risk of it being bypassed.


DUCK.  Absolutely.

I guess the bottom line here is that although a bypass of this Mark of the Web “branding” (the Internet Zone label on a file) is obviously useful to crooks, because they know some people rely on it, *it’s the kind of failure that you need to plan for anyway*.

I get the idea of Mark of the Web, and I don’t think it’s a bad idea.

I just wouldn’t use it as a significant or an important cybersecurity discriminator.


CHET.  Well, and to remind IT administrators…

…the best approach to solving this problem isn’t to be looking at Mark of the Web.

The best approach is sign your internal macros, so that you know which ones to trust, and block all the rest of them.


DUCK.  Absolutely.

Why don’t you just allow the things that you know you absolutely need, and that you have a good reason to trust…

…and as you say, disallow everything else?

I suppose one answer is, “It’s a bit harder”, isn’t it?

It’s not quite as convenient…


CHET.  Well, this segues into the other vulnerability, which allows for criminals to exploit Microsoft Outlook in a way that could allow…

…I guess, an impersonation attack?

Is that how you would refer to it, Duck?


DUCK.  I think of this one as a kind of Manipulator in the Middle (MitM) attack.

The term that I’ve generally heard used, and that Microsoft uses… they call it a relay attack, basically where you trick someone into authenticating with *you*, while *you’re* authenticating on their behalf, as them, behind the scenes, with the real server.

That’s the trick – you basically get someone, without realising, to go, “Hey, I need to sign into this server I’ve never heard of before. What a great idea! Let me send them a hash of my password!”

What could possibly go wrong?

Quite a lot…


CHET.  It’s another great example of a restrictive policy versus a permissive one, right?

If your firewall is not configured to allow outbound SMB (server message block) traffic, then you’re not at risk from this vulnerability.

Not that you shouldn’t patch it… you should still patch it, because computers go lots of places where all kinds of wacky network things happen.

However, the idea is if your policy is, “Block everything and only allow the things that should be happening”, then you’re less at risk in this case than if it’s permissive, and you’re saying, “We’re going to allow everything, except things that we’ve already identified as being bad.”

Because when a zero-day comes along, no one has identified it as being bad.

That’s why it’s a zero-day!


DUCK.  Exactly.

Why would you want people signing into random external servers, anyway?

Even if they weren’t malevolent, why would you want them to go through a sort of corporate-style authentication, with their corporate credentials, to some server that doesn’t belong to you?

Having said that, Chester, I guess if you’re thinking about the “soft chewy centre”, there is a way that crooks who are already in your network, and who have a little bit of a foothold, could use this inside the network…

…by setting up a rogue file server and tricking you into connecting to that.


CHET.  [LAUGHS] Is that a BYOD?

A Bring Your Own Docker container?


DUCK.  [LAUGHS] Well, I shouldn’t really laugh there, but that’s quite a popular thing with crooks these days, isn’t it?

If they want to avoid getting things like their malware detected, then they’ll use what we call “living off the land” techniques, and just borrow tools that you’ve got already installed…

…like curl, bash, PowerShell, and commands that are absolutely everywhere anyway.

Otherwise, if they can, they’ll just fire up a VM [virtual machine]…

…if they’ve somehow got access to your VM cluster, and they can set up an innocent-looking VM, then they’ll run the malware inside that.

Or their Docker container will just be configured completely differently to anything else you’ve got.

So, yes, I guess you’re right: that is a way that you could exploit this internally.

But I thought it was an intriguing bug, because usually when people think about email attacks, they normally think about, “I get the email, but to get pwned, I either have to open an attachment or click a link.”

But this one, I believe, can trigger while Outlook is preparing the email, before it even displays it to you!

Which is quite nasty, isn’t it?


CHET.  Yes.

I thought the days of these kind of bugs were gone when we got rid of JavaScript and ActiveX plugins in our email clients.


DUCK.  I thought you were going to say “Flash” for a moment there, Chester. [LAUGHS]


CHET.  [LAUGHS]

Well, for developers, it’s important to remember that these kinds of bugs are from feature creep.

I mean, the reason emails got safer is we’ve actually been removing features, right?


DUCK.  Correct.


CHET.  We got rid of ActiveX and JavaScript, and all these things…

…and then this bug was being triggered by the “received a new email” sound being a variable that can be set by the sender of an email.

I don’t know who, on what planet thought, “That sounds like a good feature.”


DUCK.  The proof of concept that I’ve seen for this, which is produced by (I think) a penetration testing company… that’s how they did it.

So it sounds like the crooks who are exploiting this, that’s how *they* were doing it.

But it’s by no means clear that that’s the only feature that could be abused.

My understanding is that if you can say, “Here’s a file name that I want you to use”, then that file name, apparently…

…well, you can just put a UNC path in there, can’t you?

\\SOMEBODY.ELSES.SERVER.NAME\… and that will get accessed by Outlook.

So, you’re right: it does indeed sound like feature creep.

And, like I said, I wonder how many other missed features there might be that this could apply to, and whether those were patched as well?

Microsoft was a little bit tight-lipped about all the details, presumably because this thing was exploited in the wild.


CHET.  I can solve this problem in one word.

Mutt. [A historic text-mode-only email client.]


DUCK.  Yes, Mutt!

Elm, pine, mailx, mail…

…netcat, Chester!


CHET.  You forgot cat.


DUCK.  I was thinking netcat, where you’re actually talking interactively to the mail server at the other end.


CHET.  [LAUGHS] You can only receive email when you’re at the keyboard.


DUCK.  If you patch, let’s hope it actually deals with all places in Outlook where a file could be accessed, and that file just happens to be on a remote server…

…so Outlook says, “Hey, why don’t I try and log into the server for you?”

Now, Chester, when we were discussing this before the podcast, you made an interesting observation that you were surprised that this bug appeared in the wild, because lots of ISPs block SMB port 445, don’t they?

Not because of this authentication bug, but because that used to be one of the major ways that network worms spread…

…and everyone got so sick of them 10, 15, 20 years ago that ISPs around the world just said, “No. Can’t do it. If you want to unblock port 445, you have to jump through hoops or pay us extra money.”

And most people didn’t bother.

So you might be protected against this by accident, rather than by design.

Would you agree with that?


CHET.  Yes, I think it’s likely.

Most ISPs in the world block it.

I mean, you can imagine in Windows XP, years ago, how many computers were on the internet, with no password, sat directly on their Internet connections with the C$ share exposed.

We’re not even talking about exploits here.

We’re just talking about people with ADMIN$ and C$ flapping in the wind!


DUCK.  If that’s how you’re protected (i.e. it doesn’t work because your ISP doesn’t let it work)…

…don’t use that as an excuse not to apply the patch, right?


CHET.  Yes, absolutely.

You don’t want the attempts even occurring, let alone for them to be successful.

Most of us are travelling around, right?

I use my laptop at the coffee shop; and then I use the laptop at the restaurant; and then I use the laptop at the airport.

Who knows what they’re blocking?

I can’t rely on port 445 being blocked…


DUCK.  Chester, I think we’d better stop there, because I’m mindful of time.

So, thank you so much for stepping up to the microphone at short notice.

Are you going to be back on next week?

You are, aren’t you?


CHET.  I certainly plan on being on next week, unless there are unforeseen circumstances.


DUCK.  Excellent!

All that remains is for us to say, as we customarily do…


CHET.  Until next time, stay secure.

[MUSICAL MODEM]


Microsoft fixes two 0-days on Patch Tuesday – update now!

Thanks to the precise four-week length of February this year, last month’s coincidence of Firefox and Microsoft updates has happened once again.

Last month, Microsoft dealt with three zero-days, by which we mean security holes that cybercriminals found first, and figured out how to abuse in real-life attacks before any patches were available.

(The name zero-day, or just 0-day, is a reminder of the fact that even the most progressive and proactive patchers amongst us enjoyed precisely zero days during which we could have been ahead of the crooks.)

In March 2023, there are two zero-day fixes, one in Outlook, and the other in Windows SmartScreen.

Intriguingly for a bug that was discovered in the wild, albeit one reported rather blandly by Microsoft as Exploitation Detected, the Outlook flaw is jointly credited to CERT-UA (the Ukrainian Computer Emergency Response Team), Microsoft Incident Response, and Microsoft Threat Intelligence.

You can make of that what you will.

Outlook EoP

This bug, dubbed CVE-2023-23397: Microsoft Outlook Elevation of Privilege Vulnerability (EoP), is described as follows:

An attacker who successfully exploited this vulnerability could access a user’s Net-NTLMv2 hash which could be used as a basis of an NTLM Relay attack against another service to authenticate as the user. […]

The attacker could exploit this vulnerability by sending a specially crafted email which triggers automatically when it is retrieved and processed by the Outlook client. This could lead to exploitation BEFORE the email is viewed in the Preview Pane. […]

External attackers could send specially crafted emails that will cause a connection from the victim to an external UNC location of attackers’ control. This will leak the Net-NTLMv2 hash of the victim to the attacker who can then relay this to another service and authenticate as the victim.

To explain (as far as we can guess, given that we don’t have any specifics about the attack to go on).

Net-NTLMv2 authentication, which we’ll just call NTLM2 for short, works very roughly like this:

  • The location you’re connecting to sends over 8 random bytes known as a challenge.
  • Your computer generates its own 8 random bytes.
  • You calculate an HMAC-MD5 keyed hash of the two challenge strings using an existing securely-stored hash of your password as the key.
  • You send off the keyed hash and your 8-byte challenge.
  • The other end now has both 8-byte challenges and your one-time reply, so it can recompute the keyed hash, and verify your response.

Actually, there’s a fair bit more to it than that, because there are actually two keyed hashes, one mixing in the two 8-byte random-challenge numbers and the other mixing in additional data including your username, domain name and the current time.

But the underlying principle is the same.
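As a loose illustration only (this is not the real NTLMv2 wire format — the field layout and the extra data mixed into the response blob are simplified away here, and MD5 stands in for the MD4 used to derive the real stored hash), the challenge-response dance above can be sketched in Python:

```python
import hmac, hashlib, os

def keyed_response(stored_hash, server_chal, client_chal):
    # HMAC-MD5 over the two 8-byte challenges, keyed with the
    # securely-stored password hash (a simplification of NTLMv2)
    return hmac.new(stored_hash,
                    server_chal + client_chal,
                    hashlib.md5).digest()

# Both ends know the stored hash (the server via, say, Active Directory).
# MD5 stands in for MD4 here purely so the sketch runs anywhere.
stored = hashlib.md5("s3cret".encode("utf-16le")).digest()

server_chal = os.urandom(8)   # the server's 8 random bytes
client_chal = os.urandom(8)   # your computer's 8 random bytes

reply = keyed_response(stored, server_chal, client_chal)

# The server has both challenges and your reply, so it can
# recompute the keyed hash and verify it - no password sent.
assert hmac.compare_digest(reply, keyed_response(stored, server_chal, client_chal))
```

Note that changing either challenge changes the reply completely, which is what stops a crook from replaying an answer captured in an earlier session.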

Neither your actual password nor the stored hash of your password, for example from Active Directory, is ever transmitted, so neither can leak in transit.

Also, both sides get to inject 8 bytes of their own randomness every time, which prevents either party from sneakily re-using an old challenge string in the hope of ending up with the same keyed hash as in a previous session.

(Wrapping in the time and other logon-specific data adds extra protection against so-called replay attacks, but we’ll ignore those details here.)

Sitting in the middle

As you can imagine, given that the attacker can trick you into trying to “logon” to their fake server (either when you read the booby-trapped email or, worse, when Outlook starts processing it on your behalf, before you even get a glimpse of how bogus it might look), you end up leaking a single, valid NTLM2 response.

That response is intended to prove to the other end not only that you really do know the password of the account you claim is yours, but also (because of the challenge data mixed in) that you’re not just re-using a previous answer.

So, as Microsoft warns, an attacker who can time things right might be able to start authenticating to a genuine server as you, without knowing your password or its hash, just to get an 8-byte starting challenge from the real server…

…and then pass that challenge back to you at the moment you get tricked into trying to login to their fake server.

If you then compute the keyed hash and send it back as your “proof I know my own password right now”, the crooks might be able to relay that correctly-calculated reply back to the genuine server they’re trying to infiltrate, and thus to trick that server into accepting them as if they were you.
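The relay trick described above can be simulated in a few lines of Python (no real networking, and hypothetical names throughout — the point is simply that the victim’s correctly keyed reply is just as valid when the challenge arrived via the crooks as when it came straight from the real server):

```python
import hmac, hashlib, os

# Shared stored hash (MD5 stands in for MD4 in this sketch)
SECRET = hashlib.md5("s3cret".encode("utf-16le")).digest()

def respond(stored_hash, server_chal, client_chal):
    return hmac.new(stored_hash, server_chal + client_chal, hashlib.md5).digest()

# 1. The attacker opens a session with the REAL server, posing as
#    the victim, and receives the server's fresh 8-byte challenge...
real_server_chal = os.urandom(8)

# 2. ...then relays that exact challenge to the victim via the fake
#    server that the booby-trapped email tricked Outlook into contacting.
#    The victim dutifully computes a valid reply.
victim_client_chal = os.urandom(8)
victim_reply = respond(SECRET, real_server_chal, victim_client_chal)

# 3. The attacker forwards the victim's reply (and client challenge)
#    unchanged; the real server recomputes, the challenges match, and
#    the attacker is accepted as if they were the victim.
server_check = respond(SECRET, real_server_chal, victim_client_chal)
assert hmac.compare_digest(victim_reply, server_check)
```

The randomness in the challenges doesn’t help here, because the attacker never needs to modify or replay anything: they simply pass one live session’s data back and forth in real time.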

In short, you definitely want to patch against this one, because even if the attack requires lots of tries, time and luck, and isn’t terribly likely to work, we already know that it’s a case of “Exploitation Detected”.

In other words, the attack can be made to work, and has succeeded at least once against an unsuspecting victim who themselves did nothing risky or wrong.

SmartScreen security bypass

The second zero-day is CVE-2023-24880, and this one pretty much describes itself: Windows SmartScreen Security Feature Bypass Vulnerability.

Simply put, Windows usually tags files that arrive via the internet with a flag that says, “This file came from outside; treat it with kid gloves and don’t trust it too much.”

This where-it-came-from flag used to be known as a file’s Internet Zone identifier, and it reminds Windows how much (or how little) trust it should put in the content of that file when it is subsequently used.

These days, the Zone ID (for what it’s worth, an ID of 3 denotes “from the internet”) is usually referred to by the more dramatic and memorable name Mark of the Web, or MotW for short.

Technically, this Zone ID is stored along with the file in what’s known as an Alternate Data Stream, or ADS, but files can only have ADS data if they’re stored on NTFS-formatted Windows disks. If you save a file to a FAT volume, for example, or copy it to a non-NTFS drive, the Zone ID is lost, so this protective label is not permanent.
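For what it’s worth, the MotW data is just a tiny INI-style text file stored in an ADS called Zone.Identifier (on Windows you can read it via the special path filename.ext:Zone.Identifier). A minimal sketch of its typical contents, and of parsing it, assuming a straightforward internet download:

```python
import configparser

# Typical contents of the Zone.Identifier alternate data stream
# for a file downloaded from the internet (Zone ID 3 = internet)
MOTW = """\
[ZoneTransfer]
ZoneId=3
"""

parser = configparser.ConfigParser()
parser.read_string(MOTW)

zone = int(parser["ZoneTransfer"]["ZoneId"])
print("Zone ID:", zone)                 # -> Zone ID: 3
print("From the internet:", zone == 3)  # -> From the internet: True
```

It’s this little label that security features such as Office’s Protected View consult before deciding how much to trust the file — which is why a bug that stops the label being written is a security hole in its own right.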

This bug means that some files that come in from outside – for example, downloads or email attachments – don’t get tagged with the right MotW identifier, so they sneakily sidestep Microsoft’s official security checks.

Microsoft’s public bulletin doesn’t say exactly what types of file (images? Office documents? PDFs? all of them?) can be infiltrated into your network in this way, but does warn very broadly that “security features such as Protected View in Microsoft Office” can be bypassed with this trick.

We’re guessing this means that malicious files that would usually be rendered harmless, for example by having built-in macro code suppressed, might be able to spring into life unexpectedly when viewed or opened.

Once again, the update will bring you back on par with the attackers, so: Don’t delay/Patch it today.

What to do?

  • Patch as soon as you can, as we just said above.
  • Read the full SophosLabs analysis of these bugs and more than 70 other patches, in case you still aren’t convinced.

Firefox 111 patches 11 holes, but not 1 zero-day among them…

Heard of cricket (the sport, not the insect)?

It’s much like baseball, except that batters can hit the ball wherever they like, including backwards or sideways; bowlers can hit the batter with the ball on purpose (within certain safety limits, of course – it just wouldn’t be cricket otherwise) without kicking off a 20-minute all-in brawl; there’s almost always a break in the middle of the afternoon for tea and cake; and you can score six runs at a time as long as you hit the ball high and far enough (seven if the bowler makes a mistake as well).

Well, as cricket enthusiasts know, 111 runs is a superstitious score, considered inauspicious by many – the cricketer’s equivalent of Macbeth to an actor.

It’s known as a Nelson, though nobody actually seems to know why.

Today therefore sees Firefox’s Nelson release, with version 111.0 coming out, but there doesn’t seem to be anything inauspicious about this one.

Eleven individual patches, and two batches-of-patches

As usual, there are numerous security patches in the update, including Mozilla’s usual combo-CVE vulnerability numbers for potentially exploitable bugs that were found automatically and patched without waiting to see if a proof-of-concept (PoC) exploit was possible:

  • CVE-2023-28176: Memory safety bugs fixed in Firefox 111 and Firefox ESR 102.9. These bugs were shared between the current version (which includes new features) and the ESR version, short for extended support release (security fixes applied, but with new features frozen since version 102, nine releases ago).
  • CVE-2023-28177: Memory safety bugs fixed in Firefox 111 only. These bugs almost certainly only exist in new code that brought in new features, given that they didn’t show up in the older ESR codebase.

These bags-of-bugs have been rated High rather than Critical.

Mozilla admits that “we presume that with enough effort some of these could have been exploited to run arbitrary code”, but no one has yet figured out how to do so, or even if such exploits are feasible.

None of the other nine CVE-numbered bugs this month were worse than High; three of them apply to Firefox for Android only; and no one has yet (so far as we know) come up with a PoC exploit that shows how to abuse them in real life.

Two notably interesting vulnerabilities appear amongst the 11, namely:

  • CVE-2023-28161: One-time permissions granted to a local file were extended to other local files loaded in the same tab. With this bug, if you opened a local file (such as downloaded HTML content) that wanted access, say, to your webcam, then any other local file you opened afterwards would magically inherit that access permission without asking you. As Mozilla noted, this could lead to trouble if you were looking through a collection of items in your download directory – the access permission warnings you’d see would depend on the order in which you opened the files.
  • CVE-2023-28163: Windows Save As dialog resolved environment variables. This is another keen reminder to sanitise thine inputs, as we like to say. In Windows commands, some character sequences are treated specially, such as %USERNAME%, which gets converted to the name of the currently logged-on user, or %PUBLIC%, which denotes a shared directory, usually in C:\Users. A sneaky website could use this as a way to trick you into seeing and approving the download of a filename that looks harmless but lands in a directory you wouldn’t expect (and where you might not later realise it had ended up).
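A simple defensive check along those sanitise-thine-inputs lines can be sketched in Python (hypothetical code, not how the actual Windows dialog works internally — it just shows why %VARIABLE% sequences in a proposed filename deserve suspicion before any expansion happens):

```python
import re

# Windows-style environment variable references look like %NAME%
ENVVAR = re.compile(r"%[^%]+%")

def looks_sneaky(filename):
    # Flag any proposed download filename that contains a
    # %VARIABLE% sequence, which Windows might expand into a
    # completely different path than the one the user approved
    return bool(ENVVAR.search(filename))

assert not looks_sneaky("holiday-photos.zip")
assert looks_sneaky("%PUBLIC%\\innocent.doc")   # would land in C:\Users\Public
```

The fix in Firefox 111 takes the complementary approach: the Save As dialog simply no longer treats those sequences as expandable, so what you see is what you save.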

What to do?

Most Firefox users will get the update automatically, typically after a random delay to stop everyone’s computer downloading at the same moment…

…but you can avoid the wait by manually using Help > About (or Firefox > About Firefox on a Mac) on a laptop, or by forcing an App Store or Google Play update on a mobile device.

(If you’re a Linux user and Firefox is supplied by the maker of your distro, do a system update to check for the availability of the new version.)


Linux gets double-quick double-update to fix kernel Oops!

Linux has never suffered from the infamous BSoD, short for blue screen of death, the name given to the dreaded “something went terribly wrong” message associated with a Windows system crash.

Microsoft has tried many things over the years to shake that nickname “BSoD”, including changing the background colour used when crash messages appear, adding a super-sized sad-face emoticon to make the message feel more compassionate, displaying QR codes that you can snap with your phone to help you diagnose the problem, and not filling the screen with a technobabble list of kernel code objects that just happened to be loaded at the time.

(Those crash dump lists often led to anti-virus and threat-prevention software being blamed for every system crash, simply because their names tended to show up at or near the top of the list of loaded modules – not because they had anything to do with the crash, but because they generally loaded early on and just happened to be at the top of the list, thus making a convenient scapegoat.)

Even better, “BSoD” is no longer the everyday, throwaway pejorative term that it used to be, because Windows crashes a lot less often than it used to.

We’re not suggesting that Windows never crashes, or implying that it is now magically bug-free; merely noting that you generally don’t need the word BSoD as often as you used to.

Linux crash notifications

Of course, Linux has never had BSoDs, not even back when Windows seemed to have them all the time, but that’s not because Linux never crashes, or is magically bug-free.

It’s simply that Linux doesn’t BSoD (yes, the term can be used as an intransitive verb, as in “my laptop BSoDded half way through an email”), because – in a delightful understatement – it suffers an oops, or, if the oops is severe enough that the system can’t reliably stay up even with degraded performance, it panics.

(It’s also possible to configure a Linux kernel so that an oops always get “promoted” to a panic, for environments where security considerations make it better to have a system that shuts down abruptly, albeit with some data not getting saved in time, than a system that ends up in an uncertain state that could lead to data leakage or data corruption.)
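That promote-to-panic behaviour is controlled at runtime by the kernel.panic_on_oops sysctl; a minimal config fragment might look like this (the values are illustrative – the right policy depends on your environment):

```
# /etc/sysctl.d/99-panic.conf
kernel.panic_on_oops = 1   # treat every oops as a full panic
kernel.panic = 10          # after a panic, reboot automatically in 10 seconds
```

Leaving kernel.panic at its default of 0 instead means a panicked system stays halted until someone power-cycles it, which may be preferable if you want a human to investigate before the machine rejoins the network.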

An oops typically produces console output something like this (we’ve provided source code below if you want to explore oopses and panics for yourself):

[12710.153112] oops init (level = 1)
[12710.153115] triggering oops via BUG()
[12710.153127] ------------[ cut here ]------------
[12710.153128] kernel BUG at /home/duck/Articles/linuxoops/oops.c:17!
[12710.153132] invalid opcode: 0000 [#1] PREEMPT SMP PTI
[12710.153748] CPU: 0 PID: 5531 Comm: insmod . . .
[12710.154322] Hardware name: XXXX
[12710.154940] RIP: 0010:oopsinit+0x3a/0xfc0 [oops]
[12710.155548] Code: . . . . .
[12710.156191] RSP: . . . EFLAGS: . . .
[12710.156849] RAX: . . . RBX: . . . RCX: . . .
[12710.157513] RDX: . . . RSI: . . . RDI: . . .
[12710.158171] RBP: . . . R08: . . . R09: . . .
[12710.158826] R10: . . . R11: . . . R12: . . .
[12710.159483] R13: . . . R14: . . . R15: . . .
[12710.160143] FS: . . . GS: . . . knlGS: . . . . . . . .
[12710.163474] Call Trace:
[12710.164129] [12710.164779] do_one_initcall+0x56/0x230
[12710.165424] do_init_module+0x4a/0x210
[12710.166050] __do_sys_finit_module+0x9e/0xf0
[12710.166711] do_syscall_64+0x37/0x90
[12710.167320] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[12710.167958] RIP: 0033:0x7f6c28b15e39
[12710.168578] Code: . . . . .
[. . . . .
[12710.173349] [12710.174032] Modules linked in: . . . . .
[12710.180294] ---[ end trace 0000000000000000 ]---

Unfortunately, when kernel version 6.2.3 came out at the end of last week, two tiny changes quickly proved to be problematic, with users reporting kernel oopses when managing disk storage.

Kernel 6.1.16 was apparently subject to the same changes, and thus prone to the same oopsiness.

For example, plugging in a removable drive and mounting it worked fine, but unmounting the drive when you’d finished with it could cause an oops.

Although an oops doesn’t immediately freeze the whole computer, kernel-level code crashes when unmounting disk storage are worrisome enough that a well-informed user would probably want to shut down as soon as possible, in case of ongoing trouble leading to data corruption…

…but some users reported that the oops prevented what’s known in the jargon as an orderly shutdown, requiring forcibly cycling the power, by holding down the power button for several seconds, or temporarily cutting the mains supply to a server.

The good news is that kernels 6.2.4 and 6.1.17 were immediately released over the weekend to roll back the problems.

Given the velocity of Linux kernel releases, those updates have already been followed by 6.2.5 and 6.1.18, which were themselves updated (today, 2023-03-13) by 6.2.6 and 6.1.19.

What to do?

If you are using a 6.x-version Linux kernel and you aren’t already bang up-to-date, make sure you don’t install 6.2.3 or 6.1.16 along the way.

If you’ve already got one of those versions (we had 6.2.3 for a couple of days and were unable to provoke a driver crash, presumably because our kernel configuration shielded us inadvertently from triggering the bug), consider updating as soon as you can…

…because even if you haven’t suffered any disk-volume-based trouble so far, you may be immune by good fortune, but by upgrading your kernel again you will become immune by design.


EXPLORING OOPS AND PANIC EVENTS ON YOUR OWN

You will need a kernel built from source code that’s already installed on your test computer.

Create a directory, let’s call it /test/oops, and save this source code as oops.c:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/init.h>

MODULE_LICENSE("GPL");

static int level = 0;
module_param(level,int,0660);

static int oopsinit(void)
{
   printk("oops init (level = %d)\n",level);
   // level: 0->just load; 1->oops; 2->panic
   switch (level) {
      case 1: printk("triggering oops via BUG()\n");
              BUG();
              break;
      case 2: printk("forcing a full-on panic()\n");
              panic("oops module");
              break;
   }
   return 0;
}

static void oopsexit(void)
{
   printk("oops exit\n");
}

module_init(oopsinit);
module_exit(oopsexit);

Create a file in the same directory called Kbuild to control the build parameters, like this:

EXTRA_CFLAGS = -Wall -g
obj-m = oops.o

Then build the module as shown below.

The -C option tells make where to start looking for Makefiles, thus pointing the build process at the right kernel source code tree, and the M= setting tells make where to find the actual module code to build on this occasion.

You must provide the full, absolute path for M=, so don’t try to save typing by using ./ (the current directory moves around during the build process):

/test/oops$ make -C /where/you/built/the/kernel M=/test/oops
CC [M] /home/duck/Articles/linuxoops/oops.o
MODPOST /home/duck/Articles/linuxoops/Module.symvers
CC [M] /home/duck/Articles/linuxoops/oops.mod.o
LD [M] /home/duck/Articles/linuxoops/oops.ko

You can load and unload the new oops.ko kernel module with the parameter level=0 just to check that it works.

Look in dmesg for a log of the init and exit calls:

/test/oops# insmod oops.ko level=0
/test/oops# rmmod oops
/test/oops# dmesg
. . .
[12690.998373] oops: loading out-of-tree module taints kernel.
[12690.999113] oops init (level = 0)
[12704.198814] oops exit

To provoke an oops (recoverable) or a panic (will hang your computer), use level=1 or level=2 respectively.

Don’t forget to save all your work before triggering either condition (you will need to reboot afterwards), and don’t do this on someone else’s computer without formal permission.

