ONE WEEK, TWO BWAINS
Apple patches two zero-days, one for a second time. How a 30-year-old cryptosystem got cracked. All your secret are belong to Zenbleed. Remembering those dodgy PC/Mac ads.
With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.
You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.
READ THE TRANSCRIPT
DOUGLAS. Apple patches, security versus performance, and hacking police radios.
All that, and more, on the Naked Security podcast.
[MUSICAL MODEM]
Welcome to the podcast, everybody.
I am Doug Aamoth; he is Paul Ducklin.
Paul, what’s up, buddy?
DUCK. It’s July, Douglas!
DOUGLAS. Well, let’s talk about July in our This Week in Tech History segment.
28 July 1993 brought us version 1.0 of the Lua programming language.
And even if you’ve never heard of the Little Language That Could, you’ve probably benefitted from it.
Lua is used in apps such as Roblox, World of Warcraft, Angry Birds, web apps from Venmo and Adobe, not to mention Wireshark, Nmap, Neovim, and zillions more widespread scriptable apps.
Paul, you use Lua in some of the Naked Security articles, if I’m not mistaken.
DUCK. I’m a big Lua fan, Douglas.
I use it quite extensively for my own scripting.
It’s what I like to call a “lean, mean fighting machine”.
It’s got some lovely characteristics: it’s a very easy language to learn; it’s a very easy language to read; and yet you can even write programs in functional style.
(Speaking technically, functions are first-class objects in the language, so you can do all sorts of neat stuff that you can’t do with more traditional languages like C.)
And I often use it for what would otherwise be pseudocode in Naked Security articles.
Because (A) you can copy-and-paste the code and try it out for yourself if you want, and (B) it is actually surprisingly readable, even for people who aren’t familiar with programming.
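To give you a feel for it, here’s the sort of minimal sketch I mean (plain Lua, nothing Sophos-specific, with function names made up just for the example):

    -- Functions are first-class values: you can store them in
    -- variables, pass them as arguments, and build them on the fly.
    local function apply(fn, x) return fn(x) end

    local double = function(n) return n * 2 end
    print(apply(double, 21))   --> 42

    -- Closures: a function that manufactures other functions.
    local function adder(k)
       return function(n) return n + k end
    end

    local add10 = adder(10)
    print(add10(32))           --> 42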
The word Lua means ‘moon’ in Portuguese.
DOUGLAS. Lovely!
Alright, let’s stay on the subject of code.
We’ve talked several times now about Apple’s second Rapid Response patch.
It was there, it wasn’t there, what happened to it?
Well, that patch is now part of a full update, and one which actually patched a second zero-day as well, Paul.
Apple ships that recent “Rapid Response” spyware patch to everyone, fixes a second zero-day
DUCK. Yes.
If you remember that Rapid Response, like you said…
…there was an update with version (a), which is how they denote the first one, then there was a problem with that (browsing to some websites that weren’t parsing User-Agent strings properly).
And so Apple said, “Oh, don’t worry, we’ll come out with version (b) in a bit.”
And then the next thing we saw was version (c).
You’re right, the idea of these Rapid Responses is they do eventually make it into the full upgrades, where you get a full new version number.
So, even if you’re fearful of Rapid Responses, you will get those fixes later, if not sooner.
And the zero-day in WebKit (that was the Rapid-Response-patched thing) has now been accompanied by a zero-day fix for a kernel-level hole.
And there are some (how can I put it?) “interesting co-incidences” when you compare it with Apple’s last major security upgrade back in June 2023.
Namely that the zero-day fixed in the Rapid Response part was in WebKit, and was attributed to “an anonymous researcher”.
And the zero-day now patched in the kernel was attributed to Russian anti-virus outfit Kaspersky, who famously reported that they’d found a bunch of zero-days on their own executives’ iPhones, presumably used for a spyware implant.
So the smart money is saying, even though Apple didn’t explicitly mention this in their security bulletins, that this is yet another fix related to that so-called Triangulation Trojan.
In other words, in-the-wild spyware that was used in at least some targeted attacks.
That makes the Rapid Response yet more understandable (as to why Apple wanted to get it out quickly), because that stops the browser being used to trick your phone in the first place.
And it makes this upgrade super-important, because it means it’s closing off the hole-behind-the-hole that we imagine crooks would use after compromising your browser.
They’d be chaining to this second vulnerability that gave them, essentially, complete control.
DOUGLAS. OK, so we go from two weeks ago to 30 years ago…
…and this is such an interesting story.
It’s a cautionary tale about not trying to keep cryptographic secrets hidden behind non-disclosure agreements. [NDAs]
Complete with a new BWAIN, Paul.
We’ve got a new BWAIN!
Hacking police radios: 30-year-old crypto flaws in the spotlight
DUCK. “Bug With An Impressive Name.”
If keeping the algorithm secret is necessary for it to work correctly…
…it only takes one person to take a bribe, or to make a mistake, or to reverse-engineer your product, for the whole thing to fall apart.
And that’s what this TETRA radio system did.
It relied on non-standard, proprietary, trade-secret encryption algorithms, with the result that they never really got much scrutiny over the years.
TETRA is Terrestrial Trunked Radio.
It’s kind-of like mobile telephony, but with some significant advantages for people like law enforcement and first responders, namely that it has a longer range, so you need far fewer base stations.
And it was designed from the outset with one-to-one and one-to-many communications, which is ideal when you’re trying to co-ordinate a bunch of people to respond to an emergency.
Unfortunately, it turned out to have some imperfections that were only discovered in 2021 by a bunch of Dutch researchers.
And they’ve been patiently waiting nearly two years to do their responsible disclosure, to come out with their details of the bugs, which they’ll be doing at a bunch of conferences, starting with Black Hat 2023.
You can understand why they want to make a big splash about it now, because they’ve been sitting on this information, working with vendors to get patches ready, since late 2021.
In fact, the CVEs, the bug numbers that they got, are all CVE-2022-xxxx, which just indicates how much inertia there is in the system that they’ve had to overcome to get patches out for these holes.
DOUGLAS. And our BWAIN is TETRA:BURST, which is exciting.
Let’s talk about some of these holes.
DUCK. There are five CVEs in total, but there are two main issues that I would think of as “teachable moments”.
The first one, which is CVE-2022-24401, deals with the thorny issue of key agreement.
How do your base station and somebody’s handset agree on the key they’re going to use for this particular conversation, so that it is reliably different from any other key?
TETRA did it by relying on the current time, which clearly only moves in a forward direction. (So far as we know.)
The problem is there was no data authentication or verification stage.
When the handset connects to the base station and gets the timestamp, it doesn’t have a way of checking, “Is this a real timestamp from a base station I trust?”
There was no digital signature on the timestamp, which meant that you could set up a rogue base station and you could trick them into talking to you using *your* timestamp.
In other words, the encryption key for a conversation from somebody else *that you already intercepted and recorded yesterday*…
…you could have a conversation today innocently with somebody, not because you wanted the conversation, but because you wanted to recover the keystream.
Then you could use that keystream, *because it’s the same one that was used yesterday*, for a conversation that you intercepted.
And, of course, another thing you could do is, if you figured that you wanted to be able to intercept something next Tuesday, you could trick someone into having a conversation with you *today* using a fake timestamp for next week.
Then, when you intercept that conversation in the future, you can decrypt it because you got the keystream from the conversation you had today.
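If I were sketching that in my usual Lua-as-pseudocode style, the replay trick might look something like this (a toy cipher with invented names, needing Lua 5.3 or later for the ~ XOR operator; the real TETRA algorithms are, of course, nothing like this simple):

    -- Hypothetical keystream generator keyed only on a timestamp.
    -- With no authentication, anyone can replay an old timestamp.
    local function keystream(timestamp, len)
       local state, bytes = timestamp, {}
       for i = 1, len do
          state = (state * 1103515245 + 12345) % 0x80000000
          bytes[i] = state % 256
       end
       return bytes
    end

    -- XOR a table of bytes against a keystream, byte by byte.
    local function xor_bytes(data, ks)
       local out = {}
       for i = 1, #data do out[i] = data[i] ~ ks[i] end
       return out
    end

    local T = 1690000000   -- the timestamp used yesterday

    -- Yesterday: we intercepted this ciphertext, keyed on T.
    local secret = { string.byte("ATTACK AT DAWN", 1, -1) }
    local intercepted = xor_bytes(secret, keystream(T, #secret))

    -- Today: our rogue base station replays T, and we hold an
    -- innocent conversation whose plaintext we already know.
    local known = { string.byte("HELLO HELLO 99", 1, -1) }
    local todays_ct = xor_bytes(known, keystream(T, #known))

    -- Ciphertext XOR known plaintext recovers the keystream...
    local ks = xor_bytes(todays_ct, known)

    -- ...and that same keystream unlocks yesterday's intercept:
    print(string.char(table.unpack(xor_bytes(intercepted, ks))))
    --> ATTACK AT DAWN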
DOUGLAS. OK, so that’s the first bug.
And the moral of the story is: Don’t rely on data you can’t verify.
In the second bug, the moral of the story is: Don’t build in backdoors or other deliberate weaknesses.
That is a big no-no, Paul!
DUCK. It is indeed.
That one is CVE-2022-24402.
Now, I’ve seen in the media that there’s been some argument about whether this really counts as a backdoor, because it was put in on purpose and everyone who signed the NDA knew it was in there (or should have realised).
But let’s call it a backdoor, because it’s a deliberately-programmed mechanism whereby the operators of some types of device (fortunately not the ones generally sold to law enforcement or to first responders, but the ones sold to commercial organisations)…
…there’s a special mode where, instead of using 80-bit encryption keys, there’s a magic button you can press that says, “Hey, guys, only use 32 bits instead of 80.”
And when you think that we got rid of DES, the data encryption standard, around the turn of the millennium because it only had 56-bit keys, you can imagine, *today in 2023*, just how weak a 32-bit encryption key really is.
The time-and-materials cost of doing a brute-force attack is probably trivial.
You can imagine, with a couple of half-decent laptops, that you could do it in an afternoon for any conversation that you wished to decrypt.
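You can do that arithmetic on the back of an envelope, or at a Lua prompt (the trial rate below is an assumption for illustration, not a benchmark of any real cracking rig):

    local keys32 = 2^32   -- 4,294,967,296 possible keys
    local keys80 = 2^80   -- about 1.2 x 10^24 possible keys

    local rate = 1e6      -- assume 1 million key trials per second

    print(keys32 / rate / 3600)             --> about 1.2 hours
    print(keys80 / rate / 3600 / 24 / 365)  --> about 3.8 x 10^10 years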
DOUGLAS. Alright, very good.
Last, but not least, we have…
…if you remember Heartbleed back in 2014, don’t panic, but there’s a new thing called Zenbleed
Zenbleed: How the quest for CPU performance could put your passwords at risk
DUCK. Yes, it’s BWAIN Number Two of the week. [LAUGHS]
DOUGLAS. Yes, it’s another BWAIN! [LAUGHTER]
DUCK. I was minded to write this up because it’s got a cute name, Zenbleed (the name “Zen” comes from the fact that the bug applies to AMD’s Zen 2 processor series, as far as I know), and because this one was found by legendary bug-hunter from Google Project Zero, Tavis Ormandy, who’s been turning his attention to what happens inside processors themselves.
“Bleed” attacks… I’ll just describe them using the words that I wrote in the article:
The suffix “-bleed” is used for vulnerabilities that leak data in a haphazard way that neither the attacker nor the victim can really control.
So a bleed attack is one where you can’t poke a knitting needle into a computer across the Internet and go, “Aha! Now I want you to find that specific database called sales.sql and upload it to me.”
And you can’t stick a knitting needle in another hole and go, “I want you to watch memory offset 12 until a credit card number appears, and then save it to disk for later.”
You just get pseudorandom data that leaks out of other people’s programs.
You get arbitrary stuff that you’re not supposed to see, that you can collect at will for minutes, hours, days, even weeks if you want.
Then you can do your big-data work on that stolen stuff, and see what you get out of it.
So that’s what Tavis Ormandy found here.
It’s basically a problem with vector processing, which is where Intel and AMD processors work not in their normal 64-bit mode (where they can, say, add two 64-bit integers together in one go), but where they can work on 256-bit chunks of data at a time.
And that’s useful for things like password cracking, cryptomining, image processing, all sorts of stuff.
It’s a whole separate instruction set inside the processor; a whole separate set of internal registers; a whole set of fancy and really powerful calculations that you can do on these super-big numbers for super-big performance results.
What’s the chance that those are bug free?
And that’s what Tavis Ormandy went looking for.
He found a very special instruction, one that exists largely to avoid losing performance…
…you have this magical instruction called VZEROUPPER that tells the CPU, “Because I’ve been using these fancy 256-bit registers but I’m no longer interested in them, you don’t have to worry about saving their state for later.”
Guess what?
This magic instruction, which sets the top 128 bits of all 256-bit vector registers to zero at the same time, all with one instruction (you can see there’s a lot of complexity here)…
…basically, sometimes it leaks data from some other processes or threads that have run recently.
If you abuse this instruction in the right way, and Tavis Ormandy found out how to do this, you do your own magic vector instructions and you use this super-cool VZEROUPPER instruction in a special way, and what happens is that the vector registers in your program occasionally start showing up with data values that they’re not supposed to have.
And those data values aren’t random.
They’re actually 16-byte (128-bit) chunks of data *that came from somebody else’s process*.
You don’t know whose.
You just know that this rogue data is making its ghostly appearance from time to time.
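If you want a feel for the failure mode, here’s a deliberately cartoonish Lua model of “lazy zeroing gone wrong” (all the names and mechanics here are invented for illustration; the real bug lives in register renaming and speculative rollback inside the silicon, which is far more subtle):

    -- The register file is shared between tasks; "zeroing" merely
    -- sets a flag instead of actually wiping the stale contents.
    local regfile = { upper = "SECRET-FROM-TASK-A", zeroed = false }

    local function vzeroupper()
       regfile.zeroed = true    -- flag it, don't really wipe it
    end

    local function read_upper()
       if regfile.zeroed then return "0000000000000000" end
       return regfile.upper     -- whatever was left behind
    end

    vzeroupper()
    print(read_upper())   --> 0000000000000000 (looks fine so far)

    -- Now a mispredicted operation gets rolled back, and the
    -- rollback mistakenly restores the old flag state:
    regfile.zeroed = false

    print(read_upper())   --> SECRET-FROM-TASK-A (leaked!)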
Unfortunately, Taviso discovered that by misusing this instruction in the right/wrong sort of way, he could actually extract 30KB of rogue, ghostly data from other people’s processes per second per CPU core.
And although that sounds like a very slow data rate (who would want 30KB per second on an internet connection these days? – nobody)…
…when it comes to getting random 16-byte chunks of data out of other people’s programs, it actually works out at about 3GB per day per core.
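That’s just straight multiplication, which you can sanity-check in a couple of lines of Lua:

    local per_sec = 30 * 1024                 -- 30KB/sec, in bytes
    local per_day = per_sec * 60 * 60 * 24    -- 86,400 seconds in a day
    print(per_day / 1e9)   --> about 2.65GB; call it 3GB/day/core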
There are going to be bits of other people’s web pages; there are going to be usernames; there might be password databases; there might be authentication tokens.
All you have to do is go through this extensive supply of haystacks and find any needles that look interesting.
And the really bad part of this is *it’s not just other processes running at the same privilege level as you*.
So if you’re logged in as “Doug”, this bug doesn’t just spy on other processes running under the operating system account “Doug”.
As Taviso himself points out:
Basic operations like strlen, memcpy, and strcmp…
(Those are standard functions that all programs use for finding the length of text strings, for copying memory around, and for comparing two items of text.)
Those basic operations will use vector registers, so we can effectively use this technique to spy on those operations happening anywhere on the system!
And he allowed himself, understandably, an exclamation point, right there.
It doesn’t matter if they’re happening in other virtual machines, sandboxes, containers, processes, whatever.
I think he actually used a second exclamation point there as well.
In other words, *any process*, whether it’s the operating system, whether it’s another user in the same VM as you, whether it’s the program that controls the VM, whether it’s a sandbox that’s supposed to do super-private processing of passwords.
You’re just getting this steady feed of 16-byte data chunks coming from other people, and all you have to do is sit, and watch, and wait.
DOUGLAS. So, short of waiting for the motherboard vendor to patch…
If you’re using a Mac, you don’t need to worry about this (there are ARM-based Macs and Intel-based Macs, but no AMD Macs), but what about Windows users with AMD processors, and maybe certain Linux users?
DUCK. Your Linux distro may have a firmware microcode update that it will apply automatically for you.
And there is an essentially undocumented (or at best very poorly documented) AMD feature, a special command you can give to the chip via what are known as MSRs, or model-specific registers.
They’re like configuration-setting tools for each particular round of chips.
There is a setting you can make which apparently immunises your chip against this bug, so you can apply that.
There are commands to do this for Linux and the BSDs, but I’m not aware of similar commands on Windows, unfortunately.
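For the curious, the Linux route goes via the msr-tools utilities (rdmsr and wrmsr); the published Zenbleed writeup describes setting bit 9 of MSR 0xC0011029, the DE_CFG “chicken bit”, as a workaround. Here’s a Lua-flavoured sketch of that process, though please double-check the details against AMD’s own guidance before trying it:

    -- Needs root, and the msr kernel module loaded first.
    os.execute("modprobe msr")

    -- Read the current DE_CFG value from core 0:
    local f = io.popen("rdmsr -p 0 -c 0xc0011029")
    local current = tonumber(f:read("*l"))
    f:close()

    -- Set bit 9 (the mitigation bit) and write it back to all cores:
    local newval = current | (1 << 9)
    os.execute(string.format("wrmsr -a 0xc0011029 0x%x", newval))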
Messing with the model-specific CPU registers [MSRs] can be done on Windows, but generally speaking, you need a kernel driver.
And that typically means getting it from some unknown third party, compiling it yourself, installing it, turning driver signing off…
…so only do that if you absolutely need to, and you absolutely know what you’re doing.
If you’re really desperate on Windows, and you have an AMD Zen 2 processor, I think… (I haven’t tried it because I don’t have a suitable computer at hand for my experiments.)
DOUGLAS. You should expense one. [LAUGHS]
This is work-related!
DUCK. You could probably, if you download and install WinDbg [pronounced “windbag”], the Microsoft Debugger…
…that allows you to enable local kernel debugging, connect to your own kernel, and fiddle with model-specific registers [DRAMATIC VOICE] *at your own peril*.
And, of course, if you’re using OpenBSD, from what I hear, good old Theo [de Raadt] has said, “You know what, there is a mitigation; it’s turning on this special bit that stops the bug working. We’re going to make that default in OpenBSD, because our preference is to try to favour security even at the cost of performance.”
But for everyone else, you’re going to have to either wait until it’s fixed or do a little bit of micro-hacking, all on your own!
DOUGLAS. Alright, very good.
We will keep an eye on this, mark my words.
And as the sun begins to set on our show for today, let’s hear from one of our readers over on Facebook.
This relates to the Apple story that we talked about at the top of the show.
Anthony writes:
I remember, back in the day, when Apple users used to crow over the PC crowd about how Apple’s architecture was watertight and needed no security patching.
Paul, that raises an interesting question, because I think we revisit this at least annually.
What do we say to people who say that Apple’s so secure that they don’t need any security software, or they don’t need to worry about hacking, or malware, or any of that sort of stuff?
DUCK. Well, usually we give a nice big friendly grin and we say, “Hey, does anyone remember those ads? I’m a PC/I’m a Mac. I’m a PC/I’m a Mac. How did that play out?” [LAUGHTER]
DOUGLAS. Well said!
And thank you very much, Anthony, for writing that in.
If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.
You can email tips@sophos.com, comment on any one of our articles, or you can hit us up on social: @nakedSecurity.
That’s our show for today; thanks very much for listening.
For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…
BOTH. Stay secure!
[MUSICAL MODEM]