
Attention gamers! Motherboard maker MSI admits to breach, issues “rogue firmware” alert

If you’re a gamer or an avid squeezer of raw computing power, you’ve probably spent hours tweaking your motherboard settings to eke out every last drop of performance.

Over the years, you might even have tried out various unofficial firmware bodges and hacks to let you change settings that would otherwise be inaccessible, or to choose configuration combinations that aren’t usually allowed.

Just to be clear: we strongly advise against installing unknown, untrusted firmware BLOBs.

(BLOB is a jocular jargon term for firmware files that’s short for binary large object, meaning that it’s an all-in-one stew of code, tables of data, embedded files and images, and indeed anything needed by the firmware when it starts up.)

Loosely speaking, the firmware is a kind of low-level operating system in its own right that is responsible for getting your computer to the point at which it can boot into a regular operating system such as Windows, or one of the BSDs, or a Linux distro.

This means that booby-trapped firmware code, if you can be tricked into installing it, could be used to undermine the very foundation on which your operating system's own security subsequently relies.

Rogue firmware could, in theory, be used to spy on almost everything you do on your computer, acting as a super-low-level rootkit, the jargon term for malware that exists primarily to protect and hide other malware.

Rootkits generally aim to make higher-level malware difficult not only to remove, but even to detect in the first place.

The word rootkit comes from the old days of Unix hacking, before PCs themselves existed, let alone PC viruses and other malware. It referred to what was essentially a rogueware toolkit that a user with unauthorised sysadmin privileges, also known as root access, could install to evade detection. Rootkit components might include modified ls, ps and rm tools, for example (list files, list processes and remove files respectively), that deliberately suppressed mention of the intruder’s rogue software, and refused to delete it even if asked to do so. The name derives from the concept of “a software kit to help hackers and crackers maintain root access even while they’re being hunted down by the system’s real sysadmins”.

Digital signatures considered helpful

These days, rogue firmware downloads are generally easier to spot than they were in the past, given that official firmware downloads are usually digitally signed by the vendor.

These digital signatures can either be verified by the existing firmware to prevent rogue updates being installed at all (depending on your motherboard and its current configuration), or verified on another computer to check that they have the imprimatur of the vendor.

Note that digital signatures give you a much stronger proof of legitimacy than download checksums such as SHA-256 file hashes that are published on a company’s download site.

A download checksum simply confirms that the raw content of the file you downloaded matches the copy on the site where the checksum was stored, thus providing a quick way of verifying that there were no network errors during the download.

If crooks hack the server to alter the file you are going to download, they can simply alter its listed checksum at the same time, and the two will match, because there is no cryptographic secret involved in calculating the checksum from the file.
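
By way of illustration, here’s a minimal Node.js sketch of what a checksum check actually buys you (the file name and the published hash are hypothetical placeholders):

    // A minimal sketch of what a download checksum actually verifies.
    // The file name and the published hash are hypothetical placeholders.
    const crypto = require("crypto");
    const fs = require("fs");

    const downloadedFile = "firmware-update.bin";              // hypothetical download
    const publishedChecksum = "paste-the-published-hash-here"; // copied from the download page

    const actualChecksum = crypto
      .createHash("sha256")
      .update(fs.readFileSync(downloadedFile))
      .digest("hex");

    if (actualChecksum === publishedChecksum) {
      console.log("Checksum matches: the file arrived intact...");
      console.log("...but that says nothing about WHO created it.");
    } else {
      console.log("Checksum mismatch: the download was corrupted or tampered with.");
    }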

Digital signatures, however, are tied to a so-called private key that the vendor can store separately from the website, and the digital signature is typically calculated and added to the file somewhere in the vendor’s own, supposedly secure, software build system.

That way, the signed file retains its signed digital label wherever it goes.
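
Verifying that label needs the vendor’s public key, which you can obtain separately from the download itself. Here’s a minimal Node.js sketch, assuming a simple detached-signature scheme (the file names are hypothetical, and real firmware signing formats are usually more elaborate):

    // A minimal sketch of verifying a detached digital signature with the
    // vendor's public key. File names are hypothetical placeholders.
    const crypto = require("crypto");
    const fs = require("fs");

    const firmware  = fs.readFileSync("firmware-update.bin");      // the downloaded BLOB
    const signature = fs.readFileSync("firmware-update.bin.sig");  // detached signature
    const publicKey = fs.readFileSync("vendor-public-key.pem");    // obtained out-of-band

    // crypto.verify() succeeds only if the signature was computed with the
    // matching private key -- the secret the vendor keeps off the download server.
    const ok = crypto.verify("sha256", firmware, publicKey, signature);

    console.log(ok ? "Signature valid: made with the vendor's private key."
                   : "Signature INVALID: do not install this firmware.");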

So, even if crooks manage to create a booby-trapped download server with a Trojanised download on it, they can’t create a digital signature that reliably identifies them as the vendor you’d expect to see as the creator and signer of the file.

Unless, of course, the crooks manage to steal the vendor’s private keys used for creating those digital signatures…

…which is a bit like getting hold of a medieval monarch’s signet ring, so you can press their official sign into the wax seals on totally fraudulent documents.

MSI’s dilemma

Well, fans of MSI motherboards should be doubly cautious about installing off-market firmware right now, even if it apparently comes with a legitimate-looking MSI digital “seal of approval”.

The motherboard megacorp issued an official breach notification at the end of last week, admitting:

MSI recently suffered a cyberattack on part of its information systems. […] Currently, the affected systems have gradually resumed normal operations, with no significant impact on financial business.

Word on the street is that MSI was hit by a ransomware gang going by the in-your-face name of Money Message, who are apparently attempting to blackmail MSI by threatening, amongst other nastinesses, to expose stolen data such as:

MSI source code including framework to develop BIOS [sic], also we have private keys.


Claim made by Money Message blackmail gang on its darkweb “news” server.

The implication seems to be that the criminals now have the wherewithal to build a firmware BLOB not only in the right format but also with the right digital signature embedded in it.

MSI has neither confirmed nor denied what was stolen, but is warning customers “to obtain firmware/BIOS updates only from [MSI’s] official website, and not to use files from sources other than the official website.”

What to do?

If the criminals are telling the truth, and they really do have the private keys they need to sign firmware BLOBs (MSI certainly has lots of different private keys for all sorts of different signing purposes, so even if the crooks have some private keys they might not have the right ones for approving firmware builds)…

…then going off-market is now doubly dangerous, because checking the digital signature of the downloaded file is no longer enough to confirm its origin.

Carefully sticking to MSI’s official site is safer, because the crooks would need not only the signing keys for the firmware file, but also access to the official site to replace the genuine download with their booby-trapped fake.

We’re hoping that MSI is taking extra care over who has access to its official download portal right now, and watching it more carefully than usual for unexpected changes…


Apple zero-day spyware patches extended to cover older Macs, iPhones and iPads

Last week, we warned about the appearance of two critical zero-day bugs that were patched in the very latest versions of macOS (version 13, also known as Ventura), iOS (version 16), and iPadOS (version 16).

Zero-days, as the name suggests, are security vulnerabilities that were found by attackers, and put to real-life use for cybercriminal purposes, before the Good Guys noticed and came up with a patch.

Simply put, there were zero days during which even the most proactive and cybersecurity-conscious users amongst us could have patched ahead of the crooks.

What happened?

Notably, in this recent Apple zero-day incident:

  • The initial report provided to Apple was jointly credited to the Amnesty International Security Lab and the Google Threat Analysis Group. As we suggested last week:

    It’s not a big jump to assume that this bug was spotted by privacy and social justice activists at Amnesty, and investigated by incident response handlers at Google; if so, we’re almost certainly talking about security holes that can be, and already have been, used for implanting spyware.

  • Security hole #1 was a remote code execution bug in WebKit. Remote code execution, or RCE for short, means exactly what it says: someone who doesn’t have physical access to your device, and who doesn’t have a username and password that would let them log in over the network, can nevertheless dupe your computer into running untrusted code without giving you any security alerts or pop-up warnings. In Apple’s own words, “processing maliciously crafted web content may lead to arbitrary code execution.” Note that “processing web content” is what your browser does automatically even if all you do is to look at a website, so this sort of vulnerability is commonly exploited to implant malware silently onto your device in an attack known in the jargon as a drive-by install.
  • Security hole #2 was a code execution bug in the kernel itself. This means an attacker who has already implanted application-level malware on your device (for example by exploiting a drive-by malware implantation bug in WebKit!) can take over your entire device, not merely a single app such as your browser. Kernel-level malware typically has as-good-as-unregulated access to your entire system, including hardware such as cameras and microphones, all files belonging to all apps, and even the data that each app has in memory at any moment.

Just to be clear: the Apple Safari browser uses WebKit for “processing web content” on all Apple devices, although third-party browsers such as Firefox, Edge and Chromium don’t use WebKit on Macs.

But all browsers on iOS and iPadOS (along with any apps that process web-style content for any reason at all, such as displaying help files or even just popping up About screens) are required to use WebKit.

This thou-shalt-use-WebKit rule is an Apple pre-condition for getting software accepted into the App Store, which is pretty much the only way to install apps on iPhones and iPads.

Updates so far

Last week, iOS 16, iPadOS 16 and macOS 13 Ventura received simultaneous updates for both these security holes, thus patching not only against drive-by installs that exploited the WebKit bug (CVE-2023-28205), but also against device takeover attacks that exploited the kernel vulnerability (CVE-2023-28206).

At the same time, macOS 11 Big Sur and macOS 12 Monterey received patches, but only against the WebKit bug.

Although that stopped criminals using booby-trapped web pages to exploit CVE-2023-28205 and thus to infect you via your browser, it didn’t do anything to prevent attackers with other ways into your system from taking over completely by exploiting the kernel bug.

Indeed, we didn’t know at the time whether the older macOSes didn’t get patched against CVE-2023-28206 because they weren’t vulnerable to the kernel bug, or because Apple simply hadn’t got the patch ready yet.

Even more worryingly, iOS 15 and iPadOS 15, which are still officially supported, and are indeed all you can run if you have an older iPhone or iPad that can’t be upgraded to version 16, didn’t get any patches at all.

Were they vulnerable to drive-by installs via web pages but not to kernel-level compromise?

Were they vulnerable in the kernel but not in WebKit?

Were they actually vulnerable to both bugs, or simply not vulnerable at all?

Update to the update story

We now know the answers to the questions above.

All supported versions of iOS and iPadOS (15 and 16) and of macOS (11, 12 and 13) are vulnerable to both of these bugs, and they have now all received patches for both vulnerabilities.

This follows Apple’s email announcements earlier today (ours arrived just after 2023-04-10T18:30:00Z) of the following security bulletins:

  • HT213725: macOS Big Sur 11.7.6. Gets an operating system update that adds a kernel-level patch for the CVE-2023-28206 “device takeover” bug, to go with the WebKit patch that came out last week for the CVE-2023-28205 “drive-by install” bug.
  • HT213724: macOS Monterey 12.6.5. Gets an operating system update that adds a kernel-level patch for the “device takeover” bug, to go with the WebKit patch that came out last week.
  • HT213723: iOS 15.7.5 and iPadOS 15.7.5. All iPhones and iPads running version 15 now receive an operating system update to patch against both bugs.

What to do?

In short: check for updates now.

If you’ve got a recent-model Mac or iDevice you will probably already have all the updates you need, but it makes sense to check, just in case.

If you have an older Mac, you need to ensure you have last week’s Safari update and this latest patch to go with it.

If you have an older iPhone or iPad, you need to get today’s update, or else you remain vulnerable to both bugs, as used in the wild in the attack discovered by Amnesty and investigated by Google.


Popular server-side JavaScript security sandbox “vm2” patches remote execution hole

We’ve written before, back in 2022, about a code execution hole in the widely-used JavaScript sandbox system vm2.

Now we’re writing to let you know about a similar-but-different hole in the same sandbox toolkit, and urging you to update vm2 if you use (or are responsible for building) any products that depend on this package.

As you’ve probably guessed, VM is short for virtual machine, a name often used to describe what you might call a “software computer” that helps you to run applications in a restricted way, under more careful control than would be possible if you gave those applications direct access to the underlying operating system and hardware.

And the word sandbox is another way of referring to a stripped-down and regulated runtime environment that an application thinks is the real deal, but which cocoons the app to restrict its ability to perform dangerous actions, whether through incompetence or malice.

Trapped in an artificial reality

For example, an app might expect to be able to find and open the system-wide user database file /etc/passwd, and might report an error and refuse to go further if it can’t.

In some cases, you might be happy with that, but you might decide (for safety as much as for security) to run the app in a sandbox where it can open a file that answers to the name /etc/passwd, but that is actually a stripped-down or mocked-up copy of the real file.

Likewise, you might want to corral all the network requests made by the app so that it thinks it has unfettered access to the internet, and behaves programmatically as though it does…

…while in fact it is communicating through what amounts to a network simulator that keeps the app inside a well-regulated walled garden, with content and behaviour you can control as you wish.

In short, and in keeping with the metaphor, you’re forcing the app to play in a sandbox of its own, which can help to protect you from possible harm caused by bugs, by malware code, or by ill-considered programming choices in the app itself – all without needing to modify or even recompile the app.
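
You can see the same “artificial reality” trick in miniature with a JavaScript sandbox such as vm2, the subject of this article. Here’s an illustrative sketch (not a hardening recipe) in which untrusted code gets a harmless stand-in for /etc/passwd instead of the real thing:

    // Illustrative sketch only: handing sandboxed JavaScript a mocked-up
    // "/etc/passwd" instead of the real file, using vm2's VM class.
    const { VM } = require("vm2");

    const fakeFiles = {
      "/etc/passwd": "root:x:0:0:root:/root:/usr/sbin/nologin\n",  // stripped-down stand-in
    };

    const vm = new VM({
      timeout: 1000,  // stop runaway code
      sandbox: {
        // The only "file system" the sandboxed code ever sees:
        readFile: (path) => {
          if (path in fakeFiles) return fakeFiles[path];
          throw new Error("ENOENT: no such file: " + path);
        },
      },
    });

    // The untrusted code thinks it has read the real user database...
    console.log(vm.run('readFile("/etc/passwd")'));
    // ...but it only ever gets the harmless mocked-up copy.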

Browser-style sandboxing for servers

Your web browser is a good example of a sandbox, which is how it keeps control over JavaScript programs that it downloads and runs from remote websites.

JavaScript in your browser is implicitly untrusted, so there are lots of JavaScript operations that it isn’t allowed to perform, or from which it will receive deliberately trimmed-down or incomplete answers, such as:

  • No access to files on your local computer. JavaScript in your browser can’t read or write files, list directories, or even find out whether specific files exist or not.
  • No access to cookies and web data from other sites. JavaScript fetched as part of example.com, for instance, can’t peek at web data such as cookies or authentication tokens set by other sites.
  • Controlled access to hardware such as camera and microphone. Website JavaScript can ask to use your audio-visual hardware, but by default it won’t get access unless you agree via a popup that can’t be controlled from JavaScript.
  • Limited precision from timers and other system measurements. To make it harder for browser-based JavaScript to make educated guesses about the identity of your computer based on details such as screen size, execution timings, and so on, browsers typically provide websites with useful but imprecise or incomplete replies that don’t make you stand out from other visitors.
  • No access to the display outside the web page window. This prevents website JavaScript from painting over warnings from the browser itself, or changing the name of the website shown in the address bar, or performing other deliberately misleading visual tricks.

The vm2 package is meant to provide a similar sort of restrictive environment for JavaScript that runs outside your browser, but that may nevertheless come from untrusted or semi-trusted sources, and therefore needs to be kept on a tight leash.

A huge amount of back-end server logic in cloud-based services is coded these days not in Java, but in JavaScript, typically using the node.js JavaScript ecosystem.

So vm2, which is itself written in JavaScript, aims to provide the same sort of sandboxing protection for full-blown server-based apps as your browser provides for JavaScript in web pages.
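
In vm2’s basic usage (a minimal sketch of the VM class, not a complete or hardened configuration), ordinary computation runs happily, but host-only globals such as process and require simply don’t exist inside the sandbox:

    // A minimal sketch of vm2's VM class: plain JavaScript runs fine, but
    // host-only globals such as `process` are not defined inside the sandbox.
    const { VM } = require("vm2");

    const vm = new VM({ timeout: 1000, sandbox: {} });

    console.log(vm.run("6 * 7"));  // ordinary computation: prints 42

    try {
      vm.run("process.exit(1)");   // untrusted code tries to reach the host...
    } catch (err) {
      // ...and fails with a ReferenceError, because there is no `process` here.
      console.log("Blocked:", err.message);
    }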

To be clear: the two languages Java and JavaScript are related only in the shared letters in their respective names. They have little more in common than cars and carpets, or carpets and pets.

Security error in an error handler

Unfortunately, this new CVE-2023-29017 bug in vm2 meant that a JavaScript function in the sandbox that was supposed to help you tidy up after errors when running background tasks…

…could be tricked into running code of your choice if you deliberately provoked an error in order to trigger the buggy function.

Simply put, “a threat actor can bypass the sandbox protections to gain remote code execution rights on the host running the sandbox.”

Worse still, a South Korean Ph.D. student has published two proof-of-concept (PoC) JavaScript fragments on GitHub that show how the exploit works; the code is annotated with the comment, “Expected result: We can escape vm2 and execute arbitrary shellcode.”

The sample exploit snippets show how to run any command you like in a system shell, as you could with the C function system(), the Python function os.system(), or Lua’s os.execute().
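
For comparison, the Node.js equivalent of those shell-out functions lives in the built-in child_process module; this is the sort of host-level power an attacker ends up with once the sandbox is escaped (shown here with a harmless command, purely by way of illustration):

    // The Node.js counterpart of system()/os.system()/os.execute():
    // running a command in a system shell via the built-in child_process module.
    const { execSync } = require("child_process");

    console.log(execSync("echo hello from the host shell").toString());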

What to do?

The vm2 developers patched this bug super-quickly, and promptly published a GitHub advisory…

…so take the hint, and update as soon as you can if you have any apps that rely on vm2.

The bug was patched in vm2 version 3.9.15, which came out last Thursday (2023-04-06T18:46:00Z).

If you use any server-side node.js JavaScript applications that you don’t manage and build yourself, and you aren’t sure whether they use vm2 or not, contact your vendor for advice.


Apple issues emergency patches for spyware-style 0-day exploits – update now!

Apple just issued a short, sharp series of security fixes for Macs, iPhones and iPads.

All supported macOS versions (Big Sur, Monterey and Ventura) have patches you need to install, but only the iOS 16 and iPadOS 16 mobile versions currently have updates available.

As ever, we can’t yet tell you whether iOS 15 and iPadOS 15 users with older devices are immune and therefore don’t need a patch, are at risk and will get a patch in the next few days, or are potentially vulnerable but are going to be left out in the cold.

Two different bugs are addressed in these updates; importantly, both vulnerabilities are described not only as leading to “arbitrary code execution”, but also as “actively exploited”, making them zero-day holes.

Hack your browser, then pwn the kernel

The bugs are:

  • CVE-2023-28205: A security hole in WebKit, whereby merely visiting a booby-trapped website could give cybercriminals control over your browser, or indeed any app that uses WebKit to render and display HTML content. (WebKit is Apple’s web content display subsystem.) Many apps use WebKit to show you web page previews, display help text, or even just to generate a good-looking About screen. Apple’s own Safari browser uses WebKit, making it directly vulnerable to WebKit bugs. Additionally, Apple’s App Store rules mean that all browsers on iPhones and iPads must use WebKit, making this sort of bug a truly cross-browser problem for mobile Apple devices.
  • CVE-2023-28206: A bug in Apple’s IOSurfaceAccelerator display code. This bug allows a booby-trapped local app to inject its own rogue code right into the operating system kernel itself. Kernel code execution bugs are inevitably much more serious than app-level bugs, because the kernel is responsible for managing the security of the entire system, including what permissions apps can acquire, and how freely apps can share files and data between themselves.

Ironically, kernel-level bugs that rely on a booby-trapped app are often not much use on their own against iPhone or iPad users, because Apple’s strict App Store “walled garden” rules make it hard for attackers to trick you into installing a rogue app in the first place.

You can’t go off market and install apps from a secondary or unofficial source, even if you want to, so crooks would need to sneak their rogue app into the App Store first before they could attempt to talk you into installing it.

But when attackers can combine a remote browser-busting bug with a local kernel-busting hole, they can sidestep the App Store problem entirely.

That’s apparently the situation here, where the first bug (CVE-2023-28205) allows attackers to take over your phone’s browser app remotely…

…at which point, they have a booby-trapped app that they can use to exploit the second bug (CVE-2023-28206) to take over your entire device.

And remember that because all App Store apps with web display capabilities are required to use WebKit, the CVE-2023-28205 bug affects you even if you have installed a third-party browser to use instead of Safari.

Reported in the wild by activists

The worrying thing about both bugs is not only that they’re zero-day holes, meaning the attackers found them and were already using them before any patches were figured out, but also that they were reported by “Clément Lecigne of Google’s Threat Analysis Group and Donncha Ó Cearbhaill of Amnesty International’s Security Lab”.

Apple isn’t giving any more detail than that, but it’s not a big jump to assume that this bug was spotted by privacy and social justice activists at Amnesty, and investigated by incident response handlers at Google.

If so, we’re almost certainly talking about security holes that can be, and already have been, used for implanting spyware.

Even if this suggests a targeted attack, and thus that most of us are not likely to be at the receiving end of it, it nevertheless implies that these bugs work effectively in real life against unsuspecting victims.

Simply put, you should assume that these vulnerabilities represent a clear and present danger, and aren’t just proof-of-concept holes or theoretical risks.

What to do?

Update now!

You may already have been offered the update by Apple; if you haven’t been, or you were offered it but turned it down for the time being, we suggest forcing an update check as soon as you can.

The updates up for grabs are:

  • HT213722: Safari 16.4.1. This covers CVE-2023-28205 (the WebKit bug only) for Macs running Big Sur and Monterey. The patch isn’t packaged as a new version of the operating system itself, so your macOS version number won’t change.
  • HT213721: macOS Ventura 13.3.1. This covers both bugs for the latest macOS release, and includes the Safari update that has been bundled separately for users of older Macs.
  • HT213720: iOS 16.4.1 and iPadOS 16.4.1. This covers both bugs for iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later.

If you’re still on iOS 15 or iPadOS 15, watch this space (or keep your eyes on Apple’s HT201222 security portal) in case it turns out that you need an update, too.


S3 Ep129: When spyware arrives from someone you trust

WHEN MALWARE COMES FROM WITHIN


With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Wi-Fi hacks, World Backup Day, and supply chain blunders.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth and he is Paul Ducklin.

Paul, how do you do?


DUCK.  Looking forward to a full moon ride tonight, Doug!


DOUG.  We like to begin our show with This Week in Tech History, and we’ve got a lot of topics to choose from.

We shall spin the wheel.

The topics today include: first spacecraft to orbit the moon, 1966; first cellphone call, 1973; Microsoft founded, 1975; birth of Netscape, 1994; SATAN (the network scanner, not the guy), 1995… I think the guy came before that.

And Windows 3.1, released in 1992.

I’ll spin the wheel here, Paul…

[FX: WHEEL OF FORTUNE SPINS]


DUCK.  Come on, moon – come on, moon…

…come on, moon-orbiting object thing!

[FX: WHEEL SLOWS AND STOPS]


DOUG.  We got SATAN.

[FX: HORN BLAST]

All right…


DUCK.  Lucifer, eh?

“The bringer of light”, ironically.


DOUG.  [LAUGHS] This week, on 05 April 1995, the world was introduced to SATAN: Security Administrator Tool for Analyzing Networks, which was a free tool for scanning potentially vulnerable networks.

It was not uncontroversial, of course.

Many pointed out that making such a tool available to the general public could lead to untoward behaviour.

And, Paul, I’m hoping you can contextualise how far we’ve come since the early days of scanning tools like this…


DUCK.  Well, I guess they’re still controversial in many ways, Doug, aren’t they?

If you think of tools that people are used to these days, things like NMap (network mapper), where you go out across the network and try and find out…

…what servers are there?

What ports are they listening on?

Maybe even poke a knitting needle in and say, “What kind of things are they doing on that port? Is it really a web port, or are they secretly using it to funnel out traffic of another sort?”

And so on.

I think we’ve just come to realise that most security tools have a good side and a dark side, and it’s more about how and when you use them and whether you have the authority – moral, legal, and technical – to do so, or not.


DOUG.  Alright, very good.

Let us talk about this big supply chain issue.

I hesitate to say, “Another day, another supply chain issue”, but it feels like we’re talking about supply chain issues a lot.

This time it’s telephony company 3CX.

So what has happened here?

Supply chain blunder puts 3CX telephone app users at risk


DUCK.  Well, I think you’re right, Doug.

It is a sort of “here we go again” story.

The initial malware appears to have been built by, or signed by, or given the imprimatur of, the company 3CX itself.

In other words, it wasn’t just a question of, “Hey, here’s an app that looks just like the real deal, but it’s coming from some completely bogus site, from some alternative supplier you’ve never heard of.”

It looks as though the crooks were able to infiltrate, in some way, some part of the source code repository that 3CX used – apparently, the part where they stored the code for a thing called Electron, which is a huge programming framework that’s very popular.

It’s used by products like Zoom and Visual Studio Code… if you’ve ever wondered why those products are hundreds of megabytes in size, it’s because a lot of the user interface, and the visual interaction, and the web rendering stuff, is done by this Electron underlayer.

So, normally that’s just something you suck in, and then you add your own proprietary code on top of it.

And it seems that the stash where 3CX kept their version of Electron had been poisoned.

Now, I’m guessing the crooks figured, “If we poison 3CX’s own proprietary code, the stuff that they work on every day, it’s much more likely that someone in code review will notice. It’s proprietary; they feel proprietarial about it. But if we just put some dodgy stuff in this giant sea of code that they suck in every time and kind of largely believe in… maybe we’ll get away with it.”

And it looks like that’s exactly what happened.

Seems that the people who got infected either downloaded the 3CX telephony app and installed it fresh during the window that it was infected, or they updated officially from a previous version, and they got the malware.

The main app loaded a DLL, and that DLL, I believe, went out to GitHub, where it downloaded what looked like an innocent icon file, but it wasn’t.

It was actually a list of command-and-control servers, and then it went to one of those command-and-control servers, and it downloaded the *real* malware that the crooks wanted to deploy and injected it directly into memory.

So that never appeared as a file.

Something of a mix of different tools may have been used; the one that you can read about on news.sophos.com is an infostealer.

In other words, the crooks are after sucking information out of your computer.

Update 2: 3CX users under DLL-sideloading attack: What you need to know


DOUG.  Alright, so check that out.

As Paul said, Naked Security and news.sophos.com have two different articles with everything you need.

Alright, from a supply chain attack where the bad guys inject all the nastiness at the beginning…

…to a WiFi hack where they try to extract information at the end.

Let’s talk about how to bypass Wi-Fi encryption, if only for a brief moment.

Researchers claim they can bypass Wi-Fi encryption (briefly, at least)


DUCK.  Yes, this was a fascinating paper that was published by a bunch of researchers from Belgium and the US.

I believe it’s sort of a preprint of a paper that’s going to be presented at the USENIX 2023 Conference.

They did come up with a sort of funky name… they called it Framing Frames, as in so-called wireless frames or wireless packets.

But I think the subtitle, the strapline, is a little more meaningful, and that says: “Bypassing Wi-Fi encryption by manipulating transmit queues.”

And very simply put, Doug, it has to do with the way that many, if not most, access points behave in order to give you a higher quality of service, if you like, when your client software or hardware goes off the air temporarily.

“Why don’t we save any leftover traffic so that if they do reappear, we can seamlessly let them carry on where they left off, and everyone will be happy?”

As you can imagine, there’s a lot that can go wrong when you’re saving up stuff for later…

…and that’s exactly what these researchers found.


DOUG.  Alright, it looks like there’s two different ways this could be carried out.

One involves wholesale disconnection, and the other involves dropping into sleep mode.

So let’s talk about the “sleep mode” version first.


DUCK.  It seems that if your WiFi card decides, “Hey, I’m going to go into power saving mode”, it can tell the access point in a special frame (thus the attack name Framing Frames)… “Hey, I’m going to sleep for a while. So you decide how you want to deal with the fact that I’ll probably wake up and come back online in a moment.”

And, like I said, a lot of access points will queue up left-over traffic.

Obviously, there are not going to be any new requests that need replies if your computer is asleep.

But you might be in the middle of downloading a web page, and it hasn’t quite finished yet, so wouldn’t it be nice if, when you came out of power-saving mode, the web page just finished transmitting those last few packets?

After all, they’re supposed to be encrypted (if you’ve got Wi-Fi encryption turned on), not just under the network key that requires the person to authenticate to the network first, but also under the session key that’s agreed for your laptop for that session.

But it turns out there’s a problem, Doug.

An attacker can send that, “Hey, I’m going to sleepy-byes” frame, pretending that it came from your hardware, and it doesn’t need to be authenticated to the network at all to do so.

So not only does it not need to know your session key, it doesn’t even need to know the network key.

It can basically just say, “I am Douglas and I’m going to have a nap now.”


DOUG.  [LAUGHS] I’d love a nap!


DUCK.  [LAUGHS] And the access points, it seems, don’t buffer up the *encrypted* packets to deliver to Doug later, when Doug wakes up.

They buffer up the packets *after they’ve been decrypted*, because when your computer comes back online, it might decide to negotiate a brand new session key, in which case they’ll need to be reencrypted under that new session key.

Apparently, in the gap while your computer isn’t sleeping but the access point thinks it is, the crooks can jump in and say, “Oh, by the way, I’ve come back to life. Cancel my encrypted connection. I want an unencrypted connection now, thank you very much.”

So the access point will then go, “Oh, Doug’s woken up; he doesn’t want encryption anymore. Let me drain those last few packets left over from the last thing he was looking at, without any encryption.”

Whereupon the attacker can sniff them out!

And, clearly, that shouldn’t really happen, although apparently it seems to be within the specifications.

So it’s legal for an access point to work that way, and at least some do.


DOUG.  Interesting!

OK, the second method does involve what looks like key-swapping…


DUCK.  Yes, it’s a similar sort of attack, but orchestrated in a different way.

This revolves around the fact that if you’re moving around, say in an office, your computer may occasionally disassociate itself from one access point and reassociate to another.

Now, like sleep mode, that disassociating (or kicking a computer off the network)… that can be done by someone, again, acting as an impostor.

So it’s similar to the sleep mode attack, but apparently in this case, what they do is they reassociate with the network.

That means they do need to know the network key, but for many networks, that’s almost a matter of public record.

And the crooks can jump back in and say, “Hey, I want you to use a key that I control now to do the encryption.”

Then, when the reply comes back, they’ll get to see it.

So it’s a tiny bit of information that might be leaked…

…it’s not the end of the world, but it shouldn’t happen, and therefore it must be considered incorrect and potentially dangerous.


DOUG.  We’ve had a couple comments and questions on this.

And over here, on American television, we’re seeing more and more commercials for VPN services saying, [DRAMATIC VOICE] “You cannot, under any circumstance ever, connect – don’t you dare! – to a public Wi-Fi network without using a VPN.”

Which, by the nature of those commercials being on TV, makes me think it’s probably a little bit overblown.

So what are your thoughts on using a VPN for public hotspots?


DUCK.  Well, obviously that would sidestep this problem, because the idea of a VPN is there’s essentially a virtual, a software-based, network card inside your computer that scrambles all the traffic, then spits it out through the access point to some other point in the network, where the traffic gets decrypted and put onto the internet.

So that means that even if someone were to use these Framing Frames attacks to leak occasional packets, not only would those packets potentially be encrypted (say, because you were visiting an HTTPS site), but even the metadata of the packet, like the server IP address and so on, would be encrypted as well.

So, in that sense, VPNs are a great idea, because it means that no hotspot actually sees the contents of your traffic.

Therefore, a VPN… it solves *this* problem, but you need to make sure that it doesn’t open you up to *other* problems, namely that now somebody else might be snooping on *all* your traffic, not just the occasional left-over, queued-up frames at the end of an individual reply.


DOUG.  Let’s talk now about World Backup Day, which was 31 March.

Don’t think that you have to wait until next March 31… you can still participate!

Now, we’ve got five tips, starting with my very favourite: Don’t delay, do it today, Paul.

World Backup Day is here again – 5 tips to keep your precious data safe


DUCK.  Very simply put, the only backup you will ever regret is the one you did not make.


DOUG.  And another great one: Less is more.

Don’t be a hoarder, in other words.


DUCK.  That’s difficult for some people.


DOUG.  It sure is.


DUCK.  If that’s the way your digital life is going, that it’s overflowing with stuff you almost certainly aren’t going to look at again…

…then why not take some time, independently of the rush that you are in when you want to do the backup, to *get rid of the stuff you don’t need*.

At home, it will declutter your digital life.

At work, it means you aren’t left holding data that you don’t need, and that, if it were to get breached, would probably get you in bigger trouble with rules like the GDPR, because you couldn’t justify or remember why you’d collected it in the first place.

And, as a side effect, it also means your backups will go faster and take up less space.


DOUG.  Of course!

And here’s one that I can guarantee not everyone is thinking of, and may have never thought of.

Number three is: Encrypt in flight; encrypt at rest.

What does that mean, Paul?


DUCK.  Everyone knows that it’s a good idea to encrypt your hard disk… requiring your BitLocker or your FileVault password to get in.

And many people are also in the habit, if they can, of encrypting the backups that they make onto, say, removable drives, so they can put them in a cupboard at home, and if they have a burglary and someone steals the drive, that person can’t just go and read off the data, because it’s password-protected.

It also makes a lot of sense, while you’re going to the trouble of encrypting the data when it’s stored, to make sure that it’s encrypted *before it leaves* your computer, or as it leaves your computer, if you’re doing, say, a cloud backup.

That means if the cloud service gets breached, it cannot reveal your data.

And even under a court order, it can’t recover your data.


DOUG.  All right, and this next one sounds straightforward, but it’s not quite as easy: Keep it safe.


DUCK.  Yes, we see, in lots of ransomware attacks, that victims think they’re going to recover easily without paying, because they’ve got live backups, either in things like Volume Shadow Copy, or cloud services that automatically sync every few minutes.

And so they think, “I’ll never lose more than ten minutes’ work. If I get hit by ransomware, I’ll log into the cloud and all my data will come back. I don’t need to pay the crooks!”

And then they go and have a look and realise, “Oh, heck, the crooks got in first; they found where I kept those backups; and they either filled them with garbage, or redirected the data somewhere else.”

So now the crooks have either stolen your data so that you no longer have it, or otherwise messed up your backups, all before they launched the attack.

Therefore, a backup that is offline and disconnected… that’s a great idea.

It’s a little less convenient, but it does keep your backups out of harm’s way if the crooks get in.

And it does mean that, in a ransomware attack, if your live backups have been trashed by the crooks on purpose because they found them before they unleashed the ransomware, you’ve got a second chance to go and recover the stuff.

And, of course, if you can keep that offline backup somewhere that is offsite, that means that if you’re locked out of your business premises, for example, due to a fire, or a gas leak, or some other catastrophe…

…you can still actually start the backup going.


DOUG.  And last but absolutely, positively, certainly not least: Restore is part of backup.


DUCK.  Sometimes the reason you need the backup is not simply to avoid paying crooks money for ransomware.

It might be to recover one lost file, for example, that’s important right now, but by tomorrow, it will be too late.

And the last thing you want to happen, when you’re trying to restore your precious backup, is that you’re forced to cut corners, use guesswork, or take unnecessary risks.

So: practise restoring individual files, even if you’ve got a huge amount of backup.

See how quickly and how reliably you can get back just *one* file for *one* user, because sometimes that will be key to what your restoration is all about.

And also make sure that you are fluent and fluid when you need to do huge restores.

For example, when you need to restore *all* the files belonging to a particular user, because their computer got trashed by ransomware, or stolen, or dropped in Sydney Harbour, or whatever fate befell it.


DOUG.  [LAUGHS] Very good.

And, as the sun begins to set on our show for the day, it’s time to hear from our readers on the World Backup Day article.

Richard writes, “Surely there ought to be two World Backup Days?”


DUCK.  You saw my response there.

I put [:drum emoji:] [:cymbal emoji:].


DOUG.  [LAUGHS] Yes, sir!


DUCK.  As soon as I’d done that, I thought, you know what?


DOUG.  There should be!


DUCK.  It’s not really a joke.

It encapsulates this deep and important truth, as we said at the end of that article on Naked Security, remember: “World Backup Day isn’t the one day every year when you actually do a backup. It’s the day you build a backup plan right into your digital lifestyle.”


DOUG.  Excellent.

Alright, thank you very much for sending that in, Richard.

You made a lot of people laugh with that, myself included!


DUCK.  It’s great.


DOUG.  Really good.


DUCK.  I’m laughing again now… it’s amusing me just as much as it did when the comment first came in.


DOUG.  Perfect.

OK, if you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast. You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure!

[MUSICAL MODEM]

