
Apple patches everything, finally reveals mystery of iOS 16.1.2

Apple has just published a wide range of security fixes for all its supported platforms, from the smallest watch to the biggest laptop.

In other words, if you’ve got an Apple product, and it’s still officially supported, we urge you to do an update check now.

Remember that even if you’ve set your iDevices to update entirely automatically, doing a manual check is still well worth it, because:

  • It ensures that you catch up if something went wrong with your last automatic update.
  • It jumps you to the head of the queue so that even if you haven’t yet been alerted to the update by Apple, you’ll be able to get it at once anyway.

What you need

To summarise, the versions you want to see after you’ve upgraded are as follows:

  • macOS Ventura 13.1
  • macOS Monterey 12.6.2
  • macOS Big Sur 11.7.2
  • tvOS 16.2
  • watchOS 9.2
  • iOS 16.2 (recent devices only)
  • iPadOS 16.2 (recent devices only)
  • iOS 15.7.2 (earlier devices, back to iPhone 6s)
  • iPadOS 15.7.2 (earlier devices, including iPod touch 7th gen)

If you’ve got Big Sur or Monterey, you’ll also need a separate update to take you to Safari 16.2 to fix a number of browser and web-rendering bugs. (Other platform updates get their Safari fixes bundled in.)

Mystery explained

Amusingly, if that’s the right word, some of the mystery surrounding Apple’s recent iOS 16.1.2 update, which came out all on its own, with no supporting documentation at all, has belatedly been revealed:

A bug in WebKit, Apple’s web rendering engine, known as CVE-2022-42856, apparently showed up in an exploit being used in the wild, and although that bug has now been patched in all the abovementioned updates (except watchOS)…

…it seems that the known exploit only worked on iOS.

Of course, given that the update advisories now explicitly state that the exploit actually only worked “against versions of iOS released before iOS 15.1”, we still don’t know why iOS 16 users got an update while iOS 15 users didn’t.

Perhaps Apple was hoping that some users who were still back on iOS 15, and thus potentially vulnerable, would jump to iOS 16 and get themselves as up-to-date as possible?

Or perhaps the iOS 16.1.2 update was merely a precaution that took less time to push out than it did for Apple to ensure that iOS 16 was not, in fact, at risk?

What to do?

  • On your iPhone or iPad: Settings > General > Software Update
  • On your Mac: Apple menu > About This Mac > Software Update…

Patch Tuesday: 0-days, RCE bugs, and a curious tale of signed malware

Another month, another Microsoft Patch Tuesday, another 48 patches, another two zero-days…

…and an astonishing tale about a bunch of rogue actors who tricked Microsoft itself into giving their malicious code an official digital seal of approval.

For a threat researcher’s view of the Patch Tuesday fixes for December 2022, please consult the Sophos X-Ops writeup on our sister site Sophos News:

For a deep dive into the saga of the signed malware, discovered and reported recently by Sophos Rapid Response experts who were called in to deal with the aftermath of a successful attack:

And for a high-level overview of the big issues this month, just keep reading here…

Two zero-day holes patched

Fortunately, neither of these bugs can be exploited for what’s known as RCE (remote code execution), so they don’t give outside attackers a direct route into your network.

Nevertheless, they’re both bugs that make things easier for cybercriminals by providing ways for them to sidestep security protections that would usually stop them in their tracks:


CVE-2022-44710: DirectX Graphics Kernel Elevation of Privilege Vulnerability

An exploit allowing a local user to abuse this bug has apparently been publicly disclosed.

As far as we are aware, however, the bug applies only to the very latest builds (2022H2) of Windows 11.

Kernel-level EoP (elevation-of-privilege) bugs allow regular users to “promote” themselves to system-level powers, potentially turning a troublesome but perhaps limited cybercrime intrusion into a complete computer compromise.


CVE-2022-44698: Windows SmartScreen Security Feature Bypass Vulnerability

This bug is also known to have been exploited in the wild.

An attacker with malicious content that would normally provoke a security alert could bypass that notification and thus infect even well-informed users without warning.


Bugs to watch

And here are three interesting bugs that weren’t 0-days, but that crooks may well be interested in digging into, in the hope of figuring out ways to attack anyone who’s slow at patching.

Remember that patches themselves often unavoidably give attackers clear hints on where to start looking, and what sort of things to look for.

This sort of “work backwards to the attack” scrutiny can lead to what are known in the jargon as N-day exploits, meaning attacks that come out quickly enough that they still catch many people out, even though the exploits arrived after patches were available.


CVE-2022-44666: Windows Contacts Remote Code Execution Vulnerability 

According to Sophos X-Ops researchers, opening a booby-trapped contact file could do more than simply import a new item into your Contacts list.

With the wrong sort of content in a file that feels (in the words of Douglas Adams) as though it ought to be “mostly harmless”, an attacker could trick you into running untrusted code instead.


CVE-2022-44690 and CVE-2022-44693: Microsoft SharePoint Server Remote Code Execution Vulnerabilities

Fortunately, these bugs don’t open up your SharePoint server to just anyone, but any existing user on your network who has a SharePoint logon plus “ManageList” permissions could do much more than simply manage SharePoint lists.

Via this vulnerability, they could run code of their choice on your SharePoint server as well.


CVE-2022-41076: PowerShell Remote Code Execution Vulnerability 

Authorised users who are logged on to the network can be given access, via the PowerShell Remoting system, to execute some (but not necessarily all) PowerShell commands on other computers, including clients and servers.

By exploiting this vulnerability, it seems that PowerShell Remoting users can bypass the security restrictions that are supposed to apply to them, and run remote commands that should be off limits.


The signed driver saga

And last, but by no means least, there’s a fascinating new Microsoft security advisory to accompany this month’s Patch Tuesday:


ADV220005: Guidance on Microsoft Signed Drivers Being Used Maliciously

Astonishingly, this advisory means just what it says.

Sophos Rapid Response experts, along with researchers from two other cybersecurity companies, have recently discovered and reported real-world attacks involving malware samples that were digitally signed by Microsoft itself.


As Microsoft explains:

Microsoft was recently informed that drivers certified by Microsoft’s Windows Hardware Developer Program were being used maliciously in post-exploitation activity. […] This investigation revealed that several developer accounts for the Microsoft Partner Center were engaged in submitting malicious drivers to obtain a Microsoft signature.

In other words, rogue coders managed to trick Microsoft into signing malicious kernel drivers, meaning that the attacks investigated by Sophos Rapid Response involved cybercriminals who already had a sure-fire way to get kernel-level powers on computers they’d invaded…

…without needing any additional vulnerabilities, exploits or other trickery.

They could simply install an apparently official kernel driver, with Microsoft’s own imprimatur, and Windows, by design, would automatically trust it and load it.

Fortunately, those rogue coders have now been kicked out of the Microsoft Developer Program, and the known rogue drivers have been blocklisted by Microsoft so they will no longer work.

For a deep dive into this dramatic story, including a description of what the criminals were able to achieve with this sort of “officially endorsed” superpower (essentially, terminate security software against its will from inside the operating system itself), please read the Sophos X-Ops analysis:


COVID-bit: the wireless spyware trick with an unfortunate name

If you’re a regular Naked Security reader, you can probably guess where on the planet we’re headed in this virtual journey…

…we’re off once more to the Department of Software and Information Systems Engineering at Ben-Gurion University of the Negev in Israel.

Researchers in the department’s Cyber-Security Research Center regularly investigate security issues related to so-called airgapped networks.

As the name suggests, an airgapped network is deliberately disconnected not only from the internet but also from any other networks, even those in the same facility.

To create a safe high-security data processing area (or, more precisely, any higher-security-than-its-neighbours area where data can’t easily get out), no physical wires are connected from the airgapped network to any other network.

Additionally, all wireless communications hardware is typically disabled (and ideally removed physically if possible, or permanently disconnected by cutting wires or circuit board traces if not).

The idea is to create an environment where even if attackers or disaffected insiders managed to inject malicious code such as spyware into the system, they wouldn’t find it easy, or even possible, to get their stolen data back out again.

It’s harder than it sounds

Unfortunately, creating a usable airgapped network with no outward “data loopholes” is harder than it sounds, and the Ben-Gurion University researchers have described numerous viable tricks, along with how you can mitigate them, in the past.

We’ve written, admittedly with a mixture of fascination and delight, about their work on many occasions before, including wacky tricks such as GAIROSCOPE (turning a mobile phone’s compass chip into a crude microphone), LANTENNA (using hardwired network cables as radio antennas) and the FANSMITTER (varying CPU fan speed by changing system load to create an audio “data channel”).

This time, the researchers have given their new trick the unfortunate and perhaps needlessly confusing name COVID-bit, where COV is explicitly listed as standing for “covert”, and we’re left to guess that ID-bit stands for something like “information disclosure, bit-by-bit”.

This data exfiltration scheme uses a computer’s own power supply as a source of unauthorised yet detectable and decodable radio transmissions.

The researchers claim covert data transmission rates up to 1000 bits/sec (which was a perfectly useful and usable dialup modem speed 40 years ago).

They also claim that the leaked data can be received by an unmodified and innocent-looking mobile phone – even one with all its own wireless hardware turned off – up to 2 metres away.

This means that accomplices outside a secure lab might be able to use this trick to receive stolen data without arousing suspicion, assuming that the walls of the lab aren’t sufficiently well shielded against radio leakage.

So, here’s how COVID-bit works.

Power management as a data channel

Modern CPUs typically vary their operating voltage and frequency in order to adapt to changing load, thus reducing power consumption and helping to prevent overheating.

Indeed, some laptops control CPU temperature without needing fans, by deliberately slowing down the processor if it starts getting too hot, adjusting both frequency and voltage to cut down on waste heat at the cost of lower performance. (If you have ever wondered why your new Linux kernels seem to build faster in winter, this might be why.)

They can do this thanks to a neat electronic device known as an SMPS, short for switched-mode power supply.

SMPSes don’t use transformers and variable resistances to vary their output voltage, like old-fashioned, bulky, inefficient, buzzy power adapters did in the olden days.

Instead, they take a steady input voltage and convert it into a neat DC square wave by using a fast-switching transistor to turn the voltage completely on and completely off, anywhere from hundreds of thousands to millions of times a second.

Fairly simple electrical components then turn this chopped-up DC signal into a steady voltage that is proportional to the fraction of each switching cycle that the cleanly switched square wave spends in its “on” stage.

Loosely speaking, imagine a 12V DC input that’s turned fully on for 1/500,000th of a second and then fully off for 1/250,000th of a second, over and over again, so it’s at 12V for 1/3 of the time and at 0V for 2/3 of it. Then imagine this electrical square wave getting “smoothed out” by an inductor, a diode and a capacitor into a continuous DC output at 1/3 of the peak input level, thus producing an almost-perfectly steady output of 4V.
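
If you want to check that arithmetic for yourself, here’s a tiny calculation (our own illustrative snippet, using the numbers above) in Python:

    # Smoothed output of an idealised SMPS = input voltage x duty cycle,
    # where the duty cycle is the fraction of each cycle spent "on".
    V_IN = 12.0                         # input voltage, in volts
    T_ON = 1 / 500_000                  # seconds spent fully on
    T_OFF = 1 / 250_000                 # seconds spent fully off

    duty_cycle = T_ON / (T_ON + T_OFF)  # = 1/3 of each cycle
    print(V_IN * duty_cycle)            # prints 4.0 (volts)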

As you can imagine, this switching and smoothing involves rapid changes of current and voltage inside the SMPS, which in turn creates modest electromagnetic fields (simply put, radio waves) that leak out via the metal conductors in the device itself, such as circuit board conductor traces and copper wiring.

And where there’s electromagnetic leakage, you can be sure that Ben-Gurion University researchers will be looking for ways to use it as a possible secret signalling mechanism.

But how can you use the radio noise of an SMPS switching millions of times a second to convey anything other than noise?

Switch the rate of switching

The trick, according to a report written by researcher Mordechai Guri, is to vary the load on the CPU suddenly and dramatically, but at a much lower frequency, by deliberately changing the code running on each CPU core between 5000 and 8000 times a second.

By creating a systematic pattern of changes in processor load at these comparatively low frequencies…

…Guri was able to trick the SMPS into switching its high-frequency switching rates in such a way that it generated low-frequency radio patterns that could reliably be detected and decoded.
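
To make that concrete, here’s a minimal sketch of the general principle (our own illustration, not Guri’s actual code; the bit encoding, the frequencies and the crude timing loop are all assumptions for demonstration purposes):

    import time

    def transmit_bit(bit, duration=0.1, freq_0=5000, freq_1=8000):
        # Flip this core between "busy" and "idle" at freq_0 Hz (for a 0)
        # or freq_1 Hz (for a 1), so the CPU load - and thus the SMPS
        # switching noise - gets modulated at an audio-range frequency.
        # Real-world timing needs far more care than this crude loop.
        half_period = 1 / (2 * (freq_1 if bit else freq_0))
        end = time.perf_counter() + duration
        while time.perf_counter() < end:
            spin_until = time.perf_counter() + half_period
            while time.perf_counter() < spin_until:
                pass                      # busy phase: burn CPU cycles
            time.sleep(half_period)       # idle phase: let the core rest

    for bit in [1, 0, 1, 1, 0]:           # "send" a handful of bits
        transmit_bit(bit)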

Better yet, given that his deliberately generated electromagnetic “pseudo-noise” showed up between 0Hz and 60kHz, it turned out to be well-aligned with the sampling abilities of the average laptop or mobile phone audio chip, used for digitising voice and playing back music.

(The phrase audio chip above is not a typo, even though we’re talking about radio waves, as you will soon see.)

The human ear, as it happens, can hear frequencies up to about 20kHz, and you need to produce output or record input at at least twice that rate in order to detect sound oscillations reliably, and thus to reproduce high frequencies as viable sound waves rather than just spikes or DC-style “straight lines”.

CD sampling rates (compact discs, if you remember them) were set at 44,100Hz for this reason, and DAT (digital audio tape) followed soon afterwards, based on a similar-but-slightly-different rate of 48,000Hz.

As a result, almost all digital audio devices in use today, including those in headsets, mobile phones and podcasting mics, support a recording rate of 48,000Hz. (Some fancy mics go higher, doubling, redoubling and even octupling that rate right up to 384kHz, but 48kHz is a rate at which you can assume that almost any contemporary digital audio device, even the cheapest one you can find, will be able to record.)
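
The arithmetic behind those numbers is simply the Nyquist rule: the highest frequency you can reliably capture is half your sampling rate, as this quick check shows:

    # Nyquist: the highest recoverable frequency is half the sampling rate.
    for rate in (44_100, 48_000, 384_000):
        print(f"{rate} samples/sec captures frequencies up to {rate // 2} Hz")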

Where audio meets radio

Traditional microphones convert physical sound pressure into electrical signals, so most people don’t associate the audio jack on their laptop or mobile phone with electromagnetic radiation.

But you can convert your mobile phone’s audio circuitry into a low-quality, low-frequency, low-power radio receiver or transmitter…

…simply by creating a “microphone” (or a pair of “headphones”) consisting of a wire loop, plugging it into the audio jack, and letting it act as a radio antenna.

If you record the faint electrical “audio” signal that gets generated in the wire loop by the electromagnetic radiation it’s exposed to, you have a 48,000Hz digital reconstruction of the radio waves picked up while your “antennaphone” was plugged in.

So, using some clever frequency encoding techniques to construct radio “noise” that wasn’t just random noise after all, Guri was able to create a covert, one-way data channel with data rates running from 100 bits/sec to 1000 bits/sec, depending on the type of device on which the CPU load-tweaking code was running.
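
On the receiving side, demodulating that sort of signal boils down to standard frequency analysis. Here’s a hedged sketch (our own illustration with made-up tone frequencies, not the actual encoding from the paper) of deciding which of two tones dominates a chunk of recorded “audio”:

    import numpy as np

    RATE = 48_000                    # samples/sec: any modern audio chip
    FREQ_0, FREQ_1 = 5_000, 8_000    # assumed tones for "0" and "1" bits

    def decode_bit(samples):
        # Compare spectral power at the two candidate tone frequencies.
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1 / RATE)
        power_0 = spectrum[np.argmin(np.abs(freqs - FREQ_0))]
        power_1 = spectrum[np.argmin(np.abs(freqs - FREQ_1))]
        return int(power_1 > power_0)

    t = np.arange(RATE // 10) / RATE                   # 0.1s of samples
    print(decode_bit(np.sin(2 * np.pi * FREQ_1 * t)))  # prints 1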

Desktop PCs, Guri found, could be tricked into producing the best quality “secret radio waves”, giving 500 bits/sec with no errors or 1000 bits/sec with a 1% error rate.

A Raspberry Pi 3 could “transmit” at 200 bits/sec with no errors, while a Dell laptop used in the test managed 100 bits/sec.

We’re assuming that the more tightly packed the circuitry and components are inside a device, the greater the interference with the covert radio signals generated by the SMPS circuitry.

Guri also suggests that the power management controls typically used on laptop-class computers, aimed primarily at prolonging battery life, reduce the extent to which rapid alterations in CPU processing load affect the switching of the SMPS, thus reducing the data-carrying capacity of the covert signal.

Nevertheless, 100 bits/sec is enough to steal a 256-bit AES key in under 3 seconds, a 4096-bit RSA key in about a minute, or 1 MByte of arbitrary data in under a day.
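
Those figures are easy to sanity-check for yourself:

    # Time to leak data over a 100 bit/sec covert channel.
    def seconds_to_exfiltrate(n_bits, bits_per_sec=100):
        return n_bits / bits_per_sec

    print(seconds_to_exfiltrate(256))               # 2.56s: AES-256 key
    print(seconds_to_exfiltrate(4096))              # ~41s: 4096-bit RSA key
    print(seconds_to_exfiltrate(8 * 2**20) / 3600)  # ~23.3h: 1 MByte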

What to do?

If you run a secure area and you’re worried about covert exfiltration channels of this sort:

  • Consider adding radio shielding around your secure area. Unfortunately, for large labs, this can be costly, typically requiring isolation of the lab’s power supply wiring as well as shielding walls, floors and ceilings with metallic mesh.
  • Consider generating counter-surveillance radio signals. “Jamming” the radio spectrum in the frequency band that common audio microphones can digitise will mitigate this sort of attack. Note, however, that radio jamming may require permission from the regulators in your country.
  • Consider increasing your airgap above 2 metres. Look at your floor plan and take into account what’s next door to the secure lab. Don’t let staff or visitors working in the insecure part of your network get closer than 2m to equipment inside, even if there’s a wall in the way.
  • Consider running random extra processes on secure devices. This adds unpredictable radio noise on top of the covert signals, making them harder to detect and decode. As Guri notes, however, doing this “just in case” reduces your available processing power all the time, which might not be acceptable.
  • Consider locking your CPU frequency. Some BIOS setup tools let you do this, and it limits the amount of power switching that takes place. However, Guri found that this really only limits the range of the attack, and doesn’t actually eliminate it.

Of course, if you don’t have a secure area to worry about…

…then you can just enjoy this story, while remembering that it reinforces the principle that attacks only ever get better, and thus that security really is a journey, not a destination.


Pwn2Own Toronto: 54 hacks, 63 new bugs, $1 million in bounties

You’ve probably heard of Pwn2Own, a hacking contest that started life alongside the annual CanSecWest cybersecurity event in Vancouver, Canada.

Pwn2Own is now a multi-million-dollar “hackers’ brand” in its own right, having been bought up by anti-virus outfit Trend Micro and extended to cover many more types of bug than just browsers and desktop operating systems.

The name, in case you’re wondering, is shorthand for “pwn it to own it”, where pwn (pronounced “pone”) is hacker-speak for “take control by exploiting a security hole”, and own literally means “have legal title over”.

Simply put: hack into it and you can take it home.

In fact, even in the Pwn2Own Toronto 2022 contest, where the cash prizes far exceeded the value of the devices put up to be hacked, winners got to take home the actual kit they broke into, thus retaining the original, literal sense of the competition.

Even if you’ve just won $100,000 for hacking into a networked printer by hacking your way through a small-business router first (as the team that ended up at the top of the overall leaderboard managed to do), taking home the actual devices is a neat reminder of a job well done.

These days, when hacking hardware such as routers or printers that have their own displays or blinking lights, researchers will prove their pwnership with amusing side-effects such as Morse code messages via LEDs, or displaying memetic videos such as a famous song by a famous 1980s pop crooner. The hacked device thus acts as its own historical documentary.

Hacking (the good sort)

We said “a job well done” above, because even though you need to think like a cybercriminal to win at Pwn2Own, given that you’re trying to generate a fully-working remote code execution attack that a crook would love to know about, and then to show your attack working against a current and fully-patched system…

…the ultimate goal of creating a winning “attack” is responsible disclosure, and thus better defences for everyone.

To enter the competition and win a prize, you’re agreeing not only to hand over your exploit code to the device vendor or vendors who put up the prize money, but also to provide a white paper that explains the exploit in the sort of detail that will help the vendor patch it quickly and (you hope) reliably.

The end-of-year Pwn2Own is a peripatetic sort of event, having variously been held in places as far apart as Aoyama in Tokyo, Amsterdam in the Netherlands, and Austin in Texas.

It was originally known as the “mobile phone” version of Pwn2Own, but the Toronto 2022 event invited contestants to hack in six main categories, of which just one included mobile phones.

The devices put forward by their vendors, and the prize money offered for successful hacks, looked like this:

HACK A PHONE... AND WIN:

  • Samsung Galaxy S22: $50,000
  • Google Pixel 6: $200,000
  • Apple iPhone 13: $200,000

HACK A SOHO ROUTER... AND WIN:

  • TP-Link AX1800: $20,000 ($5000 if via LAN)
  • NETGEAR RAX30: $20,000 ($5000 if via LAN)
  • Synology RT6600ax: $20,000 ($5000 if via LAN)
  • Cisco C921-4P: $30,000 ($15,000 if via LAN)
  • Mikrotik RB2011: $30,000 ($15,000 if via LAN)
  • Ubiquiti EdgeRouter: $30,000 ($15,000 if via LAN)

HACK A HOME HUB... AND WIN:

  • Meta Portal Go: $60,000
  • Amazon Echo Show 15: $60,000
  • Google Nest Hub Max: $60,000

HACK A NETWORK PRINTER... AND WIN:

  • HP Color LaserJet Pro: $20,000
  • Lexmark MC3224: $20,000
  • Lexmark MC3224i: $20,000
  • Canon imageClass MF743Cdw: $20,000

HACK A SPEAKER... AND WIN:

  • Sonos One Home Speaker: $60,000
  • Apple HomePod Mini: $60,000
  • Amazon Echo Studio: $60,000
  • Google Nest Audio: $60,000

HACK A NAS BOX... AND WIN:

  • Synology DiskStation: $40,000
  • WD My Cloud Pro PR4100: $40,000

In this year’s event, the organisers went for extra-excitement hacks called Smashups – a bit like a baseball team agreeing in advance that any double play (two outs at once) in the next inning will immediately count as three outs and finish the inning… but with the downside that any single outs on their own won’t count at all.

Smashups were worth up to $100,000 all at once, but you had to declare your intention up front and then hack one of the network devices by breaking in through the router first, followed by pivoting (in the jargon) directly from the router into the internal device.

Hacking the router via the WAN and then separately hacking, say, one of the printers, wouldn’t count as a Smashup – you had to commit to the all-in-one chain in advance.

Miss the router and you wouldn’t even get a chance at the printer; hack the router but miss the printer and you’d lose what you otherwise could have won by pwning the router on its own.

In the end, eight different teams of researchers decided to back themselves to go for the superbounties available through Smashups…

…and six of them succeeded in getting in through the router and then onto a printer.

Only one of the Smashup teams aimed at anything other than a printer once inside. The Qrious Security duo from Vietnam had a go at the Western Digital NAS via a NETGEAR router, but didn’t get all the way to their target within the 30-minute limit imposed by the rules of the competition.

And the winners were…

To add a poker-like element of luck to the contest, and to avoid arguments about who deserves the most recognition when two teams just happen to find the same bug, the teams go into bat in a randomly decided sequence.

Simply put, if two teams rely on the same bug somewhere in their attack, the one that went first scoops the full cash prize.

Anyone else using the same bug gets the same leaderboard points, but only 50% of the cash reward.
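
In other words, the cash-splitting rule works something like this (a purely illustrative sketch of the rule as described above; the team names are made up):

    # First team (in the randomly decided order) to use a given bug gets
    # the full prize; later teams relying on the same bug get half.
    def cash_awards(prize, teams_in_batting_order):
        return {team: prize if i == 0 else prize // 2
                for i, team in enumerate(teams_in_batting_order)}

    print(cash_awards(60_000, ["First Team", "Second Team", "Third Team"]))
    # {'First Team': 60000, 'Second Team': 30000, 'Third Team': 30000}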

As a result, the outright winners won’t necessarily earn the most money – in the same sort of way that it’s possible to cycle to outright victory in the Tour de France without ever winning an individual stage.

This year, the Master of Pwn (top place finishers do get a winner’s jersey, but unlike Le Tour, it’s not yellow, and it’s technically a jacket) did win the most money, with $142,000.

But the STAR Labs team from Singapore, who ended up just outside the medals in fourth place in the General Classification standings, had the happy consolation of taking home the next-biggest paycheck, with $97,500.

In case you’re wondering, the top three places were taken by corporate teams for whom bug-hunting and penetration testing is a day job:

1. DEVCORE (18.5 leaderboard points plus $142,000). This team works for a Taiwanese red-teaming and cybersecurity company whose official website includes staff known only by mysterious names such as Angelboy, CB and Meh.

2. NCC Group EDG (16.5 points plus $82,500). This team comes from the dedicated exploit development group (EDG) of a global cybersecurity consultancy originally spun off in 1999 from the UK government’s National Computing Centre.

3. Viettel Security (15.5 points plus $78,750). This is the cybersecurity group of Vietnam’s state-owned telecommunications company, the country’s largest.

THE MAILLOT JAUNE OF PWN2OWN (EVEN IF ONLY THE TEXT IS YELLOW)

Who didn’t get hacked?

Fascinatingly, the eight products that didn’t get hacked were the ones with the biggest bounties.

The phones from Apple and Google, worth $200,000 each (plus a $50,000 bonus for kernel-level access) weren’t breached.

Likewise, the $60,000-a-pop home hubs from Meta, Amazon and Google stayed safe, along with the $60,000-each speakers from Apple, Amazon and Google.

The only $60,000-bounty that paid out was the one offered by Sonos, whose speaker was attacked by three different teams and pwned each time. (Only the first team had a unique chain of bugs, so they were the only ones that netted the full $60,000).

Just as fascinatingly, perhaps, the products that didn’t get pwned didn’t actually survive any attacks, either.

The most likely reason for this, of course, is that no one is going to commit to entering Pwn2Own, writing up a publication-quality report, and travelling to Toronto to face public scrutiny, live-streamed to their peers around the world…

…unless they’re pretty jolly sure that their hacking attempt is going to work out.

But there’s also the issue that there are bug-buying services that compete with Trend Micro’s Zero Day Initiative (ZDI), and that claim to offer much higher bounties.

So we don’t know whether Apple’s and Google’s phones and speakers, for example, went untested because they genuinely were more secure, or simply because any bugs discovered were worth more elsewhere.

Zerodium, for example, claims to pay “up to” $2,500,000 for top-level Android security holes, and $2,000,000 for holes in Apple’s iOS, albeit with the tricky proviso that you don’t get to say what happens to the bug or bugs you send in.

ZDI, in contrast, aims to offer a responsible disclosure pathway for bug hunters.

The “code of silence” that bug finders are required to comply with after handing over their reports is there primarily so that the details can be shared privately and safely with the vendor.

So, even though the vendors in this Pwn2Own paid out a total of $989,750, according to our calculations…

…that’s 63 fewer full-on, genuinely exploitable bugs left out there that cybercriminals and rogue operators might otherwise latch onto and exploit for evil.


S3 Ep112: Data breaches can haunt you more than once! [Audio + Text]

DATA BREACHES – THE STING IN THE TAIL


With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  SIM swapping, zero-days, the [dramatic voice] P-i-n-g of D-E-A-T-H, and LastPass… again.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast everybody.

I am Doug Aamoth.

With me, as always, is Paul Ducklin.

Paul, how do you do?


DUCK.  Very well, Doug.

You put some high drama sound into that intro, I’m pleased to see!


DOUG.  Well, how do you say “Ping of Death” without saying [doom metal growl] “P-i-n-g of D-E-A-T-H”?

You can’t just say [gentle voice] “Ping of Death”.

You’ve got to punch it a little bit…


DUCK.  I suppose so.

It’s different in writing – what have you got?

Bold and italics.

I just went with normal text, but I did use capital letters, which helps.


DOUG.  Yes, I think I would bold and italicise the word “death”, so [doom metal again] “The Ping of D-E-A-T-H”.


DUCK.  And use multiple colours!

I’ll do that next time, Doug.


DOUG.  Break out the old <blink> tag in HTML, make it blink a little bit? [LAUGHS]


DUCK.  Doug, for a moment, I was worried you were going to use the word [LAUGHS] <marquee>.


DOUG.  [LAUGHS] We love old stuff here!

And that dovetails nicely with our This Week in Tech History segment – I’m excited about this one because I hadn’t heard about it, but stumbled across it.

This week, on 04 December 2001, the Goner worm ransacked the internet at a pace second only to that of the Love Bug virus.

Goner spread via Microsoft Outlook, and promised unsuspecting victims a fun screen saver when executed.


DUCK.  Goner…

I think it got that name because there was a popup at the end, wasn’t there, that mentioned the Pentagon?

But it was meant to be a pun – it was “Penta/Gone”.

That was certainly the worm that reminded people that, in fact, Windows screensavers are just executable programs.

So, if you were looking out specially for .EXE files, well, they could be wrapped up in .SCR (screensaver) files as well.

If you were only relying on filenames, you could easily be tricked.

And many people were, sadly.


DOUG.  Alright, we’ll go from the old-school to the new-school.

We’re talking about LastPass: there was a breach; the breach itself wasn’t terrible; but that breach has now led to another breach.

Or maybe this is just a continuation of the original breach?

LastPass admits to customer data breach caused by previous breach


DUCK.  Yes, LastPass has written about it essentially as a follow up to the previous breach, which I think was August 2022, wasn’t it?

And as we said at the time, it was a very embarrassing look for LastPass.

But as breaches go, it was probably worse for their PR, marketing and (I guess) for their intellectual property departments, because it seems the main thing the crooks made away with was source code from their development system.

And LastPass was quick to reassure people…

Firstly, their investigations suggested that, whilst they were in there, the crooks weren’t able to make any unauthorised changes that might later percolate into the real code.

Secondly, access to the development system doesn’t give you access to the production system, where the actual code is built.

And thirdly, they were able to say it seemed that no encrypted password vaults were stolen, so the cloud storage of your encrypted passwords was not accessed.

And even if it had been accessed, then only you would know the password, because the decryption (what you called the “heavy lifting” when we spoke about it on the podcast) is actually done in memory on your devices – LastPass never sees your password.

And then, fourthly, they said, as far as we can tell, as a result of that breach, some of the stuff that was in the development environment has now given access either to the same crooks… or possibly to a completely different load of crooks who bought the stolen data off the previous lot, who knows?

That did allow them to get into some cloud service where some as-yet apparently unknown set of customer data was stolen.

I don’t think they quite know yet, because it can take a while to work out what actually did get accessed after a breach happened.

So I think it is fair to say this is sort of the B-side of the original breach.


DOUG.  All right, we suggest that if you’re a LastPass customer, you keep an eye on the company’s security incident report.

We will keep an eye on this story as it’s still developing.

And if you, like Paul and I, fight cybercrime for a living, there are some excellent lessons to be learned from the Uber breach.

So that’s a podcast episode – a “minisode” – with Chester Wisniewski that Paul has embedded at the bottom of the LastPass article:

S3 Ep100.5: Uber breach – an expert speaks [Audio + Text]

Lots to learn on that front!


DUCK.  As you say, that’s a great listen, because it is, I believe, what is known in America as “actionable advice”, or “news you can use”.


DOUG.  [LAUGHS] Wonderful.

Speaking of news-you-can’t-really-use, Apple is generally tight-lipped about its security updates… and there was a security update:

Apple pushes out iOS security update that’s more tight-lipped than ever


DUCK.  Oh, Doug, that’s one of your finest… I like that segue.


DOUG.  [LAUGHS] Thank you; thank you very much.


DUCK.  Yes, this surprised me.

I thought, “Well, I’ll grab the update because it sounds serious.”

And I gave myself the reason, “Let me do it for Naked Security readers.”

Because if I do it and there are no side-effects, then I can at least say to other people, “Look, I just blindly did it and no harm came to me. So maybe you can do it as well.”

I just suddenly noticed that there was an iOS 16.1.2 update available, although I had had no security advisory email from Apple.

No email?!

That’s weird… so I went to the HT201222 portal page that Apple has for its security bulletins, and there it was: iOS 16.1.2.

And what does it say, Doug, “Details will follow soon”?


DOUG.  And did they follow soon?


DUCK.  Well, that was more than a week ago, and they’re not there yet.

So are we talking “soon” meaning hours, days, weeks, or months?

At the moment, it’s looking like weeks.

And, as always with Apple, there’s no indication of anything to do with any other operating systems.

Have they been forgotten?

Do they not need the update?

Did they also need the update, but it’s just not ready yet?

Have they been dropped out of support?

But it did seem, as I said in the headline, even more tight-lipped than usual for Apple, and not necessarily the most helpful thing in the world.


DOUG.  OK, very good… still some questions, which leads us to our next story.

A very interesting question!

Sometimes, when you sign up for a service and it enforces two-factor authentication, it says, “Do you want to get notified via text message, or do you want to use an authentication app?”

And this story is a cautionary tale to not use your phone – use an authentication app, even if it’s a little bit more cumbersome.

This is a very interesting story:

SIM swapper sent to prison for 2FA cryptocurrency heist of over $20m


DUCK.  It is, Doug!

If you’ve ever lost a mobile phone, or locked yourself out of your SIM card by putting in the PIN incorrectly too many times, you’ll know that you can go into the mobile phone shop…

…and usually they’ll ask for ID or something, and you say, “Hey, I need a new SIM card.”

And they’ll generate one for you.

When you put it into your phone, bingo!… it’s got your old number on it.

So what that means is that if a crook can go through the same exercise that you would to convince the mobile phone company that they have “lost” or “broken” their SIM card (i.e. *your SIM card*), and they can get that card either handed to, or sent to, or given to them somehow…

…then, when they plug it into their phone, they start getting your SMS two-factor authentication codes, *and* your phone stops working.

That’s the bad news.

The good news in this article is this was a case of a chap who got busted for it.

He’s been sent to prison in the US for 18 months.

He, with a bunch of accomplices – or, in the words of the Department of Justice, the Scheme Participants… [LAUGHS]

…they made off with one particular victim’s cryptocurrency, apparently to the tune of $20 million, if you don’t mind.


DOUG.  Oof!


DUCK.  So he agreed to plead guilty, take a prison sentence, and immediately forfeit… the amount was [reading carefully] $983,010.72… just to forfeit that right away.

So, presumably, he had that lying around.

And he apparently also has some kind of legal obligation to refund over $20 million.


DOUG.  Good luck with that, everyone! Good luck.

His other [vocal italics] Scheme Participants might cause some issues there! [LAUGHS]


DUCK.  Yes, I don’t know what happens if they refuse to cooperate as well.

Like, if they just hang him out to dry, what happens?

But we’ve got some tips, and some advice on how to beef up security (in more ways than just the 2FA you use) in the article.

So go and read that… every little bit helps.


DOUG.  OK, speaking of “little bits”…

…this was another fascinating story, how the lowly ping can be used to trigger remote code execution:

Ping of death! FreeBSD fixes crashtastic bug in network tool


DUCK.  [Liking the segue again] I think you’ve bettered yourself, Doug!


DOUG.  [LAUGHS] I’m on a roll today…


DUCK.  From Apple to the [weak attempt at doom vocals] Ping of D-E-A-T-H!

Yes, this was an intriguing bug.

I don’t think it will really cause many people much harm, and it *is* patched, so fixing it is easy.

But there’s a great writeup in the FreeBSD security advisory…

…and it makes for an entertaining and, if I say so myself, very informative tale for the current generation of programmers who may have relied on, “Third-party libraries will just do it for me. Dealing with low-level network packets? I never have to think about it…”

There are some great lessons to be learned here.

The ping utility, which is the one network tool that pretty much everybody knows about, gets its name from SONAR.

You go [makes movie submarine noise] ping, and then the echo comes back from the server at the other end.

And this is a feature that’s built into the Internet Protocol, IP, using a thing called ICMP, which is Internet Control Message Protocol.

It’s a special, low-level protocol, much lower than UDP or TCP that people are probably used to, that’s pretty much designed for exactly this kind of thing: “Are you actually even alive at the other end, before I go worrying about why your web server isn’t working?”

There’s a special kind of packet you can send out called “ICMP Echo”.

So, you send this tiny little packet with a short message in it (the message can be anything you like), and it simply sends that very same message back to you.

It’s just a basic way of saying, “If that message doesn’t come back, either the network or the entire server is down”, rather than that there’s some software problem on the computer.

By analogy with SONAR, the program that sends out these echo requests is called… [pause] I’m going to do the sound effect, Doug … [fake submarine movie noise again] ping. [LAUGHTER]

And the idea is, you go, say, ping -c3 (that means check three times) nakedsecurity.sophos.com.

You can do that right now, and you should get three replies, each of them one second apart, from the WordPress servers that host our site.

And it’s saying the site is alive.

It’s not telling you that the web server is up; it’s not telling you that WordPress is up; it’s not telling you that Naked Security is actually available to read.

But at least it confirms that you can see the server, and the server can reach you.

And who would have thought that that lowly little ping reply could trip up the FreeBSD ping program in such a way that a rogue server could send back a booby trapped “Yes, I am alive” message that could, in theory (in theory only; I don’t think anyone has done this in practice) trigger remote code execution on your computer.


DOUG.  Yes, that’s amazing; that’s the amazing part.

Even if it’s a proof-of-concept, it’s such a small little thing!


DUCK.  The ping program itself gets the whole IP packet back, and it’s supposed to divide it into two parts.

Normally, the kernel would handle this for you, so you’d just see the data part.

But when you’re dealing with what are called raw sockets, what you get back is the Internet Protocol header, which just says, “Hey, these bytes came from such and such a server.”

And then you get a thing called the “ICMP Echo Reply”, which is the second half of the packet you get back.

Now, these packets, they’re typically just 100 bytes or so, and if it’s IPv4, the first 20 bytes are the IP header and the remainder, whatever it is, is the Echo Reply.

That has a few bytes to say, “This is an Echo Reply,” and then the original message that went out coming back.

And so the obvious thing to do, Doug, when you get it, is you split it into…

…the IP header, which is 20 bytes long, and the rest.

Guess where the problem lies?


DOUG.  Do tell!


DUCK.  The problem is that IP headers are *almost always* 20 bytes long – in fact, I don’t think I’ve ever seen one that wasn’t.

And you can tell they’re 20 bytes long because the first byte will be hexadecimal 0x45.

The “4” means IPv4, and the “5”… “Oh, we’ll use that to say how long the header is.”

You take that number 5 and you multiply it by 4 (for 32-bit values), and you get 20 bytes…

…and that is the size of probably six sigma’s worth of IP headers that you will ever see in the whole world, Doug. [LAUGHTER]

But they *can* go up to 60 bytes.

If you put 0x4F instead of 0x45, that says there are 0xF (or 15 in decimal) × 4 = 60 bytes in the header.

And the FreeBSD code simply took that header and copied it into a buffer on the stack that was 20 bytes in size.

A simple, old-school stack buffer overflow.

It’s a case of a venerable network troubleshooting tool with a venerable type of bug in it. (Well, not any more.)

So, when you are programming and you have to deal with low-level stuff that nobody’s really thought about for ages, don’t just go with the received wisdom that says, “Oh, it’ll always be 20 bytes; you’ll never see anything bigger.”

Because one day you might.

And when that day comes, it might be there deliberately because a crook made it so on purpose.

So the devil, as always, is in the programming details, Doug.
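
(For readers who’d like to see that arithmetic in code form, here’s a minimal sketch – our own, not FreeBSD’s actual source – of how that first header byte determines the header size, and why a fixed 20-byte buffer isn’t enough:)

    def ip_header_length(first_byte):
        # High nibble = IP version; low nibble = header length in
        # 32-bit words, so the length in bytes is that nibble times 4.
        assert first_byte >> 4 == 4, "not an IPv4 header"
        return (first_byte & 0x0F) * 4

    print(ip_header_length(0x45))   # 20 bytes: the near-universal case
    print(ip_header_length(0x4F))   # 60 bytes: legal, but rarely seen

    BUF_SIZE = 20                   # a buffer sized for the usual case only
    hdr_len = ip_header_length(0x4F)
    if hdr_len > BUF_SIZE:          # the check the vulnerable code lacked
        print(f"refusing to copy {hdr_len} bytes into {BUF_SIZE}-byte buffer")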


DOUG.  OK, very interesting; great story.

And we will stick on the subject of code with this final story about Chrome.

Another zero-day, which brings the 2022 total to nine times:

Number Nine! Chrome fixes another 2022 zero-day, Edge patched too


DUCK.  [Formal voice, sounding like a recording] “Number 9. Number 9. Number 9, number 9,” Douglas.


DOUG.  [LAUGHS] Is this Yoko Ono?


DUCK.  That’s Revolution 9 off the Beatles “White Album”.

Yoko can be heard riffing away in that song – that soundscape, I believe they call it – but apparently the bit at the beginning where there’s somebody saying “Number 9, number 9” over and over again, it was, in fact, a test tape they found lying around.


DOUG.  Ah, very cool.


DUCK.  An EMI engineer saying something like, “This is EMI test tape number 9” [LAUGHTER], and apparently I don’t even think anyone knows whose voice it was.

That has *nothing* to do with Chrome, Doug.

But given that somebody commented on Facebook the other day, “That Paul guy is starting to look like a Beatle”… [quizzical] which I found slightly odd.


DOUG.  [LAUGHS] Yes, how are you supposed to take that?


DUCK.  …I figured I could dine out on “Number 9”.

It is the ninth zero-day of the year so far, it seems, Doug.

And it’s a one-bug fix, with the bug identified as CVE-2022-4282.

Because Microsoft Edge uses the Chromium open-source core, it too was vulnerable, and a couple of days later, Microsoft followed up with an update for Edge.

So this is both a Chrome and an Edge issue.

Although those browsers should update themselves, I recommend going to check anyway – we show you how to do that in the article – just in case.

I won’t read out the version numbers here because they’re different for Mac, Linux and Windows on Chrome, and they’re different again for Edge.

Like Apple, Google’s being a bit tight-lipped about this one.

It was found by one of their threat hunting team, I do believe.

So I imagine they found it while investigating an incident that happened in the wild, and therefore they probably want to keep it under their hat, even though Google usually has a lot to say about “openness” when it comes to bug-fixing.

You can see why, in a case like this, you might want a little bit of time to dig a little bit deeper before you tell everybody exactly how it works.


DOUG.  Excellent… and we do have a reader question that is probably a question a lot of people are thinking.

Cassandra asks, “Are the bug finders just getting lucky at finding bugs? Or have they struck a ‘seam’ full of bugs? Or is Chromium issuing new code that is more buggy than normal? Or is something else going on?”


DUCK.  Yes, that’s a great question, actually, and I’m afraid that I could only answer it in a slightly facetious sort of way, Doug.

Because Cassandra had given choices A), B) and C), I said, “Well, maybe it’s D) All of the above.”

We do know that when a bug of one particular sort shows up in code, then it’s reasonable to assume that the same programmer may have made similar bugs elsewhere in the software.

Or other programmers at the same company may have been using what was considered received wisdom or standard practice at the time, and may have followed suit.

And a great example is, if you look back at Log4J… there was a fix to patch the problem.

And then, when they went looking, “Oh, actually, there are other places where similar mistakes have been made.”

So there was a fix for the fix, and then there was a fix for the fix for the fix, if I remember.

There is, of course, also the issue that when you add new code, you may get bugs that are unique to that new code and come about because of adding features.

And that’s why many browsers, Chrome included, have an if-you-like “slightly older” version that you can stick with.

And the idea is that those “older” releases… they have none of the new features, but all of the relevant security fixes.

So, if you want to be conservative about new features, you can be.

But we certainly know that, sometimes, when you shovel new features into a product, new bugs come with the new features.

And you can tell that, for example, when there’s an update, say, for your iPhone, and you get updates, say, for iOS 15 and iOS 16.

Then, when you look at the bug lists, there are a few bugs that only apply to iOS 16.

And you think, “Hello, those must be bugs in the code that weren’t there before.”

So, yes, that’s a possibility.

And I think the other things that are going on can be considered good.

The first is that I think that, particularly for things like browsers, the browser makers are getting much better at pushing out full rebuilds really, really quickly.


DOUG.  Interesting.


DUCK.  And I think the other thing that’s changed is that, in the past, you could argue that for many vendors… it was quite difficult to get people to apply patches at all, even when they came out only on a monthly schedule, and even if they had multiple zero-day fixes in them.

I think, maybe it also is a response to the fact that more and more of us are more and more likely not just to accept, but actually to *expect* automatic updating that is really prompt.

So, I think you can read some good stuff into this.

The fact not only that Google can push out a single zero-day fix almost instantaneously, but also that people are willing to accept that and even to demand it.

So I like to see that issue of, “Wow, nine zero-days in the year fixed individually!”…

…I like to think of that more as “glass half full and filling up” than “glass half empty and draining through a small hole in the bottom”. [LAUGHTER]

That is my opinion.


DOUG.  Alright, very good.

Thank you for the question, Cassandra.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you: Until next time…


BOTH.  Stay secure!

[MUSICAL MODEM]

