Category Archives: News

Microsoft Patch Tuesday: 36 RCE bugs, 3 zero-days, 75 CVEs

Deciphering Microsoft’s official Update Guide web pages is not for the faint-hearted.

Most of the information you need, if not everything you’d really like to know, is there, but there’s such a dizzying number of ways to view it, and so many generated-on-the-fly pages are needed to display it, that it can be tricky to find out what’s truly new, and what’s truly important.

Should you search by the operating system platforms affected?

By the severity of the vulnerabilities? By the likelihood of exploitation?

Should you sort the zero-days to the top?

(We don’t think you can – we think there are three zero-days in this month’s list, but we had to drill into individual CVE pages and search for the text “Exploitation detected” in order to be sure that a specific bug was already known to cybercriminals.)

What’s worse, an EoP or an RCE?

Is a Critical elevation of privilege (EoP) bug more alarming than an Important remote code execution (RCE)?

The former type of bug requires cybercriminals to break in first, but probably gives them a way to take over completely, typically getting them the equivalent of sysadmin powers or operating system-level control.

The second type of bug might only get the crooks in with the lowly access privileges of little old you, but it nevertheless gets them onto the network in the first place.

Of course, while everyone else might breathe a sigh of relief if an attacker wasn’t able to get access to their stuff, that’s cold comfort for you, if you’re the one who did get attacked.

We counted 75 CVE-numbered bugs dated 2023-02-14, given that this year’s February updates arrived on Valentine’s Day.

(Actually, we found 76, but we ignored one bug that didn’t have a severity rating, was tagged CVE-2019-15126, and seems to boil down to a report about unsupported Broadcom Wi-Fi chips in Microsoft Hololens devices – if you have a Hololens and have any advice for other readers, please let us know in the comments below.)

We extracted a list and included it below, sorted so that the bugs dubbed Critical are at the top (there are seven of them, all RCE-class bugs).

You can also read the SophosLabs analysis of Patch Tuesday for more details.



Security bug classes explained

If you’re not familiar with the bug abbreviations shown below, here’s a high-speed guide to security flaws:

  • RCE means Remote Code Execution. Attackers who aren’t currently logged on to your computer could trick it into running a fragment of program code, or even a full-blown program, as if they had authenticated access. Typically, on desktops or servers, the criminals use this sort of bug to implant code that allows them to get back in at will in future, thus establishing a beachhead from which to kick off a network-wide attack. On mobile devices such as phones, the crooks may use RCE bugs to leave behind spyware that will track you from then on, so they don’t need to break in over and over again to keep their evil eyes on you.
  • EoP means Elevation of Privilege. As mentioned above, this means crooks can boost their access rights, typically acquiring the same sort of powers that an official sysadmin or the operating system itself would usually enjoy. Once they have system-level powers, they are often able to roam freely on your network, steal secure files even from restricted-access servers, create hidden user accounts for getting back in later, or map out your entire IT estate in preparation for a ransomware attack.
  • Leak means that security-related or private data might escape from secure storage. Sometimes, even apparently minor leaks, such as the location of specific operating system code in memory, which an attacker isn’t supposed to be able to predict, can give criminals the information they need to turn a probably unsuccessful attack into an almost certainly successful one.
  • Bypass means that a security protection you’d usually expect to keep you safe can be skirted. Crooks typically exploit bypass vulnerabilities to trick you into trusting remote content such as email attachments, for example by finding a way to avoid the “content warnings” or to circumvent the malware detection that’s supposed to protect you.
  • Spoof means that content can be made to look more trustworthy than it really is. For example, attackers who lure you to a fake website that shows up in your browser with an official server name in the address bar (or what looks like the address bar) are much more likely to trick you into handing over personal data than if they’re forced to put their fake content on a site that clearly isn’t the one you’d expect.
  • DoS means Denial of Service. Bugs that allow network or server services to be knocked offline temporarily are often considered low-grade flaws, assuming that the bug doesn’t then allow attackers to break in, steal data or access anything they shouldn’t. But attackers who can reliably take down parts of your network may be able to do so over and over again in a co-ordinated way, for example by timing their DoS probes to happen every time your crashed servers restart. This can be extremely disruptive, especially if you are running an online business, and can also be used as a distraction to draw attention away from other illegal activities that the crooks are doing on your network at the same time.

The big bug list

The 75-strong bug list is here, with the three zero-days we know about marked with an asterisk (*):

NIST ID Level Type Component affected
--------------- ----------- ------ ----------------------------------------
CVE-2023-21689: (Critical)  RCE    Windows Protected EAP (PEAP)
CVE-2023-21690: (Critical)  RCE    Windows Protected EAP (PEAP)
CVE-2023-21692: (Critical)  RCE    Windows Protected EAP (PEAP)
CVE-2023-21716: (Critical)  RCE    Microsoft Office Word
CVE-2023-21803: (Critical)  RCE    Windows iSCSI
CVE-2023-21815: (Critical)  RCE    Visual Studio
CVE-2023-23381: (Critical)  RCE    Visual Studio
CVE-2023-21528: (Important) RCE    SQL Server
CVE-2023-21529: (Important) RCE    Microsoft Exchange Server
CVE-2023-21568: (Important) RCE    SQL Server
CVE-2023-21684: (Important) RCE    Microsoft PostScript Printer Driver
CVE-2023-21685: (Important) RCE    Microsoft WDAC OLE DB provider for SQL
CVE-2023-21686: (Important) RCE    Microsoft WDAC OLE DB provider for SQL
CVE-2023-21694: (Important) RCE    Windows Fax and Scan Service
CVE-2023-21695: (Important) RCE    Windows Protected EAP (PEAP)
CVE-2023-21703: (Important) RCE    Azure Data Box Gateway
CVE-2023-21704: (Important) RCE    SQL Server
CVE-2023-21705: (Important) RCE    SQL Server
CVE-2023-21706: (Important) RCE    Microsoft Exchange Server
CVE-2023-21707: (Important) RCE    Microsoft Exchange Server
CVE-2023-21710: (Important) RCE    Microsoft Exchange Server
CVE-2023-21713: (Important) RCE    SQL Server
CVE-2023-21718: (Important) RCE    SQL Server
CVE-2023-21778: (Important) RCE    Microsoft Dynamics
CVE-2023-21797: (Important) RCE    Windows ODBC Driver
CVE-2023-21798: (Important) RCE    Windows ODBC Driver
CVE-2023-21799: (Important) RCE    Microsoft WDAC OLE DB provider for SQL
CVE-2023-21801: (Important) RCE    Microsoft PostScript Printer Driver
CVE-2023-21802: (Important) RCE    Microsoft Windows Codecs Library
CVE-2023-21805: (Important) RCE    Windows MSHTML Platform
CVE-2023-21808: (Important) RCE    .NET and Visual Studio
CVE-2023-21820: (Important) RCE    Windows Distributed File System (DFS)
CVE-2023-21823: (Important) *RCE   Microsoft Graphics Component
CVE-2023-23377: (Important) RCE    3D Builder
CVE-2023-23378: (Important) RCE    3D Builder
CVE-2023-23390: (Important) RCE    3D Builder
CVE-2023-21566: (Important) EoP    Visual Studio
CVE-2023-21688: (Important) EoP    Windows ALPC
CVE-2023-21717: (Important) EoP    Microsoft Office SharePoint
CVE-2023-21777: (Important) EoP    Azure App Service
CVE-2023-21800: (Important) EoP    Windows Installer
CVE-2023-21804: (Important) EoP    Microsoft Graphics Component
CVE-2023-21812: (Important) EoP    Windows Common Log File System Driver
CVE-2023-21817: (Important) EoP    Windows Kerberos
CVE-2023-21822: (Important) EoP    Windows Win32K
CVE-2023-23376: (Important) *EoP   Windows Common Log File System Driver
CVE-2023-23379: (Important) EoP    Microsoft Defender for IoT
CVE-2023-21687: (Important) Leak   Windows HTTP.sys
CVE-2023-21691: (Important) Leak   Windows Protected EAP (PEAP)
CVE-2023-21693: (Important) Leak   Microsoft PostScript Printer Driver
CVE-2023-21697: (Important) Leak   Internet Storage Name Service
CVE-2023-21699: (Important) Leak   Internet Storage Name Service
CVE-2023-21714: (Important) Leak   Microsoft Office
CVE-2023-23382: (Important) Leak   Azure Machine Learning
CVE-2023-21715: (Important) *Bypass Microsoft Office Publisher
CVE-2023-21809: (Important) Bypass Microsoft Defender for Endpoint
CVE-2023-21564: (Important) Spoof  Azure DevOps
CVE-2023-21570: (Important) Spoof  Microsoft Dynamics
CVE-2023-21571: (Important) Spoof  Microsoft Dynamics
CVE-2023-21572: (Important) Spoof  Microsoft Dynamics
CVE-2023-21573: (Important) Spoof  Microsoft Dynamics
CVE-2023-21721: (Important) Spoof  Microsoft Office OneNote
CVE-2023-21806: (Important) Spoof  Power BI
CVE-2023-21807: (Important) Spoof  Microsoft Dynamics
CVE-2023-21567: (Important) DoS    Visual Studio
CVE-2023-21700: (Important) DoS    Windows iSCSI
CVE-2023-21701: (Important) DoS    Windows Protected EAP (PEAP)
CVE-2023-21702: (Important) DoS    Windows iSCSI
CVE-2023-21722: (Important) DoS    .NET Framework
CVE-2023-21811: (Important) DoS    Windows iSCSI
CVE-2023-21813: (Important) DoS    Windows Cryptographic Services
CVE-2023-21816: (Important) DoS    Windows Active Directory
CVE-2023-21818: (Important) DoS    Windows SChannel
CVE-2023-21819: (Important) DoS    Windows Cryptographic Services
CVE-2023-21553: (Unknown)   RCE    Azure DevOps

What to do?

Business users like to prioritise patches, rather than doing them all at once and hoping nothing breaks; we therefore put the Critical bugs at the top, along with the RCE holes, given that RCEs are typically used by crooks to get their initial foothold.

In the end, however, all bugs need to be patched, especially now that the updates are available and attackers can start “working backwards” by trying to figure out from the patches what sort of holes existed before the updates came out.

Reverse engineering Windows patches can be time-consuming, not least because Windows is a closed-source operating system, but it’s an awful lot easier to figure out how bugs work and how to exploit them if you’ve got a good idea where to start looking, and what to look for.

The sooner you get ahead (or the quicker you catch up, in the case of zero-day holes, which are bugs that the crooks found first), the less likely you’ll be the one who gets attacked.

So even if you don’t patch everything at once, we’re nevertheless going to say: Don’t delay/Get started today!


READ THE SOPHOSLABS ANALYSIS OF PATCH TUESDAY FOR MORE DETAILS


Apple fixes zero-day spyware implant bug – patch now!

Apple has just released updates for all supported Macs, and for any mobile devices running the very latest versions of their respective operating systems.

In version number terms:

  • iPhones and iPads on version 16 go to iOS 16.3.1 and iPadOS 16.3.1 respectively (see HT213635).
  • Apple Watches on version 9 go to watchOS 9.3.1 (no bulletin).
  • Macs running Ventura (version 13) go to macOS 13.2.1 (see HT213633).
  • Macs running Big Sur (version 11) and Monterey (12) get an update dubbed Safari 16.3.1 (see HT213638).

Oh, and tvOS gets an update, too, although Apple’s TV platform confusingly goes to tvOS 16.3.2 (no bulletin).

Apparently, tvOS recently received a product-specific functionality fix (one listed on Apple’s security page with no information beyond the sentence This update has no published CVE entries, implying no reported security fixes) that already used up the version number 16.3.1 for Apple TVs.

As we’ve seen before, mobile devices still using iOS 15 and iOS 12 get nothing, but whether that’s because they’re immune to this bug or simply that Apple hasn’t got round to patching them yet…

…we have no idea.

We’ve never been quite sure whether this counts as a telltale of delayed updates or not, but (as we’ve seen in the past) Apple’s security bulletin numbers form an intermittent integer sequence. This time, the numbers run from 213633 to 213638 inclusive, with gaps at 213634, 213636 and 213637. Are those the numbers of bulletins that will get backfilled with yet-to-be-released patches, or are they just gaps?

What sort of zero-day is it?

Given that the Safari browser has been updated on the pre-previous and pre-pre-previous versions of macOS, we’re assuming that older mobile devices will eventually receive patches, too, but you’ll have to keep your eyes on Apple’s official HT201222 Security Updates portal to know if and when they come out.

As mentioned in the headline, this is another of those “this smells like spyware or a jailbreak” issues, given that all the updates for which official documentation exists include patches for a bug denoted CVE-2023-23529.

This security hole is a flaw in Apple’s WebKit component that’s described as Processing maliciously crafted web content may lead to arbitrary code execution.

The bug also receives Apple’s usual euphemism for “this is a zero-day hole that crooks are already abusing for evil ends, and you can surely imagine what those might be”, namely the words that Apple is aware of a report that this issue may have been actively exploited.

Remember that WebKit is a low-level operating system component that’s responsible for processing data fetched from remote web servers so that it can be displayed by Safari and many other web-based windows programmed into hundreds of other apps.

So, the words arbitrary code execution above really stand for remote code execution, or RCE.

Installjacking

Web-based RCE exploits generally give attackers a way to lure you to a booby-trapped website that looks entirely unexceptionable and unthreatening, while implanting malware invisibly simply as a side-effect of you viewing the site.

A web RCE typically doesn’t provoke any popups, warnings, download requests or other visible signs that you are initiating any sort of risky behaviour, so there’s no point at which the attacker needs to catch you out or trick you into taking the sort of online risk that you’d normally avoid.

That’s why this sort of attack is often referred to as a drive-by download or a drive-by install.

Just looking at a website, which ought to be harmless, or opening an app that relies on web-based content for any of its pages (for example its splash screen or its help system), could be enough to infect your device.

Remember also that on Apple’s mobile devices, even non-Apple browsers such as Firefox, Chrome and Edge are compelled by Apple’s AppStore rules to stick to WebKit.

If you install Firefox (which has its own browser “engine” called Gecko) or Edge (based on an underlying layer called Blink) on your Mac, those alternative browsers don’t use WebKit under the hood, and therefore won’t be vulnerable to WebKit bugs.

(Note that this doesn’t immunise you from security problems, given that Gecko and Blink may bring along their own additional bugs, and given that plenty of Mac software components use WebKit anyway, whether you steer clear of Safari or not.)

But on iPhones and iPads, all browsers, regardless of vendor, are required to use the operating system’s own WebKit substrate, so all of them, including Safari, are theoretically at risk when a WebKit bug shows up.

What to do?

If you have an Apple product on the list above, do an update check now.

That way, if you’ve already got the update, you’ll reassure yourself that you’re patched, but if your device hasn’t got to the front of the download queue yet (or you’ve got automatic updates turned off, either by accident or design), you’ll be offered the update right away.

On a Mac, it’s Apple menu > About this Mac > Software Update… and on an iDevice, it’s Settings > General > Software Update.


If your Apple product isn’t on the list, notably if you’re stuck back on iOS 15 or iOS 12, there’s nothing you can do right now, but we suggest keeping an eye on Apple’s HT201222 page in case your product is affected and does get an update in the next few days.


As you can imagine, given how strictly Apple locks down its mobile products to stop you using apps from anywhere but the App Store, over which it exerts complete commercial and technical control…

…bugs that allow rogues and crooks to inject unauthorised code onto Apple phones are highly sought after, given that RCEs are about the only reliable way for attackers to hit you up with malware, spyware or any other sort of cyberzombie programming.

Which gives us a good reason, as always, to say: Don’t delay/Do it today.


Serious Security: GnuTLS follows OpenSSL, fixes timing attack bug

Last week, we wrote about a bunch of memory management bugs that were fixed in the latest security update of the popular OpenSSL encryption library.

Along with those memory bugs, we also reported on a bug dubbed CVE-2022-4304: Timing Oracle in RSA Decryption.

In this bug, firing the same encrypted message over and over again at a server, but modifying the padding at the end of the data to make the data invalid, and thus provoking some sort of unpredictable behaviour…

…wouldn’t take a consistent amount of time, assuming you were close enough to the target on the network that you could reliably guess how long the data transfer part of the process would take.

Not all data processed equally

If you fire off a request, time how long the answer takes, and subtract the time consumed in the low-level sending-and-receiving of the network data, you know how long the server took to do its internal computation to process the request.

Even if you aren’t sure how much time is used up in the network, you can look for variations in round-trip times by firing off lots of requests and collecting loads of samples.

If the network is reliable enough to assume that the networking overhead is largely constant, you may be able to use statistical methods to infer which sort of data modification causes what sort of extra processing delay.

From this, you may be able to infer something about the structure, or even the content, of the original unencrypted data that’s supposed to be kept secret inside each repeated request.
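To see why collecting lots of samples defeats network jitter, here’s a minimal Python simulation (the timing figures and noise model are invented purely for illustration, not measured from any real server):

```python
import random
import statistics

# Hypothetical model of a server whose decryption step leaks timing:
# valid padding costs ~100 time units, invalid padding ~101 units, and
# every request picks up +/-5 units of random network jitter.
def simulated_response_time(padding_valid, rng):
    base = 100.0 if padding_valid else 101.0
    return base + rng.uniform(-5.0, 5.0)  # one sample tells you nothing

rng = random.Random(42)

# The 1-unit signal hides inside 10 units of jitter on any single request,
# but averaging many samples shrinks the noise until the signal emerges.
valid_times = [simulated_response_time(True, rng) for _ in range(100_000)]
invalid_times = [simulated_response_time(False, rng) for _ in range(100_000)]

gap = statistics.mean(invalid_times) - statistics.mean(valid_times)
print(f"estimated processing-time gap: {gap:.2f} units")
```

With enough samples, the estimated gap converges on the true 1-unit difference, which is exactly the sort of statistical distinguisher a timing attacker relies on.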

Even if you can only extract one byte of plaintext, well, that’s not supposed to happen.

So-called timing attacks of this sort are always troublesome, even if you might need to send millions of bogus packets and time them all to have any chance of recovering just one byte of plaintext data…

…because networks are faster, more predictable, and capable of handling much more load than they were just a few years ago.

You might think that millions of treacherous packets spammed at you in, say, the next hour would stand out like a sore thumb.

But “a million packets an hour more or less than usual” simply isn’t a particularly large variation any more.

Similar “oracle” bug in GnuTLS

Well, the same person who reported the fixed-at-last timing bug in OpenSSL also reported a similar bug in GnuTLS at about the same time.

This one has the bug identifier CVE-2023-0361.

Although GnuTLS isn’t quite as popular or widely-used as OpenSSL, you probably have a number of programs in your IT estate, or even on your own computer, that use it or include it, possibly including FFmpeg, GnuPG, Mplayer, QEMU, Rdesktop, Samba, Wget and Wireshark.

Ironically, the timing flaw in GnuTLS appeared in code that was supposed to log timing attack errors in the first place.

As you can see from the code difference (diff) below, the programmer was aware that any conditional (if ... then) operation used in checking and dealing with a decryption error might produce timing variations, because CPUs generally take a different amount of time depending on which way your code goes after a “branch” instruction.

(That’s especially true for a branch that often goes one way and seldom the other, because CPUs tend to remember, or cache, code that runs repeatedly in order to improve performance, thus making the infrequently-taken code run detectably slower.)
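The textbook defence against this class of leak is to make the work independent of the secret, rather than trying to balance two branches by hand as the GnuTLS code attempted. As a rough illustration (in Python rather than the C of the GnuTLS source, but the principle is the same):

```python
import hmac

def leaky_compare(a: bytes, b: bytes) -> bool:
    # Returns early at the first mismatching byte, so the running time
    # depends on *where* the secrets differ -- a classic timing leak.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches;
    # it's the Python standard library's constant-time comparison.
    return hmac.compare_digest(a, b)
```

Both functions give the same answers, but only the second one takes (as near as possible) the same time whether the inputs differ in the first byte, the last byte, or not at all.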

Code diff of gnutls-3.7.8/lib/auth/rsa.c against 3.7.9

But the programmer still wanted to log that an attack might be happening, which happens if the if (ok) test above fails and branches into the else { ... } section.

At this point, the code calls the _gnutls_debug_log() function, which could take quite a while to do its work.

Therefore the coder inserted a deliberate call to _gnutls_no_log() in the then { ... } part of the code, which pretends to log an “attack” when there isn’t one, in order to try to even up the time that the code spends in either direction that the if (ok) branch instruction can take.

Apparently, however, the two code paths were not sufficiently similar in the time they used up (or perhaps the _gnutls_debug_log() function on its own was insufficiently consistent in dealing with different sorts of error), and an attacker could begin to distinguish decryption telltales after a million or so tries.

What to do?

If you’re a programmer: the bug fix here was simple, and followed the “less is more” principle.

The code in pink above, which was deemed not to give terribly useful attack detection data anyway, was simply deleted, on the grounds that code that’s not there can’t be compiled in by mistake, regardless of your build settings…

…and code that’s not compiled in can never run, whether by accident or design.

If you’re a GnuTLS user: the recently-released version 3.7.9 and the “new product flavour” 3.8.0 have this fix, along with various others, included.

If you’re running a Linux distro, check for updates to any centrally-managed shared library version of GnuTLS you have, as well as for apps that bring their own version along.

On Linux, search for files with the name libgnutls*.so to find any shared libraries lying around, and search for gnutls-cli to find any copies of the command line utility that’s often included with the library.

You can run gnutls-cli -vv to find out which version of libgnutls it’s dynamically linked to:

 $ gnutls-cli -vv
 gnutls-cli 3.7.9    <-- my Linux distro got the update last Friday (2023-02-10)

Reddit admits it was hacked and data stolen, says “Don’t panic”

Popular social media site Reddit – “orange Usenet with ads”, as we’ve somewhat ungraciously heard it described – is the latest well-known web property to suffer a data breach in which its own source code was stolen.

In recent weeks, LastPass and GitHub have confessed to similar experiences, with cybercriminals apparently breaking and entering in much the same way: by figuring out a live access code or password for an individual staff member, and sneaking in under cover of that individual’s corporate identity.

In Reddit’s own words:

Reddit systems were hacked as a result of a sophisticated and highly-targeted phishing attack. They gained access to some internal documents, code, and some internal business systems.

We’re not sure quite how suitable the adjective “sophisticated” is here, not least because Reddit quickly goes on to state that:

As in most phishing campaigns, the attacker sent out plausible-sounding prompts pointing employees to a website that cloned the behavior of our intranet gateway, in an attempt to steal credentials and second-factor tokens.

After successfully obtaining a single employee’s credentials, the attacker gained access to some internal docs, code, as well as some internal dashboards and business systems. We show no indications of breach of our primary production systems (the parts of our stack that run Reddit and store the majority of our data).

In other words, this attack almost certainly succeeded not because it was sophisticated, but because it wasn’t.

Someone, perhaps in a hurry, arrived at what they thought was the frontier, handed over their passport to a fellow-traveller instead of to an official border agent, and then found themselves trapped in nowhere-land without any ID while the imposter sailed through the border crossing in their name.

The single most important factor in an identity-hijacking attack of this sort is not sophistication but, as Reddit rightly pointed out above, plausibility, making it easy even for well-informed and cautious individuals to “coast through” based on habit and experience.

The risk posed by habitual behaviour is why official British road signage includes a bright red rectangle containing the words NEW ROAD LAYOUT AHEAD that’s used when a busy piece of road gets reorganised. The sign isn’t there to protect old-timers from nervous new road users who might find a big junction or roundabout complicated. It’s there to protect those new users, who have no choice but to work cautiously from first principles, and are therefore likely to follow the road rules just fine, from old-timers who think they “know” how traffic will behave at that location, and therefore sail through carelessly, based on incorrect assumptions and “learned-but-now-improper” behaviour.

How far did the crooks get?

As already stated, some of Reddit’s own internal systems were accessed by the attackers.

In addition to the mostly-harmless-sounding “docs” and “code” listed above, Reddit has admitted that information about past and present employees and “contacts” (we’re assuming this includes, but is not limited to, contractors and other non-permanent staffers) was stolen, along with information about advertising customers.

Reddit hasn’t stated publicly what sort of data fields were included in the stolen information, merely that the breach was “limited”.

The word limited might be a good sign (e.g. name and email address, and no other data), but it could just as easily be a bad one (e.g. “only” two data items: your social security number and a scan of your driving licence).

Signed-up users of the Reddit service, it seems – Redditors, as they are known – can stand down from Blue Alert, with Reddit saying that its investigation so far shows no indication that what it calls “non-public data” (in other words, stuff that you didn’t post for the world to see anyway) was accessed by the cybercriminals.

And, as mentioned earlier, the Reddit systems themselves – the operating systems, code and networks that run the Reddit services you interact with, whether as a user or a visitor – don’t seem to have been breached.

From this, we infer that the crooks are unlikely to have made off with data such as login records, system logs, location information or password hashes.

The company also stated, in its notification, that it is still investigating this incident (which happened on Sunday 2023-02-05).

Given its reasonably quick response so far, we’re guessing that Reddit will follow up in due course to say whether it found any further evidence of compromise.

What to do?

To be honest, unless you’re a Reddit staffer or advertiser, it doesn’t look as though there’s much you can or need to do right now.

(We’re assuming, if you do work for or advertise with Reddit, that the company will already have contacted you personally if your data was amongst the “limited” information stolen, which we would consider a better short-term response than telling the whole world first.)

Reddit itself has made three suggestions, namely:

  • Protect against phishing by using a password manager. This makes it harder to put the right password into the wrong site, because the password manager isn’t deceived by the look-and-feel of a site, but works unemotionally with the exact name of the web page it sees in the address bar. Ironically, this seems to be advice that Reddit itself didn’t follow, given that the attackers used a plausible look-alike site to steal login credentials, which a password manager would presumably have rejected as unknown.
  • Turn on 2FA if you can. This means you need a one-time code that changes at every login, which makes a stolen password useless on its own. We agree that this is a great idea, but note that Reddit’s own mechanism for 2FA (two-factor authentication), based on a regularly-changing six-digit code generated by an app on your phone, apparently didn’t help here, because the attackers phished both a current password and a valid-right-now 2FA code.
  • Change your passwords every two months. We disagree with this advice, as does the US National Institute of Standards and Technology (NIST). Change for change’s sake is rarely a good idea, because it tends to enforce habitual behaviour that, in the words of Naked Security friend and colleague Chester Wisniewski, “gets everybody in the habit of a bad habit“.
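The reason a password manager isn’t fooled by a pixel-perfect clone is that it matches credentials against the exact origin of the page, not against what the page looks like. A toy Python sketch of that idea (the site names and stored credentials are made up for illustration):

```python
from urllib.parse import urlsplit

# Hypothetical vault: credentials keyed by exact (scheme, hostname) origin.
vault = {("https", "intranet.example.com"): ("alice", "s3cret")}

def credentials_for(url):
    parts = urlsplit(url)
    # An imposter such as https://intranet.example-login.com may look
    # perfect to a human, but its hostname simply isn't a key in the vault,
    # so the manager offers nothing to type in.
    return vault.get((parts.scheme, parts.hostname))

print(credentials_for("https://intranet.example.com/login"))        # match
print(credentials_for("https://intranet.example-login.com/login"))  # no match
```

Real password managers layer more on top of this (subdomain rules, port checks, and so on), but the unemotional exact-match lookup is the core of their anti-phishing value.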

BUSTING PASSWORD MYTHS

Even though we recorded this podcast more than a decade ago, the advice it contains is still relevant and thoughtful today. We haven’t hit the passwordless future yet, so password-related cybersecurity advice will be valuable for a good while yet. Listen here, or click through for a full transcript.


In short: we continue to recommend password managers, especially if, without one, you tend to drift into the habit of picking obvious, identical or even similar passwords for multiple sites.

We also recommend password managers as a helpful tool for pulling you up short on imposter sites that look visually perfect to you, but that don’t match the plain and emotionless expectations of your password manager.

And we advise you to turn on 2FA wherever you can, even though we know it’s a bit of a hassle.

We nevertheless remind you that 2FA codes (such as those one-time 6-digit SMS or app-based messages) can still be phished, as happened here to Reddit, so they are not a cure-all for caution.

But we don’t agree with forcing yourself regularly to change all your passwords on an algorithmic basis.

Much better to change your passwords right away whenever you genuinely think it’s worth doing so, than to rely on “I’ll be changing it sometime soon anyway, so I’ll just wait until the process tells me to do it.”

(We’re not saying you mustn’t change your passwords all the time if that makes you happy, but doing it as what you might call a “procedural requirement” will give you a false sense of security, and uses up time you could spend on other tasks that directly improve your online safety.)

As we’ve said before, we may be heading towards a passwordless future, but we suspect we’ll all be juggling passwords for at least some important online service for many years yet.


S3 Ep121: Can you get hacked and then prosecuted for it? [Audio + Text]

CAN YOU GET HACKED AND THEN PROSECUTED FOR IT?

Cryptocurrency crimelords. Security patches for VMware, OpenSSH and OpenSSL. Medical breacher busted. Is that a bug or a feature?

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT


DOUG.   Patches, fixes and crimelords – oh my!

Oh, and yet another password manager in the news.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Paul Ducklin; he is Doug Aamoth…

…think I got that backwards, Paul: *I* am Doug Aamoth; *he* is Paul Ducklin.

Paul, we like to start the show with a This Week in Tech History segment.

And I’d like to submit something from very recent history.

This week, on 06 February 2023, our own Paul Ducklin…


DUCK.   [DELIGHTED] Woooooo!


DOUG.   …published an interview with technology journalist Andy Greenberg about his new book, “Tracers in the Dark – the Global Hunt for the Crime Lords of Cryptocurrency.”

Let’s listen to a quick clip…

[MUSICAL STING]


PAUL DUCKLIN. There’s certainly been a fascination for decades to say, “You know what? This encryption thing? It’s actually a really, really bad idea. We need backdoors. We need to be able to break it, somebody has to think of the children, etc, etc.”

ANDY GREENBERG. Well, it’s interesting to talk about crypto backdoors, and the legal debate over encryption that even law enforcement can’t crack.

I think that, in some ways, the story of this book shows that that is often not necessary.

I mean, the criminals in this book were using traditional encryption.

They were using Tor and the Dark Web.

And none of that was cracked to bust them.


[MUSICAL STING]

DUCK.   I know I would say this, Doug, but I strongly recommend listening to that podcast.

Or, if you prefer to read, go and look through the transcript, because…

…as I said to Andy at the end, it was as fascinating talking to him as it was reading the book in the first place.

I thoroughly recommend the book, and he’s got some amazing insights into things like cryptographic backdoors that come not just from opinion, but from looking into how law enforcement has dealt, apparently very effectively, with cybercrimes, without needing to trample on our privacy perhaps as much as some people think is necessary.

So, some fascinating insights in there, Doug:

Tracers in the Dark: The Global Hunt for the Crime Lords of Crypto


DOUG.   Check that out… that is in the standard Naked Security podcast feed.

If you’re getting our podcast, that should be the one right before this.

And let us now move to a lightning round of fixes-and-updates.

We’ve got OpenSSL, we’ve got VMware, and we’ve got OpenSSH.

Let’s start with VMware, Paul:

VMWare user? Worried about “ESXi ransomware”? Check your patches now!


DUCK.   This became a huge story, I think, because of a bulletin that was put out by the French CERT (Computer Emergency Response Team) on Friday of last week.

So, that would be 03 February 2023.

They simply told it how it was: “Hey, there are these old vulnerabilities in VMware ESXi that you could have patched in 2020 and 2021, but some people didn’t, and now crooks are abusing them. Surprise, surprise: end result equals ransomware.”

They didn’t quite put it like that… but that was the purpose of the bulletin.

It kind of turned into a bit of a news storm of [STARTLED VOICE], “Oh, no! Giant bug in VMware!”

It seems as though people were inferring, “Oh, no! There’s a brand new zero-day! I’d better throw out everything and go and have a look!”

And in some ways, it’s worse than a zero-day, because if you’re at risk of this particular boutique cybergang’s attack, ending in ransomware…

…you’ve been vulnerable for two years.


DOUG.   A 730-day, actually…


DUCK.   Exactly!

So I wrote the article to explain what the problem was.

I also decompiled and analysed the malware that they were using at the end.

Because I think what a lot of people were reading into this story is, “Wow, there’s this big bug in VMware, and it’s leading to ransomware. So if I’m patched, I don’t need to do anything, and the ransomware won’t happen.”

And the problem is that these holes can be used, essentially, for getting root access on ESXi boxes, and the crooks don’t have to use ransomware.

They could do data stealing, spam sending, keylogging, cryptomining, {insert least-favourite cybercrime here}.

And the ransomware tool that these crooks are using, that is semi-automated but can be used manually, is a standalone file scrambler that’s designed to scramble really big files quickly.

So they’re not fully encrypted – they’ve configured it so it encrypts a megabyte, skips 99MB, encrypts a megabyte, skips 99MB…

…so it’ll get through a multi-gigabyte or even a terabyte VMDK (virtual machine image file) really, really quickly.

And they have a script that runs this encryption tool for every VMware image it can find, all in parallel.
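Scaled right down, the striping arithmetic works something like this. (This is a harmless sketch using XOR in place of real encryption, with made-up stripe and gap sizes – it illustrates the “scramble a bit, skip a lot” pattern, not the actual malware.)

```python
def scramble_stripes(data: bytes, stripe: int, skip: int) -> bytes:
    """Scramble `stripe` bytes, leave `skip` bytes untouched, repeat.

    XOR with a fixed byte stands in for real encryption here --
    this only demonstrates the striping arithmetic.
    """
    out = bytearray(data)
    pos = 0
    while pos < len(out):
        end = min(pos + stripe, len(out))
        for i in range(pos, end):
            out[i] ^= 0xA5
        pos = end + skip          # jump over the untouched region
    return bytes(out)

# Scaled-down stand-ins for "encrypt 1MB, skip 99MB":
sample = bytes(range(10)) * 30              # 300 bytes of "file"
mangled = scramble_stripes(sample, 10, 90)  # 10-byte stripes, 90-byte gaps
assert mangled[:10] != sample[:10]          # first stripe scrambled
assert mangled[10:100] == sample[10:100]    # the gap is left intact
```

Because only a sliver of each gigabyte actually gets touched, the tool can chew through a huge virtual disk file in a tiny fraction of the time full encryption would take – while still leaving the file unusable.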

Of course, anybody could deploy this particular tool *without breaking in through the VMware vulnerability*.

So, if you aren’t patched, it doesn’t necessarily end in ransomware.

And if you are patched, that’s not the only way the crooks could get in.

So it’s useful to inform yourself about the risks of this ransomware and how you might defend against it.


DOUG.   OK, very good.

Then we’ve got a pokeable double-free memory bug in OpenSSH.

That’s fun to say…

OpenSSH fixes double-free memory bug that’s pokable over the network


DUCK.   It is, Doug.

And I thought, “It’s quite fun to understand,” so I wrote that up on Naked Security as a way of helping you to understand some of this memory-related bug jargon.

It’s quite an esoteric problem (it probably won’t affect you even if you do use OpenSSH), but I still think it’s an interesting story, because [A] the OpenSSH team decided that they would disclose it in their release notes, “It doesn’t have a CVE number, but here’s how it works anyway,” and [B] it’s a great reminder that memory management bugs, particularly when you’re coding in C, can happen even to experienced programmers.

This is a double-free, which is a case where you finish with a block of memory, so you hand it back to the system and say, “You can give this to another part of my program. I’m done with it.”

And then, later on, rather than using that same block again after you’ve given it back (which would be obviously bad), you hand the memory back again.

And it kind of sounds like, “Well, what’s the harm done? You’re just making sure.”

It’s like running back from the car park into your apartment and going up and checking, “Did I really turn the oven off?”

It doesn’t matter if you go back and it is off; it only matters if you go back and you find you didn’t turn it off.

So what’s the harm with a double-free?

The problem, of course, is that it can confuse the underlying system, and that could lead to somebody else’s memory becoming mismanaged or mismanageable in a way that crooks could exploit.
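You can simulate that confusion with a toy allocator. (Pure Python, and a deliberately naive invented design – real heap allocators are vastly more sophisticated – but the hazard is the same: after a double-free, the same block can end up “owned” by two different parts of the program.)

```python
class ToyAllocator:
    """A deliberately naive free-list allocator, to show the hazard."""

    def __init__(self, nblocks: int):
        self.free_list = list(range(nblocks))  # block numbers available

    def alloc(self) -> int:
        return self.free_list.pop()            # hand out a free block

    def free(self, block: int) -> None:
        self.free_list.append(block)           # no double-free check!

heap = ToyAllocator(4)
a = heap.alloc()
heap.free(a)
heap.free(a)       # the double-free: block `a` is now listed twice

x = heap.alloc()   # one caller gets block `a`...
y = heap.alloc()   # ...and so does the next caller!
assert x == y      # two "owners" of one block: writes by one corrupt the other
```

If an attacker can arrange to be one of those two “owners”, they may be able to tamper with data structures that another part of the program trusts – which is exactly the sort of mismanagement that turns a bookkeeping slip into an exploit.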

So if you don’t understand how all that stuff works, then I think this is an interesting, perhaps even an important, read…

…even though the bug is reasonably esoteric and, as far as we know, nobody has figured out a way to exploit it yet.


DOUG.   Last but certainly not least, there is a high-severity data stealing bug in OpenSSL that’s been fixed.

And I would urge people, if you’re like me, reasonably technical, but jargon averse…

…the official notes are chock full of jargon, but, Paul, you do a masterful job of translating said jargon into plain English.

Including a dynamite explainer of how memory bugs work, including: NULL dereference, invalid pointer dereference, read buffer overflow, use-after-free, double-free (which we just talked about), and more:

OpenSSL fixes High Severity data-stealing bug – patch now!


DUCK.   [PAUSE] Well, you’ve left me slightly speechless there, Doug.

Thank you so much for your kind words.

I wrote this one up for… I was going to say two reasons, but sort-of three reasons.

The first is that OpenSSH and OpenSSL are two completely different things – they’re two completely different open source projects run by different teams – but they are both extra-super-widely used.

So, the OpenSSL bug in particular probably applies to you somewhere in your IT estate, because some product you’ve got somewhere almost certainly includes it.

And if you have a Linux distro, the distro probably provides its own version as well – my Linux updated the same day, so you want to go and check for yourself.

So I wanted to make people aware of the new version numbers.

And, as we said, there was this dizzying load of jargon that I thought was worth explaining… why even little things matter.

And there is one high-severity bug. (I won’t explain type confusion here – go to the article if you want some analogies on how that works.)

And this is a case where an attacker just may be able to trigger what seem like perfectly innocent memory comparisons, where they’re just comparing this buffer of memory with that buffer of memory…

…but they misdirect one of the buffers and, lo and behold, they can work out what’s in *your* buffer by comparing it with known stuff that they’ve put in *theirs*.
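A simplified model of that kind of leak: if an attacker can repeatedly trigger comparisons between a secret buffer and one they control, and can observe where the comparison diverges, they can recover the secret one byte at a time. (The oracle below is invented for illustration – it is not the actual OpenSSL code path, where the observable signal would be different – but the principle is the same.)

```python
SECRET = b"s3same"   # stands in for data the library should keep private

def compare_oracle(guess: bytes) -> int:
    """Return how many leading bytes of `guess` match the secret.

    Models an attacker-observable comparison result (hypothetical).
    """
    matched = 0
    for g, s in zip(guess, SECRET):
        if g != s:
            break
        matched += 1
    return matched

def recover(length: int) -> bytes:
    """Rebuild the secret byte by byte using only the oracle."""
    found = b""
    for _ in range(length):
        for candidate in range(256):
            probe = found + bytes([candidate])
            if compare_oracle(probe) > len(found):  # one more byte matched
                found = probe
                break
    return found

assert recover(len(SECRET)) == SECRET   # secret recovered without reading it
```

At most 256 guesses per byte – a few thousand “innocent” comparisons – and the whole secret is out, which is why even a read-only comparison primitive is dangerous in a cryptographic library.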

In theory, you could abuse a bug like that in what you might call a Heartbleed kind of way.

I’m sure we all remember that, if our IT careers go back to 2014 or before – the OpenSSL Heartbleed bug, where a client could ping a server and say, “Are you still alive?”

“Heartbleed heartache” – should you REALLY change all your passwords right away?

And it would send a message back that included up to 64 kilobytes of extra data that possibly included other people’s secrets by mistake.
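In miniature, the Heartbleed pattern was a reply routine that trusted the attacker’s claimed payload length when copying data out. (Simulated here inside a single Python buffer with invented contents – real server memory layout is messier, but the trust failure is the same.)

```python
# One contiguous chunk of "server memory": the heartbeat payload,
# followed by unrelated data that just happens to live next door.
memory = bytearray(b"PING" + b"another-user's-session-token")
PAYLOAD_LEN = 4   # the real length of the heartbeat payload

def heartbeat_reply(claimed_len: int) -> bytes:
    # The bug: trusting the requester's claimed length instead of
    # the payload's actual length.
    return bytes(memory[:claimed_len])

assert heartbeat_reply(PAYLOAD_LEN) == b"PING"   # honest request: fine
leak = heartbeat_reply(PAYLOAD_LEN + 28)         # over-long request...
assert b"session-token" in leak                  # ...spills the neighbours
```

The fix, then as now, is simply to bound the reply by the data you actually have, not by the length the other side asked for.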

And that’s the problem with memory leakage bugs, or potential memory leakage bugs, in cryptographic products.

They, by design, generally have a lot more to hide than traditional programs!

So, go and read that and definitely patch as soon as you can.


DOUG.   I cannot believe that Heartbleed was 2014.

That seems… I only had one child when that came out and he was a baby, and now I have two more.


DUCK.   And yet we still talk about it…


DOUG.   Seriously!


DUCK.   …as a defining reminder of why a simple read buffer overflow can be quite catastrophic.

Because a lot of people tend to think, “Oh, well, surely that’s much less harmful than a *write* buffer overflow, where I might get to inject shellcode or divert the behaviour of a program?”

Surely if I can just read stuff, well, I might get your secrets… that’s bad, but it doesn’t let me get root access and take over your network.

But as many recent data breaches have proved, sometimes being able to read things from one server may spill secrets that let you log into a bunch of other servers and do much naughtier things!


DOUG.   Well, that’s a great segue about naughty things and secrets.

We have an update to a story from Naked Security past.

You may recall the story from late last year about someone breaching a psychotherapy company and stealing a bunch of transcripts of therapy sessions, then using that information to extort the patients of this company.

Well, he went on the run… and was just recently arrested in France:

Finnish psychotherapy extortion suspect arrested in France


DUCK.   This was a truly ugly crime.

He didn’t just breach a company and steal a load of data.

He breached a *psychotherapy* company, and doubly-sadly, that company had been utterly remiss, it seems, in their data security.

In fact, their former CEO is in trouble with the authorities on charges that themselves could result in a prison sentence, because they just simply had all this dynamite information that they really owed it to their patients to protect, and didn’t.

They put it on a cloud server with a default password, apparently, where the crook stumbled across it.

But it’s the nature of how the breach unfolded that was truly awful.

He blackmailed the company… I believe he said, “I want €450,000 or I’ll spill all the data.”

And of course, the company had been keeping schtumm about it – this is why the regulators decided to go after the company as well.

They’d been keeping quiet about it, hoping that no one would ever find out, and here comes this guy saying, “Pay us the money, or else.”

Well, they weren’t going to pay him.

There was no point: he’d got the data already, and he was already doing bad things with it.

And so, as you say, the crook decided, “Well, if I can’t get €450,000 out of the company, why don’t I try hitting up each and every person who had psychotherapy for €200 each?”

According to well-known cybersleuth journo Brian Krebs, his extortion note said, “You’ve got 24 hours to pay me €200. Then I’ll give you 48 hours to pay €500. And if I haven’t heard from you after 72 hours, I will tell your friends, and family, and anyone who wants to know, the things that you said.”

Because that data included transcripts, Doug.

Why on earth were they even storing those things by default in the first place?

I shall never understand that.

As you say, he did flee the country, and he got arrested “in absentia” by the Finns; that allowed them to issue an international arrest warrant.

Anyway, now he is facing the music in France, where, of course, the French are seeking to extradite him to Finland, and the Finns are seeking to put him in court.

Apparently he has form [US equivalent: priors] for this, Doug.

He’s been convicted of cybercrimes before, but back then, he was a minor.

He’s now 25 years old, I do believe; back then he was 17, so he got a second chance.

He got a suspended sentence and a small fine.

But if these allegations are correct, I think a lot of us suspect that he won’t be getting off so lightly this time, if convicted.


DOUG.   So this is a good reminder that you can be – if you’re like this company – both the victim *and* the culprit.

And yet another reminder that you have got to have a plan in place.

So, we have some advice at the end of the article, starting with: Rehearse what you will do if you suffer a breach yourself.

You’ve got to have a plan!


DUCK.   Absolutely.

You cannot make it up as you go along, because there simply will not be time.


DOUG.   And also, if you’re a person that’s affected by something like this: Consider filing a report, because it helps with the investigation.


DUCK.   Indeed it does.

My understanding is that, in this case, plenty of people who received these extortion demands *did* go to the authorities and said, “This came out of the blue. This is like being assaulted in the street! What are you going to do about it?”

The authorities said, “Great, let’s collect the reports,” and that means they can build a better case, and make a stronger case for something like extradition.


DOUG.   Alright, very good.

We will round out our show with: “Another week, another password manager on the hot seat.”

This time, it’s KeePass.

But this particular kerfuffle isn’t so straightforward, Paul:

Password-stealing “vulnerability” reported in KeePass – bug or feature?


DUCK.   Actually, Doug, I think you could say that it’s very straightforward… and immensely complicated at the same time. [LAUGHS]


DOUG.   [LAUGHS] OK, let’s talk about how this actually works.

The feature itself is kind of an automation feature, a scripty-type…


DUCK.   “Trigger” is the term to search for – that’s what they call it.

So, for example, when you save the [KeePass] database file, for example (maybe you’ve updated a password, or generated a new account and you hit the save button), wouldn’t it be nice if you could call on a customised script of your own that synchronises that data with some cloud backup?

Rather than try and write code in KeePass to deal with every possible cloud upload system in the world, why not provide a mechanism where people can customise it if they want?

Exactly the same when you try and use a password… you say, “I want to copy that password and use it.”

Wouldn’t it be nice if you could call on a script that gets a copy of the plaintext password, so that it can use it to log into accounts that aren’t quite as simple as just putting the data into a web form that’s on your screen?

That might be something like your GitHub account, or your Continuous Integration account, or whatever it is.

So these things are called “triggers” because they’re designed to trigger when the product does certain things.

And some of those things – inescapably, because it is a password manager – deal with handling your passwords.

The naysayers feel that, “Oh, well, those triggers, they’re too easy to set up, and adding a trigger isn’t protected itself by a tamper-protection password.”

You have to put in a master password to get access to your passwords, but you don’t have to put in the master password to get access to the configuration file to get access to the passwords.

That’s, I think, where the naysayers are coming from.

And other people are saying, “You know what? They have to get access to the config file. If they’ve got that, you’re in deep trouble already!”


DOUG.   “The people” include KeePass, who is saying, “This program is not set up to defend against someone [LAUGHS] who’s sitting in your chair when you’ve already logged into your machine and the app.”


DUCK.   Indeed.

And I think the truth is probably somewhere in the middle.

I can see the argument why, if you’re going to have the passwords protected with the master password… why don’t you protect the configuration file as well?

But I also agree with people who say, “You know what? If they’ve logged into your account, and they’re on your computer, and they are already you, you kind-of came second in the race already.”

So don’t do that!


DOUG.   [LAUGHS] OK, so if we zoom out a bit on this story…

…Naked Security reader Richard asks:

Is a password manager, no matter which one, a single point of failure? By design, it is a high-value target for a hacker. And the presence of any vulnerability allows an attacker to jackpot every password on the system, regardless of those passwords’ notional strength.

I think that’s a question a lot of people are asking right now.


DUCK.   In a way, Doug, that’s sort of an unanswerable question.

A little bit like this “trigger” thing in the configuration file in KeePass.

Is it a bug, or is it a feature, or do we have to accept that it’s a bit of both?

I think, as another commenter said on that very same article, there’s a problem with saying, “A password manager is a single point of failure, so I’m not going to use one. What I’ll do is, I’ll think up *one* really, really, complicated password and I’ll use it for all my sites.”

Which is what a lot of people do if they aren’t using a password manager… and instead of being a *potential* single point of failure, that creates something that is exactly, absolutely *and already* a single point of failure.

Therefore a password manager is certainly the lesser of two evils.

And I think there’s a lot of truth in that.


DOUG.   Yes, I would say I think it *can* be a single point of failure, depending on the types of accounts you keep.

But for many services, it isn’t and shouldn’t be a single point of *total* failure.

For instance, if my bank password gets stolen, and someone goes to log into my bank account, my bank will see that they’re logging in from the other side of the world and say, “Whoa! Wait a second! This looks weird.”

And they’ll ask me a security question, or they’ll email me a secondary code that I have to put in, even if I’m not set up for 2FA.

Most of my important accounts… I don’t worry so much about those credentials, because there would be an automatic second factor that I’d have to jump through because the login would look suspicious.

And I hope that technology gets so easy to implement that any site that’s keeping any sort of data just has that built in: “Why is this person logging in from Romania in the middle of the night, when they’re normally in Boston?”

A lot of those failsafes are in place for big important stuff that you might keep online, so I’m hoping that needn’t be a single point of failure in that sense.


DUCK.   That’s a great point, Doug, and I think it kind of illustrates that there is, if you like, a burning question-behind-the-question, which is, “Why do we need so many passwords in the first place?”

And maybe one way to head towards a passwordless future is simply to allow people to use websites where they can choose *not* to have the (air-quotes) “giant convenience” of needing to create an account in the first place.


DOUG.   [GLUM LAUGH] As we discussed, I was affected by the LastPass breach, and I looked at my giant list of passwords and said, “Oh, my God, I’ve got to go change all these passwords!”

As it turns out, I had to *change* half of those passwords, and worse, I had to *cancel* the other half of these accounts, because I had so many accounts in there…

…just for what you said; “I have to make an account just to access something on this site.”

And they’re not all just click-and-cancel.

Some, you’ve got to call.

Some, you’ve got to talk to someone over live chat.

It was much more arduous than just changing a bunch of passwords.

But I would urge people, whether you’re using a password manager or not, take a look at just the sheer number of accounts you have, and delete the ones you’re not using any more!


DUCK.   Yes.

In three words, “Less is more.”


DOUG.   Absolutely!

Alright, thank you very much, Richard, for sending that in.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.   Stay secure!

[MUSICAL MODEM]

