As far as we can tell, there are a whopping 2874 items in this month’s Patch Tuesday update list from Microsoft, based on the CSV download we just grabbed from Redmond’s Security Update Guide web page.
(The website itself says 2283, but the CSV export contained 2875 lines, where the first line isn’t actually a data record but a list of the various field names for the rest of the lines in the file.)
Glaringly obvious at the very top of the list are the names in the Product column of the first nine entries, dealing with an elevation-of-privilege (EoP) patch denoted CVE-2023-21773 for Windows 7, Windows 8.1, and Windows RT 8.1.
Windows 7, as many people will remember, was extremely popular in its day (indeed, some still consider it the best Windows ever), finally luring even die-hard fans across from Windows XP when XP support ended.
Windows 8.1, which is remembered more as a sort-of “bug-fix” release for the unlamented and long-dropped Windows 8 than as a real Windows version in its own right, never really caught on.
And Windows RT 8.1 was everything people didn’t like in the regular version of Windows 8.1, but running on proprietary ARM-based hardware that was locked down strictly, like an iPhone or an iPad – not something that Windows users were used to, nor, to judge by the market reaction, something that many people were willing to accept.
Indeed, you’ll sometimes read that the comparative unpopularity of Windows 8 is why the next major release after 8.1 was numbered Windows 10, thus deliberately creating a sense of separation between the old version and the new one.
Other explanations include that Windows 10 was supposed to be the full name of the product, so that the 10 formed part of the brand new product name, rather than being just a number added to the name to denote a version. The subsequent appearance of Windows 11 put something of a dent in that theory – but there never was a Windows 9.
The end of two eras
Shed your tears now, because this month sees the very last security updates for the old-school Windows 7 and Windows 8.1 versions.
Windows 7 has now reached the end of its three-year pay-extra-to-get-ESU period (ESU is short for extended security updates), and Windows 8.1 simply isn’t getting extended updates, apparently no matter how much you’re willing to pay:
As a reminder, Windows 8.1 will reach end of support on January 10, 2023 [2023-01-10], at which point technical assistance and software updates will no longer be provided. […]
Microsoft will not be offering an Extended Security Update (ESU) program for Windows 8.1. Continuing to use Windows 8.1 after January 10, 2023 may increase an organization’s exposure to security risks or impact its ability to meet compliance obligations.
So, it really is the end of the Windows 7 and Windows 8.1 eras, and any operating system bugs left on any computers still running those versions will be there forever.
Remember, of course, that despite their ages, both those platforms have this very month received patches for dozens of different CVE-numbered vulnerabilities: 42 CVEs in the case of Windows 7, and 48 CVEs in the case of Windows 8.1.
Even if contemporary threat researchers and cybercriminals aren’t explicitly looking for bugs in old Windows builds, flaws that are first found by attackers digging into the very latest build of Windows 11 might turn out to have been inherited from legacy code.
In fact, the CVE counts of 42 and 48 above compare with a total of 90 different CVEs listed on Microsoft’s official January 2023 Release Notes page, loosely suggesting that about half of today’s bugs (in this month’s list, all 90 have CVE-2023-XXXX date designators) have been waiting around to be found in Windows for at least a decade.
In other words, in the same way that bugs uncovered in old versions may turn out still to affect the latest and greatest releases, you will also often find that “new” bugs go way back, and can be retrofitted into exploits that work on old Windows versions, too.
Ironically, “new” bugs may ultimately be easier to exploit on older versions, due to the less restrictive software build settings and more liberal run-time configurations that were considered acceptable back then.
Older laptops, with less memory than today's models, were typically set up with 32-bit versions of Windows, even if they had 64-bit processors. Some threat mitigation techniques, notably those that involve randomising the locations where programs end up in memory in order to reduce predictability and make exploits harder to pull off reliably, are typically less effective on 32-bit Windows, simply because there are fewer memory addresses to choose from. Like hide-and-seek, the more possible places there are to hide, the longer it generally takes to find you.
“Exploitation detected”
According to Bleeping Computer, only two of the vulnerabilities disclosed this month are listed as being in-the-wild, in other words known outside Microsoft and the immediate research community:
CVE-2023-21674: Windows Advanced Local Procedure Call (ALPC) Elevation of Privilege Vulnerability. Confusingly, this one is listed as Publicly disclosed: no, but Exploitation Detected. From this, we assume that cybercriminals already know how to abuse this bug, but they’re carefully keeping the details of the exploit to themselves, presumably to make it harder for threat responders to know what to look for on systems that haven’t been patched yet.
CVE-2023-21549: Windows SMB Witness Service Elevation of Privilege Vulnerability. This one is denoted Publicly disclosed, but nevertheless written up as Exploitation Less Likely. From this, we infer that even if someone tells you where the bug is located and how you might trigger it, figuring out how to exploit the bug successfully and actually achieving an elevation of privilege is going to be difficult.
Intriguingly, the CVE-2023-21674 bug, which is actively in use by attackers, isn’t on the Windows 7 patch list, but it does apply to Windows 8.1.
The second bug, CVE-2023-21549, described as publicly known, applies to both Windows 7 and Windows 8.1.
As we said above, newly discovered flaws often go a long way.
CVE-2023-21674 applies all the way from Windows 8.1 to the very latest builds of Windows 11 2022H2 (H2, in case you were wondering, means “the release issued in the second half of the year”).
Even more dramatically, CVE-2023-21549 applies right from Windows 7 to Windows 11 2022H2.
What to do with those old computers?
If you’ve got Windows 7 or Windows 8.1 computers that you still consider usable and useful, consider switching to an open source operating system, such as a Linux distro, that is still getting both support and updates.
Some community Linux builds specialise in keeping their distros small and simple.
Even though they may not have the latest and greatest collection of photo filters, video editing tools, chess engines and high-resolution wallpapers, minimalist distros are still suitable for browsing and email, even on old, 32-bit hardware with small hard disks and low memory.
JWT is short for JSON Web Token, where JSON itself is short for JavaScript Object Notation.
JSON is a modernish way of representing structured data; its format is a bit like XML, and it can often be used instead, but without all the opening-and-closing angle brackets getting in the way of legibility.
For example, data that might be recorded like this in XML…
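(The field names and values here are invented purely for illustration.)

    <?xml version="1.0" encoding="UTF-8"?>
    <vendordata>
       <vendor>Sophos</vendor>
       <location>Abingdon</location>
    </vendordata>

…might look like this in JSON:

    { "vendor": "Sophos", "location": "Abingdon" }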
Whether the JSON really is easier to read than the XML is an open question, but the big idea of JSON is that because the data is encoded as legal JavaScript source, albeit without any directly or indirectly executable code in it, you can parse and process it using your existing JavaScript engine, like this:
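(Here we're typing into an interactive JavaScript prompt, such as Node's REPL, using the made-up data from above.)

    > x = JSON.parse('{ "vendor": "Sophos", "location": "Abingdon" }')
    { vendor: 'Sophos', location: 'Abingdon' }
    > console.log(x.vendor)
    Sophos
    undefined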
The output string undefined above merely reflects the fact that console.log() is a procedure – a function that does some work but doesn’t return a value. The word Sophos is printed out as a side-effect of calling the function, while undefined denotes what the function calculated and sent back: nothing.
The popularity of JavaScript for both in-browser and server-side programming, plus the visual familiarity of JSON to JavaScript coders, means that JSON is widely used these days, especially when exchanging structured data between web clients and servers.
And one popular use of JSON is the JWT system, which isn’t (officially, at any rate) read aloud as juh-witt, as it is written, but peculiarly pronounced jot, an English word that is sometimes used to refer to the little dot we write above an i or j, and that refers to a tiny but potentially important detail.
Authenticate strongly, then get a temporary token
Loosely speaking, a JWT is a blob of encoded data that is used by many cloud servers as a service access token.
The idea is that you start by proving your identity to the service, for example by providing a username, password and 2FA code, and you get back a JWT.
The JWT sent back to you is a blob of base64-encoded (strictly speaking, base64url-encoded) data that includes three fields (there's a code sketch after this list):
Which cryptographic algorithm was used in constructing the JWT.
What sort of access the JWT grants, and for how long.
A keyed cryptographic hash of the first two fields, using a secret key known only to your service provider.
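In case you're wondering how those three fields fit together, here's a minimal sketch, using recent Node.js (version 16 or later, for base64url support) and its built-in crypto module, of constructing an HMAC-SHA-256 signed token by hand. The claim names and the secret are made up for illustration, and real code would use a vetted library rather than rolling its own:

    const crypto = require('crypto');

    const b64url = (obj) => Buffer.from(JSON.stringify(obj)).toString('base64url');

    // Field 1: which cryptographic algorithm was used
    const header  = b64url({ alg: 'HS256', typ: 'JWT' });

    // Field 2: what access the token grants, and for how long ("exp" = expiry time)
    const payload = b64url({ sub: 'someuser', exp: 1700000000 });

    // Field 3: a keyed hash of the first two fields, using a secret
    // key known only to the service provider
    const secret  = 'known-only-to-the-server';      // hypothetical
    const hash    = crypto.createHmac('sha256', secret)
                          .update(header + '.' + payload)
                          .digest('base64url');

    const token = header + '.' + payload + '.' + hash;   // the JWT itself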
Once you’ve authenticated up front, you can make subsequent requests to the online service, for example to check a product price or to look up an email address in a database, simply by including the JWT in each request, using it as a sort-of temporary access card.
Clearly, if someone steals your JWT after it’s been issued, they can play it back to the relevant server, which will typically give them access instead of you…
…but JWTs don’t need to be saved to disk, usually have a limited lifetime, and are sent and received over HTTPS connections, so that they can’t (in theory at least) easily be sniffed out or stolen.
When JWTs expire, or if they are cancelled for security reasons by the server, you need to go through the full-blown authentication process again in order to re-establish your right to access the service.
But for as long as they’re valid, JWTs improve performance because they avoid the need to reauthenticate fully for every online request you want to make – rather like session cookies that are set in your browser while you’re logged into a social network or a news site.
Security validation as infiltration
Well, cybersecurity news today is full of a revelation by researchers at Palo Alto Networks that we’ve variously seen described as a “high-severity flaw” or a “critical security flaw” in a popular JWT implementation.
In theory, at least, this bug could be exploited by cybercriminals for attacks ranging from implanting unauthorised files onto a JWT server, thus maliciously modifying its configuration or modifying the code it might later use, to direct and immediate code execution inside a victim’s network.
Simply put, the act of presenting a JWT to a back-end server for validation – something that typically happens at every API call (jargon for making a service request) – could lead to malware being implanted.
But here’s the good news:
The flaw isn’t intrinsic to the JWT protocol. It applies to a specific implementation of JWT called jsonwebtoken from a group called Auth0.
The bug was patched three weeks ago. If you’ve updated your version of jsonwebtoken from 8.5.1 or earlier to version 9.0.0, which came out on 2022-12-21, you’re now protected from this particular vulnerability.
Cybercriminals can’t directly exploit the bug simply by logging in and making API calls. As far as we can see, although an attacker could subsequently trigger the vulnerability by making remote API requests, the bug needs to be “primed” first by deliberately writing a booby-trapped secret key into your authentication server’s key-store.
According to the researchers, the bug existed in the part of Auth0’s code that validated incoming JWTs against the secret key stored centrally for that user.
As mentioned above, the JWT itself consists of two fields of data denoting the algorithm used and the access you’ve been granted, and a third field consisting of the first two fields hashed using a secret key known only to the service you’re calling.
To validate the token, the server needs to recalculate the keyed hash of those first two JWT fields, and to confirm that the hash you presented matches the hash it just calculated.
Given that you don’t know the secret key, but you can present a hash that was computed recently using that key…
…the server can infer that you must have acquired the hash from the authentication server in the first place, by proving your identity up front in some suitable way.
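Continuing the hypothetical sketch from earlier (reusing its token and secret), the server-side check boils down to something like this:

    // Recompute the keyed hash of the first two fields...
    const [h, p, presented] = token.split('.');
    const expected = crypto.createHmac('sha256', secret)
                           .update(h + '.' + p)
                           .digest('base64url');

    // ...and compare it with the hash presented in the token.
    // (timingSafeEqual avoids leaking clues via how long the comparison takes.)
    const valid = expected.length === presented.length &&
                  crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(presented));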
Data type confusion
It turns out that the hash validation code in jsonwebtoken assumes (or, until recently, assumed) that the secret key for your account in the server’s own authentication key-store really was a cryptographic secret key, encoded in a standard text-based format such as PEM (short for privacy-enhanced mail, but mainly used for non-email purposes these days).
If you could somehow corrupt a user’s secret key by replacing it with data that wasn’t in PEM format, but that was, in fact, some other more complex sort of JavaScript data object…
…then you could booby-trap the secret-key-based hash validation calculation by tricking the authentication server into running some JavaScript code of your choice from that infiltrated “fake key”.
Simply put, the server would try to decode a secret key that it assumed was in a format it could handle safely, even if the key wasn’t in a safe format and the server couldn’t deal with it securely.
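To be clear, what follows is a loose, hypothetical illustration of this general class of data-type confusion, not the actual jsonwebtoken code, but it conveys the flavour of the problem:

    // HYPOTHETICAL sketch of data-type confusion; not the real jsonwebtoken code.
    // The verifier assumes the stored secret key is a harmless PEM text string...
    function validateToken(token, storedKey) {
        const pem = storedKey.toString();   // fine if storedKey really is a string
        // ...recompute and check the keyed hash using the PEM key here...
    }

    // ...but if an attacker has already written a crafted object into the
    // key-store, toString can be code of the attacker's choosing, which then
    // runs inside the server the moment any token for that user is validated:
    const boobyTrappedKey = {
        toString() { /* attacker-controlled code would run here */ return ''; }
    };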
Note, however, that you’d pretty much need to hack into the secret key-store database first, before any sort of truly remote code execution trigger would be possible.
And if attackers are already able to wander around your network to the point that they can not only poke their noses into but also modify your JWT secret-key database, you’ve probably got bigger problems than CVE-2022-23539, as this bug has been designated.
What to do?
If you’re using an affected version of jsonwebtoken, update to version 9.0.0 to leave this bug behind.
However, if you’ve now patched but you think crooks might realistically have been able to pull off this sort of JWT attack on your network, patching alone isn’t enough.
In other words, if you think you might have been at risk here, don’t just patch and move on.
Use threat detection and response techniques to look for holes by which cybercriminals could get far enough to attack your network more generally…
…and make sure you don’t have crooks in your network anyway, even after applying the patch.
If you’re a programmer, whether you code for a hobby or professionally, you’ll know that creating a new version of your project – an official “release” version that you yourself, or your friends, or your customers, will actually install and use – is always a bit of a white-knuckle ride.
After all, a release version depends on all your code, relies on all your default settings, goes out only with your published documentation (but no insider knowledge), and needs to work even on computers you’ve never seen before, set up in configurations you’ve never imagined, alongside other software you’ve never tested for compatibility.
Simply put, the more complex a project becomes, and the more developers you have working on it, and the more separate components that have to work smoothly with all the others…
…the more likely it is for the whole thing to be much less impressive than the sum of the parts.
As a crude analogy, consider that the track team with the fastest individual 100m sprinters doesn’t always win the 4x100m relay.
CI to the rescue
One attempt to avoid this sort of “but it worked fine on my computer” crisis is a technique known in the jargon as Continuous Integration, or CI for short.
The idea is simple: every time anyone makes a change in their part of the project, grab that person’s new code, and whisk the whole project, new code included, through a full build-and-test cycle, just like you would before creating a final release version.
Clearly, this is a luxury that projects in the physical world simply don’t have: if you’re constructing, say, a Sydney Harbour Bridge, you can’t rebuild an entire test span, with all-new raw materials, every time you decide to tweak the riveting process or to see if you can fit bigger flagpoles at the summit.
Even when you “build” a computer software project from one bunch of source files into a collection of output files, you consume precious resources, such as electricity, and you need a sudden surge in computing power to run alongside all the computers that the developers themselves are using.
After all, in software engineering processes that use CI, the idea is not to wait until everyone is ready, and then for everyone to step back from programming and to wait for a final build to be completed.
Builds happen all day, every day, so that coders can tell long in advance if they’ve inadvertently made “improvements” that negatively affect everyone else – breaking the build, as the jargon might say.
The idea is: fail early, fix quickly, increase quality, make predictable progress, and ship on time.
Sure, even after a successful test build, your new code may still have bugs in it, but at least you won’t get to the end of a development cycle and then find that everyone has to go back to the drawing board just to get the software to build and work at all, because the various components have drifted out of alignment.
Early software development methods were often referred to as following a waterfall model, where everyone worked harmoniously but independently as the project drifted gently downriver between version deadlines, until everything came together at the end of the cycle to create a new release, ready to plunge over the tumultuous waterfall of a version upgrade, only to emerge into another gentle period of clear water downstream for further design and development.

One problem with those “waterfalls”, however, was that you often ended up trapped in an apparently endless circular eddy right at the very edge of the waterfall, gravity notwithstanding, unable to get over the lip of the precipice at all until lengthy hacks and modifications (and concomitant overruns) made the onward journey possible.
Just the job for the cloud
As you can imagine, adopting CI means having a bunch of powerful, ready-to-go servers at your disposal whenever any of your developers triggers a build-and-test procedure, in order to avoid drifting back into that “getting stuck at the very lip of the waterfall” situation.
That sounds like a job for the cloud!
And, indeed, it is, with numerous so-called CI/CD cloud services (this CD is not a playable music disc, but shorthand for continuous delivery) offering you the flexibility to have an ever-varying number of different branches of different products going through differently configured builds, perhaps even on different hardware, at the same time.
CircleCI is one such cloud-based service…
…but, unfortunately for their customers, they’ve just suffered a breach.
Technically, and as seems to be common these days, the company hasn’t actually used the words “breach”, “intrusion” or “attack” anywhere in its official notification: so far, it’s just a security incident.
The original notice [2023-01-04] stated simply that:
We wanted to make you aware that we are currently investigating a security incident, and that our investigation is ongoing. We will provide you updates about this incident, and our response, as they become available. At this point, we are confident that there are no unauthorized actors active in our systems; however, out of an abundance of caution, we want to ensure that all customers take certain preventative measures to protect your data as well.
What to do?
Since then, CircleCI has provided regular updates and further advice, which mostly boils down to this: “Please rotate any and all secrets stored in CircleCI.”
As we’ve explained before, the jargon word rotate is badly chosen here, because it’s the legacy of a dangerous past where people literally did “rotate” passwords and secrets through a small number of predictable choices, not only because keeping track of new ones was harder back then, but also because cybersecurity wasn’t as important as it is today.
What CircleCI means is that you need to CHANGE all your passwords, secrets, access tokens, environment variables, public-private keypairs, and so on, presumably because the attackers who breached the network either did steal yours, or can’t be proved not to have stolen them.
The company has provided a list of the various sorts of private security data that was affected by the breach, and has created a handy script called CircleCI-Env-Inspector that you can use to export a JSON-formatted list of all the CI secrets that you need to change in your environment.
Additionally, cybercriminals may now have access tokens and cryptographic keys that could give them a way back into your own network, especially because CI build processes sometimes need to “call home” to request code or data that you can’t or don’t want to upload into the cloud (scripts that do this are known in the jargon as runners).
So, CircleCI advises:
We also recommend customers review internal logs for their systems for any unauthorized access starting from 2022-12-21 [up to and including 2023-01-04], or upon completion of [changing your secrets].
Intriguingly, if understandably, some customers have noted that the date implied by CircleCI on which this breach began [2022-12-21] just happens to coincide with a blog post the company published about recent reliability updates.
Customers wanted to know, “Was the breach related to bugs introduced in this update?”
Given that the company’s reliability update articles seem to be rolling news summaries, rather than announcements of individual changes made on specific dates, the obvious answer is, “No”…
…and CircleCI has stated that the coincidental date of 2022-12-21 for the reliability blog post was just that: a coincidence.
There’s been a bit of a kerfuffle in the technology media over the past few days about whether the venerable public-key cryptosystem known as RSA might soon be crackable.
RSA, as you probably know, is short for Rivest-Shamir-Adleman, the three cryptographers who devised what turned into an astonishingly useful and long-lived encryption system by means of which two people can communicate securely…
…without meeting up first to agree on a secret encryption key.
Very simply put, RSA has not one key, like a traditional door lock, but two different keys, one for locking the door and the other for unlocking it.
You can fairly quickly generate a pair of one-to-lock and the-other-to-unlock keys, but given only one of them, you can’t figure out what the other one looks like.
So, you designate one of them as your “public key”, which you share with the world, and you keep the other as your “private key”.
This means that anyone who wants to send you a private message can lock it up with your public key, but (assuming that you really do treat your private key as private), only you can unlock it.
Working the other way around, someone who wants you to prove your identity can send you a message, and ask you to lock it up with your private key and send it back.
If your public key correctly unlocks it, then they have some reason to think you’re who you say.
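If you want to see that two-key dance in action, here's a minimal sketch using Node.js's built-in crypto module (the messages are made up, and real systems wrap all of this in a proper protocol):

    const crypto = require('crypto');

    // Generate a one-to-lock, the-other-to-unlock keypair.
    const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
        modulusLength: 2048,    // the size of the "product" discussed below
    });

    // Anyone can lock up a message with your public key...
    const locked   = crypto.publicEncrypt(publicKey, Buffer.from('for your eyes only'));
    // ...but only you, holding the private key, can unlock it.
    const unlocked = crypto.privateDecrypt(privateKey, locked);
    console.log(unlocked.toString());   // 'for your eyes only'

    // Working the other way around: sign with your private key...
    const msg = Buffer.from('prove it is really you');
    const sig = crypto.sign('sha256', msg, privateKey);
    // ...and anyone can verify the signature with your public key.
    console.log(crypto.verify('sha256', msg, sig, publicKey));   // true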
We’re ignoring here the issues of how you ensure that a public key really belongs to the person you think, what you do if you realise your private key has been stolen, and numerous other operational complexities. The big deal is that RSA introduced a two-key system where one key can’t be worked out from the other, in contrast to the traditional one-key system, with the same key to lock and unlock your secrets, that had been in use for centuries.
Public-key crypto
You’ll see this sort of process variously referred to as public-key cryptography, public-private encryption, or asymmetric encryption (symmetric encryption, such as AES, is where the same key is used for locking and unlocking your data).
In fact, if you really know your cryptographic history, you might even have heard it called by the curious name of non-secret encryption (NSE), because cryptographers in the UK had come up with a similar idea some years earlier than R, S and A, but in what turned out to be a massively missed opportunity, the British government decided to suppress the discovery, and not to develop or even publish the process.
Even though there are alternatives to RSA these days which let you have smaller public and private keys, and which are based on algorithms that run faster, RSA is still widely used, and there’s still a lot of potentially crackable data sitting around in archives, logfiles and network captures that was protected by RSA when it was transmitted.
In other words, if RSA turns out to be easily crackable (for some senses of easily, at least), for example because a Big Fast Quantum Computer comes along, we would have reasonable cause for concern.
The big deal about factoring integers (where you figure out, for example, that 15 = 3×5, or that 15538213 × 16860433 = 261980999226229) is that doing just that lies at the heart of cracking RSA, which is based on calculations involving two huge, random prime numbers.
In RSA, everyone knows the number you get when you multiply those numbers together (called the product), but only the person who originally came up with the starting numbers knows how the product was created – the factors together essentially form their private key.
So, if you could split the product back into its unique pair of prime factors (as they are known), you’d be able to crack that person’s encryption.
The thing is that if your initial prime numbers are big enough (these days, 1024 bits each, or more, for a product of 2048 bits, or more), you just won’t have enough computing power to prise the product apart.
Unless you can make, buy or rent a powerful enough quantum computer, that is.
Big prime products
Apparently, the biggest prime product yet factored by a quantum computer is just 249919 (491 × 509), which my eight-year-old laptop can handle conventionally, including the time taken to load the program and print the answer, in a time so short that the answer is variously reported as being 0 milliseconds or 1 millisecond.
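For comparison, here's the sort of naive trial-division loop that makes short work of tiny products like these, but that would take longer than the lifetime of the universe against a 2048-bit RSA modulus:

    // Naive trial division: find the smallest factor of n (a sketch, not a serious tool).
    function factor(n) {
        for (let p = 2; p * p <= n; p++) {
            if (n % p === 0) {
                return [p, n / p];     // a factor and its cofactor
            }
        }
        return [n];                    // no factor found: n is prime
    }

    console.log(factor(249919));           // [ 491, 509 ]
    console.log(factor(261980999226229));  // [ 15538213, 16860433 ]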
And, as the Chinese researchers report, the standard ways of approaching RSA cracking with a quantum computer would require millions of so-called qubits (short for quantum bits), where the biggest such computer known today has just over 400 qubits.
As you can see, if RSA-2048 needs millions of qubits to break, you need loads more qubits than there are bits in the number you want to factor.
But the researchers suggest that they may have found a way of optimising the cracking process so it requires not just fewer than a million qubits, but even fewer qubits than the number of bits in the number you’re trying to crack:
We estimate that a quantum circuit with 372 physical qubits and a depth of thousands is necessary to challenge RSA-2048 using our algorithm. Our study shows great promise in expediting the application of current noisy quantum computers, and paves the way to factor large integers of realistic cryptographic significance.
The burning question is…
Are they right?
If we already have computers with 100s of qubits, is the end of RSA-2048 indeed just round the corner?
We just don’t have the mathematical expertise to tell you – their 32-page paper isn’t for the faint-hearted or even for the mathematical generalist – but the consensus, for now at least, seems to be…
No.
Nevertheless, this is a great time to be thinking about how ready you are for any encryption or hashing algorithm suddenly to be found wanting, whether for quantum reasons or not.
DOUG. Named the HP-35 simply because it had 35 buttons, the calculator was a challenge by HP’s Bill Hewlett to shrink down the company’s desktop-size 9100A scientific calculator so it could fit in his shirt pocket.
The HP-35 stood out for being able to perform trigonometric and exponential functions on the go, things that until then had required the use of slide rules.
At launch, it sold for $395, almost $2500 in today’s money.
And Paul, I know you to be a fan of old HP calculators…
DUCK. Not *old* HP calculators, just “HP calculators”.
DOUG. Just in general? [LAUGHS]
Yes, OK…
DUCK. Apparently, at the launch, Bill Hewlett himself was showing it off.
And remember, this is a calculator that is replacing a desktop calculator/computer that weighed 20kg…
…apparently, he dropped it.
If you’ve ever seen an old HP calculator, they were beautifully built – so he picked it up, and, of course, it worked.
And apparently all the salespeople at HP built that into their repartee. [LAUGHS]
When they went out on the road to do demos, they’d accidentally (or otherwise) let their calculator fall, and then just pick it up and carry on regardless.
DOUG. Love it! [LAUGHS]
DUCK. They don’t make ’em like they used to, Doug.
DOUG. They certainly don’t.
Those were the days – incredible.
OK, let’s talk about something that’s not so cool.
DUCK. Uh-oh!
DOUG. LastPass: we said we’d keep an eye on it, and we *did* keep an eye on it, and it got worse!
DUCK. It turns out to be a long-running story, where LastPass-the-company apparently simply did not realise what had happened.
And every time they scratched that rust spot on their car a little bit, the hole got bigger, until eventually the whole thing fell in.
So how did it start?
They said, “Look, the crooks got in, but they were only in for four days, and they were only in the development network. So it’s our intellectual property. Oh, dear. Silly us. But don’t worry, we don’t think they got into the customer data.”
Then they came back and said, “They *definitely* didn’t get into the customer data or the password vaults, because those aren’t accessible from the development network.”
Then they said, “W-e-e-e-e-e-l, actually, it turns out that they *were* able to do what’s known in the jargon as ‘lateral movement’. Based on what they stole in incident one, there was incident two, where actually they did get into customer information.”
So, we all thought, “Oh, dear, that’s bad, but at least they haven’t got the password vaults!”
And then they said, “Oh, by the way, when we said ‘customer information’, let us tell you what we mean. We mean a whole lot of stuff about you, like: who you are; where you live; what your phone and email contact details are; stuff like that. *And* [PAUSE] your password vault.”
DOUG. [GASP] OK?!
DUCK. And *then* they said, “Oh, when we said ‘vault’,” where you probably imagined a great big door being shut, and a big wheel being turned, and huge bolts coming through, and everything inside locked up…
“Well, in our vault, only *some* of the stuff was actually secured, and the other stuff was effectively in plain text. But don’t worry, it was in a proprietary format.”
So, actually your passwords were encrypted, but the websites and the web services and an unstated list of other stuff that you stored, well, that wasn’t encrypted.
So it’s a special sort of “zero-knowledge”, which is a phrase they’d used a lot.
[LONGISH SILENCE]
[COUGHS FOR ATTENTION] I left a dramatic pause there, Doug.
[LAUGHTER]
And *THEN* it turned out that…
…you know how they’ve been telling everybody, “Don’t worry, there’s 100,100 iterations of HMAC-SHA-256 in PBKDF2”?
Well, *maybe*.
DOUG. Not for everyone!
DUCK. If you had first installed the software after 2018, that might be the case.
DOUG. Well, I first installed the software in 2017, so I was not privy to this “state-of-the-art” encryption.
And I just checked.
I did change my master password, but it’s a setting – you’ve got to go into your Account Settings, and there’s an Advanced Settings button; you click that and then you get to choose the number of times your password is tumbled…
…and mine was still set at 5000.
Between that, and getting the email on the Friday before Christmas, which I read; then clicked through to the blog post; read the blog post…
…and my impression of my reaction is as follows:
[VERY LONG TIRED SIGH]
Just a long sigh.
DUCK. But probably louder than that in real life…
DOUG. It just keeps getting worse.
So: I’m out!
I think I’m done…
DUCK. Really?
OK.
DOUG. That’s enough.
I had already started transitioning to a different provider, but I don’t even want to say this was “the last straw”.
I mean, there were so many straws, and they just kept breaking. [LAUGHTER]
When you choose a password manager, you have to assume that this is some of the most advanced technology available, and it’s protected better than anything.
And it just doesn’t seem like this was the case.
DUCK. [IRONIC] But at least they didn’t get my credit card number!
Although I could have got a new credit card in three-and-a-quarter days, probably more quickly than changing all my passwords, including my master password and *every* account in there.
DOUG. Ab-so-lutely!
OK, so if we have people out there who are LastPass users, if they’re thinking of switching, or if they’re wondering what they can do to shore up their account, I can tell them firsthand…
Go into your account; go to the general settings and then click the Advanced Settings tab, and see what the iteration count is.
You choose it.
So mine was set… my account was so old that it was set at 5000.
I set it to something much higher.
They give you a recommended number; I would go even higher than that.
And then it re-encrypts your whole account.
But like we said, the cat’s out of the bag… if you don’t change all your passwords, and they manage to crack your [old] master password, they’ve got an offline copy of your account.
So just changing your master password and just re-encrypting everything doesn’t do the job completely.
DUCK. Exactly.
If you go in and your iteration count is still at 5000, that’s the number of times they hash-hash-hash-and-rehash your password before it’s used, in order to slow down password-guessing attacks.
That’s the number of iterations used *on the vault that the crooks now have*.
So even if you change it to 100,100…
…strange number: Naked Security recommends 200,000 [date: October 2022]; OWASP, I believe, recommends something like 310,000, so LastPass saying, “Oh, well, we do a really, really sort of gung-ho, above average 100,100”?
This was a fun little story that I wrote up between Christmas and New Year because I thought it was interesting, and apparently so did loads of readers because we’ve had active comments there… quantum computing is the cool thing, isn’t it?
It’s like nuclear fusion, or dark matter, or superstring theory, or gravitons, all that sort of stuff.
Everyone has a vague idea of what it’s about, but not many people really understand it.
So, the theory of quantum computing, very loosely speaking, is that it’s a way of constructing an analog computing device, if you like, that is able to do certain types of calculation in such a way that, essentially, all the answers appear immediately inside the device.
And the trick you have is that if you can collapse this – what is called, I believe, a “superposition”, based on quantum mechanics…
…if you can collapse this superposition such that the answer you actually want is the one that pops out, and all the others vanish in a puff of quantum smoke, then you can imagine what that might mean for cryptography.
Because you might be able to reduce the time taken to do cryptographic cracking dramatically.
And, in fact, there are two main sorts of algorithmic speedup that are possible, if powerful enough quantum computers come along.
One of them deals with cracking things like symmetric-key encryption, like AES, or colliding hashes, like SHA-256, where, if you needed an effort in the amount of X before quantum computing, you might be able to do that cracking with an effort of just the square root of X afterwards.
But even more importantly, for another class of cryptographic algorithm, notably some sorts of public-key cryptography, you could reduce the cracking effort required from X to the *logarithm* of X.
And to give you an idea of how dramatic those changes can be, talking in base 10, let’s say you have a problem that would take you 1,000,000 units of effort.
The square root of 1,000,000 is 1000 – sounds much more tractable, doesn’t it?
And the logarithm of 1,000,000 [in base 10] is just 6!
So, the concern about quantum computing and cryptography is not merely that today’s cryptographic algorithms might require replacing at some time in the future.
The problem is actually that the stuff we are encrypting today, hoping to keep it secure, say, for a couple of years, or even for a couple of decades, might, *during the lifetime of that data*, suddenly become crackable almost in an instant…
…especially to an attacker with plenty of money.
So, in other words, we have to make the change of algorithm *before* we think that these quantum computers might come along, rather than waiting until they appear for the first time.
You’ve got to be ahead in order to stay level, as it were.
We have to remain cryptographically agile so that we can adapt to these changes, and if necessary, so we can adapt proactively, well in advance.
And *that* is what I think they meant by cryptographic agility.
Cybersecurity is a journey, not a destination.
And part of that journey is anticipating where you’re going next, not waiting until you get there.
DOUG. What a segue to our next story!
When it comes to predicting what will happen in 2023, we should remember that history has a funny way of repeating itself…
And that is why I had a rather curious headline, where I was thinking, “Hey, wouldn’t it be cool if I could have a headline like ‘Naked Security 33 1/3’?”
I couldn’t quite remember why I thought that was funny… and then I remembered it was Frank Drebin… it was ‘Naked *Gun* 33 1/3’. [LAUGHS]
That wasn’t why I wrote it… the 33 1/3 was a little bit of a joke.
It should really have been “just over 34”, but it’s something we’ve spoken about on the podcast at least a couple of times before.
The Internet Worm, in 1988 [“just over 34” years ago], relied on three main what-you-might-call hacking, cracking and malware-spreading techniques.
Poor password choice.
Memory mismanagement (buffer overflows).
And not patching or securing your existing software properly.
The password guessing… it carried around its own dictionary of 400 or so words, and it didn’t have to guess *everybody’s* password, just *somebody’s* password on the system.
The buffer overflow, in this case, was on the stack – those are harder to exploit these days, but memory mismanagement still accounts for a huge number of the bugs that we see, including some zero-days.
And of course, not patching – in this case, it was people who’d installed mail servers that had been compiled for debugging.
When they realised they shouldn’t have done that, they never went back and changed it.
And so, if you’re looking for cybersecurity predictions for 2023, there will be lots of companies out there who will be selling you their fantastic new vision, their fantastic new threats…
…and sadly, all of the new stuff is something that you have to worry about as well.
But the old things haven’t gone away, and if they haven’t gone away in 33 1/3 years, then it is reasonable to expect, unless we get very vigorous about it, as Congress is suggesting we do with quantum computing, that in 16 2/3 years’ time, we’ll still have those very problems.
So, if you want some simple cybersecurity predictions for 2023, you can go back three decades…
DOUG. [LAUGHS] Yes!
DUCK. …and learn from what happened then.
Because, sadly, those who cannot remember history are condemned to repeat it.
DOUG. Exactly.
Let’s stay with the future here, and talk about machine learning.
But this isn’t really about machine learning, it’s just a good old supply chain attack involving a machine learning toolkit.
DUCK. Now, this was PyTorch – it’s very widely used – and this attack was on users of what’s called the “nightly build”.
In many software projects, you will get a “stable build”, which might get updated once a month, and then you’ll get “nightly builds”, which is the source code as the developers are working on it now.
So you probably don’t want to use it in production, but if you’re a developer, you might have the nightly build along with a stable build, so you can see what’s coming next.
So, what these crooks did is… they found a package that PyTorch depended upon (it’s called torchtriton), and they went to PyPI, the Python Package Index repository, and they created a package with that name.
Now, no such package existed, because it was normally just bundled along with PyTorch.
But thanks to what you could consider a security vulnerability, or certainly a security issue, in the whole dependency-satisfying setup for Python package management…
…when you did the update, the update process would go, “Oh, torchtriton – that’s built into PyTorch. Oh, no, hang on! There’s a version on PyPI, there’s a version on the public Package Index; I’d better get that one instead! That’s probably the real deal, because it’s probably more up to date.”
DOUG. Ohhhhhhhh….
DUCK. And it was more “up to date”.
It wasn’t *PyTorch* that ended up infected with malware, it was just that when you did the install process, a malware component was injected into your system that sat and ran there independently of any machine learning you might do.
It was a program with the name triton.
And basically what it did was: it read a whole load of your private data, like the hostname; the contents of various important system files, like /etc/passwd (which on Linux doesn’t actually contain password hashes, fortunately, but it does contain a complete list of users on the system); and your .gitconfig, which, if you’re a developer, probably says a whole lot of stuff about projects that you’re working on.
And most naughtily-and-nastily of all: the contents of your .ssh directory, where, usually, your private keys are stored.
It packaged up all that data and it sent it out, Doug, as a series of DNS requests.
DUCK. They were going, “I’m not going to bother using LDAP and JNDI, and all those .class files, and all that complexity. That’ll get noticed. I’m not going to try and do any remote code execution… I’m just going to do an innocent-looking DNS lookup, which most servers will allow. I’m not downloading files or installing anything. I’m just converting a name into an IP number. How harmful could that be?”
Well, the answer is that if I’m the crook, and I am running a domain, then I get to choose which DNS server tells you about that domain.
So if I look up, against my domain, a “server” (I’m using air-quotes) called SOMEGREATBIGSECRETWORD dot MYDOMAIN dot EXAMPLE, then that text string about the SECRETWORD gets sent in the request.
So it is a really, really, annoyingly effective way of stealing (or to use the militaristic jargon that cybersecurity likes, exfiltrating) private data from your network, in a way that many networks don’t filter.
And much worse, Doug: that data was encrypted (using 256-bit AES, no less), so the string-that-actually-wasn’t-a-server-name, but was actually secret data, like your private key…
…that was encrypted, so that if you were just looking through your logs, you wouldn’t see obvious things like, “Hey, what are all those usernames doing in my logs? That’s weird!”
You’d just see crazy, weird text strings that looked like nothing much at all.
So you can’t go searching for strings that might have escaped.
However: [PAUSE] hard-coded key and initialisation vector, Doug!
Therefore, anybody on your network path who logged it could, if they had evil intentions, go and decrypt that data later.
There was nothing involving a secret known only to the crooks.
The password you use to decrypt the stolen data, wherever it lives in the world, is buried in the malware – it’s five minutes’ work to go and recover it.
The crooks who did this are now saying, [MOCK HUMILITY] “Oh, no, it was only research. Honest!”
Yeah, right.
You wanted to “prove” (even bigger air-quotes than before) that supply chain attacks are an issue.
So you “proved” (even bigger air-quotes than the ones I just used) that by stealing people’s private keys.
And you chose to do it in a way that anybody else who got hold of that data, by fair means or foul, now or later, doesn’t even have to crack the master password like they do with LastPass.
DOUG. Wow.
DUCK. Apparently, these crooks, they’ve even said, “Oh, don’t worry, like, honestly, we deleted all the data.”
Well…
A) I don’t believe you. Why should I?
DOUG. [LAUGHS]
DUCK. And B) [CROSS] TOO. LATE. BUDDY.
DOUG. So where do things stand now?
Everything’s back to normal?
What do you do?
DUCK. Well, the good news is that if none of your developers installed this nightly build, basically between Christmas and New Year 2022 (the exact times are in the article), then you should be fine.
Because that was the only period that this malicious torchtriton package was on the PyPI repository.
The other thing is that, as far as we can tell, only a Linux binary was provided.
So, if you’re working on Windows, then I’m assuming, if you don’t have the Windows Subsystem for Linux (WSL) installed, then this thing would just be so much harmless binary garbage to you.
Because it’s an ELF binary, not a PE binary, to use the technical terms, so it wouldn’t run.
And there are also a bunch of things that, if you’re worried, you can go and check for in the logs.
If you’ve got DNS logs, then the crooks used a specific domain name.
The reason that the thing suddenly became a non-issue (I think it was on 30 December 2022) is that PyTorch did the right thing…
…I imagine in conjunction with the Python Package Index, they kicked out the rogue package and replaced it essentially with a “dud” torchtriton package that doesn’t do anything.
It just exists to say, “This is not the real torchtriton package”, and it tells you where to get the real one, which is from PyTorch itself.
And this means that if you do download this thing, you don’t get anything, let alone malware.
We’ve got some Indicators of Compromise [IoCs] in the Naked Security article.
We have an analysis of the cryptographic part of the malware, so you can understand what might have got stolen.
And sadly, Doug, if you are in doubt, or if you think you might have got hit, then it would be a good idea, as painful as it’s going to be… you know what I’m going to say.
It’s exactly what you had to do with all your LastPass stuff.
Go and regenerate new private keys, or key pairs, for your SSH logins.
Because the problem is that what lots of developers do… instead of using password-based login, they use public/private key-pair login.
You generate a key pair, you put the public key on the server you want to connect to, and you keep the private key yourself.
And then, when you want to log in, instead of putting in a password that has to travel across the network (even though it might be encrypted along the way), you decrypt your private key locally in memory, and you use it to sign a message that proves to the server that you’ve got the matching private key… and it lets you in.
The problem is that, if you’re a developer, a lot of the time you want your programs and your scripts to be able to do that private-key based login, so a lot of developers will have private keys that are stored unencrypted.
DOUG. OK.
Well, I hesitate to say this, but we will keep an eye on this!
And we do have an interesting comment from an anonymous reader on this story who asks in part:
“Would it be possible to poison the crooks’ data cache with useless data, SSH keys, and executables that expose or infect them if they’re dumb enough to run them? Basically, to bury the real exfiltrated data behind a ton of crap they have to filter through?”
DUCK. Honeypots, or fake databases, *are* a real thing.
They’re a very useful tool, both in cybersecurity research… letting the crooks think they’re into a real site, so they don’t just go, “Oh, that’s a cybersecurity company; I’m giving up”, and don’t actually try the tricks that you want them to reveal to you.
And also useful for law enforcement, obviously.
The issue is, if you wish to do it yourself, just make sure that you don’t go beyond what is legally OK for you.
Law enforcement might be able to get a warrant to hack back…
…but where the commenter said, “Hey, why don’t I just try and infect them in return?”
The problem is, if you do that… well, you might get a lot of sympathy, but in most countries, you would nevertheless almost certainly be breaking the law.
So, make sure that your response is proportionate, useful and most importantly, legal.
Because there’s no point in just trying to mess with the crooks and ending up in hot water yourself.
That would be an irony that you could well do without!
DOUG. Alright, very good.
Thank you very much for sending that in, dear Anonymous Reader.
If you have an interesting story, comment, or question you’d like to submit, we’d love to read it on the podcast.
You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.
That’s our show for today.
Thanks very much for listening.
For Paul Ducklin, I’m Doug Aamoth reminding you, until next time, to…