
OpenSSL fixes High Severity data-stealing bug – patch now!

OpenSSL, probably the best-known if not the most widely-used encryption library in the world, has just released a trifecta of security updates.

These patches cover the two current open-source versions that the organisation supports for everyone, plus the “old” 1.0.2-version series, where updates are only available to customers who pay for premium support.

(Getting into a position where you no longer need to pay for support is probably better for you, even if you don’t care about the cost, because it means you’ll finally be weaning yourself off a version that OpenSSL itself tried to retire years ago.)

The versions you want to see after you’ve updated are:

  • OpenSSL 3.0 series: new version will be 3.0.8.
  • OpenSSL 1.1.1 series: new version will be 1.1.1t (that’s T-for-Tango at the end).
  • OpenSSL 1.0.2 series: new version will be 1.0.2zg (Zulu-Golf).

If you’re wondering why the older versions have three numbers plus a letter at the end, it’s because the OpenSSL project used to have four-part version identifiers, with the trailing letter acting as a counter that could support 26 sub-versions.

As you can see from what’s happened to version 1.0.2, 26 sub-versions turned out not to be enough, leaving a quandary of what to do after version Z-for-Zulu: go back to Alpha-Alpha, which confusingly breaks alphabetic ordering, or just stick with Z-for-Zulu and start a sub-sub-version cycle of A-to-Z.

Also, as you may remember, the mishmash of digits and lower-case letters was especially confusing when version 1.1.1l (L-for-Lima) appeared.

Naked Security happily uses a typeface based on the Bauhaus-era road sign lettering still used in many countries, where lower-case L characters are different from upper-case Is and the digit 1, entirely on purpose, but many typefaces render lower-L and upper-I identically.

When version 3 appeared, the OpenSSL team decided to adopt the popular-at-the-moment X.Y.Z three-number versioning system, so the current version series is 3.0 and the sub-version is now 8. (The next version, under development at the moment, will be 3.1.)

In case you’re wondering, there was no regular OpenSSL 2.x series, because that version number had already been used for something else, in the same sort of way that IPv4 was followed by IPv6, because v5 had appeared in another context for a short while, and might have caused confusion.

What went wrong?

There are eight CVE-numbered bug fixes in all, and you probably won’t be surprised to hear that seven of these were caused by memory mismanagement.

Like OpenSSH, which we wrote about at the end of last week, OpenSSL is written in C, and taking care of memory allocation and deallocation in C programs typically involves a lot of “do it yourself”.

Unfortunately, even experienced programmers can forget to match up their malloc() calls and their free() calls correctly, or can lose track of which memory buffers belong to what parts of their program.

The seven memory-related bugs are:

  • CVE-2023-0286: X.400 address type confusion in X.509 GeneralName. High severity; bug affects all versions (3.0, 1.1.1 and 1.0.2).
  • CVE-2023-0215: Use-after-free following BIO_new_NDEF. Moderate severity; bug affects all versions (3.0, 1.1.1, 1.0.2).
  • CVE-2022-4450: Double free after calling PEM_read_bio_ex. Moderate severity; bug affects versions 3.0 and 1.1.1 only.
  • CVE-2022-4203: X.509 Name Constraints read buffer overflow. Moderate severity; bug affects version 3.0 only.
  • CVE-2023-0216: Invalid pointer dereference in d2i_PKCS7 functions. Moderate severity; bug affects version 3.0 only.
  • CVE-2023-0217: NULL dereference validating DSA public key. Moderate severity; bug affects version 3.0 only.
  • CVE-2023-0401: NULL dereference during PKCS7 data verification. Moderate severity; bug affects version 3.0 only.

Memory bugs explained

To explain.

A NULL dereference happens when you try to treat the number 0 as a memory address.

This often indicates an incorrectly initialised storage variable, because zero is never considered a valid place to store data.

Indeed, every modern operating system deliberately labels the first few thousand or more bytes of memory as unusable, so that trying to read or write the so-called “zero page” causes a hardware-level error, allowing the operating system to shut the offending program down.

There’s no sensible way to recover from this sort of mistake, because it’s impossible to guess what was really intended.

As a result, programs with remotely triggerable bugs of this type are prone to denial-of-service (DoS) attacks, where a cybercriminal deliberately provokes the vulnerability to force the program to crash, possibly over and over again.
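
In C, a NULL dereference can be as simple as forgetting to check a return value, as in this minimal sketch (the function and names below are invented for illustration, not taken from OpenSSL):

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical helper that reports failure by returning NULL. */
  char *load_setting(const char *key)
  {
      (void)key;
      return NULL;   /* pretend the lookup failed */
  }

  int main(void)
  {
      char *value = load_setting("listen_port");
      /* Missing NULL check: strlen() now reads from address 0, and the
         operating system kills the program with a hardware-level fault. */
      printf("setting is %zu bytes long\n", strlen(value));
      return 0;
  }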

An invalid pointer dereference is similar, but means you try to use a number that doesn’t represent a memory address as if it did.

Because the bogus memory address doesn’t actually exist, this sort of bug generally doesn’t corrupt anything – it’s like trying to defraud someone by mailing out a fake summons or a false invoice to a property that isn’t there.

But, like a NULL dereference, the side-effect (crashing the program) could be turned into a DoS attack.

Read buffer overflows mean what they say, namely accessing data past where you’re supposed to, so they generally can’t be directly exploited to corrupt or to take over a running program.

But they’re always worrying in cryptographic applications, because the superfluous data an attacker gets to peek at might include decrypted information that they’re not supposed to see, or cryptographic material such as passwords or private keys.

One of the most famous read overflows in history was the OpenSSL bug known as Heartbleed, where a client could ask a server to “bounce back” a short message to prove it was still alive – a heartbeat, as it was known – but could trick the receiver into sending back up to 64Kbytes more data than the incoming message originally contained. By “bleeding” data from the server over and over again, an attacker could gradually piece together all sorts of data fragments that should never have been revealed, sometimes even including cryptographic keys.
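
Here’s a much-simplified, hedged sketch of that sort of blunder in C (this isn’t OpenSSL’s real heartbeat code, and all the names are made up): the receiver trusts a length field supplied by the sender instead of the amount of data that actually arrived:

  #include <stdlib.h>
  #include <string.h>

  /* Hypothetical "bounce back" handler: the sender supplies both a payload
     and a claimed payload length. */
  char *bounce_back(const char *payload, size_t received, size_t claimed)
  {
      char *reply = malloc(claimed);
      if (reply == NULL) {
          return NULL;
      }
      /* BUG: we trust 'claimed' instead of 'received', so a dishonest sender
         (claimed > received) gets back whatever happens to sit in memory just
         past the end of the data they actually sent us. */
      memcpy(reply, payload, claimed);
      return reply;
  }

  int main(void)
  {
      const char msg[] = "PING";
      /* Honest caller: claimed == received.  Attacker: claim 64KB for 4 bytes. */
      char *leaked = bounce_back(msg, sizeof(msg), 65536);
      free(leaked);
      return 0;
  }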

A use-after-free means that you hand back memory to the system, which may well hand it out to another part of your program, but then continue to rely on what’s in that memory block even though it might have changed under your feet without you knowing.

In theory, this could allow an attacker to trigger apparently innocent-looking behaviour in another part of the program with the deliberate aim of provoking a memory change that misdirects or takes control of your code, given that you’re still trusting memory that you no longer control.
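
In C terms, that can look as innocent as this hedged sketch (invented names, nothing to do with OpenSSL’s own code):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
      char *session_key = malloc(32);
      if (session_key == NULL) {
          return 1;
      }
      memset(session_key, 'K', 32);

      free(session_key);           /* memory handed back to the allocator... */

      char *other = malloc(32);    /* ...which may hand out the very same block */
      if (other != NULL) {
          memset(other, 'X', 32);  /* an innocent-looking write somewhere else */
      }

      /* BUG: still trusting the freed block, which may now be full of 'X's
         (or anything else) instead of the key we think is there. */
      printf("first byte of key: %c\n", session_key[0]);

      free(other);
      return 0;
  }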

A double free is similar, though this means that you return to the system a block of memory that you already gave back earlier, and that might therefore already have been allocated elsewhere in the program.

As with a use-after-free, this can result in two parts of the program trusting the same block of memory, with each part being unaware that the data it expects to be present (and that it may already have validated and therefore be willing to rely upon immediately) might have been malevolently switched out by the other part.

Finally, the type confusion bug is the most serious one here.

Type confusion, simply put, means that you supply a parameter to the program under the guise of it containing one type of data, but later trick the program into accepting it as a different sort of parameter.

As a very simple example, imagine that you could tell a “smart” household oven that the time should be set to, say, 13:37 by sending it the integer value 1337.

The receiving code would probably carefully test that the number was between 0 and 2359 inclusive, and that the remainder when divided by 100 was in the range 0 to 59 inclusive, to prevent the clock being set to an invalid time.

But now imagine that you could subsequently persuade the oven to use the time as the temperature instead.

You’d have sneakily bypassed the check that would have happened if you’d admitted up front that you were supplying a temperature (1337 is far too hot for a cooking oven on any of the common scales currently in use, whether K, °C or °F).
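
In code, the oven example might go something like this hedged sketch (invented names and types; real type confusion bugs usually involve pointers and structures rather than plain integers, but the principle is the same):

  #include <stdio.h>

  /* A tagged value: the tag says how the number should be interpreted. */
  enum kind { TIME_OF_DAY, TEMPERATURE };

  struct setting {
      enum kind tag;
      int       value;
  };

  int set_clock(struct setting s)
  {
      /* Careful validation... but only for values tagged as times. */
      if (s.tag != TIME_OF_DAY) return -1;
      if (s.value < 0 || s.value > 2359 || s.value % 100 > 59) return -1;
      printf("clock set to %04d\n", s.value);
      return 0;
  }

  int set_oven(struct setting s)
  {
      /* BUG: no re-validation - we trust whatever checks happened earlier. */
      printf("heating to %d degrees\n", s.value);
      return 0;
  }

  int main(void)
  {
      struct setting s = { TIME_OF_DAY, 1337 };
      set_clock(s);        /* passes the time-of-day checks */
      s.tag = TEMPERATURE;
      set_oven(s);         /* 1337 now treated as a temperature - never checked */
      return 0;
  }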

Misuse of memory comparisons

In C programs, type confusion is often particularly dangerous because you may be able to swap plain old numbers with memory pointers, thus sneakily either discovering memory addresses that were supposed to be secret or, much worse, reading from or writing to memory blocks that are supposed to be off-limits.

As the OpenSSL team admits, in respect of the High severity type confusion bug above, “When certificate revocation list checking is enabled, this vulnerability may allow an attacker to pass arbitrary pointers to a memcmp() [memory comparison] call, enabling them to read memory contents”.

If you can misdirect one of the two memory blocks compared in a memcmp(), then by comparing a secret memory buffer repeatedly against a memory block of your choice, you can gradually figure out what’s in the secret buffer. For example, “Does this string start with A?” If not, how about B? Yes? What’s next? How about BA? BB? And so on.
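
Here’s a hedged sketch of that guessing game in C (everything below is invented for illustration, and it’s not how the OpenSSL bug itself is triggered): if an attacker can choose one side of a memcmp() against a secret buffer and learn whether it matched, they can recover the secret one byte at a time:

  #include <stdio.h>
  #include <string.h>

  /* Pretend this is the secret buffer the attacker shouldn't be able to read. */
  static const char secret[8] = "S3kr1t!";

  /* Pretend the attacker can trigger comparisons against memory of their
     choosing and observe whether the result was "match" or "no match". */
  static int oracle(const char *guess, size_t len)
  {
      return memcmp(secret, guess, len) == 0;
  }

  int main(void)
  {
      char recovered[8] = { 0 };

      /* Recover the secret byte by byte: try every value for the next
         position until the prefix comparison reports a match. */
      for (size_t pos = 0; pos < sizeof(secret); pos++) {
          for (int c = 0; c < 256; c++) {
              recovered[pos] = (char)c;
              if (oracle(recovered, pos + 1)) {
                  break;
              }
          }
      }
      printf("recovered: %s\n", recovered);
      return 0;
  }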

Timing bug rounds out the eight

The eighth bug is:

  • CVE-2022-4304: Timing Oracle in RSA Decryption. Moderate severity; bug affects all versions (3.0, 1.1.1 and 1.0.2).

Cryptographic code needs to be especially sensitive to how long its various calculations take, so that an attacker can’t guess which text strings or numbers are involved by probing to see if the speed of response indicates that some sort of “easy” case applies.

As a simple example, imagine that you’d been asked to multiply a given number by 13 in your head.

It will almost certainly take you a lot longer to do this than it would to multiply the number by 0 (instant answer: zero!) or 1 (instant answer: the same number, unchanged), and a fair bit longer than multiplying by 10 (stick a zero on the end and read out the new number).

In cryptography, you have to ensure that all related tasks, such as looking up data in memory, comparing text strings, performing arithmetic, and so on, take the same amount of time, even if that means slowing down the “easy” cases instead of trying to save time by doing everything as quickly as possible.
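
Here’s a hedged sketch of the difference in C (illustrative only; production cryptographic code should use a vetted constant-time primitive such as OpenSSL’s own CRYPTO_memcmp() rather than rolling its own):

  #include <stddef.h>

  /* Variable-time: returns as soon as a mismatch is found, so the running
     time leaks how many leading bytes matched. */
  int leaky_equal(const unsigned char *a, const unsigned char *b, size_t n)
  {
      for (size_t i = 0; i < n; i++) {
          if (a[i] != b[i]) {
              return 0;
          }
      }
      return 1;
  }

  /* Constant-time: always inspects every byte, accumulating the differences,
     so the running time doesn't depend on where (or whether) a mismatch occurs. */
  int careful_equal(const unsigned char *a, const unsigned char *b, size_t n)
  {
      unsigned char diff = 0;
      for (size_t i = 0; i < n; i++) {
          diff |= a[i] ^ b[i];
      }
      return diff == 0;
  }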

What to do?

Easy.

Patch today: you need any or all of 1.0.2zg (Zulu-Golf), 1.1.1t (T-for-Tango) and 3.0.8.

Don’t forget that, for many Linux distros, you will need to install an operating system update that applies to the shared libraries used by many different applications, yet you may also have applications that bring along their own versions of OpenSSL and need updating too.

Some apps may even include two different versions of OpenSSL, both of which will need patching.

Don’t delay, do it today!


VMWare user? Worried about “ESXi ransomware”? Check your patches now!

Cybersecurity news, in Europe at least, is currently dominated by stories about “VMWare ESXi ransomware” that is doing the rounds, literally and (in a cryptographic sense at least) figuratively.

CERT-FR, the French government’s computer emergency response team, kicked off what quickly turned into a mini-panic at the tail end of last week, with a bulletin entitled simply: Campagne d’exploitation d’une vulnérabilité affectant VMware ESXi (Cyberattack exploiting a VMWare ESXi vulnerability).

Although the headline focuses directly on the high-level danger, namely that any remotely exploitable vulnerability typically gives attackers a path into your network to do something, or perhaps even anything, that they like…

…the first line of the report gives the glum news that the something the crooks are doing in this case is what the French call rançongiciel.

You probably don’t need to know that logiciel is the French word for “software” to guess that the word stem ranço- came into both modern French (rançon) and English (ransom) from the Old French word ransoun, and thus that the word translates directly into English as ransomware.

Back in the Middle Ages, one occupational hazard for monarchs in time of war was getting captured by the enemy and held for a ransoun, typically under punitive terms that effectively settled the conflict in favour of the captors.

These days, of course, it’s your data that gets “captured” – though, perversely, the crooks don’t actually need to go to the trouble of carrying it off and holding it in a secure prison on their side of the border while they blackmail you.

They can simply encrypt it “at rest”, and offer to give you the decryption key in return for their punitive ransoun.

Ironically, you end up acting as your own jailer, with the crooks needing to hold onto just a few secret bytes (32 bytes, in this case) to keep your data locked up in your very own IT estate for as long as they like.

Good news and bad news

Here’s the good news: the current burst of attacks seems to be the work of a boutique gang of cybercriminals who are relying on two specific VMWare ESXi vulnerabilities that were documented by VMware and patched about two years ago.

In other words, most sysadmins would expect to have been ahead of these attackers since early 2021 at the latest, so this is very definitely not a zero-day situation.

Here’s the bad news: if you haven’t applied the needed patches in the extended time since they came out, you’re not only at risk of this specific ransomware attack, but also at risk of cybercrimes of almost any sort – data stealing, cryptomining, keylogging, database poisoning, point-of-sale malware and spam-sending spring immediately to mind.

Here’s some more bad news: the ransomware used in this attack, which you’ll see referred to variously as ESXi ransomware and ESXiArgs ransomware, seems to be a general-purpose pair of malware files, one being a shell script, and the other a Linux program (also known as a binary or executable file).

In other words, although you absolutely need to patch against these old-school VMWare bugs if you haven’t already, there’s nothing about this malware that inextricably locks it to attacking only via VMWare vulnerabilities, or to attacking only VMWare-related data files.

In fact, we’ll just refer to the ransomware by the name Args in this article, to avoid giving the impression that it is either specifically caused by, or can only be used against, VMWare ESXi systems and files.

How it works

According to CERT-FR, the two vulnerabilities that you need to look out for right away are:

  • CVE-2021-21974 from VMSA-2021-0002. ESXi OpenSLP heap-overflow vulnerability. A malicious actor residing within the same network segment as ESXi who has access to port 427 may be able to trigger [a] heap-overflow issue in [the] OpenSLP service resulting in remote code execution.
  • CVE-2020-3992 from VMSA-2020-0023. ESXi OpenSLP remote code execution vulnerability. A malicious actor residing in the management network who has access to port 427 on an ESXi machine may be able to trigger a use-after-free in the OpenSLP service resulting in remote code execution.

In both cases, VMWare’s official advice was to patch if possible, or, if you needed to put off patching for a while, to disable the affected SLP (service location protocol) service.

VMWare has a page with long-standing guidance for working around SLP security problems, including script code for turning SLP off temporarily, and back on again once you’re patched.

The damage in this attack

In this Args attack, the warhead that the crooks are apparently unleashing, once they’ve got access to your ESXi ecosystem, includes the sequence of commands below.

We’ve picked the critical ones to keep this description short:

  • Kill off running virtual machines. The crooks don’t do this gracefully, but by simply sending every vmx process a SIGKILL (kill -9) to crash the program as soon as possible. We assume this is a quick-and-dirty way of ensuring all the VMWare files they want to scramble are unlocked and can therefore be re-opened in read/write mode.
  • Export an ESXi filesystem volume list. The crooks use the esxcli storage filesystem list command to get a list of ESXi volumes to go after.
  • Find important VMWare files for each volume. The crooks use the find command on each volume in your /vmfs/volumes/ directory to locate files from this list of extensions: .vmdk, .vmx, .vmxf, .vmsd, .vmsn, .vswp, .vmss, .nvram and .vmem.
  • Call a general-purpose file scrambling tool for each file found. A program called encrypt, uploaded by the crooks, is used to scramble each file individually in a separate process. The encryptions therefore happen in parallel, in the background, instead of the script waiting for each file to be scrambled in turn.

Once the background encryption tasks have kicked off, the malware script changes some system files to make sure you know what to do next.

We don’t have our own copies of any actual ransom notes that the Args crooks have used, but we can tell you where to look for them if you haven’t seen them yourself, because the script:

  • Replaces your /etc/motd file with a ransom note. The name motd is short for message of the day, and your original version is moved to /etc/motd1, so you could use the presence of a file with that name as a crude indicator of compromise (IoC).
  • Replaces any index.html files in the /usr/lib/vmware tree with a ransom note. Again, the original files are renamed, this time to index1.html. Files called index.html are the home pages for any VMWare web portals you might open in your browser. (There’s a crude check for both of these leftovers sketched just after this list.)
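
If you want a quick way to hunt for those leftovers on a host or a copied-off filesystem image, here’s a minimal sketch in C (the paths come straight from the renaming behaviour described above; everything else is our own invention, so treat it as a starting point for your own checks rather than a detection tool):

  #define _XOPEN_SOURCE 500      /* for nftw() on glibc-based systems */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/stat.h>
  #include <ftw.h>

  static int hits = 0;

  /* Called for every object under /usr/lib/vmware: flag renamed home pages. */
  static int check_index1(const char *path, const struct stat *sb,
                          int typeflag, struct FTW *ftwbuf)
  {
      (void)sb;
      if (typeflag == FTW_F && strcmp(path + ftwbuf->base, "index1.html") == 0) {
          printf("possible IoC: %s\n", path);
          hits++;
      }
      return 0;   /* keep walking the tree */
  }

  int main(void)
  {
      /* The attack script moves the original message-of-the-day to /etc/motd1. */
      if (access("/etc/motd1", F_OK) == 0) {
          printf("possible IoC: /etc/motd1 exists\n");
          hits++;
      }
      nftw("/usr/lib/vmware", check_index1, 16, FTW_PHYS);
      printf("%d possible indicator(s) found\n", hits);
      return 0;
  }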

From what we’ve heard, the ransoms demanded are in Bitcoin, but vary both in the exact amount and the wallet ID they’re to be paid into, perhaps to avoid creating obvious payment patterns in the BTC blockchain.

However, it seems that the blackmail payment is typically set at about BTC 2, currently just under US$50,000.


The encryptor in brief

The encrypt program is, effectively, a standalone, one-file-at-a-time scrambling tool.

Given how it works, however, there is no conceivable legitimate purpose for this file.

Presumably to save time while encrypting, given that virtual machine images are typically many gigabytes, or even terabytes, in size, the program can be given parameters that tell it to scramble some chunks of the file, while leaving the rest alone.

Loosely speaking, the Args malware does its dirty work with a function called encrypt_simple() (in fact, it’s not simple at all, because it encrypts in a complicated way that no genuine security program would ever use), which goes something like this.

The values of FILENAME, PEMFILE, M and N below can be specified at runtime on the command line.

Note that the malware contains its own implementation of the Sosemanuk cipher algorithm, though it relies on OpenSSL for the random numbers it uses, and for the RSA public-key processing it does:

  1. Generate PUBKEY, an RSA public key, by reading in PEMFILE.
  2. Generate RNDKEY, a random, 32-byte symmetric encryption key.
  3. Go to the beginning of FILENAME.
  4. Read in M megabytes from FILENAME.
  5. Scramble that data using the Sosemanuk stream cipher with RNDKEY.
  6. Overwrite those same M megabytes in the file with the encrypted data.
  7. Jump forwards N megabytes in the file.
  8. GOTO 4 if there is any data left to scramble.
  9. Jump to the end of FILENAME.
  10. Use RSA public key encryption to scramble RNDKEY, using PUBKEY.
  11. Append the scrambled decryption key to FILENAME.

In the script file we looked at, where the attackers invoke the encrypt program, they seem to have chosen M to be 1MByte, and N to be 99Mbytes, so that they only actually scramble 1% of any files larger than 100MBytes.

This means they get to inflict their damage quickly, but almost certainly leave your VMs unusable, and very likely unrecoverable.

Overwriting the first 1MByte typically makes an image unbootable, which is bad enough, and scrambling 1% of the rest of the image, with the damage distributed throughout the file, represents a huge amount of corruption.

That degree of corruption might leave some original data that you could extract from the ruins of the file, but probably not much, so we don’t advise relying on the fact that 99% of the file is “still OK” as any sort of precaution, because any data you recover this way should be considered good luck, and not good planning.

If the crooks keep the private-key counterpart to the public key in their PEMFILE secret, there’s little chance that you could ever decrypt RNDKEY, which means you can’t recover the scrambled parts of the file yourself.

Thus the ransomware demand.

What to do?

Very simply:

  • Check you have the needed patches. Even if you “know” you applied them right back when they first came out, check again to make sure. You often only need to leave one hole to give attackers a beachhead to get in.
  • Revisit your backup processes. Make sure that you have a reliable and effective way to recover lost data in a reasonable time if disaster should strike, whether from ransomware or not. Don’t wait until after a ransomware attack to discover that you are stuck with the dilemma of paying up anyway because you haven’t practised restoring and can’t do it efficiently enough.
  • If you aren’t sure or don’t have time, ask for help. Companies such as Sophos provide both XDR (extended detection and response) and MDR (managed detection and response) that can help you go beyond simply waiting for signs of trouble to pop up on your dashboard. It’s not a copout to ask for help from someone else, especially if the alternative is simply never having time to catch up on your own.

Tracers in the Dark: The Global Hunt for the Crime Lords of Crypto

DO WE REALLY NEED A NEW “WAR AGAINST CRYPTOGRAPHY”?

We talk to renowned cybersecurity author Andy Greenberg about his tremendous new book, Tracers in the Dark.

Hear Andy’s thoughtful commentary on cybercrime, law enforcement, anonymity, privacy, and whether we really need a “war against cryptography” – codes and ciphers that the government can easily crack if it thinks there’s an emergency – to cement our collective online security.

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


[MUSICAL MODEM]

PAUL DUCKLIN. Hello, everybody.

Welcome to this very, very special episode of the Naked Security podcast, where we have the most amazing guest: Mr. Andy Greenberg, from New York City.

Andy is the author of a book I can very greatly recommend, with the fascinating title Tracers in the Dark: The Global Hunt for the Crime Lords of Cryptocurrency.

So, Andy, let’s start off…

…what made you write this book in the first place?

It seems fascinatingly complicated!


ANDY GREENBERG.  Yes, well, thank you, Paul.

I guess [LAUGHS]… I’m not sure if that’s a compliment?


DUCK.  Oh, it is, it is!


ANDY.  Thank you.

So, I’ve covered this world of hackers, and cybersecurity, and encryption for about 15 years now.

And around, let’s see – I guess 2010 – I started working on a book, a different book, that was about the cypherpunk movement in the 1990s…

…and the ways that it gave rise to the modern internet, but also to things like WikiLeaks, and other kinds of encryption, anonymity tools, and ultimately what we now call the dark web, I suppose.

And I’ve always been fascinated with the ways, on this beat, that anonymity can play this fascinating, dramatic role – and allow people to become someone else, or to reveal to you in secret who they truly are.

And as I dug into this cypherpunk world, around 2010 and 2011, I came upon this thing that seemed to be a new phenomenon in that world of online anonymity – which was Bitcoin.

I wrote, I think, the first print magazine piece about Bitcoin for Forbes magazine in 2011.

I interviewed one of the first Bitcoin developers, Gavin Andresen, for that piece.

And Gavin and many others at the time were describing Bitcoin as a kind-of anonymous digital cash for the internet.

You could actually use this new invention, Bitcoin, to put unmarked bills in a briefcase, basically, and send it across the internet to anyone in the world.

And, being the kind of reporter I am, I’m interested in the subversive and sometimes criminal, sometimes politically motivated… I don’t know, the underhanded and dark corners of the internet.

I just saw how this would enable a new world of… yes, people seeking financial privacy, but also money laundering, and drug dealing online, and all of this that would come to pass in the next few years.

But what I didn’t foresee is that, ten years later or so, it would be by then apparent that Bitcoin is actually the *opposite* of anonymous.

I mean, that is the big surprise, and the big reveal.

For me, it was a kind of slow-motion epiphany to realise that cryptocurrency was actually *extremely* traceable.

It was the opposite of this “anonymous cash for the internet” that many people once thought it was.

And the result, I think, was that it served as a kind of trap for many people seeking financial privacy… and criminals, over that decade.

And as I realised the extent of this… I fully realised it in 2020 or so.

I began, at the same time, to see that this one company, Chainalysis, a blockchain-analysis Bitcoin cryptocurrency tracing firm, was being name-checked in one US Department of Justice announcement after another in all of these major busts.

And so I started talking to Chainalysis, and then to their customers and law enforcement, and slowly realised that there had been this one small group of detectives that had figured this out much earlier than me.

They had started actually tracing Bitcoins years earlier, and had used this incredibly powerful investigative technique to go on this spree of one massive cybercriminal bust after another…

…using cryptocurrency as this surprise trap that had been laid for so many people on the dark web, and in the cybercriminal world as a whole.


DUCK.  Now, I suppose we shouldn’t really be surprised at that, should we, as you explain in the book?

Because the whole idea, at least of the Bitcoin blockchain, is that it is, by design, entirely and utterly public and irrevocable.

That’s how it can work as a ledger that is equivalent to something that would normally be held privately and individually by your bank.

It doesn’t actually have your name on it, but it has a magic identifier that, once tied to you, can’t really be cut loose…

…if there’s other evidence to say, “Yes, long-hexadecimal-string-of-stuff is Andy Greenberg, and here’s why.”

Now try denying it!

So, I think you’re right.

This idea that it’s *possible* to trade anonymously with Bitcoin – I think was taken by very many people to mean that it is fundamentally anonymous and ever-untraceable.

But the world is not like that, is it?


ANDY.  I sometimes look back on my 2011 self, and in that piece for Forbes, I *did* write that Bitcoin was potentially untraceable.

And I sort of scold myself, “How could you be such an idiot?”

The whole idea of Bitcoin is that there’s a blockchain that records every transaction.

But then I remind myself that even Satoshi Nakamoto, the mysterious creator of Bitcoin (whoever he, she or they are), in their first email to a cryptography mailing list introducing the idea of Bitcoin…

…listed among its features that participants can be anonymous.

That was a feature of Bitcoin as Satoshi described it.

So I think there’s always been this idea that Bitcoin, if it’s not anonymous, at least is pseudonymous, that you can hide behind the pseudonym of your Bitcoin address, and that if you can’t figure out somebody’s address, you can’t figure out their transactions.

I guess we all should have known… I should have known, and maybe even Satoshi should have known, that, given this massive corpus of data, there would be patterns in it that allow people to identify clusters of addresses that all belong to one person or service.

Or to follow the money from one address to another to find interesting giveaways in this massive collection of data.

The biggest giveaway of all is when you cash in or cash out at a cryptocurrency exchange that has Know-Your-Customer [KYC] requirements, as almost all of them do now.

They have your identity, so if somebody can just subpoena that exchange, then they have your actual driver’s licence in hand.

And any illusion of anonymity just completely backfires.

So that is the story, I think, of how Bitcoin’s anonymity turned out to be the opposite.


DUCK.  Andy, do you think, perhaps, though, that there’s nothing wrong with Satoshi Nakamoto saying, “You *can* be anonymous when you use Bitcoin?”

I think what’s wrong is that lots of people assume that because technology *can* let you do something that is desirable for your privacy, therefore, *however you use it*, it always will.

And the original idea of Bitcoin didn’t include exchanges, did it?

And so there wouldn’t be any exchanges that would take a copy of your driving licence if Bitcoin were used in its original sort of cypherpunk way, as far as I can see…


ANDY.  Well, I certainly don’t blame Satoshi for not predicting the entire cryptocurrency economy, including the ways that exchanges would interface with the traditional finance world.

It’s all incredibly complex economics; Bitcoin was brilliant enough as it is.

But I do think that it’s more than just, “You *can* be anonymous with Bitcoin if you’re careful, but most people are not careful.”

It turns out, I think, that the possibility, no matter how smart you are, of using Bitcoin anonymously is vanishingly small.

Also, there is the property of blockchain *that it is forever*.

So, if you use the kind of smartest ideas of the day to try to avoid any of these patterns that reveal your transactions on the blockchain, but then someone years later figures out a new trick to identify transactions…

…then you’re still screwed.

They can go back in time, and use their new ideas to foil your cutting-edge anonymity tricks from years earlier.


DUCK.  Absolutely.

With a bank fraud you can imagine you *could* get lucky, couldn’t you?

That just when you’re about to be investigated, years later, you find the bank’s had a data security disaster, and they’ve lost all their backups and, oh, they can’t recover the data…

With the blockchain, that ain’t never going to happen! [LAUGHS]

Because everybody’s got a copy, and that’s a requirement for the system to work as it does.

So, once locked in, always locked in: it can never be lost.


ANDY.  That’s the thing!

To be anonymous with cryptocurrency, you truly have to be perfect – perfect for all time.

And to catch someone who’s trying to be anonymous with cryptocurrency slipping up, you just have to be smart, and persistent, and work on it for years, which is what, first, Chainalysis…

…actually, first was academic researchers like Sarah Meiklejohn at the University of California at San Diego, who, as I document in the book, came up with a lot of these techniques.

But then Chainalysis, this startup that’s now almost a nine-billion-dollar unicorn, selling polished cryptocurrency tracing tools to law enforcement agencies.

And now, all of these law enforcement agencies that have professional Bitcoin tracers – their savvy, their know-how in doing this, is just growing by leaps and bounds.

And I think it’s almost just a better rule to say, “No, you cannot be anonymous with cryptocurrency,” that it is fully transparent.

That’s a safer way to operate, almost.

To be fair, Satoshi Nakamoto said participants *can* be anonymous… but it turns out that the only participant who has *remained* anonymous is Satoshi Nakamoto.

And that is, in part, because very few people have that other-worldly restraint that Satoshi had to amass a million Bitcoins and then never spend them or move them.

If you do that… yes, I think you can perhaps be anonymous.

But if you ever want to use your cryptocurrency, or to put it in a liquid form where you can spend it, then I think you’re toast.


DUCK.  Yes, because there are some amazing things that have happened, one of which you allude to because it was in the works just at the end of the book…

…[LAUGHS] what I call the Crocodile Lady and her husband: Heather Morgan and Ilya Lichtenstein.

They’re alleged to have somehow received a whole load of cryptocoins from a cryptocurrency bank robbery against Bitfinex.

In their cases, they received stolen cryptocurrencies in vast quantities, so that they could quite literally have been billionaires *if they could have cashed it out*.

But when bust, they still had the vast majority of that stuff sitting around.

So it seems that, in a lot of cryptocurrency crimes, your eyes can be a lot bigger than your stomach.

You may live the high life a little bit… the Crocodile Lady and her husband, it does seem they were living quite a flash lifestyle.

But when they were bust, what was the amount?

It was more than $3 billion’s worth of Bitcoins that they had, but couldn’t cash out.


ANDY.  The Department of Justice said that they seized $3.6 billion from them.

That was the biggest seizure not just of cryptocurrency in history, but of money in the history of the Department of Justice.

In fact, as I document in the book… actually, one of these happened after the book, but the IRS criminal investigators, who are the main subjects of this book, have now pulled off the first, second, and third-biggest seizures of money in American criminal justice history, by following cryptocurrency and seizing Bitcoins.

Your point is absolutely right, which is that cryptocurrency is easy to steal, it turns out… that is, I think, one of its big drawbacks for the businesses, like exchanges, that have to hold sometimes billions of dollars in a kind of digital safe.

But then if you do steal it, if you pull off one of these massive heists – and two of the three of the cases that we’re discussing are actually people who stole money from the Silk Road dark web drug market…


DUCK.  Yes [LAUGHS]… when you steal from a crook, it’s still a crime, eh?


ANDY.  [LAUGHS] Yes, unfortunately – for those crooks, anyway.


DUCK.  One of the most intriguing bits for me in the book was somebody that you identify as “Individual X”, only because that’s the way they were identified by the court.

This individual had stolen 70,000 Bitcoins, and was busted, and basically gave them back… sort-of in return for getting let off.

They didn’t get prosecuted, they didn’t go to prison, they didn’t – I imagine – even get a criminal record.

And they were never named.


ANDY.  That’s right.


DUCK.  So that seems like an almost unreadable mystery, doesn’t it?

If we look forward a few years, now that Bitcoin’s… what, in the last year, it’s gone down to about a third of its value; Ether is down to about a third; Monero is about half.

Do you think that that gambit of saying, “I’ll give the money back, let me off” would have worked if the prices were reversed, and what they were handing back was now worth a fraction of what it was when it was stolen?

Or do you think that Individual X was lucky because what they had to hand back was actually worth much more than when they stole it?


ANDY.  I think it’s the latter.

Individual X stole that money while the Silk Road was still online…


DUCK.  Wow!

So that would have been when BTC was, what, hundreds [of dollars] then?


ANDY.  Yes, probably, or thousands at most – Silk Road went offline in 2013, when Bitcoin had just broken through $1000, if I remember.

This person (I don’t want to say “guy” – who knows who Individual X is?) sat on these 70,000 Bitcoins for seven years, ultimately…

…probably, exactly as you said, just terrified to move them or cash them out for fear of being caught.


DUCK.  Yes, can you imagine?

“Hey, I’m a millionaire!”

“Hey, I’m a *billionaire*!”

“Oh, golly, but where am I going to get my rent money?”

[LAUGHS] Shouldn’t laugh….


ANDY.  As you say – like the hand stuck in the cookie jar!

The hand just gets bigger and bigger until it’s all-consuming, and you cannot move it, you can’t get it out.

In fact, even without trying to get it out, IRS criminal investigators found it through other means, including the seizure of the BTC-e exchange, which was a kind-of money-laundering, criminal Bitcoin exchange.


DUCK.  That was a rogue exchange that basically did as little as is humanly possible along the Know Your Customer front?

“Ask no questions, tell no lies,” that kind of thing?

Is that right?


ANDY.  Yes, exactly.

That was another surprise for many users who believed that, “Maybe I can use BTC-e a little bit and not get caught, because that doesn’t have Know Your Customer, that doesn’t co-operate with law enforcement.”

But, nonetheless, when that exchange was busted and its servers seized, that provided more clues to the IRS.

That helped, in fact, to figure out who Individual X was… I don’t know who they are, but the government does.

And to knock on his or her door and say, “Hey, hand over a billion dollars or you’re going to jail,” and that’s exactly what happened.

Now, poor James Zhong is a very similar case.

He seems to have taken 50,000 Bitcoins from the Silk Road, probably around the same time, and then held onto them for even longer.

And then, a year after Individual X, Zhong got a knock on his door…

Similarly, they had traced the money, even though he had just left it sitting on a USB drive in a popcorn tin under the floorboards of his closet.

In his case, he did not manage to make a deal somehow, and he’s being criminally charged.


DUCK.  *And* he has given the money back, obviously?

[WRY LAUGH] Aaaargh!


ANDY.  He was a Bitcoin billionaire, and now is facing criminal charges… and never got to even spend his loot.

The Bitfinex case, I don’t know… I have less sympathy for them because they truly were trying to launder a massive theft from a legitimate business.

And they did, I think, launder some of it.

They tried several different clever techniques.

They put the money through…. I mean, this is all alleged, I should say; they’re still innocent until proven guilty, this couple in New York.

But they tried to put the money through the AlphaBay dark web market as a kind of laundering technique, thinking that would be a black box that law enforcement would not be able to see through.

But then AlphaBay was busted and seized.

That’s perhaps the biggest story I tell in the book, the most exciting cloak-and-dagger story: how they tracked down the kingpin of AlphaBay in Bangkok and arrested him.


DUCK.  Yes… spoiler alert, that’s where the helicopter gunships come in!


ANDY.  [LAUGHS] Yes!

Yes, and much more!

I mean, that story is one of the craziest that I will probably tell in my career…

But then, also, this New York money-laundering couple tried to put some of the money through Monero, a cryptocurrency that is advertised as a privacy coin, a potentially truly untraceable cryptocurrency.

And yet, in the IRS documents where they describe how they caught this couple in New York, they show how they continued to follow the money, even after it’s exchanged for Monero.

So that was a sign to me that perhaps even Monero – this newer, “untraceable” cryptocurrency – is a bit traceable too, to some degree.

And perhaps this trap persists… that even coins that are designed to outstrip Bitcoin in terms of their anonymity are not all they’re cracked up to be.

Although I should say that Monero people hate it when I even say this out loud, and I don’t know how that worked…

…all I can say is that it looks very possible that Monero tracing was used in that case.


DUCK.  Well, there could be some operational security blunders that the Crocodile Lady and her husband made as well, that kind of tied it all together.

So, Andy, I’d like to ask you, if I may…

Thinking of cryptocurrency tokens like Monero, which as you say, is meant to be more privacy focused than Bitcoin because it inherently, if you like, joins transactions together.

And then there’s also Zcash, designed by cryptography experts specifically using technology known in the jargon as zero-knowledge proofs, which is at least supposed to work so that neither side can tell who the other is, yet it’s still impossible to double-spend…

With all eyes on these much more privacy-focused tokens, where do you think the future is going?

Not just for law enforcement, but where do you think it might drag our legislators?

There’s certainly been a fascination for decades, amongst sometimes very influential parliamentarians, to say, “You know what, this encryption thing, it’s actually a really, really bad idea!”

“We need backdoors; we need to be able to break it; somebody has to ‘think of the children’; et cetera, et cetera.”


ANDY.  Well, it’s interesting to talk about crypto backdoors and the legal debate over encryption that even law enforcement can’t crack.

I think that, in some ways, the story of this book shows that that is often not necessary.

I mean, the criminals in this book were using traditional encryption – they were using Tor and the dark web, and none of that was cracked to bust them.

Instead, investigators followed the money and *that* turned out to be the backdoor.

It’s an interesting parable, and a good example of how, very often, there is a side-channel in criminal operations, this “other leak” of information that, without cracking the main communications, offers a way in…

…and doesn’t necessitate any kind of backdoor in Tor, or the dark web, or Signal, or hard disk encryption, or whatever.

In fact, speaking of ‘thinking of the children’, one of the last major stories that I dig deeply into in the book is the bust of the Welcome To Video market for child sexual abuse videos that accepted cryptocurrency.

And as a result, the IRS investigators at the centre of the book were able to track down and arrest 337 people around the world who used that market.

It was the biggest bust of what we call child sexual abuse materials, by some measures, in history…

…all based on cryptocurrency tracing.


DUCK.  And they didn’t need to do anything that you would really consider privacy-violating, did they?

They quite literally followed the money, in a trail of evidence that was public by design.

And in conjunction, admittedly, with warrants and subpoenas from places where the money popped out, and where internet connections were made, they were able to identify the people involved…

…and largely to avoid trampling on millions of people who had absolutely no connection with the case whatsoever.


ANDY.  Yes!

I think that it is an example of a way to do… it is, in some ways, mass surveillance – but mass surveillance in a way that nonetheless does not require weakening anybody’s security.

I guess that cryptocurrency users, and people who believe in the power of cryptocurrency for enabling activists, and dissidents, and journalists, and money transmissions to countries like Ukraine, that need injections of money for survival…

They would argue that, nonetheless, we need to fix cryptocurrency to make it as untraceable as we once thought it might be.

And that’s where we get into the new, I would say *a* new, crypto-war over cryptocurrency.

We’re just starting to see the beginning of that with tools like Monero and Zcash, as you said.

I do think that there will probably still be surprises about the ways that Monero can be traced.

I’ve seen a leaked Chainalysis document where they told Italian law enforcement… it’s a presentation in Italian to the Italian police from Chainalysis, where they say that they can trace Monero, in the majority of cases, to find a usable lead.

I don’t know how they do that, but it does seem like it’s probabilistic more than definitive.

Now I don’t think a lot of people understand – that is often enough for law enforcement to get a subpoena, to start subpoenaing cryptocurrency exchanges, just based on a probabilistic guess.

They can just check every possibility, if there are few enough of them.


DUCK.  Andy, I’m conscious of time, so I’d like to finish up now by just asking you one final question, and that is…

In ten years’ time, do you see yourself being in a position where you’ll be able to write a book like this one, but where the “unravelling” parts are even more fascinating, complicated, exciting, and amazing?


ANDY.  I tried, with this book, *not* to make too many predictions.

And, in fact, the book begins with this “mea culpa” that ten years ago I believed exactly the wrong thing about Bitcoin.

So nobody should listen to any ten-year prediction that I have!

[LAUGHTER]

But the simplest prediction to make, that *has* to be true, is that this cat-and-mouse game will still be going on in ten years.

People will still be using cryptocurrency thinking that they have outsmarted the tracers…

…and the tracers will still be coming up with new tricks to prove them wrong.

The stories, as you say, will, I think, be much more convoluted because they’ll be dealing with these cryptocurrencies like Monero, that build in vast mix-networks, and Zcash, that have zero-knowledge proofs.

But it does seem that there will always be some way – and maybe not even cryptocurrency, but in some other side channel… as I was saying, there will be a new one that unravels the whole thing.

But there’s no question that this cat-and-mouse game will go on.


DUCK.  And I’m sure there’ll be another Tigran Gambaryan sometime in the future for you to interview?


ANDY.  Well, I do think the game of anonymity…

…it does favour the Tigran Gambaryans of the world.

They, as I said, just have to be persistent and smart.

But the mice in this cat-and-mouse game have to be perfect.

And no one is perfect.


DUCK.  Absolutely.


ANDY.  So, if I do have to make a prediction…

…then I would just place my bet on the cats, on the Tigran Gambaryans of the world.


DUCK.  [LAUGHS] Andy, thank you so much.

Before we go, why don’t you tell our listeners where they can get your book?


ANDY.  Yes, thanks, Paul!

The book is called “Tracers in the Dark: The Global Hunt for the Crime Lords of Cryptocurrency.”

[ISBN 978-0-385-54809-0]

And it’s available at all the normal places books are sold.

But if you go to https://andygreenberg.net/, then you can just find links to a bunch of places.


DUCK.  Andy, thank you so much for your time.

It was as fascinating talking to you and listening to you as it was reading your book.

I recommend it to anybody who wants a galloping read that is nevertheless detailed and insightful about how law enforcement works…

…and, importantly, why criminal convictions for cybercrimes often only happen years after the crime occurred.

The devil really is in the details.


ANDY.  Thank you, Paul.

It’s been a super-fun conversation.

I’m just glad you enjoyed the book!


DUCK.  Excellent!

Thanks to everybody who listened.

And, as always: Until next time, stay secure!

[MUSICAL MODEM]


Finnish psychotherapy extortion suspect arrested in France

In October 2022, we asked you to imagine being stuck in the following awful situation:

Imagine that you’d spoken in what you thought was total confidence to a psychotherapist, but the contents of your sessions had been saved for posterity, along with precise personal identification details such as your unique national ID number, and perhaps including additional information such as notes about your relationship with your family…

…and then, as if that were not bad enough, imagine that the words you’d never expected to be typed in and saved at all, let alone indefinitely, had been made accessible over the internet, allegedly “protected” by little more than a default password giving anyone access to everything.

Sadly, for tens of thousands of trusting patients of the now-bankrupt Psychotherapy Centre Vastaamo, that really happened.

It gets worse

Worse, a cybercriminal found his way into the poorly-secured system and stole all that ultra-personal data.

Worse still, the company responsible for keeping that data secure decided to keep quiet about the intrusion, with the company CEO apparently deciding that he could get away with hiding the breach from the authorities as long as no publicly visible harm came of it.

But the breach couldn’t be denied any more once the company was hit up with a blackmail demand for €450,000 (about $0.5m at the time).

Ultimately, as reported in the Helsinki Times in late 2022 in an article entitled Prosecutors: Vastaamo’s information security was in absolute chaos, the now-former CEO was charged personally with data protection offences, even though the company itself was the victim of a cybercrime.

Worst of all was that when the company itself refused to pay the blackmail money (which, as we pointed out last year, wouldn’t have done much good given that the data had already been stolen), the extortionist turned their attention directly on the company’s patients.

Patients were blackmailed to the tune of €200 each, with cybersecurity journo-sleuth Brian Krebs reporting in 2022 that the demand jumped to €500 if the initial “fee” wasn’t paid within 24 hours, followed by publication of personal details 48 hours after that.

The hacker threatened to release not only the sort of information that would help other crooks to carry out identity theft, including contact details and ID data, but also the saved transcripts of patients’ conversations that we mentioned at the top of this article.

The Finnish authorities issued an arrest warrant for the suspected hacker in October 2022, noting that:

The police have established that the suspect currently resides abroad. For this reason, he was remanded in absentia. A European arrest warrant has been issued against the suspect. He can be arrested abroad under this warrant. After that the police will request his surrender to Finland. An Interpol notice will also be issued against the suspect, who is a Finnish citizen and about 25 years of age.

He appeared on Europol’s Most Wanted Fugitives list on 2022-11-03, charged with eight offences: aggravated computer break-in, attempted aggravated extortion, aggravated dissemination of information violating personal privacy, extortion, attempted extortion, computer break-in, message interception, and falsification of evidence.

Suspect apprehended

Well, the Finns have just announced that the suspect has been apprehended in France, where he has been locked up while his extradition to Finland is being processed.

Brian Krebs, who is well-known for digging into the histories of notorious hackers and hacking suspects, has published a report listing a string of previous cybercrimes for which the suspect, Kivimäki, has been convicted, apparently including denial-of-service attacks under the banner of Lizard Squad, theft of source code from Adobe, use of stolen credit cards, and more.

According to Krebs, the suspect was convicted of “orchestrating more than 50,000 cybercrimes”, but got away with a suspended sentence and a small fine, having been under 18 at the time of that criminal activity.

After he’d evaded a prison sentence, says Krebs, the Lizard Squad hacking group openly boasted on Twitter that “All the people that said we would rot in prison don’t want to comprehend what we’ve been saying since the beginning, we have free passes.”

If his extradition from France is approved in this case, and he’s convicted, we can’t imagine the consequences being quite so much of a “free pass” this time, now he’s 25 years old.

What to do?

  • Rehearse what you will do if you suffer a breach yourself. You are not preparing to fail if you do so, but you are failing to prepare if you don’t. Learn what your reporting obligations are, and practise what you would say to those affected by the breach. As this case suggests, prompt disclosure would at least have prevented tens of thousands of vulnerable people finding out about the breach from extortion demands made directly to them and their families.
  • Consider filing a personal report if you are caught up in a breach. This helps regulators and law enforcement collect evidence; helps to determine an appropriate level of response (if no one says anything, then it’s hard to convince a court that real harm was done); and helps the authorities demand higher cybersecurity standards in future.

OpenSSH fixes double-free memory bug that’s pokable over the network

The open source operating system distribution OpenBSD is well-known amongst sysadmins, especially those who manage servers, for its focus on security over speed, features and fancy front-ends.

Fittingly, perhaps, its logo is a puffer fish – inflated, with its spikes ready to repel any wily hackers who might come along.

But the OpenBSD team is probably best known not for its entire distro, but for the remote access toolkit OpenSSH that was written in the late 1990s for inclusion in the operating system itself.

SSH, short for secure shell, was originally created by Finnish computer scientist Tatu Ylönen in the mid-1990s in the hope of weaning sysadmins off the risky habit of using the Telnet protocol.

The trouble with Telnet

Telnet was remarkably simple and effective: instead of connecting physical wires (or using a modem over a telephone line) to make a teletype connection to remote servers, you used a TELetype NETwork connection instead.

Basically, the data that would usually flow back and forth over a dedicated serial connection or dial-up phone line was sent and received over the internet, using a packet-switched TCP network connection instead of a circuit-switched point-to-point link.

Same familiar login system, cheaper connections, no need for dedicated data lines!

The giant flaw in Telnet, of course, was its total lack of encryption, so that sniffing out your exact terminal session was trivial, allowing crackers to see every command you typed (even the mistakes you made, and all the times you hit [Backspace]), and every byte of output produced…

…and, of course, your username and password at the start of the session.

Anyone on your network path could not only easily reconstruct your sysadmin sessions in real time on their own screen, but probably also tamper with your session by modifying the commands you sent to the remote server and faking the replies coming back so you didn’t notice the subterfuge.

They could even set up an imposter server, lure you to it, and make it surprisingly difficult for you to spot the deception.

Strong encryption FTW

Ylönen’s SSH aimed to add a layer of strong encryption and authentication to each end of a Telnet-like session, creating a secure shell (that’s what the name stands for, if you’ve ever wondered, although almost everyone just calls it ess-ess-aitch these days).

It was an instant hit, and the protocol was quickly adopted by sysadmins everywhere.

OpenSSH soon followed, as we mentioned above, first appearing in late 1999 as part of the OpenBSD 2.6 release.

The OpenBSD team wanted to create a free, reliable, open-source implementation of the protocol that they and anyone else could use, without any of the licensing or commercial complications that had encumbered Ylönen’s original implementation in the years immediately after its release.

Indeed, if you run the Windows SSH server and connect to it from a Linux computer right now, you’ll almost certainly be relying on the OpenSSH implementation at both ends.

The SSH protocol is also used in other popular client-server services including SCP and SFTP, short for secure copy and secure FTP respectively. SSH loosely means, “connect Securely and run a command SHell at the other end”, typically for interactive logins, because the Unix program for a command shell is usually /bin/sh. SCP is similar, but for CoPying files, because the Unix file-copy command is generally called /bin/cp, and SFTP is named in much the same way.

OpenSSH isn’t the only SSH toolkit in town.

Other well-known implementations include: libssh2, for developers who want to build SSH support right into their own applications; Dropbear, a stripped-down SSH server from Australian coder Matt Johnston that’s widely found on so-called IoT (Internet of Things) devices such as home routers and printers; and PuTTY, a popular, free collection of SSH-related tools for Windows from indie open-source developer Simon Tatham in England.

But if you’re a regular SSH user, you’ve almost certainly connected to at least one OpenSSH server today, not least because most contemporary Linux distributions include it as their standard remote access tool, and Microsoft offers both an OpenSSH client and an OpenSSH server as official Windows components these days.

Double-free bug fix

OpenSSH version 9.2 just came out, and the release notes report as follows:

This release contains fixes for […] a memory safety problem. [This bug] is not believed to be exploitable, but we report most network-reachable memory faults as security bugs.

The bug affects sshd, the OpenSSH server (the -d suffix stands for daemon, the Unix name for the sort of background process that Windows calls a service):

sshd(8): fix a pre-authentication double-free memory fault introduced in OpenSSH 9.1. This is not believed to be exploitable, and it occurs in the unprivileged pre-auth process that is subject to chroot(2) and is further sandboxed on most major platforms.

A double-free bug means that a memory block you already returned to the operating system to be re-used in other parts of your program…

…will later get handed back again by a part of the program that no longer actually “owns” that memory, but doesn’t know it doesn’t.

(Or handed back deliberately at the prompting of code that is trying to provoke the bug on purpose in order to turn a vulnerability into an exploit.)

This can lead to subtle and hard-to-unravel bugs, especially if the system marks the freed-up block as available when the first free() happens, later allocates it to another part of your code when it asks for memory via malloc(), and then marks the block free once again when the superfluous call to free() appears.

That leaves you in the sort of situation you experience when you check into a hotel that says, “Oh, good news! We thought we were full up, but another guest just decided to check out early, so you can have their room.”

Even if the room is neatly cleaned and prepared for new occupants when you go in, and thus looks as though it was properly allocated for your exclusive use, you still have to trust that the previous guest’s keycard did indeed get correctly cancelled, and that their “early checkout” wasn’t a cunning ruse to sneak back later the same day and steal your laptop.
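In bare C terms, and entirely separate from the OpenSSH source (every name below is made up for illustration), the dangerous pattern looks like this: free a block, let the allocator hand that same block to another part of the program, then free it again through the stale pointer:

#include <stdlib.h>
#include <string.h>

int main(void)
{
   char* first = malloc(64);           /* grab a 64-byte block            */
   strcpy(first, "session data");      /* ...and use it                   */

   free(first);                        /* hand the block back             */

   char* second = malloc(64);          /* another part of the program may */
   strcpy(second, "someone else's");   /* now be given that same block    */

   free(first);                        /* BUG: second free via the stale  */
                                       /* pointer, pulling the memory out */
                                       /* from under 'second'             */
   return 0;
}

From that point on, the pointer called second refers to memory that the allocator thinks is available again, so anything can happen, from an outright crash to silent data corruption.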

Bug fix for bug fix

Ironically, if you look at the recent OpenSSH code history, you’ll see that version 9.0 had a modest bug in a function called compat_kex_proposal(), used to check what sort of key-exchange algorithm to use when setting up a connection.

But fixing that modest bug introduced a more severe vulnerability instead.

By the way, the presence of the bug in a part of the software that’s used during the setup of a connection is what makes this a so-called network-reachable pre-authentication vulnerability (or pre-auth bug for short).

The double-free bug happens in code that needs to run after a client has initiated a remote session, but before any key-agreement or authentication has taken place, so the vulnerability can, in theory, be triggered before any passwords or cryptographic keys have been presented for validation.

In OpenSSH 9.0, compat_kex_proposal looked something like this (greatly simplified here):

char* compat_kex_proposal(char* suggestion)
{
   if (condition1) { return suggestion; }
   if (condition2) { suggestion = allocatenewstring1(); }
   if (condition3) { suggestion = allocatenewstring2(); }
   if (isblank(suggestion)) { error(); }
   return suggestion;
}

The idea is that the caller passes in their own block of memory containing a text string suggesting a key-exchange setting, and gets back either an approval to use the very suggestion they sent in, or a newly-allocated text string with an updated suggestion.

The bug is that if condition 1 is false but conditions 2 and 3 are both true, the code allocates two new text strings, but only returns one.

The memory block allocated by allocatenewstring1() is never freed up, and when the function returns, its memory address is lost forever, so there’s no way for any code to free() it in future.

That block is essentially abandoned, causing what’s known as a memory leak.

Over time, this could cause trouble, perhaps even forcing the server to shut down to recover from memory overload.
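If you want to see the leak pattern on its own, here’s a minimal standalone sketch (not OpenSSH code, and every name in it is hypothetical) in which the only pointer to a block gets overwritten before anyone can free it:

#include <stdlib.h>
#include <string.h>

char* make_setting(void)
{
   char* setting = malloc(32);
   strcpy(setting, "first suggestion");   /* block 1 allocated            */

   setting = malloc(32);                  /* block 2 allocated; the only  */
   strcpy(setting, "second suggestion");  /* pointer to block 1 is now    */
                                          /* gone, so block 1 has leaked  */
   return setting;                        /* caller can free block 2,     */
}                                         /* but block 1 is lost for good */

Call a function like that once per incoming connection and a busy server will slowly but surely eat its own memory.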

In OpenSSH 9.1, the code was updated in an attempt to avoid allocating two strings but abandoning one of them:

/* Always returns pointer to allocated memory, caller must free. */

char* compat_kex_proposal(char* suggestion)
{
   char* previousone = NULL;
   if (condition1) { return newcopyof(suggestion); }
   if (condition2) { suggestion = allocatenewstring1(); }
   if (condition3) {
      previousone = suggestion;
      suggestion  = allocatenewstring2();
      free(previousone);
   }
   if (isblank(suggestion)) { error(); }
   return suggestion;
}

This has the double-free bug, because if condition 1 and condition 2 are both false, but condition 3 is true, then the code allocates a new string to send back as its answer…

…but incorrectly frees up the string that the caller originally passed in, because the function allocatenewstring1() never gets called to update the variable suggestion.

The passed-in suggestion string is memory that belongs to the caller, and that the caller will therefore free up themselves later on, leading to the double-free danger.
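Seen from the caller’s side, the trap looks something like the hypothetical sketch below, where buggy_kex_proposal() is our own stand-in for the simplified 9.1 code in the case that condition 1 and condition 2 are false but condition 3 is true:

#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the buggy 9.1 logic, in the case where */
/* condition 1 and condition 2 are false but condition 3 is true.   */
char* buggy_kex_proposal(char* suggestion)
{
   char* previousone = suggestion;          /* remember the old block     */
   suggestion = strdup("new suggestion");   /* allocate the replacement   */
   free(previousone);                       /* frees the CALLER'S block!  */
   return suggestion;
}

int main(void)
{
   char* mine = strdup("original suggestion");   /* caller owns this      */
   char* answer = buggy_kex_proposal(mine);

   free(answer);   /* fine: the newly allocated replacement string        */
   free(mine);     /* BUG: 'mine' was already freed inside the call       */
   return 0;
}

The caller does nothing wrong by freeing its own string later on, but the function has already freed it behind the caller’s back, and that second free() is exactly the sort of fault attackers look for.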

In OpenSSH 9.2, the code has become more cautious, keeping track of all three possible memory blocks used: the original suggestion (memory owned by someone else), and two possible new strings that might be allocated on the way:

/* Always returns pointer to allocated memory, caller must free. */

char* compat_kex_proposal(char* suggestion)
{
   char* newone = NULL;
   char* newtwo = NULL;
   if (condition1) { return newcopyof(suggestion); }
   if (condition2) { newone = allocatenewstring1(); }
   if (condition3) {
      newtwo = allocatenewstring2();
      free(newone);
      newone = newtwo;
   }
   if (isblank(newone)) { error(); }
   return newone;
}

If condition 1 is true, a new copy of the passed-in string is used, so the caller can later free() their passed-in string’s memory whenever they like.

If we get past condition 1, and condition 2 is true but condition 3 is false, then the alternative suggestion created by allocatenewstring1() gets returned, and the passed-in suggestion string is left alone.

If condition 2 is false and condition 3 is true, then a new string gets generated and returned, and the passed-in suggestion string is left alone.

If both condition 2 and condition 3 are true, then two new strings get allocated along the way; the first one gets freed up because it’s not needed; the second one is returned; and the passed-in suggestion string is left alone.

You can RTxM to confirm that if you call free(newone) when newone is NULL, then “no operation is performed”, because it’s always safe to free(NULL). Nevertheless, lots of programmers still robustly guard against it with code such as if (ptr != NULL) { free(ptr); }.
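If you like belt-and-braces coding, one common defensive idiom (our own illustration, not something lifted from the OpenSSH source) is to wrap free() so that the pointer variable is nulled out afterwards, which turns a repeat free through that same variable into a harmless free(NULL):

#include <stdlib.h>

/* Free the block a pointer refers to, then null out the pointer itself, */
/* so that a later repeat "free" through the same variable becomes       */
/* free(NULL), which is guaranteed to do nothing.                        */
void free_and_null(char** pptr)
{
   free(*pptr);
   *pptr = NULL;
}

/* Usage:
      char* buf = malloc(64);
      free_and_null(&buf);
      free_and_null(&buf);    <-- second call is now harmless
*/

It’s not a cure-all, because it only protects frees that go through that one pointer variable, but it does remove an entire class of accidental repeats.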

What to do?

As the OpenSSH team suggests, exploiting this bug will be hard, not least because of the limited privileges that the sshd program has while it’s setting up the connection for use.

Nevertheless, they reported it as a security hole because that’s what it is, so make sure you’ve updated to OpenSSH 9.2.

And if you’re writing code in C, remember that no matter how experienced you get, memory management is easy to get wrong…

…so take care out there.

(Yes, Rust and its modern friends will help you to write correct code, but sometimes you will still need to use C, and even Rust can’t guarantee to stop you writing incorrect code if you program injudiciously!)

