
US passes the Quantum Computing Cybersecurity Preparedness Act – and why not?

Remember quantum computing, and the quantum computers that make it possible?

Along with superstrings, dark matter, gravitons and controlled fusion (hot or cold), quantum computing is a concept that many people have heard of, even if they know little more about any of these topics than their names.

Some of us are vaguely better informed, or think we are, because we have an idea why they’re important, can recite short but inconclusive paragraphs about their basic underlying concepts, and broadly assume that they’ll either be proved, discovered or invented in due course.

Of course, practice sometimes lags far behind theory – controlled nuclear fusion, such as you might use for generating clean(ish) electrical energy, is no more than 20 years away, as the old joke goes, and has been since the 1930s.

And so it is with quantum computing, which promises to confront cryptographers with new and faster techniques for parallel password cracking.

Indeed, quantum computing enthusiasts claim the performance improvements will be so dramatic that encryption keys that could once comfortably have held out against even the richest and most antagonistic governments in the world for decades…

…might suddenly turn out to be breakable in half an afternoon by a modest group of spirited enthusiasts at your local makerspace.

Superpositions of all answers at once

Quantum computers pretty much claim to allow certain collections of calculations – algorithms that would usually need to be computed over and over again with ever-varying inputs until a correct output turned up – to be performed in a single iteration that simultaneously “evaluates” all possible outputs internally, in parallel.

This supposedly creates what’s known as a superposition, in which the correct answer appears right away, along with lots of wrong ones.

Of course, that’s not terribly exciting on its own, given that we already know at least one of the possible answers will be correct, but not which one.

In fact, we’re not much better off than Schrödinger’s famous cat, which is happily, if apparently impossibly, both dead AND alive until someone decides to check up on it, whereupon it immediately ends up alive XOR dead.

But quantum computing enthusiasts claim that, with sufficiently careful construction, a quantum device could reliably extract the right answer from the superposition of all answers, perhaps even for calculations chunky enough to chew through cryptographic cracking puzzles that are currently considered computationally infeasible.

Computationally infeasible is a jargon term that loosely means, “You will get there in the end, but neither you, nor perhaps the earth, nor even – who knows? – the universe, will survive long enough for the answer to serve any useful purpose.”

Schrödinger’s computer

Some cryptographers, and some physicists, suspect that quantum computers of this size and computational power may not actually be possible, but – in a nice analogue of Schrödinger’s cat in that unopened box – no one can currently be certain either way.

As we wrote when we covered this topic earlier this year:

Some experts doubt that quantum computers can ever be made powerful enough to [be used against] real-world cryptographic keys.

They suggest that there’s an operational limit on quantum computers, baked into physics, that will eternally cap the maximum number of answers they can reliably calculate at the same time – and this upper bound on their parallel-processing capacity means they’ll only ever be any use for solving toy problems.

Others say, “It’s only a matter of time and money.”

Two main quantum algorithms are known that could, if reliably implemented, present a risk to some of the cryptographic standards we rely on today:

  • Grover’s quantum search algorithm. Usually, if you want to search a randomly-ordered set of answers to see if yours is on the list, you would expect to plough through the entire list, at worst, before getting a definitive answer. Grover’s algorithm, however, given a big and powerful enough quantum computer, claims to be able to complete the same feat with about the square root of the usual effort, thus doing lookups that would normally take 2^(2N) tries (think of using 2^128 operations to forge a 16-byte hash) in just 2^N tries instead (now imagine cracking that hash in 2^64 goes).
  • Shor’s quantum factorisation algorithm. Several contemporary encryption algorithms rely on the fact that multiplying two large prime numbers together can be done quickly, whereas dividing their product back into the two numbers that you started with is as good as impossible. Loosely speaking, you’re stuck with trying to divide a 2N-digit number by every possible N-digit prime number until you hit the jackpot, or find there isn’t an answer. But Shor’s algorithm, amazingly, promises to solve this problem with the logarithm of the usual effort. Thus factoring a number of 2048 binary digits should take just twice as long as factoring a 1024-bit number, not twice as long as factoring a 2047-bit number, representing a huge speedup.
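The claimed speedups in the two bullets above can be sanity-checked with plain arithmetic. The snippet below is just that arithmetic, not a quantum implementation, and the linear-in-length model of Shor’s cost is an illustrative simplification:

```python
def classical_search_tries(bits: int) -> int:
    """Worst-case classical brute force over a 2^bits search space."""
    return 2 ** bits

def grover_tries(bits: int) -> int:
    """Grover's claimed cost: the square root of the classical effort."""
    return 2 ** (bits // 2)

# Forging a 16-byte (128-bit) hash: 2^128 tries classically, 2^64 with Grover.
assert classical_search_tries(128) == 2 ** 128
assert grover_tries(128) == 2 ** 64
assert grover_tries(128) ** 2 == classical_search_tries(128)

# Shor-style scaling: cost grows with the *length* of the number, so a
# 2048-bit modulus costs roughly twice a 1024-bit one, not astronomically more.
def shor_cost(bits: int) -> int:      # illustrative linear-in-length model
    return bits

assert shor_cost(2048) == 2 * shor_cost(1024)
```

In other words, Grover halves the number of bits of effective security in a search problem, while Shor flattens factoring from exponential to polynomial effort.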

When the future collides with the present

Clearly, part of the risk here is not only that we might need new algorithms (or bigger keys, or longer hashes) in the future…

…but also that digital secrets or attestations that we create today, and expect to remain secure for years or decades, might suddenly become crackable within the useful lifetime of the passwords or hashes concerned.

That’s why the US National Institute of Standards and Technology (NIST), back in 2016, started a long-running public competition for unpatented, open-source, free-for-all-uses cryptographic algorithms that are considered “post-quantum”, meaning that they can’t usefully be accelerated by the sort of quantum computing tricks described above.

The first algorithms to be accepted as standards in Post-Quantum Cryptography (PQC) emerged in mid-2022, with four secondary candidates put in the running for possible future official acceptance.

(Sadly, one of the four was cracked by Belgian cryptographers not long after the announcement, but that just drives home the importance of permitting global, long-term, public scrutiny of the standardisation process.)

Congress on the case

Well, last week, on 2022-12-21, US President Joe Biden enacted legislation entitled HR 7535: The Quantum Computing Cybersecurity Preparedness Act.

The Act doesn’t yet mandate any new standards, or give us a fixed time frame for switching away from any algorithms we’re currently using, so it’s more of a reminder than a regulation.

Notably, the Act is a reminder that cybersecurity in general, and cryptography in particular, should never be allowed to stand still:

Congress finds the following:

(1) Cryptography is essential for the national security of the United States and the functioning of the economy of the United States.

(2) The most widespread encryption protocols today rely on computational limits of classical computers to provide cybersecurity.

(3) Quantum computers might one day have the ability to push computational boundaries, allowing us to solve problems that have been intractable thus far, such as integer factorization, which is important for encryption.

(4) The rapid progress of quantum computing suggests the potential for adversaries of the United States to steal sensitive encrypted data today using classical computers, and wait until sufficiently powerful quantum systems are available to decrypt it.

It is the sense of Congress that –

(1) a strategy for the migration of information technology of the Federal Government to post-quantum cryptography is needed; and

(2) the governmentwide and industrywide approach to post-quantum cryptography should prioritize developing applications, hardware intellectual property, and software that can be easily updated to support cryptographic agility.

What to do?

The last two words above are the ones to remember: cryptographic agility.

That means you need not only to be able to switch algorithms, change key sizes, or adjust algorithm parameters quickly…

…but also to be willing to do so, and to do so safely, possibly at short notice.
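One way to picture that sort of agility in code: keep algorithm choices in a single policy table so that switching is a one-line configuration change, not a rewrite. This is a minimal, hypothetical sketch (the policy table and function names are ours, not from any standard):

```python
import hashlib

# Hypothetical policy table: algorithm choices live in one place, so
# switching (say, SHA-256 -> SHA3-256) is a config change, not a code hunt.
POLICY = {"hash": "sha256"}

def fingerprint(data: bytes) -> str:
    """Hash data using whatever algorithm the current policy names."""
    return hashlib.new(POLICY["hash"], data).hexdigest()

old = fingerprint(b"hello")
POLICY["hash"] = "sha3_256"   # switching algorithms "at short notice"
new = fingerprint(b"hello")

# Same call site, different algorithm; both produce 256-bit digests.
assert old != new and len(old) == len(new) == 64
```

Code that hard-wires its algorithm names and parameters throughout is exactly the code that can’t be updated quickly when an algorithm falls out of favour.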

As an example of what not to do, consider the recent LastPass announcement that its customers’ backed-up password vaults had been stolen, despite the company’s initial assumption that they hadn’t.

LastPass claims to use 100,100 iterations of the HMAC-SHA256 algorithm in its PBKDF2 key derivation process (we currently recommend 200,000, and OWASP apparently recommends 310,000, but let’s accept “more than 100,000” as satisfactory, if not exemplary)…

…but that’s only for master passwords created since 2018.

It seems that the company never got round to advising users with master passwords created before then that theirs had been processed with just 5000 iterations, let alone requiring them to change their passwords and thereby to adopt the new iteration strength.

This leaves older passwords at much greater risk of exposure to attackers using contemporary cracking tools.
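To see why the iteration count matters, here’s a minimal sketch using Python’s standard-library PBKDF2 (the passphrase and salt are made up for illustration):

```python
import hashlib
import os

def derive_vault_key(password: str, salt: bytes, iterations: int) -> bytes:
    # PBKDF2 with HMAC-SHA256: the iteration count is a deliberate "work
    # factor" that an attacker must repeat for every single password guess.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

salt = os.urandom(16)
old_key = derive_vault_key("correct horse battery staple", salt, 5_000)    # pre-2018 strength
new_key = derive_vault_key("correct horse battery staple", salt, 100_100)  # current strength

# Same password, same salt, but a different iteration count produces a
# different key, so upgrading means re-deriving (and ideally re-encrypting).
assert old_key != new_key
```

At 5000 iterations, each offline guess costs an attacker roughly one twentieth of what it would at 100,100 iterations, which is why leaving old vaults on the old setting matters.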

In other words, keep yourself cryptographically nimble, even if there never is a sudden quantum computing breakthrough.

And keep your customers nimble too – don’t wait for them to find out the hard way that they could have been safe, if only you’d kept them moving in the right direction.

You probably guessed, right at the top of this article, what we’d say at the end, so we shan’t disappoint:

CYBERSECURITY IS A JOURNEY, NOT A DESTINATION.


S3 Ep115: True crime stories – A day in the life of a cybercrime fighter [Audio + Text]

A DAY IN THE LIFE OF A CYBERCRIME FIGHTER

Once more unto the breach, dear friends, once more!

Paul Ducklin talks to Peter Mackenzie, Director of Incident Response at Sophos, in a cybersecurity session that will alarm, amuse and educate you, all in equal measure.

You can listen directly on Soundcloud.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

[MUSICAL MODEM]


PAUL DUCKLIN.  Welcome to the Naked Security podcast, everybody.

This episode is taken from one of this year’s Security SOS Week sessions.

We’re talking to Peter Mackenzie, the Director of Incident Response at Sophos.

Now, he and his team… they are like a cross between the US Marine Corps and the Royal Navy Special Boat Service.

They go steaming in where angels fear to tread – into networks that are already under attack – and sort things out.

Because this episode was originally presented in video form for streaming, the audio quality isn’t great, but I think you’ll agree that the content is interesting, important and informative, all in equal measure.

[MORSE CODE]

[ROBOT VOICE: Sophos Security SOS]


DUCK.  Today’s topic is: Incident response – A day in the life of a cyberthreat responder.

Our guest today is none other than Peter Mackenzie.

And Peter is Director of Incident Response at Sophos.


PETER MACKENZIE.  Yes.


DUCK.  So, Peter… “incident response for cybersecurity.”

Tell us what that typically involves, and why (unfortunately) you often need to get called in.


PETER.  Typically, we’re brought in either just after an attack or while one is still unfolding.

We deal with a lot of ransomware, and victims need help understanding what happened.

How did the attacker get in?

How did they do what they did?

Did they steal anything?

And how do they get back to normal operations as quickly and as safely as possible?


DUCK.  And I guess the problem with many ransomware attacks is…

…although they get all the headlines for obvious reasons, that’s often the end of what could have been a long attack period, sometimes with more than one load of crooks having been in the network?


PETER.  Yes.

I describe ransomware as the “receipt” they leave at the end.


DUCK.  Oh, dear.


PETER.  And it is, really – it’s the ransom demand.


DUCK.  Yes, because you can’t help but notice it, can you?

The wallpaper has got flaming skulls on it… the ransom note.

That’s when they *want* you to realise…


PETER.  That’s them telling you they’re there.

What they wanted to hide is what they were doing in the days, weeks or months before.

Most victims of ransomware, if we ask, “When did this happen?”…

…they’ll say, “Last night. The encryption started at 1am”; they started getting alerts.

When we go in and investigate, we’ll find out that, actually, the crooks have been in the network for two weeks preparing.

It’s not automated, it’s not easy – they have to get the right credentials; they have to understand your network; they want to delete your backups; they want to steal data.

And then when *they’re* ready, that’s when they launch the ransomware – the final stage.


DUCK.  And it’s not always one lot of crooks, is it?

There will be the crooks who say, “Yes, we can get you into the network.”

There will be the crooks who go, “Oh, well, we’re interested in the data, and the screenshots, and the banking credentials, and the passwords.”

And then, when they’ve got everything they want, they might even hand it over to a third lot who go, “We’ll do the extortion.”


PETER.  Even in the simplest ransomware attacks, there are normally a few people involved.

Because you’ll have an initial access broker that may have gained access to the network… basically, someone breaks in, steals credentials, confirms they work, and then they’ll go and advertise those.

Someone else will buy those credentials…


DUCK.  That’s a dark web thing, I imagine?


PETER.  Yes.

And a couple of weeks or a couple of months later, someone will use those credentials.

They’ll come in and they’ll do their part of the attack, which could be understanding the network, stealing data, deleting backups.

And then maybe someone else will come in to actually do the ransomware deployment.

But then also you have the really unlucky victims…

We recently published an article on multiple attackers, where one ransomware group came in and they launched their attack in the morning around… I think it was around 10am.

Four hours later, a different ransomware group, completely unrelated to the first, launched theirs…


DUCK.  [LAUGHS] I shouldn’t be smiling!

So these guys… the two lots of crooks didn’t realise they were competing?


PETER.  They didn’t know they were there!

They both came in the same way, unfortunately: open Remote Desktop Protocol [RDP].

Two weeks after that, a *third* group came in while they were still trying to recover.


DUCK.  [GROANS] Ohhhhhhh…


PETER.  Which actually meant that when the first one came in, they started running their ransomware… it was BlackCat, also known as ALPHV ransomware, that ran first.

They started encrypting their files.

Two hours later, Hive ransomware came in.

But because BlackCat was still running, Hive ended up encrypting BlackCat’s already-encrypted files.

BlackCat then encrypted Hive’s files that were already encrypted twice…

…so we basically ended up with *four* levels of encryption.

And then, two weeks later, because they hadn’t recovered everything yet, LockBit ransomware came in and ended up encrypting those files.

So some of these files were actually encrypted *five times*.


DUCK.  [LAUGHS] I mustn’t laugh!

In that case, I presume it was that the first two lots of crooks got in because they happened to stumble across, or maybe buy from the same broker, the credentials.

Or they could have found it with an automated scanning tool… that bit can be automated, can’t it, where they find the hole?


PETER.  Yes.


DUCK.  And then how did the third lot get in?


PETER.  Same method!


DUCK.  Oh, not through a hole left by the first lot? [LAUGHS]


PETER.  No, same method.

Which then speaks to: This is why you need to investigate!


DUCK.  Exactly.


PETER.  You can’t just wipe machines and expect to bury your head in the sand.

The organisation brought us in after the third attack – they didn’t actually know they’d had a second attack.

They thought they had one, and then two weeks later had another.

It was us that pointed out, “Actually, four hours after the first one, you had another one you didn’t even spot.”

Unfortunately they didn’t investigate – they didn’t identify that RDP was open and that that’s how the attackers were getting in.

So they didn’t know that that was something that needed to be fixed otherwise someone else would come in…

…which is exactly what they did.


DUCK.  So when you’re brought in, obviously it’s not just, “Hey, let’s find all the malware, let’s delete it, let’s tick it off, and let’s move on.”

When you’re investigating, when you’re trying to find out, “What holes have been left behind by accident or design?”…

…how do you know when you’ve finished?

How can you be certain that you’ve found them all?


PETER.  I don’t think you can ever be certain.

In fact, I’d say anyone that says they’re 100% confident of anything in this industry… they’re probably not being quite honest.


DUCK.  +1 to that! [LAUGHS]


PETER.  You have to try and find everything you can that the attacker did, so you can understand, “Did they set any backdoors up so they can get back in?”

You have to understand what they stole, because that could obviously have relevance for compliance and reporting purposes.


DUCK.  So let’s say that you’ve had a series of attacks, or that there have been crooks in the network for days, weeks… sometimes it’s months, isn’t it?


PETER.  Years, sometimes, but yes.


DUCK.  Oh, dear!

When you’re investigating what could have happened that might leave the network less resilient in future…

…what are the things that the crooks do that help them make their attack both broader and deeper?


PETER.  I mean, one of the first things an attacker will do when they’re in a network is: they’ll want to know what access they’ve got.


DUCK.  The analogy there would be, if they’d broken into your office building, they wouldn’t just be interested in going to two or three desk drawers and seeing if people had left wallets behind.

They’d want to know which departments live where, where are the cabling cabinets, where’s the server room, where’s the finance department, where are the tax records?


PETER.  Which, in the world of cyber, means they’re going to scan your network.

They’re going to identify names of servers.

If you’re using Active Directory, they’ll want to look at your Active Directory so they can find out who’s got Domain Admin rights; who’s got the best access to get to where they want to get to.


DUCK.  If they need to create a new user, they won’t just call that user WeGotcha99?


PETER.  They might!

We’ve seen ones where they literally just created a new user, gave them Domain Admin and called the user hacker… but normally they will give a generic name.


DUCK.  So, they’ll look at your naming scheme and try and fit in with it?


PETER.  Yes, they’ll call it Administrat0r, spelled with a zero instead of an O, things like that.

For most ransomware… it’s not that advanced, because they simply don’t need to be that advanced.

They know that most companies are not looking at what’s going on on their network.

They may have security software installed that may be giving them alerts about some of the stuff the attackers are doing.

But unless someone’s actually looking, and investigating those alerts, and actually responding in real time, it doesn’t matter what the attackers do if no one’s actually stopping them.

If you’re investigating crime… let’s say you found a gun inside your house.

You can remove the gun – great.

But how did it get there?

That’s the bigger question.

Do you have software in place that’s going to alert you to suspicious behaviour?

And then when you see that, do you actually have the ability to isolate a machine, to block a file, block an IP address?


DUCK.  Presumably, the primary goal of your cybersecurity software will be to keep the crooks out indefinitely, forever…

…but on the assumption that somebody will make a mistake sooner or later, or the crooks will get in somehow, it’s still OK if that happens, *provided you catch them before they have enough time to do something bad*.


PETER.  As soon as you start getting humans involved… if they get blocked, they try something different.

If no one’s stopping them, they’re either going to get bored, or they’re going to succeed.

It’s just a matter of time.


DUCK.  What 10 or 15 years ago would have been signed off as a great success: malware file dropped on disk; detected; remediated; automatically removed; put in the log; tick off; let’s pat each other on the back…

…today, that could actually be deliberate.

The crooks could be trying something really minute, so you think you’ve beaten them, but what they’re *really* doing is trying to work out what things are likely to escape notice.


PETER.  There’s a tool called Mimikatz – some would class it as a legitimate penetration testing tool; some would just class it as malware.

It’s a tool for stealing credentials out of memory.

So, if Mimikatz is running on a machine, and someone logs onto that machine… it takes your username and password, simple as that.

It doesn’t matter if you’ve got a 100-character password – it makes no difference.


DUCK.  It just lifts it out of memory?


PETER.  Yes.

So, if your security software detects Mimikatz and removes it, a lot of people go, “Great! I’m saved! [DRAMATIC] The virus is gone!”

But the root cause of the problem you’ve got is not that that one file was detected and removed…

…it’s that someone had the ability to put it there in the first place.


DUCK.  Because it needs sysadmin powers to be able to do its work already, doesn’t it?


PETER.  Yes.

I think that the bigger priority should be: assume you are going to get attacked, or you already have been.

Make sure you’ve got processes in place to deal with that, and that you’ve segmented your network as best you can to keep important documents in one place, not accessible to everyone.

Don’t have one big flat network where anyone can access anything – that’s perfect for attackers.

You have to think in the attacker’s mindset a little bit, and protect your data.

I have personally investigated hundreds, if not thousands, of different incidents for different companies…

…and I have never met a single company that had every single machine in their environment protected.

I’ve met a lot that *say* they do, and then we prove they don’t.

We even had a user or a company that only had eight machines and they said, “They’re all protected.”

Turns out one wasn’t!

There’s a tool called Cobalt Strike, which gives them great access to machines.

They’ll deploy Cobalt Strike….


DUCK.  That’s supposed to be a licence-only penetration testing tool, isn’t it?


PETER.  Yesssss… [PAUSE]

We could have a whole other podcast on my opinions of that.

[LOUD LAUGHTER]


DUCK.  Let’s just say the crooks don’t worry about piracy so much…


PETER.  They’re using a tool, and they deploy that tool across the network, let’s say on 50 machines.

It gets detected by the anti-virus and the attacker doesn’t know what happened… it just didn’t work.

But then two machines start reporting back, because those two machines are the ones that don’t have any protection on.

Well, now the attacker is going to move to those two machines, knowing that nobody is watching them, so no one can see what’s going on.

These are the ones where there’s no anti-virus.

They can now live there for as many days, weeks, months or years as they need to, to get access to the other machines on the network.

You have to protect everything.

You have to have tools in place so you can see what’s going on.

And then you have to have people in place to actually respond to that.


DUCK.  Because the crooks are getting quite organised in this, aren’t they?

We know from some of the fallout that’s happened recently in the ransomware gang world, where some of the affiliates (they’re the people who don’t write the ransomware; they do the attacks)…

…they felt they were being short-changed by the guys at the core of the gang.


PETER.  Yes.


DUCK.  And they leaked a whole load of their playbooks, their operating manuals.

Which gives a good indication that an individual crook doesn’t have to be an expert in everything.

They don’t have to learn all this by themselves.

They can join a ransomware crew, if you like, and they’ll be given a playbook that says, “Try this. If that doesn’t work, try that. Look for this; set that; here’s how you make a backdoor”… all of those things.


PETER.  Yes, the entry bar is incredibly low now.

You can go onto… not even onto the dark web – you can Google and watch YouTube videos on most of what you need to know to start this.

You’ve got the big ransomware names at the moment, like LockBit, and Alpha, and Hive.

They have quite tight rules around who they let in.

But then you’ve got other groups like Phobos ransomware, who is pretty much…

…they work off a script, and it’s almost like a call centre of people who can just join them, follow a script, do an attack, make some money.

It’s relatively easy.

There are tutorials, there are videos, you can live chat with the ransomware groups to get advice… [LAUGHS]


DUCK.  We know from, what was it, about a year ago?…

…where the REvil ransomware crew put $1 million in Bitcoins upfront into an online forum to recruit new ransomware operators or affiliates.

And you think, “Oh, they’ll be looking for assembly programming, and low level hacking skills, and kernel driver expertise.”

No!

They were looking for things like, “Do you have experience with backup software and virtual machines?”

They want people to know how to break into a network, find where your backups are, and ruin them!


PETER.  That’s it.

As I said earlier, you’ve got the initial access brokers that they might be buying the access from…

…now you’re in, it’s your job, as a ransomware affiliate, to cause as much damage as possible so that the victim has no other choice but to pay.


DUCK.  Let’s turn this to a positive…


PETER.  OK.


DUCK.  As an incident responder who generally is getting called in when somebody realises, “Oh dear, if only we’d done it differently”…

…what are your three top tips?

The three things you can do that will make the biggest difference?


PETER.  I’d say the first one is: get around a table or on a Zoom with your colleagues, and start having these sorts of tabletop exercises.

Start asking questions of each other.

What would happen if you had a ransomware attack?

What would happen if all your backups were deleted?

What would happen if someone told you there was an attacker on your network?

Do you have the tools in place?

Do you have the experience and the people to actually respond to that?

Start asking those types of questions and see where it leads you…

…because you’ll probably quickly realise that you don’t have the experience, and don’t have the tools to respond.

And when you need them, you need to have them *ready in advance*.


DUCK.  Absolutely.

I couldn’t agree more with that.

I think a lot of people feel that to do that is “preparing to fail”.

But not doing it, which is “failing to prepare”, means that you’re really stuck.

Because, if the worst does happen, *then* it’s too late to prepare.

By definition, preparation is something you do upfront.


PETER.  You don’t read the fire safety manual while the building’s on fire around you!


DUCK.  And, particularly with a ransomware attack, there could be a lot more to it than just, “What does the IT team do?”

Because there are things like…

Who will talk to the media?

Who’ll put out official statements to customers?

Who will contact the regulator if necessary?

There’s an awful lot that you need to know.


PETER.  And secondly, as I mentioned earlier, you do need to protect everything.

Every single machine on your network.

Windows, Mac, Linux… doesn’t matter.

Have protection on it, have reporting capabilities.


DUCK.  [IRONIC] Oh, Linux is not immune from malware? [LAUGHS]


PETER.  [SERIOUS] Linux ransomware is increasing…


DUCK.  But, also, Linux servers are often used as a jumping off point, aren’t they?


PETER.  The big area for Linux at the moment is things like ESXi virtual host servers.

Most ransomware attacks nowadays are the big groups… they will go after your ESXi servers so they can actually encrypt your virtual machines at the VMDK file level.

Meaning those machines won’t boot.

Incident responders can’t even really investigate them that well, because you can’t even boot them.


DUCK.  Oh, so they encrypt the whole virtual machine, so it’s like having a fully encrypted disk?


PETER.  Yes.


DUCK.  They’ll stop the VM, scramble the file… probably remove all your snapshots and rollbacks?


PETER.  So, yes, you do need to protect everything.

Don’t just assume!

If someone says, “All our machines are protected,” take that as probably inaccurate, and ask them how they verify that.

And then thirdly, accept that security is complicated.

It’s changing constantly.

You, in your role… you’re probably not there to deal with this on a 24/7 basis.

You probably have other priorities.

So, partner with companies like Sophos, and MDR Services…


DUCK.  That’s Managed Detection and Response?


PETER.  Managed Detection and Response… people 24/7 monitoring your network, if you can’t monitor it.


DUCK.  So it’s not just incident response where it’s already, “Something bad has happened.”

It could include, “Something bad looks like it’s *about* to happen, let’s head it off”?


PETER.  These are the people that, in the middle of the night, because you don’t have the team to work on a Sunday at 2am…

…these are the people who are looking at what’s going on in your network, and reacting in real time to stop an attack.


DUCK.  They’re looking for the fact that somebody is tampering with the expensive padlock you put on the front door?


PETER.  They’re the 24/7 security guard who’s going to go and watch that padlock being tampered with, and they’re going to take their stick and… [LAUGHS]


DUCK.  And again, that’s not an admission of failure, is it?

It’s not saying, “Oh, well, if we hire someone in, it must mean we don’t know what we’re doing about security”?


PETER.  It’s an acceptance that this is a complicated industry; that having assistance will make you better prepared, better secured.

And it frees up some of your own resources to concentrate on what they need to concentrate on.


DUCK.  Peter, I think that’s an upbeat place on which to end!

So I would just like to thank everybody who has listened today, and leave you with one last thought.

And that is: until next time, stay secure!

[MORSE CODE]


Twitter data of “+400 million unique users” up for sale – what to do?

Hot on the heels of the LastPass data breach saga, which first came to light in August 2022, comes news of a Twitter breach, apparently based on a Twitter bug that first made headlines back in the same month.

According to a screenshot posted by news site Bleeping Computer, a cybercriminal has advertised:

I’m selling data of +400 million unique Twitter users that was scraped via a vulnerability, this data is completely private.

And it includes emails and phone numbers of celebrities, politicians, companies, normal users, and a lot of OG and special usernames.

OG, in case you’re not familiar with that term in the context of social media accounts, is short for original gangsta.

That’s a metaphor (it’s become mainstream, for all that it’s somewhat offensive) for any social media account or online identifier with such a short and funky name that it must have been snapped up early on, back when the service it relates to was brand new and hoi polloi hadn’t yet flocked to join in.

Having the private key for Bitcoin block 0, the so-called Genesis block (because it was created, not mined), would be perhaps the most OG thing in cyberland; owning a Twitter handle such as @jack or any short, well-known name or phrase, is not quite as cool, but certainly sought-after and potentially quite valuable.

What’s up for sale?

Unlike the LastPass breach, no password-related data, lists of websites you use or home addresses seem to be at risk this time.

Although the crooks behind this data sell-off wrote that the information “includes emails and phone numbers”, it seems likely that’s the only truly private data in the dump, given that it seems to have been acquired back in 2021, using a vulnerability that Twitter says it fixed back in January 2022.

That flaw was caused by a Twitter API (application programming interface, jargon for “an official, structured way of making remote queries to access specific data or perform specific commands”) that would allow you to look up an email address or phone number, and to get back a reply that not only indicated whether it was in use, but also, if it was, the handle of the account associated with it.

The immediately obvious risk of a blunder like this is that a stalker, armed with someone’s phone number or email address – data points that are often made public on purpose – could potentially link that individual back to a pseudo-anonymous Twitter handle, an outcome that definitely wasn’t supposed to be possible.

Although this loophole was patched in January 2022, Twitter only announced it publicly in August 2022, claiming that the initial bug report was a responsible disclosure submitted through its bug bounty system.

This means (assuming that the bounty hunters who submitted it were indeed the first to find it, and that they never told anyone else) that it wasn’t treated as a zero-day, and thus that patching it would proactively prevent the vulnerability from being exploited.

In mid-2022, however, Twitter found out otherwise:

In July 2022, [Twitter] learned through a press report that someone had potentially leveraged this and was offering to sell the information they had compiled. After reviewing a sample of the available data for sale, we confirmed that a bad actor had taken advantage of the issue before it was addressed.

A broadly exploited bug

Well, it now looks as though this bug may have been exploited more broadly than it first appeared, if indeed the current data-peddling crooks are telling the truth about having access to more than 400 million scraped Twitter handles.

As you can imagine, a vulnerability that lets criminals look up the known phone numbers of specific individuals for nefarious purposes, such as harassment or stalking, is likely also to allow attackers to look up unknown phone numbers, perhaps simply by generating extensive but likely lists based on number ranges known to be in use, whether those numbers have ever actually been issued or not.

You’d probably expect an API such as the one that was allegedly used here to include some sort of rate limiting, for example aimed at reducing the number of queries allowed from one computer in any given period of time, so that reasonable use of the API would not be hindered, but excessive and therefore probably abusive use would be curtailed.

However, there are two problems with that assumption.

Firstly, the API wasn’t supposed to reveal the information that it did in the first place.

Therefore it is reasonable to think that rate limiting, if indeed there were any, wouldn’t have worked correctly, given the attackers had already found a data access path that wasn’t being checked properly anyway.

Secondly, attackers with access to a botnet, or zombie network, of malware-infected computers could have used thousands, perhaps even millions, of other people’s innocent-looking computers, spread all over the world, to do their dirty work.

This would give them the wherewithal to harvest the data in batches, thus sidestepping any rate limiting by making a modest number of requests each from lots of different computers, instead of having a small number of computers each making an excessive number of requests.
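To see why per-source throttling is so easy to sidestep, here’s a minimal token-bucket rate limiter sketched in Python (our own illustration – the names and limits are invented, and this is not Twitter’s actual implementation):

```python
import time

class TokenBucket:
    """Minimal per-client rate limiter: allow up to `capacity` requests,
    refilled at `rate` tokens per second."""
    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One limiter per client address: a botnet of N machines gets N independent
# buckets, so the total query rate scales with the number of zombie computers,
# even though each individual computer stays politely under the limit.
limiters = {}

def allow_request(client_ip):
    bucket = limiters.setdefault(client_ip, TokenBucket(capacity=10, rate=1))
    return bucket.allow()
```

The per-address keying is exactly the weakness described above: the limiter never sees “one attacker”, only lots of apparently modest clients.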

What did the crooks get hold of?

In summary: we don’t know how many of those “+400 million” Twitter handles are:

  • Genuinely in use. We can assume there are plenty of shuttered accounts in the list, and perhaps accounts that never even existed, but were erroneously included in the cybercriminals’ unlawful survey. (When you’re using an unauthorised path into a database, you can never be quite sure how accurate your results are going to be, or how reliably you can detect that a lookup failed.)
  • Not already publicly connected with emails and phone numbers. Some Twitter users, notably those promoting their services or their business, willingly allow other people to connect their email address, phone number and Twitter handle.
  • Inactive accounts. That doesn’t eliminate the risk of connecting up those Twitter handles with emails and phone numbers, but there are likely to be a bunch of accounts in the list that won’t be of much, or even any, value to other cybercriminals for any sort of targeted phishing scam.
  • Already compromised via other sources. We regularly see huge lists of data “stolen from X” up for sale on the dark web, even when service X hasn’t had a recent breach or vulnerability, because that data had been stolen earlier on from somewhere else.

Nevertheless, the Guardian newspaper in the UK reports that a sample of the data, already leaked by the crooks as a sort of “taster”, does strongly suggest that at least part of the multi-million-record database on sale consists of valid data, hasn’t been leaked before, wasn’t supposed to be public, and almost certainly was extracted from Twitter.

Simply put, Twitter does have plenty of explaining to do, and Twitter users everywhere are likely to be asking, “What does this mean, and what should I do?”

What is it worth?

Apparently, the crooks themselves seem to have assessed the entries in their purloined database as having little individual value, which suggests that they don’t see the personal risk of having your data leaked this way as terribly high.

They’re apparently asking $200,000 for the lot for a one-off sale to a single buyer, which comes out at 1/20th of a US cent per user.

Or they’ll take $60,000 from one or more buyers (close to 7000 accounts per dollar) if no one pays the “exclusive” price.

Ironically, the crooks’ main purpose seems to be to blackmail Twitter, or at least to embarrass the company, claiming that:

Twitter and Elon Musk… your best option to avoid paying $276 million USD in GDPR breach fines… is to buy this data exclusively.

But now that the cat is out of the bag, given that the breach has been announced and publicised anyway, it’s hard to imagine how paying up at this point would make Twitter GDPR compliant.

After all, the crooks have apparently had this data for some time already, may well have acquired it from one or more third parties anyway, and have already gone out of their way to “prove” that the breach is real, and at the scale claimed.

Indeed, the message screenshot that we saw didn’t even mention deleting the data if Twitter were to pay up (forasmuch as you could trust the crooks to delete it anyway).

The poster promised merely that “I will delete this thread [on the web forum] and not sell this data again.”

What to do?

Twitter isn’t going to pay up, not least because there’s little point, given that any breached data was apparently stolen a year or more ago, so it could be (and probably is) in the hands of numerous different cyberscammers by now.

So, our immediate advice is:

  • Be aware of emails that you might not previously have thought likely to be scams. If you were under the impression that the link between your Twitter handle and your email address was not widely known, and therefore that emails that exactly identified your Twitter name were unlikely to come from untrusted sources… don’t do that any more!
  • If you use your phone number for 2FA on Twitter, be aware that you could be a target of SIM swapping. That’s where a crook who already knows your Twitter password gets a new SIM card issued with your number on it, thus getting instant access to your 2FA codes. Consider switching your Twitter account to a 2FA system that doesn’t depend on your phone number, such as using an authenticator app instead.
  • Consider ditching phone-based 2FA altogether. Breaches like this – even if the true total is well below 400 million users – are a good reminder that even if you have a private phone number that you use for 2FA, it’s surprisingly common for cybercrooks to be able to connect your phone number to specific online accounts protected by that number.
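App-based code generators of the kind suggested above typically implement the standard TOTP algorithm of RFC 6238, which needs only a shared secret and the time of day – no phone number anywhere. A minimal Python sketch:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, timestamp=None, step=30, digits=6):
    """RFC 6238 time-based one-time password: HMAC-SHA-1 over the
    30-second time counter, dynamically truncated to decimal digits."""
    counter = int(timestamp if timestamp is not None else time.time()) // step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return "{:0{}d}".format(code % 10**digits, digits)
```

With the RFC test secret `12345678901234567890` and timestamp 59, this produces the documented code `287082` – the point being that the whole scheme works offline, so there’s no SIM to swap.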

Critical “10-out-of-10” Linux kernel SMB hole – should you worry?

Just before the Christmas weekend – in fact, at about the same time that beleaguered password management service LastPass was admitting that, yes, your password vaults were stolen by criminals after all – we noticed a serious-sounding Linux kernel vulnerability that hit the news.

The alerts came from Trend Micro’s Zero Day Initiative (ZDI), probably best known for buying up zero-day security bugs via the popular Pwn2Own competitions, where bug-bounty hunting teams compete live on stage for potentially large cash prizes.

In return for sponsoring the prize money, the vendors of products ranging from operating systems and browsers to networked printers and internet routers hope to buy up brand new security flaws, so they can fix the holes responsibly. (To collect their prizes, participants have to provide a proper write-up, and agree not to share any information about the flaw until the vendor has had a fair chance to fix it.)

But ZDI doesn’t just deal in competitive bug hunting in its twice-a-year contests, so it also regularly puts out vulnerability notices for zero-days that were disclosed in more conventional ways, like this one, entitled Linux Kernel ksmbd Use-After-Free Remote Code Execution Vulnerability.

Serving Windows computers via Linux

SMB is short for server message block, and it’s the protocol that underpins Windows networking, so almost any Linux server that provides network services to Windows computers will be running software to support SMB.

As you can therefore imagine, SMB-related security bugs, especially ones that can be exploited over the network without the attacker needing to log on first, as is the case here, are potentially serious issues for most large corporate networks.

SMB support is also generally needed in home and small-business NAS (network attached storage) devices, which generally run Linux internally, and provide easy-to-use, plug-it-in-and-go file server features for small networks.

No need to learn Linux yourself, or to set up a full-blown server, or to learn how to configure Linux networking – just plug-and-play with the NAS device, which has SMB support built-in and ready to go for you.

Why the holiday timing?

In this case, the bug wasn’t deliberately disclosed on the night before the night before the night before Christmas in a not-so-ho-ho-ho bid to spoil your festive season by freaking you out.

And it wasn’t reported just before the weekend in a bid to bury bad PR by hoping you’d be vacation-minded enough either to miss the story completely or to shrug it off until the New Year.

The good news is that, as usually happens under the umbrella of responsible disclosure, the date for ZDI’s report was agreed in advance, presumably when the flaw was disclosed, thus giving the Linux kernel team sufficient time to fix the problem properly, while nevertheless not allowing them to put the issue off indefinitely.

In this case, the bug report is listed as having happened on 2022-07-26, and what ZDI refers to as the “co-ordinated public release of [the] advisory” was set for 2022-12-22, which turns out to be a gap of exactly 150 days, if you count old-school style and include the full day at each end.
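If you’d like to check that 150-day figure yourself, stdlib date arithmetic will do it:

```python
from datetime import date

reported = date(2022, 7, 26)    # bug report received, per ZDI's advisory
disclosed = date(2022, 12, 22)  # "co-ordinated public release" date

# timedelta counts whole days between the two dates; add 1 to include
# the full day at each end, old-school style.
gap = (disclosed - reported).days + 1
print(gap)  # 150
```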

So, even though this bug has had some dramatic coverage over the holiday weekend, given that it was a remote code execution (RCE) hole in the Linux kernel itself, and came with a so-called CVSS score of 10/10, considered Critical

…it was patched in the Linux source code within just two days of disclosure, and the fix was accepted and packaged into the official Linux kernel source code in time for the release of Linux 5.15.61, back on 2022-08-17, just 23 days after the report first came in.

In other words, if you’ve updated your Linux kernel any time since then, you’re already safe, no matter what kernel configuration settings you or your distro used when compiling the kernel.

This period includes 24 subsequent updates to the kernel 5.15 series, now at 5.15.85, along with any versions of kernel 6.0, kernel 6.1 and the still-in-candidate-stage kernel 6.2, all of which had their first releases after August 2022.

Probably not the SMB software you suspect

Also, although it sounds at first glance as though this bug will inevitably affect any Linux server or device supporting Windows networking, that’s not true either.

Most sysadmins, and in our experience most NAS programmers, provide Windows SMB support via a long-running and well-respected open source toolkit called Samba, where the name Samba is simply the closest pronounceable word that the original developer, open-source luminary Andrew “Tridge” Tridgell OAM, could find to represent the abbreviation SMB.

Anyone who has used Samba will know that the software runs as a regular application, in what’s known as user space – in other words, without needing its own code running inside the kernel, where even modest bugs could have dangerous repercussions.

Indeed, the main Samba program file is called smbd, where the trailing -D is a typical Unixism standing for daemon, or background process – what Windows admins would call a service.

But this bug, as you can see from the ZDI report, is in a kernel module called ksmbd, where the -D denotes a background service, the -SMB- denotes Windows networking support, and the K- means runs in kernel space, i.e. right inside the kernel itself.

At this point, you’re probably asking yourself, “Why bury the complexity of supporting SMB right into the kernel, given that we’ve already got a reliable and well-respected user-space product in the form of Samba, and given that the risks are much greater?”

Why, indeed?

As so often, there seem to be two main reasons: [A] because we can! and [B] because performance.

By pushing what are typically high-level software features down into the kernel, you can often improve performance, though you almost always pay the price of a corresponding, and possibly considerable, decrease in safety and security.

What to do?

  • Check if you have a Linux kernel based on any release on or after 5.15.61 (dated 2022-08-17). If so, this bug is fixed in the source code. No matter what kernel compilation options you (or your distro maker) choose, the bug won’t appear in the kernel build.
  • Check if your Linux kernel build even includes ksmbd. Most popular distros neither compile it in, nor build it as a module, so you can’t load it or activate it, even by mistake.
  • Check with your vendor if you are using an appliance such as a NAS box or other device that supports connections from Windows computers. Chances are that your NAS device won’t be using ksmbd, even if it still has a kernel version that is vulnerable in theory. (Note to Sophos customers: as far as we are aware, no Sophos appliances use ksmbd.)
  • If you’re using ksmbd out of choice, consider re-evaluating your risk. Make sure you measure the true increase in performance you’ve achieved, and decide whether the payoff is really worth it.

COMMANDS YOU CAN USE TO CHECK YOUR EXPOSURE

Any Linux from 5.15.61 on, or any 6.x, is already patched. To check your Linux version:

   $ uname -o -r
   6.1.1 GNU/Linux

To see if this kernel feature is compiled in, you can dump the compile-time configuration of the running kernel:

   $ zcat /proc/config.gz | grep SMB_SERVER
   # CONFIG_SMB_SERVER is not set

If this compile-time configuration setting is unset, or set to "n" for no, the feature wasn't built at all. If it says "y" for yes, then the kernel SMB server is compiled right into your kernel, so ensure you have a patched version. If it says "m" for module, then the kernel build probably includes a run-time module that can be loaded on demand.

To see if your kernel has a loadable module available:

   $ /sbin/modprobe --show ksmbd
   modprobe: FATAL: Module ksmbd not found in directory /lib/modules/6.1.1

Note that "--show" means "never actually do it, just show if loading it would work or not".

To see if your system has the ksmbd module already active:

   $ lsmod | grep ksmbd

If you see no output, the module wasn't matched in the list.

To stop the module loading inadvertently in case it ever shows up, add a file with a name such as ksmbd.conf to the directory /etc/modprobe.d (or /lib/modprobe.d on some distros) with these lines in it:

   blacklist ksmbd
   install ksmbd /bin/false

LastPass finally admits: Those crooks who got in? They did steal your password vaults, after all…

Popular password management company LastPass has been under the pump this year, following a network intrusion back in August 2022.

Details of how the attackers first got in are still scarce, with LastPass’s first official comment cautiously stating that:

[A]n unauthorized party gained access to portions of the LastPass development environment through a single compromised developer account.

A follow-up announcement about a month later was similarly inconclusive:

[T]he threat actor gained access to the Development environment using a developer’s compromised endpoint. While the method used for the initial endpoint compromise is inconclusive, the threat actor utilized their persistent access to impersonate the developer once the developer had successfully authenticated using multi-factor authentication.

There’s not an awful lot left in this paragraph if you drain out the jargon, but the key phrases seem to be “compromised endpoint” (in plain English, this probably means: malware-infected computer), and “persistent access” (meaning: the crooks could get back in later on at their leisure).

2FA doesn’t always help

Unfortunately, as you can read above, two-factor authentication (2FA) didn’t help in this particular attack.

We’re guessing that’s because LastPass, in common with most companies and online services, doesn’t literally require 2FA for every connection where authentication is needed, but only for what you might call primary authentication.

To be fair, many or most of the services you use, probably including your own employer, generally do something similar.

Typical 2FA exemptions, aimed at reaping most of its benefits without paying too high a price for inconvenience, include:

  • Doing full 2FA authentication only occasionally, such as requesting new one-time codes only every few days or weeks. Some 2FA systems may offer you a “remember me for X days” option, for example.
  • Only requiring 2FA authentication for initial login, then allowing some sort of “single sign-on” system to authenticate you automatically for a wide range of internal services. In many companies, logging on to email often also gives you access to other services such as Zoom, GitHub or other systems you use a lot.
  • Issuing “bearer access tokens” for automated software tools, based on occasional 2FA authentication by developers, testers and engineering staff. If you have an automated build-and-test script that needs to access various servers and databases at various points in the process, you don’t want the script continually interrupted to wait for you to type in yet another 2FA code.
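As a purely hypothetical illustration of that last idea (not LastPass’s, or anyone else’s, actual token format), a bearer access token is essentially a signed, expiring blob that a script presents in place of interactive credentials:

```python
import base64, hmac, hashlib, json, time

SERVER_KEY = b"server-side-secret"  # invented signing key, held only by the auth server

def issue_token(user, lifetime=3600):
    """Issued once, after a human completes 2FA; automated tools then
    reuse the token until it expires, with no further 2FA prompts."""
    payload = json.dumps({"sub": user, "exp": int(time.time()) + lifetime}).encode()
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def check_token(token):
    """Return the claims if the signature and expiry check out, else None."""
    blob, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(blob)
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None
```

The convenience is obvious – and so is the risk: anyone (or any malware) that grabs the token “bears” the access it grants, 2FA notwithstanding.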

We have seen no evidence…

In a fit of confidence that we suspect that LastPass now regrets, the company initially said, in August 2022:

We have seen no evidence that this incident involved any access to customer data or encrypted password vaults.

Of course, “we have seen no evidence” isn’t a very strong statement (not least because intransigent companies can make it come true by deliberately failing to look for evidence in the first place, or by letting someone else collect the evidence and then purposefully refusing to look at it), even though it’s often all that any company can truthfully say in the immediate aftermath of a breach.

LastPass did investigate, however, and felt able to make a definitive claim by September 2022:

Although the threat actor was able to access the Development environment, our system design and controls prevented the threat actor from accessing any customer data or encrypted password vaults.

Sadly, that claim turned out to be a little too bold.

The attack that led to an attack

LastPass did admit early on that the crooks “took portions of source code and some proprietary LastPass technical information”…

…and it now seems that some of that stolen “technical information” was enough to facilitate a follow-on attack that was disclosed in November 2022:

We have determined that an unauthorized party, using information obtained in the August 2022 incident, was able to gain access to certain elements of our customers’ information.

To be fair to LastPass, the company didn’t repeat its original claim that no password vaults had been stolen, referring merely to “customers’ information” being pilfered.

But in its previous breach notifications, the company had carefully spoken about customer data (which makes most of us think of information such as address, phone number, payment card details, and so on) and encrypted password vaults as two distinct categories.

This time, however, “customers’ information” turns out to include both customer data, in the sense above, and password databases.

Not literally on the night before Christmas, but perilously close to it, LastPass has admitted that:

The threat actor copied information from backup that contained basic customer account information and related metadata including company names, end-user names, billing addresses, email addresses, telephone numbers, and the IP addresses from which customers were accessing the LastPass service.

Loosely speaking, the crooks now know who you are, where you live, which computers on the internet are yours, and how to contact you electronically.

The admission continues:

The threat actor was also able to copy a backup of customer vault data.

So, the crooks did steal those password vaults after all.

Intriguingly, LastPass has now also admitted that what it describes as a “password vault” isn’t actually a scrambled BLOB (an amusing jargon word meaning binary large object) consisting only and entirely of encrypted, and therefore unintelligible, data.

Those “vaults” include unencrypted data, apparently including the URLs for the websites that go with each encrypted username and password.

The crooks therefore now not only know where you and your computer live, thanks to the leaked billing and IP address data mentioned above, but also have a detailed map of where you go when you’re online:

[C]ustomer vault data […] is stored in a proprietary binary format that contains both unencrypted data, such as website URLs, as well as fully-encrypted sensitive fields such as website usernames and passwords, secure notes, and form-filled data.

LastPass hasn’t given any other details about the unencrypted data that was stored in those “vault” files, but the words “such as website URLs” certainly imply that URLs aren’t the only information that the crooks acquired.

The good news

The good news, LastPass continues to insist, is that the security of your backed-up passwords in your vault file should be no different from the security of any other cloud backup that you encrypted on your own computer before you uploaded it.

According to LastPass, the secret data it backs up for you never exists in unencrypted form on LastPass’s own servers, and LastPass never stores or sees your master password.

Therefore, says LastPass, your backed-up password data is always uploaded, stored, accessed and downloaded in encrypted form, so that the crooks still need to crack your master password, even though they now have your scrambled password data.

As far as we can tell, passwords added into LastPass in recent years use a salt-hash-and-stretch storage system that’s close to our own recommendations, using the PBKDF2 algorithm with random salts, SHA-256 as the internal hashing system, and 100,100 iterations.
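In Python’s standard library, that scheme looks like this (our own sketch of the parameters described above, not LastPass’s actual code):

```python
import hashlib, os

def derive_key(master_password: str, salt: bytes = None,
               iterations: int = 100_100):
    """Salt-hash-and-stretch: PBKDF2 with HMAC-SHA-256, a random 16-byte
    salt, and 100,100 iterations, as described above."""
    salt = salt if salt is not None else os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                              salt, iterations)
    return salt, key
```

The random salt means identical master passwords produce different stored keys, and the 100,100 iterations multiply the cost of every single guess an attacker makes against a stolen vault.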



LastPass didn’t, or couldn’t, say, in its November 2022 update, how long it took for the second wave of crooks to get into its cloud servers following the first attack on its development system in August 2022.

But even if we assume that the second attack followed immediately but wasn’t noticed until later, the criminals have had at most four months to try to crack the master passwords of anyone’s stolen vault.

It’s therefore reasonable to infer that only users who had deliberately chosen easy-to-guess or early-to-crack passwords are at risk, and that anyone who has taken the trouble to change their passwords since the breach announcement has almost certainly kept ahead of the crooks.

Don’t forget that length alone is not enough to ensure a decent password. In fact, anecdotal evidence suggests that 123456, 12345678 and 123456789 are all more commonly used these days than 1234, probably because of length restrictions imposed by today’s login screens.

And remember that password cracking tools don’t simply start at AAAA and proceed like an alphanumeric odometer to ZZZZ...ZZZZ. They try to rank passwords on how likely they are to be chosen, so you should assume they will “guess” long-but-human-friendly passwords such as BlueJays28RedSox5! (18 characters) long before they get to MAdv3aUQlHxL (12 characters), or even ISM/RMXR3 (9 characters).

What to do?

Back in August 2022, we said this: “If you want to change some or all of your passwords, we’re not going to talk you out of it. [… But] we don’t think you need to change your passwords. (For what it’s worth, neither does LastPass.)”

That was based on LastPass’s assertions not only that backed-up password vaults were encrypted with passwords known only to you, but also that those password vaults weren’t accessed anyway.

Given the change in LastPass’s story based on what it has discovered since then, we now suggest that you do change your passwords if you reasonably can.

Note that you need to change the passwords that are stored inside your vault, as well as the master password for the vault itself.

That’s so that even if the crooks do crack your old master password in the future, the stash of password data they will uncover will be stale and therefore useless – like a hidden pirate’s chest full of banknotes that are no longer legal tender.

While you’re about it, why not take the opportunity to improve any weak or re-used passwords in your list at the same time, given that you’re changing them anyway?

One more thing…

Oh, and one more thing: an appeal to X-Ops teams, IT staff, sysadmins and technical writers everywhere.

When you want to say you’ve changed your passwords, or to recommend others to change theirs, can you stop using the misleading word rotate, and simply use the much clearer word change instead?

Don’t talk about “rotating credentials” or “password rotation”, because the word rotate, especially in computer science, implies a structured process that ultimately involves repetition.

For example, in a committee with a rotating chairperson, everyone gets a go at leading meetings, in a predetermined cycle, e.g. Alice, Bob, Cracker, Dongle, Mallory, Susan… and then Alice once again.

And in machine code, the ROTATE instruction explicitly circulates the bits in a register.

If you ROL or ROR (that denotes go leftwards or go rightwards in Intel notation) sufficiently many times, those bits will return to their original value.
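You can watch those bits come back round without firing up an assembler – here’s a 64-bit rotate-left sketched in Python:

```python
def rol64(value, count):
    """Rotate a 64-bit value left by `count` bits, like the x86-64
    ROL instruction: bits shifted off the top wrap round to the bottom."""
    count %= 64
    return ((value << count) | (value >> (64 - count))) & 0xFFFFFFFFFFFFFFFF

x = 0xC001D00DC0DEF11E
print(f"{rol64(x,  4):016X}")   # 001D00DC0DEF11EC - top nibble wraps to the bottom
print(f"{rol64(x, 64):016X}")   # C001D00DC0DEF11E - 64 rotations, back where we started
```

Sixteen rotations of 4 bits (or 64 rotations of 1 bit) restore the original value exactly – which is precisely why “rotate” is the wrong word for changing a password.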

That is not at all what you want when you set out to change your passwords!


WHAT IF MY PASSWORD MANAGER GETS HACKED?

Whether you’re a LastPass user or not, here’s a video we made with some tips on how to reduce the risk of disaster if either you or your password manager were to get hacked. (Click on the cog while playing to turn on subtitles or to speed up playback).

[embedded content]


WHY ‘ROTATE’ IS NOT A GOOD SYNONYM FOR ‘CHANGE’

Here’s the ROTATE (more precisely, the ROL) instruction in real life on 64-bit Windows.

If you assemble and run the code below (we used the handy, minimalistic, free assembler and linker from GoTools)…

…then you should get the output below:

Rotated by 0 bits = C001D00DC0DEF11E
Rotated by 4 bits = 001D00DC0DEF11EC
Rotated by 8 bits = 01D00DC0DEF11EC0
Rotated by 12 bits = 1D00DC0DEF11EC00
Rotated by 16 bits = D00DC0DEF11EC001
Rotated by 20 bits = 00DC0DEF11EC001D
Rotated by 24 bits = 0DC0DEF11EC001D0
Rotated by 28 bits = DC0DEF11EC001D00
Rotated by 32 bits = C0DEF11EC001D00D
Rotated by 36 bits = 0DEF11EC001D00DC
Rotated by 40 bits = DEF11EC001D00DC0
Rotated by 44 bits = EF11EC001D00DC0D
Rotated by 48 bits = F11EC001D00DC0DE
Rotated by 52 bits = 11EC001D00DC0DEF
Rotated by 56 bits = 1EC001D00DC0DEF1
Rotated by 60 bits = EC001D00DC0DEF11
Rotated by 64 bits = C001D00DC0DEF11E

You can change the rotation direction and amount by changing ROL to ROR, and adjusting the number 4 on that line and the following one.

