S3 Ep114: Preventing cyberthreats – stop them before they stop you! [Audio + Text]

STOP THE CROOKS BEFORE THEY STOP YOU!

Paul Ducklin talks to world-renowned cybersecurity expert Fraser Howard, Director of Research at SophosLabs, in this fascinating episode, recorded during our recent Security SOS Week 2022.

When it comes to fighting cybercrime, Fraser truly is a “specialist in everything”, and he also has the knack of explaining this tricky and treacherous subject in plain English.

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

[MORSE CODE]

[ROBOT VOICE: Sophos Security SOS]


PAUL DUCKLIN.  Hello, everybody.

Welcome to the Sophos Security SOS week.

Today’s topic is: Preventing cyber threats – stop them before they stop you!

And our guest today is none other than Mr. Fraser Howard, Director of Research at SophosLabs.

Now, those of you who have listened to SOS Week before will know that I like to describe Fraser as a “specialist in everything”, because his knowledge is not just broad, it is also incredibly deep.

He ticks every cell in the spreadsheet, you could say.

So, Fraser, welcome back to the SOS Week.

I wanted to start by focusing on something that goes by the name of LOLBIN, which I believe is short for “living-off-the-land binary”, which is jargon for software that’s there already that the crooks love to use.


FRASER HOWARD.  Exactly that.


DUCK.  And the big problem at the moment seems to be that the most likely LOLBIN, or the most likely pre-installed program that the crooks will dine out on, for want of a better phrase, is nothing other than PowerShell, which is built into Windows.

It’s available on every version of Windows as soon as you install it.

And it’s the medium of management these days for Windows itself.

So how do you live without it?


FRASER.  Exactly – just like you described, from the attackers’ perspective, LOLBINs are brilliant.

They either bring their own knife to the fight, and their knife might look very different to everything else that’s on the system…

…or they use a knife that just happens to be present on the system in the first place.

And that is advantageous to the attacker, for obvious reasons.

Any security software won’t see some brand new, shiny, unknown application suddenly being run and used in part of the attack.

But tools like PowerShell are already there – that’s when the games begin in terms of trying to work out, “Is it something good, or is it something bad?”

I wish there was a one-line answer to how we detect malicious PowerShell versus benign, but actually it’s quite a complex situation.

What exactly is the PowerShell process doing itself?

On one end of the spectrum, you could use technology like, for example, application control.

And as an admin, you could choose: “PowerShell, you should not be allowed to run in my environment.”

That’s kind of a panacea, if you like, and it would stop PowerShell being abused, but it would also break lots of legitimate activity, including the core management of most Windows machines today.


DUCK.  OK, so application control is Sophos’s name for the ability to detect, and optionally to block, software that is not malware, but that a well-informed administrator might not want to support in their environment?


FRASER.  Exactly.

And it’s not just about admins and their choice of “Which application should my users be allowed to use?”

It’s about basics.

If you think about security, what’s one of the things that we’ve been telling people for the last 5 or 10 years?

“Patch!”

If you’re an administrator and you’re allowing anybody to use whatever application they want for their browser, that’s maybe 5 to 10 different browsers that you have to patch.

Actually, for admins, technologies like application control let them narrow that threat surface.


DUCK.  But PowerShell… some people say, “Oh, just block PowerShell. Block all .PS1 files. Job done.”


FRASER.  It’s not quite as simple as that!


DUCK.  Could a sysadmin manage without PowerShell in a modern Windows network?


FRASER.  [PAUSE] No.

[LAUGHTER]

I mean, there are policy options that they could choose to only allow certain signed scripts, for example, to be run.

But there’s a whole variety of tips and techniques that the attackers know that try to bypass those mechanisms as well.

Some of the older scripting engines… the best example is Windows Scripting Host – most people don’t know it’s there.

It’s not the one-stop shop for admin that PowerShell is, but WSCRIPT and CSCRIPT

…those binaries, again, are on every single Windows box.

They are a lot more feasible to outright block, and they get abused, again by malware.


DUCK.  So the Windows Scripting Host includes things like JavaScript (not running in your browser, outside your browser), and good old Visual Basic Script?


FRASER.  There’s a whole host of them.


DUCK.  Now, Visual Basic script is discontinued by Microsoft, isn’t it?

But it’s still supported and still very widely used?


FRASER.  It’s very popular with the Bad Guys, yes.

And it’s not just scripting engines.

I can’t remember exactly how many binaries are on some of the main LOLBIN lists that are out there.

With the right combination of switches, all of a sudden, a binary that you might use to manage, for example, certificates locally…

…actually can be used to download any content from a remote server, and save it to disk locally.


DUCK.  Is that CERTUTIL.EXE?


FRASER.  Yes, CERTUTIL, for example.


DUCK.  Because that can also be used to do things like calculate file hashes.


FRASER.  It could be used to download, for example, base64-encoded executable content, save it locally, and decode it.

And then that content could be run – as a way of potentially getting through your web gateways, for example.
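(For context: the CERTUTIL trick works because `certutil -decode` will base64-decode any file wrapped in certificate-style header lines, whether or not the result is actually a certificate. Here’s a rough, runnable Python equivalent of just that decoding step, for illustration only:)

```python
import base64

def certutil_style_decode(text: str) -> bytes:
    """Mimic what 'certutil -decode' does: strip the PEM-style
    header and footer lines and base64-decode whatever sits between
    them.  certutil doesn't care whether the result is a certificate
    or an executable -- which is what makes it useful as a LOLBIN."""
    lines = [ln for ln in text.splitlines()
             if ln and not ln.startswith("-----")]
    return base64.b64decode("".join(lines))

if __name__ == "__main__":
    # A payload that is very much not a certificate...
    payload = b"MZ\x90\x00 not really a certificate"
    wrapped = ("-----BEGIN CERTIFICATE-----\n"
               + base64.b64encode(payload).decode("ascii")
               + "\n-----END CERTIFICATE-----\n")
    assert certutil_style_decode(wrapped) == payload
    print("decoded", len(payload), "bytes")
```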


DUCK.  And that gets even worse with PowerShell, doesn’t it?

Because you can take a base64-encoded string and feed that into PowerShell as the input script, and it will quietly decode it for you.

And you can even put in a command line option, can you not, to say, “Hey, if the user said ‘don’t allow scripts to execute from the command line’, ignore it – I wish to override that”?


FRASER.  You mentioned .PS1 files.

That’s a physical script file that might exist on disk.

Actually, PowerShell is pretty adept at doing things filelessly, so just the command line itself can contain the entirety of the PowerShell command.
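(A quick aside for defenders: the command-line trick Duck mentions uses PowerShell’s `-EncodedCommand` switch, which takes base64 over UTF-16LE text, not plain base64. This minimal Python sketch – the function names are our own – shows how such a blob can be decoded when you come across one in your logs:)

```python
import base64

def decode_powershell_encodedcommand(b64: str) -> str:
    """Decode the argument of PowerShell's -EncodedCommand switch.

    PowerShell expects base64 over UTF-16LE text, which is why a
    naive base64 decode of one of these blobs appears to have NUL
    bytes interleaved between the characters."""
    return base64.b64decode(b64).decode("utf-16-le")

def encode_powershell_encodedcommand(script: str) -> str:
    """Build the base64 blob PowerShell would accept (for testing)."""
    return base64.b64encode(script.encode("utf-16-le")).decode("ascii")

if __name__ == "__main__":
    blob = encode_powershell_encodedcommand("Write-Output 'hello'")
    print(decode_powershell_encodedcommand(blob))
```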


DUCK.  Now, my understanding is most so-called “fileless malware” does involve files, probably quite a lot of files in its operation…

…but there will be a key point at which something you might detect *only exists in memory*.

So, security software that is only able to monitor disk access will miss out.

How do you deal with that kind of situation, where the crooks have got all this semi-suspicious stuff, and then they’ve disguised the really dangerous bit with this fileless, memory-only trick?

How do you deal with that?


FRASER.  One of the ways we deal with that, particularly in regards to PowerShell, is Microsoft provides an interface which gives us visibility into the behaviour of PowerShell.

So AMSI is an interface which vendors, security vendors, can use to get a peep into malware.


DUCK.  AMSI is… the Antimalware Scan Interface?


FRASER.  Exactly.

It gives us a window into the behaviour of PowerShell at any point in time.

So, as it might be doing things filelessly… any traditional interception points which are looking for files on disk, they won’t be coming into play.

But the behaviour of PowerShell itself will generate activity, if you like, within the AMSI interface, which gives us the ability to recognise and block certain types of malicious PowerShell activity.

The other thing is that, although “fileless” is seen as a bit of a panacea for the bad guys…

…actually, one of the things that most attackers are after at some point is what we call persistence.

OK, they’ve got some code running on the machine… but what happens if that machine is restarted?

And so their fileless malware typically will seek to add some level of persistence.

So, most of the fileless attacks that we’ve seen actually have interaction, typically with the Windows Registry – they use the registry as a way of achieving persistence.

Typically, they put some sort of BLOB [binary large object] of data in the registry, and modify some registry keys such that when that machine is restarted, that BLOB is decoded and malicious behaviour carries on again.

Today’s products are all about a whole range of technologies, from simple, right through to quite extraordinarily complex.
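(The registry-persistence pattern Fraser outlines can be sketched abstractly. In this illustration the registry is modelled as a plain Python dict so the code runs anywhere, and the key paths and value names are invented for the example:)

```python
import base64

def plant_persistence(reg: dict, payload: bytes) -> None:
    """Model the pattern described above: stash a base64 BLOB in one
    registry value, and point an autorun entry at a command that will
    decode and run it after a reboot.  Paths are illustrative only."""
    reg[r"HKCU\Software\Example\Blob"] = base64.b64encode(payload).decode("ascii")
    # Placeholder command: on a real machine this would decode the
    # BLOB and execute it; here it is just a string in a dict.
    reg[r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run\Updater"] = \
        "powershell -EncodedCommand <decode-and-run-blob>"

def recover_blob(reg: dict) -> bytes:
    """What the autorun command effectively does at reboot time."""
    return base64.b64decode(reg[r"HKCU\Software\Example\Blob"])

if __name__ == "__main__":
    registry = {}                      # stand-in for the Windows registry
    plant_persistence(registry, b"attacker payload bytes")
    assert recover_blob(registry) == b"attacker payload bytes"
    print("persistence round-trip OK")
```

The point of the sketch is simply that no file ever needs to touch disk: the payload survives a reboot as registry data, which is why behavioural detection matters alongside file scanning.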


DUCK.  That also helps to explain why people take files that are kind-of the precursors of malware, but not overtly malicious themselves, upload them to an online service like, say, Virus Total…

…and go, “Hey, nobody detects this. All security products are useless.”

But it doesn’t mean that file can spring into life and start doing bad stuff without getting stopped…


FRASER.  That’s a very good point.

I think it’s something the security industry has tried… but the fact that we still talk about it – we’ve probably failed to get this point across:

What is protection?

What do we actually mean?

What does protecting someone against a threat typically mean?

Most people tend to think of it like this… OK, they have a threat; they want a file that is “the threat”; and they want to see if that file gets detected.

But that particular attack… let’s suppose it’s a bot.

There might be 10,000 of those files *every single day*, as the bad guys turn their handle and churn out lots of different replicas that are essentially all the same basic thing.

And so the fact that 1, or 10, or 100 of those files gets detected…

…it doesn’t really tell you very much about how well a product might protect against that threat.


DUCK.  “Bot” means software robot?

Essentially, that’s something that sits on your computer regularly, calling home or polling some random server?


FRASER.  Exactly.


DUCK.  That server may change from day to day… and the bot will frequently download a list of instructions, such as “Here’s a list of email addresses to spam.”

Next, it could be, “Here is a list of file extensions I want you to scramble”, or it could be “Turn on the keylogger”?


FRASER.  Exactly.


DUCK.  Or “Take a screenshot right now, they’re in the banking app”.

It’s essentially an active backdoor…


FRASER.  It *is* a backdoor, yes.

And we spoke about backdoors 20 years ago… I remember doing customer presentations 20 years ago, talking about backdoors.


DUCK.  “Back Orifice”, if you remember…


FRASER.  Yes, yes!

We were trying to convince customers that, actually, a lot of the backdoors out there were more important than the high-profile malware of the day.

What you don’t want to get infected with are the backdoors, which allow some miscreant somewhere the ability to control your machine and do bad stuff, such as have a look through your file system, or modify data on your system.

That’s a far more frightening threat than, for example, a self-replicating worm that just spreads from computer to computer.

That might get the press, and it might cause problems in and of itself…

…but, actually, somebody having access to your system is arguably a much bigger threat indeed.


DUCK.  And thinking back to Back Orifice in… what was it 1999? 2000?

It famously listened on port 31337, didn’t it?


FRASER.  You’ve got a good memory [LAUGHS]… yes, “elite”!


DUCK.  And as soon as people started getting onto DSL connections at home, and having a home router, Back Orifice was useless because inbound connections didn’t work.

And so people thought, “Oh, well, backdoors rely on inbound network connections – I’m protected by my ISP by default, so I don’t have to worry about it.”

But today’s zombies, today’s bots – they call home using some kind of encrypted or secretive channel, and they *download* the instructions…


FRASER.  And because it’s on HTTPS, they basically hide that network activity amongst the million-and-one other web packets that go out every minute on most home connections.


DUCK.  So that’s another reason why you want defence-in-depth or layered protection?


FRASER.  Yes.


DUCK.  Obviously, new files – you want to examine them; you don’t want to miss malware that you could have detected.

But the file could be innocent at the moment, and it could turn out to be rogue after it’s loaded; after it’s manipulated itself in memory; after it’s called out and downloaded stuff…


FRASER.  And so, to get back to the original point: how we measure security products today is more complex than it ever has been.


DUCK.  Because some people still have the idea that, well, if you really want to test a product, you just get a giant bucket full of malware, all in files…


FRASER.  Commonly called “a zoo”.


DUCK.  …and you put that on a server in isolation somewhere.

Then you scan it with a static scanner, and you find out how many it detects, and that tells you how the product behaves.

The “Virus Total” approach.

But that: [A] will tend to underestimate good products, and [B] might overestimate bad products.


FRASER.  Or products that specialise in detecting files only, for the purpose of primarily looking good in those sort of zoo-based tests.

That doesn’t translate to a product in the real world that will actually provide good levels of protection!

In reality, we block files… of course we do – the file is still a very important currency, if you like, in terms of protection.

But there’s lots of other things, for example like the AMSI interface that lets us block malicious PowerShell activity, and a program’s behaviour itself.

So, within our product, the behavioural engine looks at the behaviour of processes, network traffic, registry activity…

…and that combined picture lets us spot potentially malicious behaviour for the purpose of blocking not necessarily a specific family, or even a particular kind of threat, but just *malicious activity*.

If there are certain types of behaviour that we can determine are just outright malicious, we will often try and block that.

We can block a certain type of malicious behaviour today, and then a threat family that has not even yet been written – in three months time, it might use that same behaviour, and we will proactively detect it.

So that’s the Holy Grail of what we do: proactive protection.

The ability for us to write something today that in the future will successfully block malicious behaviour.


DUCK.  I suppose a good example of that, to go back to what we mentioned before, is CERTUTIL.EXE – that certificate validation utility.

You might be using that in your own scripts, in your own sysadministration tools, yet there are some behaviours that you would not expect, although that program can be made to do those things.

They would stand out.


FRASER.  They would stand out, exactly.


DUCK.  So you can’t say, “The program is bad”, but at some point in its behaviour you can go, “Aha, now it’s gone too far!”


FRASER.  And that touches on another interesting aspect of today’s landscape.

Historically, EVIL.EXE runs; we might detect the file; we might detect some malicious behaviour; we clean it from your system.

You spoke about LOLBINs… obviously, when we detect PowerShell doing something malicious, we don’t remove POWERSHELL.EXE from that system.


DUCK.  “Ooh, I found Windows doing something bad – wipe the whole system!”

[LAUGHTER]


FRASER.  We basically block that process; we stop that process doing what it was about to do; and we terminate it.

But PowerShell still exists on the physical system.

Actually, today’s attackers are very different from yesterday’s attackers as well.

Today’s attackers are all about having a goal; having a purpose.

The old model was more spray-and-pray, if you like.

If somebody blocks the attack… bad luck, they give up – there’s no human presence there.

If the attack works, data is stolen, a machine becomes compromised, whatever it happens to be, but if the attack was blocked, nothing else happens on the system.

In today’s attacks, there actually is much more of a human element.

So, typically, in a lot of attacks we see today – this is typified by lots of the ransomware attacks, where the crooks are specifically trying to target certain organisations with their ransomware creations…

…when something is blocked, they try again, and they keep on retrying.

As we’re blocking stuff, and blocking different types of malicious behaviour, there’s something behind the scenes; some *person* behind the scenes; some threat group behind the scenes, retrying.


DUCK.  So 10 or 15 years ago, it was, “Oh, we found this brand-new, previously unknown Word malware. We’ve deleted the file and cleaned it up, and we wrote it in the log”.

And everyone goes into the meeting, and ticks it off, and pats each other on the back, “Great! Job done! Ready for next month.”


FRASER.  Now, it’s very different.


DUCK.  Today, *that wasn’t the attack*.


FRASER.  No!


DUCK.  That was just a precusor, an “I wonder what brand of smoke detectors they use?” kind of test.


FRASER.  Exactly.


DUCK.  And they’re not planning on using that malware.

They’re just trying to work out exactly what protection you’ve got.

What’s turned on; which directories are included; which directories are excluded from your scanning; what ambient settings have you got?


FRASER.  And what we talk about today is active adversaries.

Active adversaries… they get lots of press.

That’s the concept of the whole MITRE ATT&CK framework – that’s essentially a bible, a dictionary, if you like, of combinations of tactics.

The tactics are the verticals; the horizontals are the techniques.

I think there are 14 tactics but I don’t know how many techniques… hundreds?


DUCK.  It can be a bit dizzying, that MITRE grid!


FRASER.  It’s essentially a dictionary of the different types of things, the different types of technique, that could be used on a system for good or bad, essentially.

But it’s essentially aligned to attackers and active adversaries.

If you like, it’s a taxonomy of what an active adversary might do when on the system.


DUCK.  Right, because in the old days (you and I will remember this, because we both spent time writing comprehensive malware descriptions, the kind of things that were necessary 15 or 20 years ago – you were talking about EVIL.EXE)…

…because most threats back then were viruses, in other words they spread themselves and they were self-contained.

Once we had it…


FRASER.  …you could document, A-to-Z, exactly what it did on the system.


DUCK.  So a lot of malware back in those days, if you look at how they hid themselves; how they went into memory; polymorphism; all that stuff – a lot of them were a lot more complicated to analyse that stuff today.

But once you knew how it worked, you knew what every generation would possibly look like, and you could write a complete description.


FRASER.  Yes.


DUCK.  Now, you just can’t do that.

“Well, this malware downloads some other malware.”

What malware?

“I don’t know.”


FRASER.  For example, consider a simple loader: it runs; it periodically connects out.

The attacker has the ability to fire in some sort of encoded BLOB – for example, let’s suppose it’s a DLL, a dynamic link library, a module… essentially, some executable code.

So, “What does that threat do?”

Well, it depends exactly and entirely on what the attacker sends down the wire.


DUCK.  And that could change day by day.

It could change by source IP: “Are you in Germany? Are you in Sweden? Are you in Britain?”


FRASER.  Oh, yes we see that quite often.


DUCK.  It could also say, “Hey, you already connected, so we’ll feed you NOTEPAD or some innocent file next time.”


FRASER.  Yes.

The attackers typically will have techniques they use to try and spot when it’s us [i.e. SophosLabs] trying to run their creation.

So they don’t feed us what might be the ultimate payload.

They don’t want us to see the payload – they only want victims to see that payload.

Sometimes things just exit quietly; sometimes they just run CALC, or NOTEPAD, or something obviously silly; sometimes we might get a rude message popping up.

But typically they’ll try and keep back the ultimate payload, and reserve that for their victims.


DUCK.  And that also means…

…I glibly used the word “polymorphism” earlier; that was very common in viruses back in the day, where every time the virus copied itself to a new file it would basically permute its code, often in a very complicated way, even rewriting its own algorithm.

But you could get the engine that did the scrambling.


FRASER.  Yes.


DUCK.  Now, the crooks keep that to themselves.


FRASER.  That’s on a server somewhere else.


DUCK.  And they’re turning the handle in the background.


FRASER.  Yes.


DUCK.  And also you mentioned loaders – people may have heard of things like BuerLoader, BazarLoader, they’re sort of well-known “brand names”…

..in some cases, there are gangs of crooks, and that’s all they do.

They don’t write the malware that comes next.

They just say, “What would you like us to load? Give us the URL and we’ll inject it for you.”


FRASER.  The original bot operators from 15 or 20 years ago – how did they make money?

They compromised networks of machines – that’s essentially what a botnet is, lots of machines under their command – and then they could basically rent out that “network”.

It could be for distributed denial of service – get all of these infected machines to hit one web server for example, and take out that web server.

It could be quite commonly for spam, as you’ve already mentioned.

And so the natural evolution of that, in some sense, is today’s loader.

If somebody has a system infected with a loader, and that loader is calling home, you essentially have a bot.

You have the ability to run stuff on that machine…

…so, just like you say, those cybercriminals don’t need to be concerned with what the ultimate payload is.

Is it ransomware?

Is it data theft?

They have a vehicle… and ransomware is almost the final payout.

“We’ve done everything we wanted to do.” (Or we failed in everything else we were hoping to do.)

“Let’s just try ransomware…”
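(Structurally, the loader/bot pattern Fraser describes boils down to a poll-and-dispatch loop. This deliberately harmless Python sketch – with a stubbed-out “command server” and made-up command names – shows the shape of it, and why the loader operator never needs to know what the final payload will be:)

```python
from typing import Callable, Dict, List

def fetch_instructions() -> List[dict]:
    """Stub standing in for the attacker's command server.  In a real
    attack this would be an HTTPS request hidden among ordinary web
    traffic; here it just returns a canned instruction list."""
    return [
        {"cmd": "spam", "targets": ["someone@example.com"]},
        {"cmd": "keylog", "enabled": True},
    ]

def run_bot_once(handlers: Dict[str, Callable[[dict], str]]) -> List[str]:
    """Poll once and dispatch each instruction to a handler.  The loop
    itself has no idea what the ultimate payload is -- that is decided
    entirely by whatever the server sends down the wire."""
    results = []
    for job in fetch_instructions():
        handler = handlers.get(job["cmd"])
        if handler is not None:
            results.append(handler(job))
    return results

if __name__ == "__main__":
    log = run_bot_once({
        "spam": lambda job: "would spam %d address(es)" % len(job["targets"]),
        "keylog": lambda job: "would toggle the keylogger",
    })
    print(log)
```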


DUCK.  “We’ve logged all the passwords now, there are no more to get.” [LAUGHS]


FRASER.  There’s nowhere else to go!


DUCK.  “We’ve stolen all the data.”


FRASER.  Exactly… the final cash-out is ransomware!

At that point, the user is aware, and the administrators are aware, there’s data loss.

So, today’s loader is almost an extension of, an evolution of, yesterday’s bot.


DUCK.  Fraser, I’m conscious of time…

So, given that you’ve painted a picture that clearly requires full-time work, full-time understanding – you’re an expert researcher, you’ve been doing this for years.

Not everybody can give up their day job in IT or sysadministration to have *another* day job to be like you in the organisation.

If you had to give three simple tips for what you should do (or what you should not do) today to deal with what is a more complicated, more fragmented way of attacking from the crooks – one that gives us many more planes on which we need to defend…

… what would those three things be?


FRASER.  That’s a tough question.

I think the first one has to be: having awareness and visibility into your organisation.

It sounds simple, but we quite often see attacks where the starting point of an attack was an unprotected box.

So, you have an organisation…

…they have a wonderful IT policy; they have products deployed across that network, properly configured; they might have a team of people that are watching for all the little sensors, and all the data coming back from these products.

But they have a domain controller that was unprotected, and the bad guys managed to get onto that.

And then, within the whole MITRE ATT&CK framework, there’s one tactic called lateral movement…

…once the attackers are on a box, they will continue to try to move laterally from there across the organisation.

And that initial kind of foothold gives them a point from which they can do that.

So, visibility is the first point.


DUCK.  You also have to know what you don’t know!


FRASER.  Yes – having visibility into all the devices on your network.

Number two is: configuration.

This is a bit of a thorny one, because no one likes to talk about policies and configuration – it’s frankly quite dull.


DUCK.  It’s kind of important, though!


FRASER.  Absolutely crucial.


DUCK.  “If you can’t measure it, you can’t manage it,” as the old saying goes.


FRASER.  I think my one recommendation for that would be: if at all possible, use the recommended defaults.

As soon as you deviate away from recommended defaults, you’re typically either turning stuff off (bad!), or you’re excluding certain things.


DUCK.  Yes.


FRASER.  For example, excluding a particular folder.

Now, that might be perfectly acceptable – you might have some custom application in it, some custom database application where you say, “I don’t want to scan files within this particular folder.”

It’s not quite so good if you’re excluding, for example, the Windows folder!


DUCK.  “Exclude C:\*.* and all subdirectories.” [LAUGHS]


FRASER.  It is.


DUCK.  You add one, you add another, and then you don’t go and review it…

…you end up where you basically have all the doors and all the windows propped open.


FRASER.  It’s a bit like a firewall.

You block everything; you poke a few holes: fine.

You keep on poking holes for the next three years, and before you know where you are…

…you have Swiss cheese as your firewall.

[LAUGHTER]

It’s not going to work!

So, configuration is really important, and, if at all possible, stick to the defaults.


DUCK.  Yes.


FRASER.  Stick to defaults, because… those recommended defaults – they’re recommended for a reason!

Within our own products, for example, when you deviate from defaults, quite often you’ll get a red bar warning that you’re basically disabling protection.


DUCK.  If you’re going to go off-piste, make sure you really meant to!


FRASER.  Make sure you have good visibility.

And I guess the third point, then, is: acknowledge the skill set required.


DUCK.  Don’t be afraid to call for help?


FRASER.  Yes: Don’t be afraid to call for help!

Security is complex.

We like to think it’s simple: “What three things can we do? What simple things can we do?”

Actually, the reality is that today’s security is very complicated.

Products might try to package that up in a fairly simple way, and provide good levels of protection and good levels of visibility into different types of behaviour happening in a network.

But if you don’t have the skill set, or the resource for that matter, to work through the events that are coming in and hitting your dashboard…

…find someone that does!

For example, using a managed service can make a massive difference to your security, and it can just remove that headache.


DUCK.  That is not an admission of defeat, is it?

You’re not saying, “Oh, I can’t do it myself.”


FRASER.  We’re talking 24 x 7 x 365.

So, for someone to do that in-house is a massive undertaking.

And we’re also talking about complex data – and we spoke about active adversaries, and that sort of attack.

We know the Bad Guys, even when we block stuff, will continue to retry: they’ll change things up.

A good team that are looking at that data will recognise that type of behaviour, and they will not only know that something’s being blocked, those people will also think, “OK, there’s somebody repeatedly trying to get in through that door.”

That’s quite a useful indicator to them, and they’ll take action, and they’ll resolve the attack.

[PAUSE]

DUCK.  Three pretty good pieces of advice there!

Excellent, Fraser!

Thank you so much, and thank you for sharing your experience and your expertise with us.

To everybody who’s listening, thank you so much.

And it remains now only for me to say: “Until next time, stay secure.”

[MORSE CODE]


“Suspicious login” scammers up their game – take care at Christmas

Black Friday is behind us, that football thing they have every four years is done and dusted (congratulations – spoiler alert! – to Argentina), it’s the summer/winter solstice (delete as inapplicable)…

…and no one wants to get locked out of their social media accounts, especially when it’s the time for sending and receiving seasonal greetings.

So, even though we’ve written about this sort of phishing scam before, we thought we’d present a timely reminder of the kind of trickery you can expect when crooks try to prise loose your social media passwords.

We clicked through for you

Because a picture is supposed to be worth 1024 words, we’ll be showing you a sequence of screenshots from a recent social media scam that we ourselves received.

Simply put, we clicked through so you don’t have to.

This one started with an email that pretends to be looking out for your online safety and security, though it’s really trying to undermine your cybersecurity completely:

Even though you may have received similar-looking emails from one or more of your online account providers in the past, and even though this one doesn’t have any glaring spelling or grammatical errors…

…in fact, even if this really were a genuine email from Instagram (it isn’t!), you can protect yourself best simply by not clicking on any links in the email itself.

If you have your own bookmark for Instagram’s help pages, researched and saved when you weren’t under any cybersecurity pressure, you can simply navigate to Instagram directly, all by yourself.

That way, you neatly avoid any risk of being misdirected by the blue text (the clickable link) in the email, no matter whether it’s real or fake, working or broken, safe or dangerous.

The trouble with clicking through

If you do click through, perhaps because you’re in a hurry, or you’re worried about what might have happened to your account…

…well, that’s when the trouble starts, with a fake page that looks realistic enough.

The crooks are pretending that someone, presumably someone enjoying a vacation of their own in Paris, tried to login to your account:

You ought to be suspicious of the server name that shows up in the address bar in this scam (we’ve redacted it here, though it wasn’t anything like instagram.com), but we can understand why so many users get caught out by fake domains.

That’s because lots of legitimate online services make it as good as impossible to know what to expect in your address bar these days, as Sophos expert (and popular Naked Security podcast guest) Chester Wisniewski explained back in Cybersecurity Awareness Month:

In this scam, whether you click [This wasn't me] or [This was me], the crooks take you down the same path, asking first for your username:

The wording has started to get a bit clumsy on the next screen, where the crooks are going for your password, but it’s still believable enough:

A fake mistake

The scammers then pretend you made a mistake, asking you not only to type in your password a second time, but also to add a tiny bit more personal information about your location:

Not every phishing scam of this sort uses the “your password is wrong” trick, but it’s quite common.

We suspect that the crooks do this because there’s dubious security advice still going around that says, “You can easily detect a scam site by deliberately putting in a fake password first; if the site lets you in anyway, then obviously the site doesn’t know your real password.”

If you follow this advice (please don’t – it only ever gives you a false sense of security), you might jump to the dangerous conclusion that the site must surely know your real password, and must therefore be genuine, given that it seems to know that you put in the wrong password.

Of course, the crooks can safely say that you got your password wrong the first time, even if you didn’t.

If you deliberately got your password wrong, the crooks can simply pretend to “know” it was wrong in order to trap you into continuing with the scam.

But if you’re sure you really did put in the right password, and therefore the fake error message makes you suspicious…

…it’s too late, because the crooks have already scammed you.

One last question

If you keep going, then the crooks try to squeeze you for one more piece of personal information, namely your phone number:

And to let you out of the scam gently, the crooks finish off by redirecting you to the genuine Instagram home page, as if to invite you to confirm that your account still works correctly:

What to do?

  • Keep a record of the official “verify your account” and “how to deal with infringement challenges” pages of the social networks you use. That way, you never need to rely on links sent via email to find your way there in future. As well as fake login warnings like the one shown here, attackers often use concocted copyright violations, made-up breaches of your account’s Terms and Conditions, and other fake “problems” with your account.
  • Pick proper passwords. Don’t use the same password as you do on any other sites. If you think you may have given away your password on a fake site, change it as soon as you can before the crooks do. Consider using a password manager if you don’t have one already.
  • Turn on 2FA (two-factor authentication) if you can. This means that your username and password alone will not be enough to login, because you will need to include a one-time code, either every time, or perhaps only when you first try to use a new device. Although this doesn’t guarantee to keep the crooks out, because they may try to trick you into revealing your 2FA code as well as your password, it nevertheless makes things harder for an attacker.
  • Don’t overshare. As much as it seems to be common to share a lot of your life on Instagram nowadays, you don’t have to give away everything about yourself. Also, think about who or what is in the background of your photos before you upload them, in case you overshare information about your friends, family or household by mistake.
  • Stay vigilant. If an account or message seems suspicious to you, do not interact with or reply to the account, and do not click on any links it sends you. If something seems too good to be true, assume that it IS too good to be true.
  • Consider setting your Instagram account to private. If you aren’t trying to be an influencer whom everyone can see, and if you use Instagram more as a messaging platform to keep in touch with your close friends than as a way to tell the world about yourself, you may want to make your account private. Only your followers will be able to see your photos and videos. Review your list of followers regularly and kick off people you don’t recognise or don’t want following you any more.
Left. Use the ‘Privacy’ option on the Instagram Settings page to make your account private.
Right. Toggle the ‘Private account’ slider on.
  • If in doubt, don’t give it out. Never rush to complete a transaction or confirm personal information because a message has told you you’re under time pressure. If you aren’t sure, ask someone you know and trust in real life for advice, so you don’t end up trusting the sender of the very message you aren’t sure you can trust. (And see the first tip above.)
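The one-time codes mentioned in the 2FA tip above are usually time-based one-time passwords (TOTP, as standardised in RFC 6238): an HMAC-SHA1 computed over a 30-second counter, then truncated to a few digits. Here’s a minimal Python sketch of the idea; the Base32 secret in the example is the RFC’s published test key, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken from
    # the low nibble of the last digest byte, then mask off the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII key "12345678901234567890" in Base32.
RFC_TEST_KEY = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(RFC_TEST_KEY, for_time=59, digits=8))   # expected: 94287082
```

Because the app and the server share the same secret, a phished code is only valid for about 30 seconds, which is why 2FA raises the bar for the crooks without being completely phishing-proof.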

Microsoft dishes the dirt on Apple’s “Achilles heel” shortly after fixing similar Windows bug

When we woke up this morning, our cybersecurity infofeed was awash with “news” that Apple had just patched a security hole variously described as a “gnarly bug”, a “critical flaw” that could leave your Macs “defenceless”, and the “Achilles’ heel of macOS”.

Given that we usually check our various security bulletin mailing lists before even looking outside to check the weather, primarily to see if Apple has secretly unleashed a new advisory overnight…

…we were surprised, if not actually alarmed, at the number of writeups of a bug report we hadn’t yet seen.

Indeed, the coverage seemed to invite us to assume that Apple had just released yet another update, just a week after its previous “update for everything“, itself less than two weeks after a mysterious update for iOS 16, which turned out to have been a zero-day attack apparently being used to implant malware via booby-trapped web pages, though Apple neglected to mention that at the time:

This morning’s “news” seemed to imply that Apple had not merely pushed out another update, but also released it silently by not announcing it in an advisory email, and not even listing it on the company’s own HT201222 security portal page.

(Keep that HT201222 link handy if you’re an Apple user – it’s a useful starting point when patch confusion arises.)

It’s a bug, but not a brand new one

The good news, however, is that if you followed our suggestion from a week ago to check your Apple devices had updated (even if you expected them to do so of their own accord), you’ve already got any fixes you may need to protect you from this “Achilles” bug, more particularly known as CVE-2022-42821.

This isn’t a new bug, it’s just some new information about a bug that Apple fixed last week.

To be clear, if Apple’s security bulletins have it right, this bug doesn’t apply to any of Apple’s mobile operating systems, and either never applied to, or had already been fixed in, macOS 13 Ventura.

In other words, the bug described was relevant only to users of macOS 11 Big Sur and macOS 12 Monterey, was never a zero-day, and has already been patched.

The reason for all the fuss seems to be the publication yesterday, now that the patch has been available for several days, of a paper by Microsoft rather dramatically entitled Gatekeeper’s Achilles heel: Unearthing a macOS vulnerability.

Apple had, admittedly, given only a cursory summary of this bug in its own advisories a week ago:

Impact: An app may bypass Gatekeeper checks
Description: A logic issue was addressed with improved checks.
CVE-2022-42821: Jonathan Bar Or of Microsoft

Exploiting this bug isn’t terribly difficult once you know what to do, and Microsoft’s report explains what’s needed pretty clearly.

Despite some of the headlines, however, it doesn’t exactly leave your Mac “defenceless”.

Simply put, it means a downloaded app that would normally provoke a pop-up warning that it wasn’t from a trusted source wouldn’t be correctly flagged by Apple’s Gatekeeper system.

Gatekeeper would fail to record the app as a download, so that running it would sidestep the usual warning.

(Any active anti-malware and threat-based behaviour monitoring software on your Mac would still kick in, as would any firewall settings or web filtering security software when you downloaded it in the first place.)

It’s a bug, but not really “critical”

It’s not exactly a “critical flaw” either, as one media report suggested, especially when you consider that Microsoft’s own Patch Tuesday updates for December 2022 fixed a very similar sort of bug that was rated merely “moderate”:

Indeed, Microsoft’s similar vulnerability was actually a zero-day hole, meaning that it was known and abused outside the cybersecurity community before the patch came out.

We described Microsoft’s bug as:

CVE-2022-44698: Windows SmartScreen Security Feature Bypass Vulnerability
This bug is also known to have been exploited in the wild. An attacker with malicious content that would normally provoke a security alert could bypass that notification and thus infect even well-informed users without warning.

Simply put, the Windows security bypass was caused by a failure in Microsoft’s so-called Mark of the Web (MOTW) system, which is supposed to add extended attributes to downloaded files to denote that they came from an untrusted source.

Apple’s security bypass was a failure in the similar-but-different Gatekeeper system, which is supposed to add extended attributes to downloaded files to denote that they came from an untrusted source.
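Both mechanisms boil down to the same two-step dance: the browser tags the file at download time, and the operating system checks for the tag at launch time (macOS uses a com.apple.quarantine extended attribute; Windows uses a Zone.Identifier alternate data stream). The Python sketch below models that logic only – the function names are ours for illustration, and a plain dictionary stands in for the real per-file attribute store:

```python
# Conceptual sketch only: a dict plays the role of the per-file
# attribute store (com.apple.quarantine on macOS, Zone.Identifier
# on Windows). Function names are illustrative, not real APIs.

DOWNLOAD_MARK = "quarantine"

def mark_download(attrs, source_url):
    """Browser side: record that the file arrived from the internet."""
    attrs[DOWNLOAD_MARK] = source_url
    return attrs

def should_warn(attrs):
    """OS side, at launch time: warn only if the mark is present.
    Bugs like the ones described above left the mark off a download,
    so this check passed and the warning was silently skipped."""
    return DOWNLOAD_MARK in attrs

marked = mark_download({}, "https://example.com/app.zip")
print(should_warn(marked))   # True: a normal download triggers the prompt
print(should_warn({}))       # False: no mark, no prompt - the bypass
```

The weakness in both bugs was on the tagging side, not the checking side: if the mark never gets written, the launch-time check has nothing to find.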

What to do?

To be fair to Microsoft, the researcher who responsibly disclosed the Gatekeeper flaw to Apple, and who wrote the just-published report, didn’t use the words “critical” or “defenceless” to describe either the bug or the condition in which it placed your Mac…

…although naming the bug Achilles and headlining it as an Achilles’ heel was probably a metaphorical leap too far.

Proof-of-concept attack generator from Microsoft.

After all, in Ancient Greek legend, Achilles was almost totally immune to injury in battle due to his mother dipping him in the magical River Styx as a baby.

But she had to hold onto his heel in the process, leaving him with a single vulnerable spot that was ultimately exploited by Paris to kill Achilles – definitely a dangerous vulnerability and a critical exploit (as well as being a zero-day flaw, given that Paris seems to have known where to aim in advance).

Fortunately, in both these cases – Microsoft’s own zero-day bug, and Apple’s bug as found by Microsoft – the security bypass flaws are now patched.

So, getting rid of both vulnerabilities (effectively dipping Achilles back into the River Styx while holding his other heel, which is probably what his mother should have done in the first place) is as easy as making sure you have the latest updates.

  • On Macs, use: Apple menu > About this Mac > Software Update…
  • On Windows: use Settings > Windows Update > Check for updates

You know in advance
     What we’re going to say
Which is, “Do not delay,
     Simply patch it today.”


OneCoin scammer Sebastian Greenwood pleads guilty, “Cryptoqueen” still missing

The “Missing Cryptoqueen” saga has made long-term headlines since co-founders Ruja Ignatova and Karl Sebastian Greenwood started a cryptocurrency scam known as OneCoin, way back in 2014.

Ignatova, who hails from Bulgaria, and who apparently liked to be known as The Cryptoqueen (her charge sheet even shows that name as an alias), has been wanted in the US on various wire fraud, money laundering and securities fraud charges since October 2017.

According to the US Department of Justice (DOJ), about two weeks after charges were filed against her in the US, Ignatova flew from Sofia in Bulgaria to Athens in Greece…

…and hasn’t been heard of since, hence her updated nickname of Missing Cryptoqueen.

In mid-2022, Ignatova was considered criminally significant enough – her scam is said to have pulled in more than $4 billion in “investments” from more than 3,000,000 people around the world – that she was added to the FBI’s Ten Most Wanted Fugitives list, with a $100,000 reward for her capture:

Greenwood, however, went to live in Thailand, where he was arrested by the Royal Thai Police on the tropical island of Koh Samui in June 2018, extradited to the US, and remanded in custody.

He’s been incarcerated ever since, and he looks set to stay locked up for many years to come, having just pleaded guilty to three criminal charges, including wire fraud and money laundering.

Building a pyramid

OneCoin appeared to be what’s known as a pyramid scheme, or MLM system, short for multi-level marketing, where the people who buy in at the start earn commission for bringing in the next wave of “investors”, who in turn earn commission from bringing in the third wave, and so on.

Many countries have regulatory restrictions on pyramid selling systems, not least because they look a lot better on paper than they often turn out in real life.

If nothing else, their business model is hard to sustain, given that each new recruit has to bring in N new recruits of their own, and so on, and so on, like a culture in biology class that expands to fill a petri dish at alarming speed, only to consume all its resources and die out just as dramatically.

For everyone to succeed, these schemes typically need to grow exponentially, like the culture in that petri dish: if the first person needs to bring in ten more people, and those ten need to find ten more, and so on, then the “pyramid” needs 1 + 10 + 100 + 1000 = 1111 participants after just three “generations”.

As exciting and as lucrative as that sounds, after a further three generations, you need 1,111,111 people on board to continue the revenue model that was sold to you, which is about 15% of the population of New York or London.

Three generations after that, you’d require about 15% of the world’s population to have bought into the scheme you’re already committed to…

…and even if you really could get that far, you’d quite literally run out of people in the very next generation, even if you signed up every new infant within seconds of birth.
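The arithmetic above is easy to verify for yourself. A short Python loop tots up the cumulative headcount, assuming each member recruits ten more:

```python
def pyramid_totals(branching=10, generations=9):
    """Cumulative participants after each generation, assuming every
    member recruits `branching` new members of their own."""
    total, layer, totals = 1, 1, []
    for _ in range(generations):
        layer *= branching          # size of the newest "wave" of recruits
        total += layer              # everyone signed up so far
        totals.append(total)
    return totals

totals = pyramid_totals()
print(totals[2])   # 1111 after three generations
print(totals[5])   # 1111111 after six (roughly 15% of New York or London)
print(totals[8])   # 1111111111 after nine: a sizeable chunk of humanity
```

Powers of ten make the point starkly, but any branching factor greater than one produces the same exponential blow-up, just a few generations later.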

A pyramid with no product

But OneCoin took the pyramid selling process one step further, turning it into what’s known in the jargon as a Ponzi scheme, after an early perpetrator of this type of scam called Charles Ponzi.

OneCoin didn’t generate huge profits for its founders by creating a pyramid of “investment partners” who ended up committed to selling the company’s products into a market that was ever more crowded with competing sellers.

OneCoin made its billions by not actually having a product at all.

The OneCoin cryptocurrency token that the company “sold” didn’t actually exist, had no so-called blockchain or ledger to prove its existence and activity, and couldn’t actually be traded at all.

As the DOJ’s report explains:

OneCoin falsely claimed that the value of OneCoin was based on market supply and demand, when in fact, the value of the cryptocurrency was simply set by OneCoin itself.

[Ignatova stated in emails to Greenwood that:] “We can manipulate the exchange by simulating some volatility and intraday pricing,” [… and:] “Goal 6: Trading coin, stable exchange, always close on a high price end of day open day with high price, build confidence – better manipulation so they are happy.”

As the DOJ explains, the purported value of a OneCoin grew steadily from €0.50 to approximately €29.95 per coin, and the purported price of OneCoins never decreased in value, and yet the DOJ states that “OneCoins were entirely worthless.”

Ignatova […] wrote to Greenwood, “We are not mining actually – but telling people shit,” to which Greenwood responded, “Can any member (trying to be clever) find out that we actually are not investing in machines to mine but it is merely a piece of software doing this for us?”

The scammers went out of their way to attract investors, with the charismatic Ignatova, in her “Cryptoqueen” persona, wowing the crowd and drawing in victims on the back of the exciting stories people had heard about cryptocurrency in general, and Bitcoin in particular:

Greenwood and Ignatova promoted OneCoin, including at official OneCoin events all over the globe. One such event, called “Coin Rush,” was held at Wembley Arena in London on June 11, 2016. Thousands of OneCoin members attended Coin Rush. During the event, Greenwood introduced Ignatova to the crowd, stating in part: “This is the creator, the mastermind, the founder of cryptocurrency, of OneCoin … Now, this will be the biggest welcoming on stage that we’ve ever done in history.” Then, to the tune of Alicia Keys’s “Girl on Fire,” and surrounded by actual onstage fireworks, Ignatova strode onto the Wembley Arena stage wearing a red ball gown. She proceeded to repeatedly and favorably compare her fraudulent cryptocurrency to Bitcoin, stating, among other things, “OneCoin … is supposed to be the Bitcoin killer” and “In two years, nobody will speak about Bitcoin any more.”

The power-plays and the stage drama seem to have done the trick, given that Greenwood is said by the DOJ to have earned approximately €20 million a month in his role as the top MLM “distributor” of OneCoin.

If Greenwood were to get the maximum penalty for each of the crimes to which he’s pleaded guilty, he’d end up with 20 years for each; if served consecutively, he’d therefore get a 60-year custodial sentence.

As for the more than 3,000,000 people who parted with their money in good, if misguided faith…

…whether any of them will get their money back in the next 60 years is unknown, but sadly seems unlikely.

What to do?

  • Beware any online schemes that make promises that a properly regulated investment would not be allowed to do. Investment regulations generally exist to keep the lid on wild and unachievable claims, so be sceptical of any scheme that sets out to sidestep that sort of control and expects you to invest without any regulatory protection at all.
  • Don’t be taken in by cryptocoin jargon and a smart-looking website or app. Anyone can set up a believable-looking website or build an app to show upbeat but fictitious real-time “graphs” and made-up online “comments” that seem to be awash with upvotes and positivity. Open source website and blogging tools make it cheap and easy to create professional-looking content. But those tools can’t stop a crook filling a website with fake data.
  • Consider asking someone with an IT background whom you know and trust for advice. Find someone who isn’t already part of the scheme and doesn’t show any particular interest in it. Be wary of advice or endorsement from people who are (or claim to be) part of the scheme already. They could be paid shills, or fake personas, or they could be early winners who’ve been paid out with money ripped off from later investors, and thus co-opted into promoting the scam themselves.
  • If it sounds too good to be true, assume that it isn’t true. That advice applies whether it’s a new cryptocurrency, a special online offer, a new online service, a survey to win a prize, or even just the good old lure of “free stuff”. Take your time to understand what you’re signing up for.

Remember: If in doubt/Don’t give it out, and that definitely includes your money.

By the way, the DOJ asks:

If you have any information about Ruja Ignatova’s whereabouts, please contact your local FBI office or the nearest American Embassy or Consulate. Tips can be reported anonymously and can also be reported online at tips.fbi.gov.

Take care out there!


S3 Ep113: Pwning the Windows kernel – the crooks who hoodwinked Microsoft [Audio + Text]

PWNING THE WINDOWS KERNEL

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Wireless spyware, credit card skimming, and patches galore.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, how do you do?


DUCK.  I’m very well, Doug.

Cold, but well.


DOUG.  It’s freezing here too, and everyone is sick… but that’s December for you.

Speaking of December, we like to begin the show with our This Week in Tech History segment.

We have an exciting entry this week – on 16 December 2003, the CAN-SPAM Act was signed into law by then US President George W. Bush.

A backronym for Controlling the Assault of Non-Solicited Pornography And Marketing, CAN-SPAM was seen as relatively toothless for reasons such as not requiring consent from recipients to receive marketing email, and not allowing individuals to sue spammers.

It was believed that, by 2004, less than 1% of spam was actually complying with the Act.


DUCK.  Yes, it’s easy to say this with hindsight…

…but as some of us joked at the time, we reckoned they called it CAN-SPAM because that’s *exactly* what you could do. [LAUGHTER]


DOUG.  “You CAN spam!”


DUCK.  I guess the idea was, “Let’s start with a very softly-softly approach.”

[WRY TONE] So it was the start, admittedly, not of that much.


DOUG.  [LAUGHS] We’ll get there eventually.

Speaking of bad and worse…

…Microsoft Patch Tuesday – nothing to see here, unless you count a signed malicious kernel driver?!

Signed driver malware moves up the software trust chain


DUCK.  Well, several actually – the Sophos Rapid Response team found these artifacts in engagements that they did.

Not just Sophos – at least two other cybersecurity research groups are listed by Microsoft as having stumbled across these things lately: kernel drivers that were effectively given a digital seal of approval by Microsoft.

Microsoft now has an advisory out that’s blaming rogue partners.

Whether they actually created a company that pretended to make hardware, specifically to join the driver programme with the intention of sneaking dodgy kernel drivers through?

Or whether they bribed a company that was already part of the programme to play ball with them?

Or whether they hacked into a company that didn’t even realise that it was being used as a vehicle for saying to Microsoft, “Hey, we need to produce this kernel driver – will you certify it?”…

The problem with certified kernel drivers, of course, is that because they have to be signed by Microsoft, and because driver signing is compulsory on Windows, getting your kernel driver signed means you don’t need hacks or vulnerabilities or exploits to load one as part of a cyberattack.

You can just install the driver and the system will go, “Oh well, it’s signed. It is therefore permissible to load it.”

And of course, you can do a lot more damage when you’re inside the kernel than you can when you’re “merely” Administrator.

Notably, you get insider access to process management.

As an admin, you can run a program that says, “I want to kill XYZ program,” which might be, say, an anti-virus or a threat-hunting tool.

And that program can resist being shut down, because, assuming it too is admin-level, neither process can absolutely claim primacy over the other.

But if you’re inside the operating system, it’s the operating system that deals with starting and finishing processes, so you get much more power for killing off things like security software…

…and apparently that’s exactly what these crooks were doing.

In “history repeating itself”, I remember, years and years ago, when we would investigate software that crooks used to terminate security programs, they’d typically have lists of between 100 and 200 processes that they were interested in killing off: operating system processes, anti-virus programs from 20 different vendors, all that sort of stuff.

And this time, I think there were 186 programs that their driver was there to kill.

So a bit of an embarrassment for Microsoft.

Fortunately, they have now kicked those rogue coders out of their developer programme, and they have blocklisted at least all the known dodgy drivers.


DOUG.  So that’s not all that was revealed on Patch Tuesday.

There were also some zero-days, some RCE bugs, and other things of that nature:

Patch Tuesday: 0-days, RCE bugs, and a curious tale of signed malware


DUCK.  Yes.

Fortunately the zero-day bugs fixed this month weren’t what are known as RCEs, or remote code execution holes.

So they didn’t give a direct route for outside attackers just to jump into your network and run anything they wanted.

But there was a kernel driver bug in DirectX that would allow someone who was already on your computer basically to promote themselves to have kernel-level powers.

So that’s a little bit like bringing your own signed driver – you *know* you can load it.

In this case, you exploit a bug in a driver that is trusted and that lets you do stuff inside the kernel.

Obviously, that’s the kind of thing that makes a cyberattack that’s already bad news into something very, very much worse.

So you definitely want to patch against that.

Intriguingly, it seems that that only applies to the very latest build of Windows 11, i.e. 22H2 (H2 stands for the second half of the year).

You definitely want to make sure you’ve got that.

And there was an intriguing bug in Windows SmartScreen, which is basically the Windows filtering tool that gives you a warning when you try to download something that could be, or is known to be, dangerous.

So, obviously, if the crooks have found, “Oh, no! We’ve got this malware attack, and it was working really well, but now SmartScreen is blocking it, what are we going to do?”…

…either they can run away and build a whole new attack, or they can find a vulnerability that lets them sidestep SmartScreen so the warning doesn’t pop up.

And that’s exactly what happened in CVE-2022-44698, Douglas.

So, those are the zero-days.

As you said, there are some remote code execution bugs in the mix, but none of those are known to be in the wild.

If you patch against those, you get ahead of the crooks, rather than merely catching up.


DOUG.  OK, let’s stay on the subject of patches…

…and I love the first part of this headline.

It just says, “Apple patches everything”:

Apple patches everything, finally reveals mystery of iOS 16.1.2


DUCK.  Yes, I couldn’t think of a way of listing all the operating systems in 70 characters or less. [LAUGHTER]

So I thought, “Well, this is literally everything.”

And the problem is that last time we wrote about an Apple update, it was only iOS (iPhones), and only iOS 16.1.2:

Apple pushes out iOS security update that’s more tight-lipped than ever

So, if you had iOS 15, what were you to do?

Were you at risk?

Were you going to get the update later?

This time, the news about the last update finally came out in the wash.

It appears, Doug, that the reason that we got that iOS 16.1.2 update is that there was an in-the-wild exploit, now known as CVE-2022-42856, and that was a bug in WebKit, the web rendering engine inside Apple’s operating systems.

And, apparently, that bug could be triggered simply by luring you to view some booby-trapped content – what’s known in the trade as a drive-by install, where you just glance at a page and, “Oh, dear”, in the background, malware gets installed.

Now, apparently, the exploit that was found only worked on iOS.

That’s presumably why Apple didn’t rush out updates for all the other platforms, although macOS (all three supported versions), tvOS, iPadOS… they all actually contained that bug.

The only system that didn’t, apparently, was watchOS.

So, that bug was in pretty much all of Apple’s software, but apparently it was only exploitable, as far as they knew, via an in-the-wild exploit, on iOS.

But now, weirdly, they’re saying, “Only on iOSes before 15.1,” which makes you wonder, “Why didn’t they put out an update for iOS 15, in that case?”

We just don’t know!

Maybe they were hoping that if they put out iOS 16.1.2, some people on iOS 15 would update anyway, and that would fix the problem for them?

Or maybe they weren’t yet sure that iOS 16 was not vulnerable, and it was quicker and easier to put out the update (which they have a well-defined process for), than to do enough testing to determine that the bug couldn’t be exploited on iOS 16 easily.

We shall probably never know, Doug, but it’s quite a fascinating backstory in all of this!

But, indeed, as you said, there’s an update for everybody with a product with an Apple logo on it.

So: Do not delay/Do it today.


DOUG.  Let us move to our friends at Ben-Gurion University… they are back at it again.

They’ve developed some wireless spyware – a nifty little wireless spyware trick:

COVID-bit: the wireless spyware trick with an unfortunate name


DUCK.  Yes… I’m not sure about the name; I don’t know what they were thinking there.

They’ve called it COVID-bit.


DOUG.  A little weird.


DUCK.  I think we’ve all been bitten by COVID in some way or another…


DOUG.  Maybe that’s it?


DUCK.  The COV is meant to stand for covert, and they don’t say what ID-bit stands for.

I guessed that it might be “information disclosure bit by bit”, but it is nevertheless a fascinating story.

We love writing about the research that this Department does because, although for most of us it’s a little bit hypothetical…

…they’re looking at how to violate network airgaps, which is where you run a secure network that you deliberately keep separate from everything else.

So, for most of us, that’s not a huge issue, at least at home.

But what they’re looking at is that *even if you seal off one network from another physically*, and these days go in and rip out all the wireless cards, the Bluetooth cards, the Near Field Communications cards, or cut wires and break circuit traces on the circuit board to stop any wireless connectivity working…

…is there still a way that either an attacker who gets one-time access to the secure area, or a corrupt insider, could leak data in a largely untraceable way?

And unfortunately, it turns out that sealing off one network of computer equipment entirely from another is much harder than you think.

Regular readers will know that we’ve written about loads of stuff that these guys have come up with before.

They’ve had GAIROSCOPE, which is where you actually repurpose a mobile phone’s gyroscope chip as a low-fidelity microphone.


DOUG.  [LAUGHS] I remember that one:

Breaching airgap security: using your phone’s gyroscope as a microphone


DUCK.  Because those chips can sense vibrations just well enough.

They’ve had LANTENNA, which is where you put signals on a wired network that’s inside the secure area, and the network cables actually act as miniature radio stations.

They leak just enough electromagnetic radiation that you may be able to pick it up outside the secure area, so they’re using a wired network as a wireless transmitter.

And they had a thing that they jokingly called the FANSMITTER, which is where you go, “Well, can we do audio signalling? Obviously, if we just play tunes through the speaker, like [dialling noises] beep-beep-beep-beep-beep, it’ll be pretty obvious.”

But what if we vary the CPU load, so that the fan speeds up and slows down – could we use the change in fan speed almost like a sort of semaphore signal?

Can your computer fan be used to spy on you?

And in this latest attack, they figured, “How else can we turn something inside almost every computer in the world, something that seems innocent enough… how can we turn it into a very, very low-power radio station?”

And in this case, they were able to do it using the power supply.

They were able to do it in a Raspberry Pi, in a Dell laptop, and in a variety of desktop PCs.

They’re using the computer’s own power supply, which basically does very, very high-frequency DC switching in order to chop up a DC voltage, usually to reduce it, hundreds of thousands or millions of times a second.

They found a way to get that to leak electromagnetic radiation – radio waves that they could pick up from as far as 2 metres away on a mobile phone…

…even if that mobile phone had all its wireless stuff turned off, or even removed from the device.

The trick they came up with is: you switch the speed at which it’s switching, and you detect the changes in the switching frequency.

Imagine, if you want a lower voltage (if you want to, say, chop 12V down to 4V), the square wave will be on for one-third of the time, and off for two-thirds of the time.

If you want 2V, then you’ve got to change the ratio accordingly.
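For an idealised switched-mode (buck) converter, the on-fraction, or duty cycle, of that square wave is simply the ratio of output voltage to input voltage. A tiny sketch of the sums above:

```python
def duty_cycle(v_in, v_out):
    """Ideal buck converter: the switch is closed v_out / v_in of the
    time, so the average of the chopped waveform equals v_out."""
    if not 0 < v_out <= v_in:
        raise ValueError("output voltage must be between 0 and the input")
    return v_out / v_in

print(duty_cycle(12, 4))   # 0.333...: on one-third of the time, as above
print(duty_cycle(12, 2))   # 0.166...: a lower ratio for a lower voltage
```

Real converters also adjust their switching frequency, not just the on/off ratio, which is exactly the knob the researchers wiggled.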

And it turns out that modern CPUs vary both their frequency and their voltage in order to manage power and overheating.

So, by changing the CPU load on one or more of the cores in the CPU – by just ramping up tasks and ramping down tasks at a comparatively low frequency, between 5000 and 8000 times a second – they were able to get the switched-mode power supply to *switch its switching modes* at those low frequencies.
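That modulation trick is, in effect, frequency-shift keying with CPU load as the carrier. The toy simulation below (our own illustrative parameters, not the researchers’ code) encodes each bit as one of two toggle frequencies in a sampled “load trace”, then decodes by counting transitions per bit period:

```python
import math

SAMPLE_RATE = 100_000   # samples per second of the simulated load trace
BIT_RATE = 100          # bits per second
F0, F1 = 5_000, 8_000   # load-toggle frequencies for a 0 and a 1 bit

def encode(bits):
    """Emit a 0/1 load trace: each bit becomes a burst of square wave
    at F0 or F1 (the CPU would be busy on 1 samples, idle on 0)."""
    per_bit = SAMPLE_RATE // BIT_RATE
    samples = []
    for b in bits:
        f = F1 if b else F0
        samples.extend(
            1 if math.sin(2 * math.pi * f * n / SAMPLE_RATE) >= 0 else 0
            for n in range(per_bit)
        )
    return samples

def decode(samples):
    """Recover bits by estimating the toggle frequency in each bit slot
    from the number of level transitions (two per cycle)."""
    per_bit = SAMPLE_RATE // BIT_RATE
    bits = []
    for i in range(0, len(samples), per_bit):
        chunk = samples[i:i + per_bit]
        transitions = sum(a != b for a, b in zip(chunk, chunk[1:]))
        freq = transitions * BIT_RATE / 2
        bits.append(1 if abs(freq - F1) < abs(freq - F0) else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1]
print(decode(encode(message)) == message)   # True: round-trips cleanly
```

In the real attack, of course, the “trace” isn’t sampled directly: it leaks out as radio emanations from the power supply, which is what the wire-loop antenna picks up.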

And that generated very low-frequency radio emanations from circuit traces or any copper wire in the power supply.

And they were able to detect those emanations using a radio antenna that was no more sophisticated than a simple wire loop!

So, what do you do with a wire loop?

Well, you pretend, Doug, that it’s a microphone cable or a headphone cable.

You connect it to a 3.5mm audio jack, and you plug it into your mobile phone like it’s a set of headphones…


DOUG.  Wow.


DUCK.  You record the audio signal that’s generated from the wire loop – because the audio signal is basically a digital representation of the very low-frequency radio signal that you’ve picked up.

They were able to extract data at a rate of about 100 bits per second from the laptop, 200 bits per second from the Raspberry Pi, and up to 1000 bits per second, with a very low error rate, from the desktop computers.

You can get things like AES keys, RSA keys, even small data files out at that sort of speed.
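A toy sketch of what the receiving side might look like, assuming simple two-tone (FSK-style) signalling in the 5kHz-8kHz band mentioned above – this is an illustration of the general idea, not the researchers' actual modulation scheme:

```python
import math

def goertzel_power(samples, freq, rate):
    """Signal power at `freq` in `samples`, via the Goertzel algorithm."""
    k = 2 * math.cos(2 * math.pi * freq / rate)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + k * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - k * s_prev * s_prev2

RATE = 44100          # audio sample rate of the phone's "microphone" input
F0, F1 = 5000, 8000   # two tones standing in for bit 0 and bit 1
BIT_SAMPLES = 441     # 10ms per bit, i.e. 100 bits per second

def encode(bits):
    """Make a test tone: one frequency per bit period."""
    out = []
    for b in bits:
        f = F1 if b else F0
        out += [math.sin(2 * math.pi * f * n / RATE) for n in range(BIT_SAMPLES)]
    return out

def decode(samples):
    """For each bit period, pick whichever tone is stronger."""
    bits = []
    for i in range(0, len(samples), BIT_SAMPLES):
        chunk = samples[i:i + BIT_SAMPLES]
        bits.append(1 if goertzel_power(chunk, F1, RATE)
                         > goertzel_power(chunk, F0, RATE) else 0)
    return bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode(encode(msg)) == msg
```

At 100 bits per second, a 128-bit AES key takes under two seconds to leak.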

I thought that was a fascinating story.

If you run a secure area, you definitely want to keep up with this stuff, because as the old saying goes, “Attacks only get better, or smarter.”


DOUG.  And lower tech. [LAUGHTER]

Everything is digital, except we’ve got this analogue leakage that’s being used to steal AES keys.

It’s fascinating!


DUCK.  Just a reminder that you need to think about what’s on the other side of the secure wall, because “out of sight is very definitely not necessarily out of mind.”


DOUG.  Well, that dovetails nicely into our final story – something that is out of sight, but not out of mind:

Credit card skimming – the long and winding road of supply chain failure

If you’ve ever built a web page, you know that you can drop analytics code – a little line of JavaScript – in there for Google Analytics, or companies like it, to see how your stats are doing.

There was a free analytics company called Cockpit in the early 2010s, and so people were putting this Cockpit code – this little line of JavaScript – in their web pages.

But Cockpit shut down in 2014, and let the domain name lapse.

And then, in 2021, cybercriminals thought, “Some e-commerce sites are still letting this code run; they’re still calling this JavaScript. Why don’t we just buy up the domain name and then we can inject whatever we want into these sites that still haven’t removed that line of JavaScript?”


DUCK.  Yes.

What could possibly go right, Doug?


DOUG.  [LAUGHS] Exactly!


DUCK.  Seven years!

They would have had an entry in all their test logs saying, Could not source the file cockpit.js (or whatever it was) from site cockpit.jp, I think it was.

So, as you say, when the crooks lit the domain up again, and started putting files up there to see what would happen…

…they noticed that loads of e-commerce sites were just blindly and happily consuming and executing the crooks’ JavaScript code inside their customers’ web browsers.


DOUG.  [LAUGHING] “Hey, my site is not throwing an error anymore, it’s working.”


DUCK.  [INCREDULOUS] “They must have fixed it”… for some special understanding of the word “fixed”, Doug.

Of course, if you can inject arbitrary JavaScript into somebody’s web page, then you can pretty much make that web page do anything you want.

And if, in particular, you are targeting e-commerce sites, you can set what is essentially spyware code to look for particular pages that have particular web forms with particular named fields on them…

…like passport number, credit card number, CVV, whatever it is.

And you can just basically suck out all the unencrypted confidential data, the personal data, that the user is putting in.

It hasn’t gone into the HTTPS encryption process yet, so you suck it out of the browser, you HTTPS-encrypt it *yourself*, and send it out to a database run by crooks.

And, of course, the other thing you can do is that you can actively alter web pages when they arrive.

So you can lure someone to a website – one that is the *right* website; it’s a website they’ve gone to before, that they know they can trust (or they think they can trust).

If there’s a web form on that site that, say, usually asks them for name and account reference number, well, you just stick in a couple of extra fields, and given that the person already trusts the site…

… if you say name, ID, and [add in] birthdate?

It’s very likely that they’re just going to put in their birthdate because they figure, “I suppose it’s part of their identity check.”


DOUG.  This is avoidable.

You could start by reviewing your web-based supply chain links.


DUCK.  Yes.

Maybe once every seven years would be a start? [LAUGHTER]

If you’re not looking, then you really are part of the problem, not part of the solution.
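A review like that can start with something as simple as listing every third-party script host a page pulls in and comparing it against an allow-list you actually maintain. A minimal sketch (the page snippet and domain names below are invented for illustration):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptSrcAuditor(HTMLParser):
    """Collect the host of every external <script src=...> on a page."""
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                host = urlparse(src).netloc
                if host:  # skip same-origin relative paths like /js/app.js
                    self.hosts.add(host)

# Hypothetical page snippet; all domain names are made up.
page = """
<html><head>
<script src="/js/app.js"></script>
<script src="https://analytics.example-partner.com/track.js"></script>
<script src="https://cdn.long-dead-startup.example/cockpit.js"></script>
</head></html>
"""

TRUSTED = {"analytics.example-partner.com"}  # your reviewed allow-list

auditor = ScriptSrcAuditor()
auditor.feed(page)
for host in sorted(auditor.hosts - TRUSTED):
    print("Unreviewed third-party script host:", host)
```

Anything that prints out is a script your customers' browsers will run but that nobody has vouched for lately.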


DOUG.  You could also, oh, I don’t know… check your logs?


DUCK.  Yes.

Again, once every seven years might be a start?

Let me just say what we’ve said before on the podcast, Doug…

…if you’re going to collect logs that you never look at, *just don’t bother collecting them at all*.

Stop kidding yourself, and don’t collect the data.

Because, actually, the best thing that can happen to data if you’re collecting it and not looking at it, is that the wrong people won’t get at it by mistake.


DOUG.  Then, of course, perform test transactions regularly.


DUCK.  Should I say, “Once every seven years would be a start”? [LAUGHTER]


DOUG.  Of course, yes… [WRY] that might be regular enough, I suppose.


DUCK.  If you’re an e-commerce company and you expect your users to visit your website, get used to a particular look and feel, and trust it…

…then you owe it to them to be testing that the look and feel is correct.

Regularly and frequently.

Easy as that.


DOUG.  OK, very good.

And as the show begins to wind down, let us hear from one of our readers on this story.

Larry comments:

Review your web based supply chain links?

Wish Epic Software had done this before shipping the Meta tracking bug to all their customers.

I am convinced that there is a new generation of developers who think development is about finding code fragments anywhere on the internet and uncritically pasting them into their work product.


DUCK.  If only we didn’t develop code like that…

…where you go, “I know, I’ll use this library; I’ll just download it from this fantastic GitHub page I found.

Oh, it needs a whole load of other stuff!?

Oh, look, it can satisfy the requirements automatically… well, let’s just do that then!”

Unfortunately, you have to *own your supply chain*, and that means understanding everything that goes into it.

If you’re thinking along the Software Bill of Materials [SBOM] road, where you think, “Yes, I’ll list everything I use”, it’s not enough just to list the first level of things that you use.

You also need to know, and be able to document, and know you can trust, all the things that those things depend on, and so on and so on:

Little fleas have lesser fleas
Upon their backs to bite 'em,
And lesser fleas have lesser fleas,
And so ad infinitum.

*That’s* how you have to chase down your supply chain!
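In code terms, that means computing the *transitive closure* of your dependency graph, not just its first level. A minimal sketch (the package names in the example graph are invented for illustration):

```python
def transitive_deps(package, depgraph):
    """Everything `package` ultimately pulls in, not just the first level."""
    seen = set()
    stack = [package]
    while stack:
        for dep in depgraph.get(stack.pop(), ()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical dependency graph: name -> direct dependencies.
depgraph = {
    "my-shop-frontend": ["widget-lib"],
    "widget-lib": ["http-helper", "date-utils"],
    "http-helper": ["tls-shim"],
}

print(sorted(transitive_deps("my-shop-frontend", depgraph)))
# A first-level list would show only widget-lib; the SBOM needs all four.
```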


DOUG.  Well said!

Alright, thank you very much, Larry, for sending in that comment.

If you have an interesting story, comment, or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH.  Stay secure!

[MUSICAL MODEM]

