S3 Ep135: Sysadmin by day, extortionist by night

AN INSIDER ATTACK (WHERE THE PERP GOT CAUGHT)


With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Inside jobs, facial recognition, and the “S” in “IoT” still stands for “security”.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, how do you do today?


DUCK.  Very well, Doug.

You know your catchphrase, “We’ll keep an eye on that”?


DOUG.  [LAUGHING] Ho, ho, ho!


DUCK.  Sadly, there are several things this week that we’ve been “keeping an eye on”, and they still haven’t ended well.


DOUG.  Yes, we have kind-of an interesting and non-traditional lineup this week.

Let’s get into it.

But first, we will start with our This Week in Tech History segment.

This week, on 19 May 1980, the Apple III was announced.

It would ship in November 1980, at which point the first 14,000 Apple IIIs off the line were recalled.

The machine would be reintroduced in November 1981.

Long story short, the Apple III was a flop.

Apple co-founder Steve Wozniak attributed the machine’s failure to it being designed by marketing people instead of engineers.

Ouch!


DUCK.  I don’t know what to say to that, Doug. [LAUGHTER]

I’m trying not to smirk, as a person who considers himself a technologist and not a marketroid.

I think the Apple III was meant to look good and look cool, and it was meant to capitalise on the Apple II’s success.

But my understanding is that the Apple III (A) could not run all Apple II programs, which was a bit of a backward-compatibility blow, and (B) just wasn’t as expandable as the Apple II.

I don’t know whether this is an urban legend or not…

…but I have read that the early models did not have their chips seated properly in the factory, and that recipients who were reporting problems were told to lift the front of the computer off their desk a few centimetres and let it crash back.

[LAUGHTER]

This would bang the chips into place, like they should have been in the first place.

Which apparently did work, but was not the best sort of advert for the quality of the product.


DOUG.  Exactly.

All right, let’s get into our first story.

This is a cautionary tale about how bad inside threats can be, and perhaps how difficult they can be to pull off as well, Paul.

Whodunnit? Cybercrook gets 6 years for ransoming his own employer


DUCK.  Indeed it is, Douglas.

And if you’re looking for the story on nakedsecurity.sophos.com, it’s the one that is captioned, “Whodunnit? Cybercrook gets 6 years for ransoming his own employer.”

And there you have the guts of the story.


DOUG.  Shouldn’t laugh, but… [LAUGHS]


DUCK.  It is kind-of funny and unfunny.

Because if you look at how the attack unfolded, it was basically:

“Hey, someone’s broken in; we don’t know what the security hole was that they used. Let’s burst into action and try and find out.”

“Oh, no! The attackers have managed to get sysadmin powers!”

“Oh, no! They’ve sucked up gigabytes of confidential data!”

“Oh, no! They’ve messed with the system logs so we don’t know what’s going on!”

“Oh, no! Now they’re demanding 50 bitcoins (which at the time was about $2,000,000 US) to keep things quiet… obviously we’re not going to pay $2 million as a hush job.”

And, bingo, the crook went and did that traditional thing of leaking the data on the dark web, basically doxxing the company.

And, unfortunately, the question “Whodunnit?” was answered by: One of the company’s own sysadmins.

In fact, one of the people who’d been drafted into the team to try and find and expel the attacker.

So he was quite literally pretending to fight this attacker by day and negotiating a $2 million blackmail payment by night.

And even worse, Doug, it seems that, when they became suspicious of him…

…which they did, let’s be fair to the company.

(I’m not going to say who it was; let’s call them Company-1, like the US Department of Justice did, although their identity is quite well known.)

His property was searched, and apparently they got hold of the laptop that, it later turned out, was used to commit the crime.

They questioned him, so he went on an “offence is the best form of defence” counter-attack: he pretended to be a whistleblower and contacted the media under some alter ego.

He gave a whole false story about how the breach had happened – that it was poor security on Amazon Web Services, or something like that.

So he made the breach seem, in many ways, much worse than it was, and the company’s share price tumbled quite badly.

It might have dropped anyway when there was news that they’d been breached, but it certainly seems that he went out of his way to make it seem much worse in order to deflect suspicion from himself.

Which, fortunately, did not work.

He *did* get convicted (well, he pleaded guilty), and, like we said in the headline, he got six years in prison.

Then three years on parole, and he has to pay restitution of $1,500,000.


DOUG.  You can’t make this stuff up!

Great advice in this article… three pieces of it, in fact.

I love this first one: Divide and conquer.

What do you mean by that, Paul?


DUCK.  Well, it does seem that, in this case, this individual had too much power concentrated in his own hands.

It seems that he was able to make every little part of this attack happen, including going in afterwards and messing with the logs and trying to make it look as though other people in the company did it.

(So, just to show what a terribly nice chap he was – he did try and stitch up his co-workers as well, so they’d get into trouble.)

But if you make certain key system activities require the authorisation of two people, ideally even from two different departments, just like when, say, a bank is approving a big money movement, or when a development team is deciding, “Let’s see whether this code is good enough; we’ll get someone else to look at it objectively and independently”…

…that does make it much harder for a lone insider to pull off all these tricks.

Because they’d have to collude with everyone else that they’d need co-authorisation from along the way.
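
As a minimal sketch of that two-person rule, here’s what the check might look like in Python. (The names, departments and function here are invented for illustration, not taken from the case.)

    # Sketch of the "two-person rule": a sensitive action only proceeds
    # if approved by two different people from two different departments.
    # All names below are hypothetical examples.

    def authorise(action, approvals):
        """approvals is a list of (person, department) pairs."""
        people = {person for person, _ in approvals}
        departments = {dept for _, dept in approvals}
        if len(people) < 2 or len(departments) < 2:
            raise PermissionError(
                f"{action!r} needs sign-off from two people in two departments"
            )
        print(f"{action!r} authorised by {sorted(people)}")

    # A lone sysadmin can't approve their own destructive action...
    try:
        authorise("purge system logs", [("alice", "it-ops"), ("alice", "it-ops")])
    except PermissionError as err:
        print(err)

    # ...but two independent sign-offs succeed.
    authorise("purge system logs", [("alice", "it-ops"), ("bob", "audit")])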


DOUG.  OK.

And along the same lines: Keep immutable logs.

That’s a good one.


DUCK.  Yes.

Those listeners with long memories may recall WORM drives.

They were quite the thing back in the day: Write Once, Read Many.

Of course they were touted as absolutely ideal for system logs, because you can write to them, but you can never *rewrite* them.

Now, in fact, I don’t think that they were designed that way on purpose… [LAUGHS] I just think nobody knew how to make them rewritable yet.

But it turns out that kind of technology was excellent for keeping log files.

If you remember early CD-Rs, CD-Recordables – you could add a new session, so you could record, say, 10 minutes of music and then add another 10 minutes of music or another 100MB of data later, but you couldn’t go back and rewrite the whole thing.

So, once you’d locked it in, somebody who wanted to mess with the evidence would either have to destroy the entire CD so it would be visibly absent from the chain of evidence, or otherwise damage it.

They wouldn’t be able to take that original disk and rewrite its content so it showed up differently.

And, of course, there are all sorts of techniques by which you can achieve the same effect in the cloud.

If you like, this is the other side of the “divide and conquer” coin.

What you’re saying is that you have lots of sysadmins, lots of system tasks, lots of daemon or service processes that can generate logging information, but that information gets sent somewhere where it takes a real act of will and co-operation to make those logs go away, or to make them look different from how they were originally created.
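
In the spirit of those WORM drives, here’s a minimal sketch of one way to make logs tamper-evident in software: chain each record to the hash of the one before it, so quietly rewriting history becomes detectable. (This is our own illustrative example, not any particular product’s implementation.)

    # Sketch of a tamper-evident, append-only log: each record commits to
    # the SHA-256 hash of the previous record, so editing or deleting an
    # old entry breaks the chain for every record that comes after it.
    import hashlib, json

    def append_entry(log, message):
        prev_hash = log[-1]["hash"] if log else "0" * 64
        body = json.dumps({"prev": prev_hash, "msg": message}, sort_keys=True)
        log.append({"prev": prev_hash, "msg": message,
                    "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(log):
        prev_hash = "0" * 64
        for entry in log:
            body = json.dumps({"prev": entry["prev"], "msg": entry["msg"]},
                              sort_keys=True)
            if entry["prev"] != prev_hash:
                return False
            if hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

    log = []
    append_entry(log, "alice logged in as sysadmin")
    append_entry(log, "bob approved config change")
    print(verify(log))                       # True
    log[0]["msg"] = "nothing happened here"  # retroactive tampering...
    print(verify(log))                       # False - the chain no longer checks out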


DOUG.  And then last but certainly not least: Always measure, never assume.


DUCK.  Absolutely.

It looks as though Company-1 in this case did manage at least some of these things, ultimately.

Because this chap was identified and questioned by the FBI… I think within about three months of doing his attack.

And investigations don’t happen overnight – they require a warrant for the search, and they require probable cause.

So it looks as though they did do the right thing, and that they didn’t just blindly continue trusting him just because he kept saying he was trustworthy.

His felonies did come out in the wash, as it were.

So it’s important that you do not consider anybody as being above suspicion.


DOUG.  OK, moving right along.

Gadget maker Belkin is in hot water after basically saying, “End-of-life means end of updates” for one of its popular smart plugs.

Belkin Wemo Smart Plug V2 – the buffer overflow that won’t be patched


DUCK.  It does seem to have been a rather poor response from Belkin.

Certainly from a PR point of view, it hasn’t won them many friends, because the device in this case is one of those so-called smart plugs.

You get a Wi-Fi enabled switch; some of them will also measure power and other things like that.

So the idea is you can then have an app, or a web interface, or something that will turn a wall socket on and off.

So it’s a little bit of an irony that the fault is in a product that, if hacked, could let someone flick on and off a switch that could have an appliance plugged into it.

I think, if I were Belkin, I might have gone, “Look, we’re not really supporting this anymore, but in this case… yes, we’ll push out a patch.”

And it’s a buffer overflow, Doug, plain and simple.

[LAUGHS] Oh, dear…

When you plug in the device, it needs to have a unique identifier so that it will show up in the app, say, on your phone… if you’ve got three of them in your house, you don’t want them all called Belkin Wemo plug.

You want to go and change that, and put what Belkin calls a “friendly name”.

And so you go in with your phone app, and you type in the new name you want.

Well, it appears that there is a 68-byte buffer in the software on the device itself for your new name… but there’s no check that you don’t send a name longer than 68 bytes.

Foolishly, perhaps, the people who built the system decided that it would be good enough if they simply checked how long the name was *that you typed into your phone when you used the app to change the name*: “We’ll avoid sending names that are too long in the first place.”

And indeed, in the phone app, apparently you can’t even put in more than 30 characters, so they’re being extra-super safe.

Big problem!

What if the attacker decides not to use the app? [LAUGHTER]

What if they use a Python script that they wrote themselves…


DOUG.  Hmmmmm! [IRONIC] Why would they do that?


DUCK.  …that doesn’t bother checking for the 30-character or 68-character limit?

And that’s exactly what these researchers did.

And they found that, because there’s a stack buffer overflow, they could control the return address of a function that was being used.

With enough trial and error, they were able to deviate execution into what’s known in the jargon as “shellcode” of their own choice.

Notably, they could run a system command which ran the wget command, which downloaded a script, made the script executable, and ran it.
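
Sending the over-long name in the first place needs nothing fancier than a few lines of Python along these lines. (The device URL and payload format below are hypothetical placeholders we’ve invented to show the idea, not the Wemo’s actual network interface.)

    # Sketch of why client-side checks don't help: we build our own
    # rename request instead of using the official app, so the app's
    # 30-character limit never applies. The URL and field name are
    # hypothetical placeholders, not the Wemo's real interface.
    import requests  # pip install requests

    DEVICE = "http://192.168.1.50:49153/rename"   # hypothetical endpoint

    # 68 bytes exactly fill the name buffer; anything beyond that starts
    # trampling whatever lives next to the buffer on the stack.
    evil_name = "A" * 68 + "BBBB"

    resp = requests.post(DEVICE, data={"FriendlyName": evil_name}, timeout=5)
    print(resp.status_code)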


DOUG.  OK, well…

…we’ve got some advice in the article.

If you have one of these smart plugs, check that out.

I guess the bigger question here is, assuming Belkin follows through on their promise to not fix this… [LOUD LAUGHTER]

…basically, how hard of a fix is this, Paul?

Or would it be good PR to just plug this hole?


DUCK.  Well, I don’t know.

There might be many other apps that, oh dear, need the same sort of fix.

So they might just not want to do this for fear that someone will go, “Well, let’s dig deeper.”


DOUG.  A slippery slope…


DUCK.  I mean, that would be a bad reason not to do it.

I would have thought, given that this is now well-known, and given that it seems like an easy enough fix…

…just (A) recompile the software for the device with stack protection turned on, if possible, and (B) at least in this particular “friendly name” changing program, don’t allow names longer than 68 bytes!

It doesn’t seem like a major fix.

Although, of course, that fix has to be coded; it has to be reviewed; it has to be tested; a new version has to be built and digitally signed.

It then has to be offered to everybody, and lots of people won’t even realise it’s available.

And what if they don’t update?

It would be nice if those who are aware of this issue could get a fix, but it remains to be seen whether Belkin will expect them to simply upgrade to a newer product.


DOUG.  Alright, on the subject of updates…

…we have been keeping an eye, as we say, on this story.

We’ve talked about it several times: Clearview AI.

Zut alors! Raclage crapuleux! Clearview AI in 20% more trouble in France

France has this company in its sights for repeated defiance, and it’s almost laughable how bad this has gotten.

So, this company scrapes photos off the internet and maps them to their respective humans, and law enforcement uses this search engine, as it were, to look up people.

Other countries have had problems with this too, but France has said, “This is PII. This is personally identifiable information.”


DUCK.  Yes.


DOUG.  “Clearview, please stop doing this.”

And Clearview didn’t even respond.

So they got fined €20 million, and they just kept going…

And France is saying, “OK, you can’t do this. We told you to stop, so we’re going to come down even harder on you. We’re going to charge you €100,000 every day”… and they backdated it to the point that it’s already up to €5,200,000.

And Clearview is just not responding.

It’s just not even acknowledging that there’s a problem.


DUCK.  That certainly seems to be how it’s unfolding, Doug.

Interestingly, and in my opinion quite reasonably and very importantly, when the French regulator looked into Clearview AI (at the time they decided the company wasn’t going to play ball voluntarily and fined them €20 million)…

…they also found that the company wasn’t just collecting what they consider to be biometric data without getting consent.

They were also making it incredibly, and needlessly, and unlawfully difficult for people to exercise their right (A) to know that their data has been collected and is being used commercially, and (B) to have it deleted if they so desire.

Those are rights that many countries have enshrined in their regulations.

It’s certainly, I think, still in the law in the UK, even though we are now outside the European Union, and it is part of the well known GDPR regulation in the European Union.

If I don’t want you to keep my data, then you have to delete it.

And apparently Clearview was doing things like saying, “Oh, well, if we’ve had it for more than a year, it’s too hard to remove it, so it’s only data we’ve collected within the last year.”


DOUG.  Aaaaargh. [LAUGHS]


DUCK.  So that, if you don’t notice, or you only realise after two years?

Too late!

And then they were saying, “Oh, no, you’re only allowed to ask twice a year.”

I think, when the French investigated, they also found that people in France were complaining that they had to ask over, and over, and over again before they managed to jog Clearview’s memory into doing anything at all.

So who knows how this will end, Doug?


DOUG.  This is a good time to hear from several readers.

We usually do our comment-of-the-week from one reader, but you asked at the end of this article:

If you were {Queen, King, President, Supreme Wizard, Glorious Leader, Chief Judge, Lead Arbiter, High Commissioner of Privacy}, and could fix this issue with a {wave of your wand, stroke of your pen, shake of your sceptre, a Jedi mind-trick}…

…how would you resolve this stand-off?

And to just pull some quotes from our commenters:

  • “Off with their heads.”
  • “Corporate death penalty.”
  • “Classify them as a criminal organisation.”
  • “Higher-ups should be jailed until the company complies.”
  • “Declare customers to be co-conspirators.”
  • “Hack the database and delete everything.”
  • “Create new laws.”

And then James dismounts with: “I fart in your general direction. Your mother was an ‘amster, and your father smelt of elderberries.” [MONTY PYTHON AND THE HOLY GRAIL ALLUSION]

Which I think might be a comment on the wrong article.

I think there was a Monty Python quote in the “Whodunnit?” article.

But, James, thank you for jumping in at the end there…


DUCK.  [LAUGHS] Shouldn’t really laugh.

Didn’t one of our commenters say, “Hey, apply for an Interpol Red Notice”? [A SORT-OF INTERNATIONAL ARREST WARRANT]


DOUG.  Yes!

Well, great… as we are wont to do, we will keep an eye on this, because I can assure you this is not over yet.

If you have an interesting story, comment, or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today; thank you very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure!

[MUSICAL MODEM]


US offers $10m bounty for Russian ransomware suspect outed in indictment

He goes by many names, according to the US Department of Justice.

Mikhail Pavlovich Matveev, or just plain Matveev as he’s repeatedly referred to in his indictment, as well as Wazawaka, m1x, Boriselcin and Uhodiransomwar.

From that last alias, you can guess what he’s wanted for.

In the words of the charge sheet: conspiring to transmit ransom demands; conspiring to damage protected computers; and intentionally damaging protected computers.

Simply put, he’s accused of carrying out or enabling ransomware attacks, notably using three different malware strains known as LockBit, Hive, and Babuk.

Babuk makes regular headlines these days because its source code was released back in 2021, soon finding its way onto GitHub, where you can still download it.

Babuk therefore serves as a sort-of instruction manual that teaches (or simply enables, for those who don’t feel the need to understand the cryptographic processes involved) would-be cybercriminals how to handle the “we can decrypt this but you can’t, so pay us the blackmail money or you’ll never see your data again” part of a ransomware attack.

In fact, the Babuk source code includes options for malicious file scrambling tools that target Windows, VMware ESXi, and Linux-based network attached storage (NAS) devices.

Three specific attacks in evidence

The US indictment explicitly accuses Matveev of two ransomware attacks in the State of New Jersey, and one in the District of Columbia (the US federal capital).

The alleged attacks involved the LockBit malware unleashed against law enforcement in Passaic County, New Jersey, the Hive malware used against a healthcare organisation in Mercer County, New Jersey, and a Babuk attack on the Metropolitan Police Department in Washington, DC.

According to the DOJ, Matveev and his fellow conspirators…

…allegedly used these types of ransomware to attack thousands of victims in the United States and around the world. These victims include law enforcement and other government agencies, hospitals, and schools. Total ransom demands allegedly made by the members of these three global ransomware campaigns to their victims amount to as much as $400 million, while total victim ransom payments amount to as much as $200 million.

With that much at stake, it’s perhaps not surprising that the DOJ’s press release concludes by reporting that:

The [US] Department of State has also announced an award of up to $10 million for information that leads to the arrest and/or conviction of this defendant. Information that may be eligible for this award can be submitted at tips.fbi.gov or RewardsForJustice.net.

Interestingly, Matveev has also been declared a “designated” individual, meaning that he’s subject to US sanctions, and therefore presumably also that US businesses aren’t allowed to send him money, which we’re guessing prohibits Americans from paying any ransomware blackmail demands that he might make.

Of course, with the ransomware crime ecosystem largely operating under a service-based or franchise-style model these days, it seems unlikely that Matveev himself would directly ask for or receive any extortion money that was paid out, so it’s not clear what effect this sanction will have on ransomware payments, if any.

What to do?

If you do suffer the misfortune of having your files scrambled and held to ransom…

…do bear in mind the findings of the Sophos State of Ransomware Report 2023, where ransomware victims revealed that the median average cost of recovering by using backups was $375,000, while the median cost of paying the crooks and relying on their decryption tools instead was $750,000. (The mean averages were $1.6m and $2.6m respectively.)

As we put it in the Ransomware Report:

Whichever way you look at the data, it is considerably cheaper to use backups to recover from a ransomware attack than to pay the ransom. […] If further evidence is needed of the financial benefit of investing in a strong backup strategy, this is it.

In other words, sanctions or no sanctions, paying the ransomware criminals isn’t the end of your outlay when you need to recover in a hurry, because you need to add the cost of actually using those decryption tools onto the blackmail money you paid up in the first place.
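
To put that arithmetic in code-shaped form, using the median figures quoted above:

    # Back-of-the-envelope comparison using the median figures from the
    # Sophos State of Ransomware Report 2023, as quoted above.
    recover_from_backups = 375_000   # median cost of recovering via backups ($)
    recover_by_paying    = 750_000   # median cost of paying up and decrypting ($)

    extra = recover_by_paying - recover_from_backups
    ratio = recover_by_paying / recover_from_backups
    print(f"Paying the crooks costs {ratio:.0f}x as much: "
          f"${extra:,} more than restoring from backups.")
    # -> Paying the crooks costs 2x as much: $375,000 more than restoring from backups.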



A DAY IN THE LIFE OF A CYBERCRIME FIGHTER

Once more unto the breach, dear friends, once more!

Peter Mackenzie, Director of Incident Response at Sophos, talks about real-life cybercrime fighting in a session that will alarm, amuse and educate you, all in equal measure. (Full transcript available.)

Listen directly on Soundcloud.


Belkin Wemo Smart Plug V2 – the buffer overflow that won’t be patched

Researchers at IoT security company Sternum dug into a popular home automation mains plug from well-known device brand Belkin.

The model they looked at, the Wemo Mini Smart Plug (F7C063), is apparently getting towards the end of its shelf life, but we found plenty of them for sale online, along with detailed advice and instructions on Belkin’s site on how to set them up.

Old (in the short-term modern sense) though they might be, the researchers noted that:

Our initial interest in the device came from having several of these lying around our lab and used at our homes, so we just wanted to see how safe (or not) they were to use. [… T]his appears to be a pretty popular consumer device[; b]ased on these numbers, it’s safe to estimate that the total sales on Amazon alone should be in the hundreds of thousands.

Simply put, there are lots of people out there who have already bought and plugged these things in, and are using them right now to control electrical outlets in their homes.

A “smart plug”, simply put, is a power socket that you plug into an existing wall socket and that interposes a Wi-Fi-controlled switch between the mains outlet on the front of the wall socket and an identical-looking mains outlet on the front of the smart plug.

Think of it like a power adapter that, instead of converting, say, a round Euro socket into a triangular UK one, converts a manually-switched US socket into an electronically-switched US socket that can be controlled remotely via an app or a web-type interface.

The S in IoT…

The problem with many so-called Internet of Things (IoT) devices, as the old joke goes, is that it’s the letter “S” in “IoT” that stands for security…

…meaning, of course, that there often isn’t as much cybersecurity as you might expect, or even any at all.

As you can imagine, an insecure home automation device, especially one that could allow someone outside your house, or even on the other side of the world, to turn electrical appliances on and off at will, could lead to plenty of trouble.

We’ve written about IoT insecurity in a wide range of different products before, from internet kettles (yes, really) that could leak your home Wi-Fi password, to security cameras that crooks can use to keep their eye on you instead of the other way around, to network-attached disk drives at risk of getting splatted by ransomware directly across the internet.

In this case, the researchers found a remote code execution hole in the Wemo Mini Smart Plug back in January 2023, reported it in February 2023, and received a CVE number for it in March 2023 (CVE-2023-27217).

Unfortunately, even though there are almost certainly many of these devices in active use in the real world, Belkin has apparently said that it considers the device to be “at the end of its life” and that the security hole will therefore not be patched.

(We’re not sure how acceptable this sort of “end of life” dismissal would be if the device turned out to have a flaw in its 120V AC or 230V AC electrical circuitry, such as the possibility of overheating and emitting noxious chemicals or setting on fire, but it seems that faults in the low-voltage digital electronics or firmware in the device can be ignored, even if they could lead to a cyberattacker flashing the mains power switch in the device on and off repeatedly at will.)

When friendly names are your enemy

The problem that the researchers discovered was a good old stack buffer overflow in the part of the device software that allows you to change the so-called FriendlyName of the device – the text string that is displayed when you connect to it with an app on your phone.

By default, these devices start up with a friendly name along the lines of Wemo mini XYZ, where XYZ denotes three hexadecimal digits that we’re guessing are chosen pseudorandomly.

That means that even if you own two or three of these devices, they’ll almost certainly start out with different names so you can set them up easily.

But you’ll probably want to rename them later on so they’re easier to tell apart in future, by assigning them friendly names such as TV power, Laptop charger and Raspberry Pi server.

The Belkin programmers (or, more precisely, the programmers of the code that ended up in these Belkin-branded devices, who might have supplied smart plug software to other brand names, too) apparently reserved 68 bytes of temporary storage to keep track of the new name during the renaming process.

But they forgot to check that the name you supplied would fit into that 68-byte slot.

Instead, they assumed that you’d use their official phone app to perform the device renaming process, and thus that they could restrict the amount of data sent to the device in the first place, in order to head off any buffer overflow that might otherwise arise.

Ironically, they took great care not merely to keep you to the 68-byte limit required for the device itself to behave properly, but even to restrict you to typing in just 30 characters.

We all know why letting the client side do the error checking, rather than checking instead (or, better yet, as well) at the server side, is a terrible idea (there’s a sketch of the server-side fix right after this list):

  • The client code and the server code might drift out of step. Future client apps might decide that 72-character names would be a nice option, and start sending more data to the server than it can safely handle. Future server-side coders might notice that no one ever seemed to use the full 68 bytes reserved, and unilaterally decide that 24 should be more than enough.
  • An attacker could choose not to bother with the app. By generating and transmitting their own requests to the device, they would trivially bypass any security checks that rely on the app alone.
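
The fix is conceptually tiny: re-check the limit at the point where the data is actually consumed, no matter what any client promised. Here’s a generic sketch of that idea in Python for brevity (the real device code is firmware, and the function below is our own illustration, not Belkin’s code):

    # Generic sketch of server-side validation: the code that actually
    # stores the name re-checks the limit itself, regardless of what any
    # client app did. The 68-byte figure matches the buffer described above.
    MAX_NAME_BYTES = 68

    def set_friendly_name(raw_name: str) -> str:
        encoded = raw_name.encode("utf-8")
        if len(encoded) > MAX_NAME_BYTES:
            raise ValueError(
                f"name is {len(encoded)} bytes; the limit is {MAX_NAME_BYTES}"
            )
        return raw_name   # now provably safe to copy into the 68-byte slot

    set_friendly_name("TV power")        # fine
    try:
        set_friendly_name("A" * 100)     # rejected instead of overflowing
    except ValueError as err:
        print(err)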

The researchers were quickly able to try ever-longer names to the point that they could crash the Wemo device at will by writing over the end of the memory buffer reserved for the new name, and corrupting data stored in the bytes that immediately followed.

Corrupting the stack

Unfortunately, on systems that use a conventional call stack, most software ends up with its stack-based temporary memory buffers laid out so that most of these buffers are closely followed by another vital block of memory that tells the program where to go when it’s finished what it’s doing right now.

Technically, these “where to go next” data chunks are known as return addresses, and they’re automatically saved when a program calls what’s known as a function, or subroutine, which is a chunk of code (for example, “print this message” or “pop up a warning dialog”) that you want to be able to use in several parts of your program.

The return address is magically recorded on the stack every time the subroutine is used, so that the computer can automatically “unwind” its path to get back to where the subroutine was called from, which could be different every time it is activated.

(If a subroutine had a fixed return address, you could only ever call it from one place in your program, which would make it pointless to bother packaging that code into a separate subroutine in the first place.)

As you can imagine, if you trample on that magic return address before the subroutine finishes running, then when it does finish, it will trustingly but unknowingly “unwind” itself to the wrong place.

With a bit (or perhaps a lot) of luck, an attacker might be able to predict in advance how to trample on the return address creatively, and thereby misdirect the program in a deliberate and malicious way.

Instead of merely crashing, the misdirected program could be tricked into running code of the attacker’s choice, thus causing what’s known as a remote code execution exploit, or RCE.
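
In exploit-development terms, the classic payload for a bug like this is laid out something like the sketch below. The 68-byte buffer size comes from this device, but the extra padding and the address are illustrative values we’ve made up, not the researchers’ actual exploit:

    # Illustrative layout of a classic stack-smashing payload. The 68-byte
    # buffer size matches this bug; the padding and address are made-up
    # values for illustration, not the researchers' real exploit data.
    import struct

    BUFFER_SIZE   = 68           # the FriendlyName buffer on the device
    SAVED_STATE   = 4            # e.g. a saved register sitting between the
                                 # buffer and the return address (illustrative)
    FAKE_RET_ADDR = 0x0001F2A4   # illustrative address of attacker shellcode

    payload = (
        b"A" * BUFFER_SIZE                    # fill the name buffer exactly
        + b"B" * SAVED_STATE                  # trample the in-between bytes
        + struct.pack("<I", FAKE_RET_ADDR)    # overwrite the return address
    )                                         # (little-endian, 32-bit)
    print(len(payload), payload[-4:].hex())   # 76 a4f20100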

Two common defences help protect against exploits of this sort:

  • Address space layout randomisation, also known as ASLR. The operating system deliberately loads programs at slightly different memory locations every time they run. This makes it harder for attackers to guess how to misdirect buggy programs in a way that ultimately gets and retains control instead of merely crashing the code.
  • Stack canaries, named after the birds that miners used to take with them underground because they would faint in the presence of methane, thus providing a cruel but effective early warning of the risk of an explosion. The program deliberately inserts a known-but-random block of data just in front of the return address every time a subroutine is called, so that a buffer overflow will unavoidably and detectably overwrite the “canary” first, before it overruns far enough to trample on the all-important return address.

To get their exploit to work quickly and reliably, the researchers needed to force the Wemo plug to turn ASLR off, which remote attackers would not be able to do, but with lots of tries in real life, attackers might nevertheless get lucky, guess correctly at the memory addresses in use by the program, and get control anyway.

But the researchers didn’t need to worry about the stack canary problem, because the buggy app had been compiled from its source code with the “insert canary-checking safety instructions” feature turned off.

(Canary-protected programs are typically slightly bigger and slower than unprotected ones because of the extra code needed in every subroutine to do the safety checks.)

What to do?

  • If you’re a Wemo Smart Plug V2 owner, make sure you haven’t configured your home router to allow the device to be accessed from “outside”, over the internet. This reduces what’s known in the jargon as your attack surface area.
  • If you’ve got a router that supports Universal Plug and Play, also known as UPnP, make sure that it’s turned off. UPnP makes it notoriously easy for internal devices to get opened up inadvertently to outsiders.
  • If you’re a programmer, avoid turning off software safety features (such as stack protection or stack canary checking) just to save a few bytes. If you are genuinely running out of memory, look to reduce your footprint by improving your code or removing features rather than by diminishing security so you can cram more in.

Zut alors! Raclage crapuleux! Clearview AI in 20% more trouble in France

Here’s how the French data protection regulator describes controversial facial recognition service Clearview AI, in its own words, in clear and plain English:

CLEARVIEW AI collects photographs from a wide range of websites, including social networks, and sells access to its database of images of people through a search engine in which an individual can be searched using a photograph. The company offers this service to law enforcement authorities. Facial recognition technology is used to query the search engine and find an individual based on [their] photograph.

The French regulator we are referring to here is officially known as the CNIL, short for Commission Nationale de l’Informatique et des Libertés, a phrase that needs no translation, even though English is, historically at least, a Germanic and not a Romance language.

Back in October 2022, we reported that CNIL had fined Clearview AI €20,000,000 for deploying its image scraping technology in France, arguing (convincingly, in our opinion) that constructing data templates for recognising individuals amounted to collecting biometric data, and that biometric data of this sort is unarguably PII, or personally identifiable information:

Facial recognition technology is used to query the search engine and find a person based on their photograph. In order to do so, the company builds a “biometric template”, i.e. a digital representation of a person’s physical characteristics (the face in this case). These biometric data are particularly sensitive, especially because they are linked to our physical identity (what we are) and enable us to identify ourselves in a unique way.

The vast majority of people whose images are collected into the search engine are unaware of this feature.

No consent, no fair, concluded CNIL.

Not just collection, but concealment, too

Worse still, CNIL castigated Clearview for trying to cling onto the very data it shouldn’t have collected in the first place.

The regulator ruled that Clearview made it unacceptably difficult for French people to exercise their rights not only to request full details of PII collected about them, but also to have any or all of that data deleted if they wanted.

CNIL determined that Clearview placed artificial restrictions on letting individuals get at their own data, including: by refusing to delete data collected more than a year earlier; by allowing people to request their data only twice a year; and by “only responding to certain requests after an excessive number of requests from the same person.”

CNIL even summarised these problems in a neat, English-language infographic.

Penalties added to penalty

As well as ordering Clearview to delete all existing data on French residents, and to stop collecting data in future, CNIL noted back in 2022 that it had already tried to engage with the face-scraping company but had been ignored, and had therefore run out of patience:

Following a formal notice which remained unaddressed, the CNIL imposed a penalty of 20 million Euros and ordered CLEARVIEW AI to stop collecting and using data on individuals in France without a legal basis and to delete the data already collected.

Apparently, Clearview has still made no effort to comply with the French regulator’s ruling, and the regulator has yet again decided it has had enough.

Last week, CNIL invoked a “thou shalt not ignore us this time” clause in its previous settlement, allowing for fines of up to €100,000 for every day that the company refused to comply, stating that:

CLEARVIEW AI had two months to comply with the order and justify compliance to the CNIL. However, the company did not send any proof of compliance within this time limit.

On 13 April 2023, [CNIL] considered that the company had not complied with the order and consequently imposed an overdue penalty payment of €5,200,000.

What next?

We can’t help but wonder what’s going to happen next.

If you were {Queen, King, President, Supreme Wizard, Glorious Leader, Chief Judge, Lead Arbiter, High Commissioner of Privacy}, and could fix this issue with a {wave of your wand, stroke of your pen, shake of your sceptre, Jedi mind-trick}…

…how would you resolve this stand-off?


Whodunnit? Cybercrook gets 6 years for ransoming his own employer

This wasn’t your typical cyberextortion situation.

More precisely, it followed what you might think of as a well-worn path, so in that sense it came across as “typical” (if you will pardon the use of the word typical in the context of a serious cybercrime), but it didn’t happen in the way you would probably have assumed at first.

Starting in December 2020, the crime unfolded as follows:

  • Attacker broke in via an unknown security hole.
  • Attacker acquired sysadmin powers on the network.
  • Attacker stole gigabytes of confidential data.
  • Attacker messed with system logs to cover their tracks.
  • Attacker demanded 50 Bitcoins (then worth about $2,000,000) to hush things up.
  • Attacker doxxed the victim when the blackmail wasn’t paid.

Doxxing, if you’re not familiar with the term, is shorthand jargon for deliberately releasing documents about a person or company to put them at risk of physical, financial or other harm.

When cybercriminals doxx individuals they don’t like, or with whom they have a score they want to settle, the idea is often to put the victim at risk from (or at least in fear of) a physical attack, for example by accusing them of a heinous crime, wishing vigilante justice on them, and then telling everyone where they live.

When the victim is a company, the criminal intent is usually to create operational, reputational, financial or regulatory stress for the victim by not only exposing that the company suffered a breach in the first place, but also deliberately releasing confidential information that other criminals can abuse right away.

If you do the right thing and report a breach to your local regulator, the regulator won’t demand that you immediately publish details that amount to a guide on “how to hack into company X right now”. If the security hole exploited is later deemed to have been easily avoidable, the regulator might ultimately decide to fine you for not preventing the breach, but will nevertheless work with you at the outset to try to minimise the damage and risk.

Hoist by his own petard

The good news in this case (good for law and order, albeit not for the perpetrator) is that the victim wasn’t quite as gullible as the criminal seemed to think.

Company-1, as the US Department of Justice (DOJ) calls them and we shall too, even though their identity has been widely disclosed on the public record, seems quickly to have suspected an inside job.

Within three months of the start of the attack, the FBI had raided the home of soon-to-be-ex-senior-coder Nickolas Sharp, then in his mid-30s, suspecting him of being the perpetrator.

In fact, Sharp, in his capacity as a senior developer at Company-1, was apparently “helping” (we use the term loosely here) to “remediate” (ditto) his own attack by day, while trying to extort a $2m ransom payment by night.

As part of the bust, the cops seized various computer devices, including what turned out to be the laptop that Sharp used when attacking his own employer, and questioned Sharp about his alleged role in the crime.

Sharp, it seems, not only told the Feds a pack of lies (or made numerous false statements, in the more dispassionate words of the DOJ) but also went on what you might call a “fake news” PR counter-offensive, apparently hoping to throw the investigation off track.

As the DOJ puts it:

Several days after the FBI executed the search warrant at SHARP’s residence, SHARP caused false news stories to be published about the Incident and Company-1’s response to the Incident. In those stories, SHARP identified himself as an anonymous whistleblower within Company-1 who had worked on remediating the Incident and falsely claimed that Company-1 had been hacked by an unidentified perpetrator who maliciously acquired root administrator access to Company-1’s AWS accounts.

In fact, as SHARP well knew, SHARP himself had taken Company-1’s data using credentials to which he had access, and SHARP had used that data in a failed attempt to extort Company-1 for millions of dollars.

Almost immediately after news broke about the data breach, Company-1’s share price dropped very suddenly from about $390 to about $280.

Although the price might have fallen notably on account of any sort of breach notification, the DOJ report quite reasonably implies (though it stops short of stating as a fact) that this false narrative, as peddled to the media by Sharp, made the devaluation worse than it otherwise would have been.

Sharp pleaded guilty in February 2023; he was sentenced this week to spend six years in prison followed by three years on parole, and instructed to pay restitution of just over $1,500,000.

(He’s also never going to get any of his confiscated computer equipment back, though just how useful that kit would still be if it were returned to him after six years in prison and a further three years on supervised release is anyone’s guess.)

What to do?

  • Divide and conquer. Try to avoid situations where individual sysadmins have unfettered access to everything. The additional hassle of requiring two independent authorisations for important system operations is a small price to pay for the additional safety and control it gives you.
  • Keep immutable logs. In this case, Sharp was able to mess with system logs in an attempt to hide his own access and to cast suspicion on coworkers instead. Given the speed with which he was caught out, however, we’re assuming that Company-1 had kept at least some “write-once” logs that formed a permanent, undeniable record of key system activities.
  • Always measure, never assume. Get independent, objective confirmation of security claims. The vast majority of sysadmins are honest, unlike Nickolas Sharp, but few of them are 100% right all the time.

Most sysadmins we know would be delighted to have regular access to a second opinion to verify their assumptions.

It’s a help, not a hindrance, to have critical cybersecurity work double-checked to make sure not only that it was started correctly, but completed correctly, too.



