S3 Ep136: Navigating a manic malware maelstrom

A PYTHON PERSPECTIVE VORTEX


With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Cybercrime after cybercrime, some Apple updates, and an attack on a source code repository.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, how do you do?


DUCK.  Very well, thank you. Douglas!

Was that cheery enough?


DOUG.  That was pretty good.

Like, a 7/10 on the happiness scale, which is a pretty good baseline.


DUCK.  Oh, I wanted it to feel higher than that.

What I said, plus 2.5/10.


DOUG.  [EXAGGERATED AMAZEMENT] Oh, Paul, you sound great!


DUCK.  [LAUGHS] Thank you, Doug.


DOUG.  Well, this might push you up to a 10/10, then… This Week in Tech History.

On 22 May, 1973, at the Xerox Palo Alto Research Center [PARC], researcher Robert Metcalfe wrote a memo proposing a new way to connect computers together.

Inspired by its precursor, AlohaNet, which Metcalfe studied as part of his PhD dissertation, the new technology would be called Ethernet, a nod to the substance “luminiferous aether”, which was once believed to be a medium for propagating light waves.


DUCK.  It was certainly a lot faster than 160 KB single-sided, single-density floppy diskettes! [LAUGHTER]


DOUG.  Could be worse!

Anyhow, speaking of “worse” and “badness”, we’ve got our first crime update of the day.

The US is offering a $10 million bounty for a Russian ransomware suspect.

US offers $10m bounty for Russian ransomware suspect outed in indictment

That’s a lot of money, Paul!

This guy must have done something pretty bad.

The DOJ’s statement:

[This person and his fellow conspirators] allegedly used these types of ransomware to attack thousands of victims in the United States and around the world. These victims include law enforcement and other government agencies, hospitals and schools.

Total ransom demands allegedly made by the members of these three global ransomware campaigns to their victims amount to as much as $400 million, while total victim ransom payments amount to as much as $200 million.

Big time attacks… lots of money changing hands here, Paul.


DUCK.  When you’re trying to track down somebody who’s doing dastardly stuff overseas and you think, “How on earth are we going to do this? They’re never going to show up in court here”…

Maybe we just offer some filthy lucre to people in that other person’s country, and somebody will turn him in?

And if they’re offering $10 million (well, that’s the maximum you can get), they must be quite keen.

And my understanding, in this case, is the reason that they are keen is this particular suspect is accused of being, if not the heart and the soul, at least one of the two of those things for three different ransomware strains: LockBit, Hive and Babuk.

Babuk famously had its source code leaked (if I’m not wrong, by a disaffected affiliate), and has now found its way onto GitHub, where anybody who wants to can grab the encryption part.

And although it’s hard to feel any sympathy at all for people who are in the sights of the DOJ and the FBI for ransomware attacks…

…if there were any latent droplets of sympathy left, they evaporate pretty quickly when you start reading about hospitals and schools amongst their many victims.


DOUG.  Yes.


DUCK.  So you have to assume it’s unlikely that they’ll ever see him in a US Court…

…but I guess they figured it’s too important not to try.


DOUG.  Exactly.

We will, as we like to say, keep an eye on that.

And while we’re waiting, please go and take a look at our State of Ransomware 2023 report.

It’s got a bunch of facts and figures that you can use to help protect your organisation against attacks.

That’s available at: sophos.com/ransomware2023.


DUCK.  One little hint that you can learn from the report: “Surprise, surprise; it costs you about half as much to recover from backups as it does from paying the ransom.”

Because even after you’ve paid the ransom, you still have just as much restoration work to do as you would have had recovering from your backups.

And it also means you don’t pay the crooks.


DOUG.  Exactly!

Alright, we have another crime update.

This time, it’s our friends over at iSpoof, who, I have to admit, have a pretty good marketing team.

Except for everyone getting busted and all that kind of stuff…

Phone scamming kingpin gets 13 years for running “iSpoof” service


DUCK.  Yes, this is a report from the Metropolitan Police in London about a case that’s been going on since November 2022, when we first wrote about this on nakedsecurity.sophos.com.

A chap called Tejay Fletcher, and I think 169 other people who thought they were anonymous but it turned out they weren’t, got arrested.

And this Fletcher fellow, who was the kingpin of this, has just been sentenced to 13 years and 4 months in prison, Doug.

That is a pretty big sentence by any country’s standards!

And the reason is that this service was all about helping other cybercriminals, in return for bitcoinage, to scam victims very believably.

You didn’t need any technical ability.

You could just sign up for the service, and then start making phone calls where you could choose what number would show up at the other end.

So if you had an inkling that somebody banked with XYZ Banking Corporation, you could make their phone light up saying, “Incoming call from XYZ Banking Corporation”, and then launch into your spiel.

It seems, from the National Crime Agency’s reports at the time, that their “customers” made millions of calls through this service, and they had something like a 10% success rate, where success meant keeping the victim on the line for at least a minute.

And when you think something is a scam call… you hang up pretty jolly quickly, don’t you?


DOUG.  A minute is a long time!


DUCK.  And that means they’ve probably hooked the person.

And you can see why, because everything seems believable.

If you are not aware that the Caller ID (or Calling Line Identification) number that shows up on your phone is nothing more than a hint, that anybody can put in anything, and that anybody with your worst interests at heart who wants to stalk you can, for a modest monthly outlay, buy into a service that will help them do it automatically…

If you don’t know that that’s the case, you’re probably going to have your guard way, way down when that call comes through and says, “I’m calling from the bank. You can see that from the number. Oh dear, there’s been fraud on your account”, and then the caller talks you into doing a whole load of things that you wouldn’t listen to for a moment otherwise.

The reach of this service, the large number of people who used it (he had tens of thousands of “customers”, apparently), and the sheer number of calls and amount of financial damage done, which ran into the millions, is why he got such a serious sentence.


DOUG.  Part of the reason they were able to attract so many customers is that this was on a public-facing website.

It wasn’t on the dark web, and it was pretty slick marketing.

If you head over to the article, there’s a 53-second marketing video that’s got a professional voiceover actor, and some fun animations.

It’s a pretty well done video!


DUCK.  Yes!

I spotted one typo in it… they wrote “end to encryption” rather than “end-to-end encryption”, which I noticed because it was quite an irony.

Because the whole premise of that video – it says, “Hey, as a customer you’re completely anonymous.”

They made a big pitch of that.


DOUG.  I think it probably was an “end to encryption”. [LAUGHS]


DUCK.  Yes… you may have been anonymous to your victims, but you weren’t anonymous to the service provider.

Apparently the cops, in the UK at least, decided to start with anybody who had already spent more than £100 worth of bitcoins with the service.

So there may be people who dabbled in this, or used it just for a couple of things, who are still on the list.

The cops want people to know that they started at the top and they’re working their way down.

The anonymity promised in the video was illusory.


DOUG.  Well, we do have some tips, and we have said these tips before, but these are great reminders.

Including one of my favourites, because I think people just assume that Caller ID is an accurate reporter… tip number one is: Treat Caller ID as nothing more than a hint.

What do you mean by that, Paul?


DUCK.  If you still get snail-mail at your house, you’ll know that when you get an envelope, it has your address on the front, and usually, when you turn it over, on the back of the envelope, there’s a return address.

And everyone knows that the sender gets to choose what that says… it might be genuine; it might all be a pack of lies.

That is how much you can trust Caller ID.

And as long as you bear that in mind, and think of it as a hint, then you’re golden.

But if it comes up and says “XYZ Banking Corporation” because the crooks have deliberately picked a number that you specially put in your contact list to come up to tell you it’s the bank… that doesn’t mean anything.

And the fact that they start telling you that they’re from the bank doesn’t mean that they are.

And that segues nicely into our second tip, doesn’t it, Doug?


DOUG.  Yes.

Always initiate official calls yourself, using a number you can trust.

So, if you get one of these calls, say, “I’m going to call you right back”, and use the number on the back of your credit card.


DUCK.  Absolutely.

If there’s any way in which they have led you to believe this is the number you should call… don’t do it!

Find it out for yourself.

Like you said, for reporting things like bank frauds or bank problems, the number on the back of your credit card is a good start.

So, yes, be very, very careful.

It’s really easy to believe your phone, because 99% of the time, that Caller ID number will be telling the truth.


DOUG.  Alright, last but certainly not least, not quite as technical, but more a softer skill, tip number three is: Be there for vulnerable friends and family.

That’s a good one.


DUCK.  There are obviously people who are more at risk of this kind of scam.

So it’s important that you let people in your circle of friends and family, who you think might be at risk of this kind of thing… let them know that if they have any doubt, they should get in touch with you and ask you for advice.

As every carpenter or joiner will tell you, Douglas, “Measure twice, cut once.”


DOUG.  I like that advice. [LAUGHS]

I tend to measure once, cut thrice, so don’t follow my lead there.


DUCK.  Yes. You can’t “cut things longer”, eh? [LAUGHTER]


DOUG.  Nope, you sure can’t!


DUCK.  We’ve all tried. [LAUGHS]


DOUG.  That’s two updates down; one to go.

We’ve got an update… if you recall, earlier this month, Apple surprised us with a new Rapid Security Response, but didn’t say what the updates actually fixed. Now we know, Paul.

Apple’s secret is out: 3 zero-days fixed, so be sure to patch now!


DUCK.  Yes.

Two 0-days, plus a bonus 0-day that wasn’t fixed before.

So if you had, what was it, macOS 13 Ventura (the latest), and if you had iOS/iPadOS 16, you got the Rapid Security Response.

You got that “version number (a)” update, and “here is the detail about this update: (blank text string)”.

So you had no idea what was fixed.

And you, like us, probably thought, “I bet you it’s a zero-day in WebKit. That means a drive-by install. That means someone could be using it for spyware.”

Lo and behold, that’s exactly what those two 0-days were.

And there was a third zero-day, which was, if you like, another part of that equation, or another type of exploit that often goes along with the first two zero-days that were fixed.

This one was a Google Threat Analysis Group/Amnesty International thing that certainly smells of spyware to me… someone investigating a real-life incident.

That bug was what you call in the jargon a “sandbox escape”.

It sounds as though the three zero-days that are now fixed for all Apple platforms were…

One that might allow a crook to figure out what was where on your computer.

In other words, they’re greatly increasing the chance that their subsequent exploits will work.

A second exploit that does remote code execution inside your browser, as I say, aided and abetted by that data leakage in the first bug that might tell you what memory addresses to use.

And then a third zero-day that essentially lets you jump out of the browser and do much worse.

Well, I’m going to say, Patch early, patch often, aren’t I, Doug?


DOUG.  Do it!

Yes.


DUCK.  Those are not the only reasons why you want these patches.

There are a bunch of proactive fixes as well.

So even if there weren’t any zero-days, I’d say it again anyway.


DOUG.  OK, great.

Our last story of the day… I had written my own little intro here, but I’m throwing that in the trash and I’m going to go with your headline, because it’s much better.

And it really captures the essence of this story: PyPI open source code repository deals with manic malware maelstrom.

That is what happened, Paul!

PyPI open-source code repository deals with manic malware maelstrom


DUCK.  Yes, I have to admit, I did have to work on that headline to get it to fit exactly onto two lines in the nakedsecurity.sophos.com WordPress template. [LAUGHTER]

The PyPI team now have got over this, and I think they’ve got rid of all the stuff.

But it seems that somebody had an automated system that was just generating new accounts, then, in those accounts, creating new projects…

…and just uploading poisoned source package after poisoned source package.

And remember that in most of these repositories (PyPI is an example), you can have malware that’s in the actual code that you want to download and later use as a module in your code (in other words, the programming library), and/or you can have malware in the actual installer or update script that delivers the thing to you.

So, unfortunately, it’s easy for crooks to clone a legitimate project, give it a realistic-looking name and hope that if you download it by mistake…

…then after you’ve installed it, and once you start using it in your software, and once you start shipping it to your customers, it will all be fine, and you won’t find any malware in it.

Because the malware will have already infected your computer, by being in the script that ran to get the thing installed properly in the first place.

So there’s a double-whammy for the crooks.

What we don’t know is…

Were they hoping to upload so many infectious packages that some of them wouldn’t get spotted, and they’d have a fighting chance that a couple would just get left behind?

Or were they actually hoping that they could freak out the PyPI team so much that they had to take the whole site off the air, and that would be a full-on denial of service attack?

Neither of those were the outcome.

The PyPI team were able to mitigate the attack by shutting down just some aspects of the site.

Namely, for a while, you couldn’t create a new account, and you couldn’t add a new project, but you could still get old ones.

And that gave them just enough breathing room, over a 24-hour period, that it looks as though they were able to clean up entirely.


DOUG.  We do have some advice for attacks like this where it doesn’t get cleaned up in time.

So if you’re pulling from repositories like this, the first thing you can do is: Don’t choose a repository package just because the name looks right.

That’s a tactic that attackers use all the time.


DUCK.  Indeed, Douglas.

It’s basically what we used to call in the jargon “typosquatting” for websites.

Instead of registering example.com, you might register something like examole.com, because O is next to P on the keyboard, in the hope that someone will go to type “example”, make a slight mistake and you’ll grab their traffic and get them onto a lookalike site.

Be careful what you choose.

It’s a little bit like our advice about Caller ID: it tells you something, but only so much.

And, for the rest, you really have to do your due diligence.


DOUG.  Such as: Don’t blindly download package updates into your own development or build systems.


DUCK.  Yes, DevOps and Continuous Integration is all the thing these days, isn’t it, where you automate everything?

And there’s something appealing about saying, “Well, I don’t want to fall behind, so why don’t I just tell my build system to take my code from my local repository where I’m looking after it, and then just always automatically get the latest version from the public repository of all the other people’s code I’m using?”

The problem is, if any of those third-party packages that you’re using get pwned, then your build system is going to get itself into trouble entirely automatically.

So don’t do that if you can possibly avoid it.


DOUG.  Which leads us to: Don’t make it easy for attackers to get into your own packages.


DUCK.  Yes.

Nobody can really stop someone who’s determined to set up, by hand, 2000 new PyPI accounts and put 1000 new packages into each of those.

But when it comes to attacks where crooks take over existing packages and compromise them… you can do your bit to help the rest of the community by making it as hard as possible for your projects to get compromised.

Do go and revisit the security you have on this account or on that package, just in case someone decides it would be a masterful place to insert badware that could affect other people… and of course that would at least temporarily tarnish your reputation at the same time.


DOUG.  And our last tip may fall on some deaf ears, but if it’s enough to just change a few minds, we’ve done some good work here today: Don’t be a you-know-what.


DUCK.  Proving how clever you are, and reminding us all about supply-chain attacks, by making unnecessary work for volunteer teams… like the Linux kernel crew (they’ve suffered from this in the past), PyPI and other popular open source repositories?

If you have a genuine reason why you think you need to tell them about a security vulnerability, find their security disclosure contact details and contact them properly, professionally, responsibly.

Don’t be a ****.


DOUG.  Excellent.

Alright, good advice, and as the sun begins to set on our show for the day, it’s time to hear from one of our readers.

On the previous episode of the podcast, you may recall we talked a bit about the trials and tribulations of the Apple III computer. Let’s take a listen:

I don’t know whether this is an urban legend or not, but I have read that the early [Apple III] models did not have their chips seated properly in the factory, and that recipients who were reporting problems were told to lift the front of the computer off their desk a few centimeters and let it crash back, which would bang them into place like they should have been in the first place. Which apparently did work, but was not the best sort of advert for the quality of the product.


DOUG.  In response, listener S31064 (not sure if that’s a true birth name) chimes in:

I don’t know about that, but the company I was working for at the time was using them for offline library circulation terminals. And nine times out of ten, if there was a problem with it, the fix was to reseat the chips.


DUCK.  Yes, going over your motherboard and (crackle, crackle) pressing all the chips down… that was considered routine maintenance back then.

But it seems that for the Apple III, it was not just routine maintenance, preventative maintenance, it was actually a recognised recovery technique.

So I was fascinated to read that, Doug.

Someone who had actually been there, and done that!


DOUG.  Well, thank you very much, dear listener, for sending that in.

And if you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure.

[MUSICAL MODEM]


Ransomware tales: The MitM attack that really had a Man in the Middle

It’s taken more than five years for justice to be served in this case, but the cops and the courts got there in the end.

The UK law enforcement office SEROCU, short for South East Regional Organised Crime Unit, this week reported the peculiar tale of one Ashley Liles, the literal Man in the Middle whom we referred to in the headline.

These days, we usually expand the jargon term MitM to mean Manipulator in the Middle, not merely to avoid the gendered term “man”, but also because many, if not most, MitM attacks these days are performed by machines.

Some techies have even adopted the name Machine in the Middle, but we prefer “manipulator” because we think it usefully describes how this sort of attack works, and because (as this story shows) sometimes it really is a man, and not a machine, in the middle.

MitM explained

A MitM attack depends on someone or something that can intercept messages sent to you, and modify them on the way through in order to deceive you.

The attacker typically also modifies your replies to the original sender, so that they don’t spot the deception, and get sucked into the trickery along with you.

As you can imagine, cryptography is one way to avoid MitM attacks, the idea being that if the data is encrypted before it’s sent, then whoever or whatever is in the middle can’t make sense of it at all.

The attacker would not only need to decrypt the messages from each end to figure out what they meant, but also to re-encrypt the modified messages correctly before passing them on, in order to avoid detection and maintain the treachery.
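As a minimal sketch of that idea, a keyed message authentication code (MAC) shared by sender and receiver lets the receiver reject anything altered on the way through. (The key and messages here are hypothetical, of course; a real protocol such as TLS combines encryption with authentication, but the principle is the same.)

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret-key"  # hypothetical key known only to the endpoints

def sign(message: bytes) -> bytes:
    # Append a 32-byte HMAC-SHA256 tag so the receiver can detect tampering.
    tag = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
    return message + tag

def verify(blob: bytes) -> bytes:
    # Split off the tag, recompute it, and reject anything that was modified.
    message, tag = blob[:-32], blob[-32:]
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message was tampered with in transit")
    return message

blob = sign(b"pay 100 to alice")
assert verify(blob) == b"pay 100 to alice"

# A manipulator in the middle who rewrites the message without knowing the
# key cannot produce a matching tag, so verify() raises ValueError:
tampered = b"pay 999 to mallory" + blob[-32:]
```

Without the key, the person (or machine) in the middle can neither read an encrypted message nor forge a tag that verifies.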

One classic, and fatal, MitM tale dates back to the late 1580s, when spymasters of England’s Queen Elizabeth I were able to intercept and manipulate secret correspondence from Mary, Queen of Scots.

Mary, who was Elizabeth’s cousin and political arch-rival, was at the time under strict house arrest; her secret messages were apparently smuggled in and out in beer barrels delivered to the castle where she was detained.

Fatally for Mary, Queen Bess’s spymasters were not only able to intercept and read Mary’s messages, but also to send falsified replies that lured Mary into putting sufficient details in writing to cook her own goose, as it were, revealing that she was aware of, and actively supported, a plot to have Elizabeth assassinated.

Mary was sentenced to death, and executed in 1587.

Fast forward to 2018

This time, fortunately, there were no assassination plans, and England abolished the death penalty in 1998.

But this 21st-century message interception crime was as audacious and as devious as it was simple.

A business in Oxford, England, just north of Sophos (we’re 15km downriver in Abingdon-on-Thames, in case you were wondering) was hit by ransomware in 2018.

By 2018, we had already entered the contemporary ransomware era, in which criminals were breaking into and blackmailing entire companies at a time, demanding huge sums of money, instead of going after tens of thousands of individual computer owners for $300 each.

That’s when the now-convicted perpetrator went from being a Sysadmin-in-the-Affected-Business to a Man-in-the-Middle cybercriminal.

While working with both the company and the police to deal with the attack, the perpetrator, Ashley Liles, 28, turned on his colleagues by:

  • Modifying email messages from the original crooks to his bosses, and editing the Bitcoin addresses listed for the blackmail payment. Liles was thereby hoping to intercept any payments that might be made.
  • Spoofing messages from the original crooks to increase the pressure to pay up. We’re guessing that Liles used his insider knowledge to create worst-case scenarios that would be more believable than any threats that original attackers could have come up with.

It’s not clear from the police report exactly how Liles intended to cash out.

Perhaps he intended simply to run off with all the money and then act as though the encryption crooks had cut and run and absconded with the cryptocoins themselves?

Perhaps he added his own markup to the fee and tried to negotiate the attackers’ demand down, in the hope of clearing a massive payday for himself while nevertheless acquiring the decryption key, becoming a hero in the “recovery” process, and thereby deflecting suspicion?

The flaw in the plan

As it happened, Liles’s dastardly plan was ruined by two things: the company didn’t pay up, so there were no Bitcoins for him to intercept, and his unauthorised fiddling in the company email system showed up in the system logs.

Police arrested Liles and searched his computer equipment for evidence, only to find that he’d wiped his computers, his phone and a bunch of USB drives a few days earlier.

Nevertheless, the cops recovered data from Liles’s not-as-blank-as-he-thought devices, linking him directly to what you can think of as a double extortion: trying to scam his employer, while at the same time scamming the scammers who were already scamming his employer.

Intriguingly, this case dragged on for five years, with Liles maintaining his innocence until suddenly deciding to plead guilty in a court hearing on 2023-05-17.

(Pleading guilty earns a reduced sentence, though under current regulations, the amount of “discount”, as it is rather strangely but officially known in England, decreases the longer the accused holds out before admitting they did it.)

What to do?

This is the second insider threat we’ve written about this month, so we’ll repeat the advice we gave before:

  • Divide and conquer. Try to avoid situations where individual sysadmins have unfettered access to everything. This makes it harder for rogue employees to concoct and execute “insider” cybercrimes without co-opting other people into their plans, and thus risking early exposure.
  • Keep immutable logs. In this case, Liles was apparently unable to remove the evidence showing that someone had tampered with other people’s email, which led to his arrest. Make it as hard as you can for anyone, whether insider or outsider, to tamper with your official cyberhistory.
  • Always measure, never assume. Get independent, objective confirmation of security claims. The vast majority of sysadmins are honest, unlike Ashley Liles, but few of them are 100% right all the time.
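The “immutable logs” idea can be approximated even without dedicated logging infrastructure by hash-chaining entries, so that editing or deleting any earlier record invalidates every hash that follows it. This is a hypothetical sketch, not a description of the logging system in this case:

```python
import hashlib
import json

def append_entry(log: list, event: str) -> None:
    # Chain each entry to the hash of the previous one; the first entry
    # chains to a fixed all-zeroes sentinel.
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log: list) -> bool:
    # Recompute every hash from the start; any edit breaks the chain.
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "mail rule changed by admin")
append_entry(log, "mailbox accessed")
assert verify_chain(log)

log[0]["event"] = "nothing to see here"   # an insider tampers...
assert not verify_chain(log)              # ...and the chain no longer verifies
```

In practice you would also ship the entries (or at least the latest chain hash) to a system the insider can’t write to, so the chain itself can’t simply be regenerated.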



PyPI open-source code repository deals with manic malware maelstrom

Public source code repositories, from Sourceforge to GitHub, from the Linux Kernel Archives to ReactOS.org, from PHP Packagist to the Python Package Index, better known as PyPI, are a fantastic source (sorry!) of free operating systems, applications, programming libraries, and developers’ toolkits that have done computer science and software engineering a world of good.

Most software projects need “helper” code that isn’t a fundamental part of the problem that the project itself is trying to solve, such as utility functions for writing to the system log, producing colourful output, uploading status reports to a web service, creating backup archives of old data, and so on.

In cases like that, you can save time (and benefit for free from other people’s expertise) by searching for a package that already exists in one of the many available repositories, and hooking that external package into your own tree of source code.

In the other direction, if you’re working on a project of your own that includes some useful utilities you couldn’t find anywhere else, you might feel inclined to offer something to the community in return by packaging up your code and making it available for free to everyone else.

The cost of free

As you’re no doubt aware, however, community source code repositories bring with them a number of cybersecurity challenges:

  • Popular packages that suddenly vanish. Sometimes, packages that a well-meaning programmer has donated to the community become so popular that they become a critical part of thousands or even hundreds of thousands of bigger projects that take them for granted. But if the original programmer decides to withdraw from the community and to delete their projects (which they have every right to do if they have no formal contractual obligations to anyone who’s chosen to rely on them), the side-effects can be temporarily disastrous, as other people’s projects suddenly “update” to a state in which a necessary part of their code is missing.
  • Projects that get actively hijacked for evil. Cybercriminals who guess, steal or buy passwords to other people’s projects can inject malware into the code, and anyone who already trusts the once-innocent package will unwittingly infect themselves (and perhaps their own customers) with malware if they download the rogue “update” automatically. Crooks can even take over old projects using social engineering trickery, by joining the project and being really helpful for a while, until the original maintainer decides to trust them with upload access.
  • Rogue packages that masquerade as innocent ones. Crooks regularly upload packages that have names that are sufficiently close to well-known projects that other users download and use them by mistake, in an attack jocularly known as typosquatting. (The same trick works for websites, hoping that a user who mistypes a URL even slightly will end up on a bogus look-alike site instead.) The crooks generally clone the genuine package first, so it still performs all the functions of the original, but with some additional malicious behaviour buried deep in the code.
  • Petulant behaviour by so-called “researchers”. We’ve sadly had to write about this sort of probably-legal-but-ethically-dubious behaviour several times. Examples include a US PhD student and their supervisor who deliberately uploaded fake patches to the Linux kernel as part of an unauthorised experiment that the core Linux team were left to sort out, and a self-serving “expert” with the nickname Supply Chain Risks who uploaded a booby-trapped fake project to the PyPI repository as a reminder of the risk of so-called supply chain attacks. SC Risks then followed up their proof-of-concept “research” package with a further 3950 packages, leaving the PyPI team to find and delete them all.
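To illustrate the typosquatting point above: one simple defensive check is to flag any dependency name that is suspiciously close to, but not the same as, a package you already trust. The allow-list below is hypothetical; the distance measure is a plain Levenshtein edit distance.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance (insert/delete/substitute).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Hypothetical allow-list of packages your project actually depends on.
KNOWN = ["requests", "numpy", "cryptography"]

def looks_like_typosquat(name: str) -> bool:
    # Within one or two edits of a trusted name, but not an exact match.
    return any(0 < edit_distance(name, known) <= 2 for known in KNOWN)

assert looks_like_typosquat("reqeusts")      # two letters swapped in "requests"
assert not looks_like_typosquat("requests")  # the exact, trusted name is fine
```

A check like this won’t catch every rogue name, but it would catch exactly the near-miss clones that typosquatters rely on.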

Rogue uploaders

Unfortunately, PyPI seems to have been hammered by a bunch of rogue, automated uploads over the past weekend.

The team has, perhaps understandably, not yet given any details of how the attack was carried out, but the site temporarily blocked anyone new from joining up, and blocked existing users from creating new projects:

New user and new project name registration on PyPI is temporarily suspended. The volume of malicious users and malicious projects being created on the index in the past week has outpaced our ability to respond to it in a timely fashion, especially with multiple PyPI administrators on leave.

While we re-group over the weekend, new user and new project registration is temporarily suspended. [2023-05-20T16:02:00Z]

We’re guessing that the attackers were using automated tools to flood the site with rogue packages, presumably hoping that if they tried hard enough, some of the malicious content would escape notice and get left behind even after the site’s cleanup efforts, thus completing what you might call a Security Bypass Attack…

…or perhaps that the site administrators would feel compelled to take the entire site offline to sort it out, thus causing a Denial of Service Attack, or DoS.

The good news is that in just over 24 hours, the team got on top of the problem, and was able to announce, “Suspension has been lifted.”

In other words, even though PyPI was not 100% functional over the weekend, there was no true denial of service against the site or its millions of users.

What to do?

  • Don’t choose a repository package just because the name looks right. Check that you really are downloading the right module from the right publisher. Even legitimate modules sometimes have names that clash, compete or confuse.
  • Don’t blindly download package updates into your own development or build systems. Test and review everything you download before you approve it for use. Remember that packages typically include update-time scripts that run when you do the update, so malware infections could be delivered via the update process itself, not as part of the package source code that gets left behind afterwards.
  • Don’t make it easy for attackers to get into your own packages. Choose proper passwords, use 2FA whenever you can, and don’t blindly trust newcomers to your project as soon as they start angling to get maintainer access, no matter how keen you are to hand the reins to someone else.
  • Don’t be a you-know-what. As this story reminds us all, volunteers in the open source community have enough trouble with genuine cybercriminals without having to deal with “researchers” who conduct proof-of-concept attacks for their own benefit, whether for academic purposes or for bragging rights (or both).
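
One low-tech way to put the second tip into practice is to check a downloaded package file’s cryptographic hash against the digest published on the project’s PyPI “Download files” page before you let any install-time scripts run. Here’s a minimal Python sketch; the filename and digest in the comment are hypothetical, not real values:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 16):
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: 'expected' would be copied by hand from the
# package's page on pypi.org, BEFORE installing anything:
#
#   expected = "d4735e3a265e16ee..."   # digest shown on PyPI (made up here)
#   assert sha256_of("somepkg-1.0.tar.gz") == expected
```

Pip’s own hash-checking mode (`--require-hashes` with pinned digests in `requirements.txt`) automates the same idea for whole dependency trees.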

Phone scamming kingpin gets 13 years for running “iSpoof” service

In November 2022, we wrote about a multi-country takedown against a Cybercrime-as-a-Service (CaaS) system known as iSpoof.

Although iSpoof advertised openly for business on a non-darkweb site, reachable with a regular browser via a non-onion domain name, and even though using its services might technically have been legal in your country (if you’re a lawyer, we’d love to hear your opinion on that issue once you’ve seen the historical website screenshots below)…

…a UK court had no doubt that the iSpoof system was implemented with life-ruining, money-draining malfeasance in mind.

The site’s kingpin, Tejay Fletcher, 35, of London, was given a prison sentence of well over a decade to reflect that fact.

Show any number you like

Until November 2022, when the domain was taken down after a seizure warrant was issued to US law enforcement, the site’s main page looked something like this:

You can show any number you wish on call display, essentially faking your caller ID.

And an explanatory section further down the page made it pretty clear that the service wasn’t merely there to enhance your own privacy, but to help you mislead the people you were calling:

Get the ability to change what someone sees on their caller ID display when they receive a phone call from you. They’ll never know it was you! You can pick any number you want before you call. Your opposite will be thinking you’re someone else. It’s easy and works on every phone worldwide!

In case you were still in any doubt about how you could use iSpoof to help you rip off unsuspecting victims, here’s the site’s own marketing video, provided courtesy of the Metropolitan Police (better known as “the Met”) in London, UK:

As you will see below, and in our previous coverage of this story, iSpoof users weren’t actually anonymous at all.

More than 50,000 users of the service have been identified so far, with close to 200 people arrested and under investigation in the UK alone.

Pretend to be a bank…

Simply put, if you signed up for iSpoof’s service, no matter how technical or non-technical you were, you could immediately start placing calls that would show up on victims’ phones as if those calls were coming from a company that they already trusted.

As the Metropolitan Police put it:

Users of iSpoof, who had to pay to use its services, posed as representatives of banks including Barclays, Santander, HSBC, Lloyds and Halifax [well-known British banks], pretending to warn of suspicious activity on their accounts.

Scammers would encourage the unsuspecting members of the public to disclose security information such as one-time passcodes to obtain their money.

The total reported loss from those targeted via iSpoof is £48 million in the UK alone, with average loss believed to be £10,000. Because fraud is vastly under reported, the full amount is believed to be much higher.

In the 12 months until August 2022 around 10 million fraudulent calls were made globally via iSpoof, with around 3.5 million of those made in the UK.

Interestingly, the Met says that about 10% of those UK calls (about 350,000 in all), made to 200,000 different potential victims, lasted more than a minute, suggesting a surprisingly high success rate for scammers who used the iSpoof service to give their bogus calls a fraudulent air of legitimacy.

When calls arrive from a number you’re inclined to trust – for example, a number you use sufficiently often that you’ve added it into your own contact list so it comes up with an identifier of your choice, such as Credit Card Company, rather than something generic-looking such as +44.121.496.0149…

…you’re unsurprisingly more likely to trust the caller implicitly before you hear what they’ve got to say.

After all, the system that transmits the caller’s number to the recipient before the call is even answered is known in the jargon as Caller ID, or Calling Line Identification (CLI) outside North America.

It’s not any sort of ID

Those magic words ID and identification shouldn’t really be there, because a technically savvy caller (or a completely non-technical caller who was using the iSpoof service) could insert any number they liked when initiating the call.

In other words, Caller ID not only tells you nothing about the person using the phone that’s calling you, but also tells you nothing trustworthy about the number of the phone that’s calling you.

Caller ID “identifies” the caller and the calling number no more reliably than the return address that’s printed on the back of a snail-mail envelope, or the Reply-To address that’s in the headers of any emails you receive.

All those “identifications” can be chosen by the originator of the communication, and can say pretty much anything that the sender or caller chooses.

They should really be called What the Caller Wants you to Think, Which Could Be a Pack of Lies, rather than being referred to as an ID or an identification.
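
The email analogy is easy to demonstrate for yourself: a message’s From: and Reply-To: headers, just like a spoofed Caller ID number, are plain strings that the sender fills in, and nothing in the message format verifies them. A tiny Python sketch (every address below is invented for illustration):

```python
from email.message import EmailMessage

# Illustration only: these headers are simply whatever the sender chooses
# to put in them, exactly like the number shown by Caller ID.
msg = EmailMessage()
msg["From"] = "Genuine Bank <security@example-bank.test>"  # sender's choice
msg["Reply-To"] = "crook@example.invalid"                  # sender's choice
msg["To"] = "victim@example.org"
msg["Subject"] = "Urgent: suspicious activity on your account"
msg.set_content("The headers above prove nothing about who really sent this.")

print(msg["From"])  # displays whatever name and address the sender picked
```

Mail servers along the way may *add* trace headers of their own, but the sender-supplied ones arrive exactly as typed.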

And there was an awful lot of lying going on, thanks to iSpoof, with the Met claiming:

Before it was shut down in November 2022, iSpoof was constantly growing. 700 new users were registering with the site every week and it was earning on average £80,000 per week. At the point of closure it had 59,000 registered users.

The website offered a number of packages for users who would buy, in Bitcoin, the number of minutes they wanted to use the software for to make calls.

The site raked in loads of profit, according to the Met:

iSpoof made just over £3 million with Fletcher profiting around £1.7-£1.9 million from running and enabling fraudsters to ruin victim’s lives. He lived an extravagant lifestyle, owning a Range Rover worth £60,000 and a Lamborghini Urus worth £230,000. He regularly went on holiday, with trips to Jamaica, Malta and Turkey in 2022 alone.

Earlier in 2023, Fletcher pleaded guilty to the offences of making or supplying articles for use in fraud, encouraging or assisting the commission of an offence, possessing criminal property and transferring criminal property.

Last week he was given a prison sentence of 13 years and 4 months; 169 other people in the UK “have now been arrested on suspicion of using iSpoof [and] remain under police investigation.”

What to do?


  • TIP 1. Treat Caller ID as nothing more than a hint.

The most important thing to remember (and to explain to any friends and family you think might be vulnerable to this sort of scam) is this: THE CALLER’S NUMBER THAT SHOWS UP ON YOUR PHONE BEFORE YOU ANSWER PROVES NOTHING.


  • TIP 2. Always initiate official calls yourself, using a number you can trust.

If you genuinely need to contact an organisation such as your bank by phone, make sure that you initiate the call, and use a number that you worked out for yourself.

For example, look at a recent official bank statement, check the back of your bank card, or even visit a branch and ask a staff member face-to-face for the official number that you should call in future emergencies.


  • TIP 3. Be there for vulnerable friends and family.

Make sure that friends and family whom you think could be vulnerable to being sweet-talked (or browbeaten, confused and intimidated) by scammers, no matter how they’re first contacted, know that they can and should turn to you for advice before agreeing to anything over the phone.

And if anyone asks them to do something that’s clearly an intrusion of their personal digital space, such as installing TeamViewer to let them onto the computer, reading out a secret access code off the screen, or telling them a personal identification number or password…

…make sure they know it’s OK simply to hang up without saying a single word further, and to get in touch with you to check the facts first.


Apple’s secret is out: 3 zero-days fixed, so be sure to patch now!

Remember that zipped-lipped but super-fast update that Apple pushed out three weeks ago, on 2023-05-01?

That update was the very first in Apple’s newfangled Rapid Security Response process, whereby the company can push out critical patches for key system components without going through a full-size operating system update that takes you to a new version number.

As we pondered in the Naked Security podcast that week:

Apple have just introduced “Rapid Security Responses.” People are reporting that they take seconds to download and require one super-quick reboot. [But] as for being tight-lipped [about the update], they are zipped-lipped. Absolutely no information what it was about. But it was nice and quick!

Good for some

Unfortunately, these new Rapid Security Responses were only available for the very latest version of macOS (currently Ventura) and the latest iOS/iPadOS (currently on version 16), which left users of older Macs and iDevices, as well as owners of Apple Watches and Apple TVs, in the dark.

Apple’s description of the new rapid patches implied that they’d typically deal with zero-day bugs that affected core software such as the Safari browser, and WebKit, which is the web rendering engine that every browser is obliged to use on iPhones and iPads.

Technically, you could create an iPhone or iPad browser app that used the Chromium engine, as Chrome and Edge do, or the Gecko engine, as Mozilla’s browsers do, but Apple wouldn’t let it into the App Store if you did.

And because the App Store is the one-and-only “walled garden” source of apps for Apple’s mobile devices, that’s that: it’s the WebKit way, or no way.

The reason that critical WebKit bugs tend to be more dangerous than bugs in many other applications is that browsers quite intentionally spend their time fetching content from anywhere and everywhere on the internet.

Browsers then process these untrusted files, supplied remotely by other people’s web servers, convert them into viewable, clickable content, and display them as web pages you can interact with.

You expect that your browser will actively warn you, and explicitly request permission, before performing actions that are considered potentially dangerous, such as activating your webcam, reading in files already stored on your device, or installing new software.

But you also expect content that’s not considered directly dangerous, such as images to be displayed, videos to be shown, audio files to be played, and so on, to be processed and presented to you automatically.

Simply put, merely visiting a web page shouldn’t put you at risk of having malware implanted on your device, your data stolen, your passwords sniffed out, your digital life subjected to spyware, or any malfeasance of that sort.

Unless there’s a bug

Unless, of course, there’s a bug in WebKit (or perhaps several bugs that can be strategically combined), so that merely by preparing a deliberately booby-trapped image file, or video, or JavaScript popup, your browser could be tricked into doing something it shouldn’t.

If cybercriminals, or spyware sellers, or jailbreakers, or the security services of a government that doesn’t like you, or indeed anyone with your worst interests at heart, uncovers an exploitable bug of this sort, they may be able to compromise the cybersecurity of your entire device…

…simply by luring you to an otherwise innocent-looking website that ought to be perfectly safe to visit.

Well, Apple just followed up its latest Rapid Security Response patches with full-on updates for all its supported products, and in amongst the security bulletins for those patches, we’ve finally found out what those Rapid Responses were there to fix.

Two zero-days:

  • CVE-2023-28204: WebKit. An out-of-bounds read was addressed with improved input validation. Processing web content may disclose sensitive information. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2023-32373: WebKit. A use-after-free issue was addressed with improved memory management. Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.

Generally speaking, when two zero-days of this sort show up at the same time in WebKit, it’s a good bet that they’ve been combined by criminals to create a two-step takeover attack.

Bugs that corrupt memory by overwriting data that shouldn’t be touched (e.g. CVE-2023-32373) are always bad, but modern operating systems include many runtime protections that aim to stop such bugs being exploited to take control of the buggy program.

For example, if the operating system randomly chooses where programs and data end up in memory, cybercriminals often can’t do much more than crash the vulnerable program, because they can’t predict how the code they’re attacking is laid out in memory.

But with precise information about what’s where, a crude, “crashtastic” exploit can sometimes be turned into a “crash-and-keep-control” exploit: what’s known by the self-descriptive name of a remote code execution hole.

Of course, bugs that let attackers read from memory locations that they’re not supposed to (e.g. CVE-2023-28204) can not only lead directly to data leakage and data theft exploits, but also lead indirectly to “crash-and-keep-control” attacks, by revealing secrets about the memory layout inside a program and making it easier to take over.
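
As a toy illustration of why an information-leak bug is such a force multiplier, here’s a Python sketch that simulates address randomisation. All the constants are invented for the simulation; nothing here models any real exploit:

```python
import random

# Toy model: the "loader" picks a random base address for the victim code
# on every run, so a blind attacker must guess where to aim. One leaked
# pointer lets the attacker compute the base exactly.
PAGE = 0x1000
ASLR_SLOTS = 2 ** 16        # pretend the loader picks one of 65,536 bases
GADGET_OFFSET = 0x4A0       # fixed offset of the code the attacker wants

def load_program(rng):
    """One 'run' of the victim: a random base, plus one leakable pointer."""
    base = rng.randrange(ASLR_SLOTS) * PAGE
    leaked_pointer = base + GADGET_OFFSET  # what an out-of-bounds read reveals
    return base, leaked_pointer

def blind_attack(rng, base):
    """No leak: guess the gadget address; almost always wrong (a crash)."""
    guess = rng.randrange(ASLR_SLOTS) * PAGE + GADGET_OFFSET
    return guess == base + GADGET_OFFSET

def leak_attack(base, leaked_pointer):
    """With the leak: recover the base, then aim precisely."""
    recovered_base = leaked_pointer - GADGET_OFFSET
    return recovered_base + GADGET_OFFSET == base + GADGET_OFFSET

rng = random.Random(2023)
hits = sum(blind_attack(rng, load_program(rng)[0]) for _ in range(1000))
print("blind attacks that landed, out of 1000:", hits)  # expect roughly zero
base, leak = load_program(rng)
print("leak-based attack landed:", leak_attack(base, leak))
```

In the toy model the blind attacker succeeds about once in 65,536 tries (each failure being a telltale crash in real life), while the leak-assisted attacker lands first time, every time.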

Intriguingly, there’s a third zero-day patched in the latest updates, but this one apparently wasn’t fixed in the Rapid Security Response.

  • CVE-2023-32409: WebKit. The issue was addressed with improved bounds checks. A remote attacker may be able to break out of Web Content sandbox. Apple is aware of a report that this issue may have been actively exploited.

As you can imagine, combining these three zero-days would be the equivalent of a home run to an attacker: the first bug reveals the secrets needed to exploit the second bug reliably, and the second bug allows code to be implanted to exploit the third…

…at which point, the attacker has not merely taken over the “walled garden” of your current web page, but grabbed control of your entire browser, or worse.

What to do?

Make sure you’re patched! (Go to Settings > General > Software Update.)

Even devices that already received a Rapid Security Response at the start of May 2023 have a zero-day still to be patched.

And all platforms have received many other security fixes for bugs that could be exploited for attacks as varied as: bypassing privacy preferences; accessing private data from the lockscreen; reading your location information without permission; spying on network traffic from other apps; and more.

After updating, you should see the following version numbers:

  • watchOS: now at version 9.5
  • tvOS: now at version 16.5
  • iOS 15 and iPadOS 15: now at version 15.7.6
  • iOS 16 and iPadOS 16: now at version 16.5
  • macOS Big Sur: now at 11.7.7
  • macOS Monterey: now at 12.6.6
  • macOS Ventura: now at 13.4

Important note: if you have macOS Big Sur or macOS Monterey, those all-important WebKit patches aren’t bundled in with the operating system version update but are supplied in a separate update package called Safari 16.5.

Have fun!

