
S3 Ep124: When so-called security apps go rogue [Audio + Text]

A ROGUES’ GALLERY

Rogue software packages. Rogue “sysadmins”. Rogue keyloggers. Rogue authenticators.

No audio player below? Listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Scambaiting, rogue 2FA apps, and we haven’t heard the last of LastPass.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, how do you do today?


DUCK.  Chilly, Doug.

Apparently, March is going to be colder than February.


DOUG.  We are having the same problem here, the same challenge.

So, fret not – I have a very interesting This Week in Tech History segment.

This week, on 05 March 1975, the first gathering of the Homebrew Computer Club took place in Menlo Park, California, hosted by Fred Moore and Gordon French.

The first meeting saw around 30 technology enthusiasts discussing, among other things, the Altair.

And about a year later, on 01 March 1976, Steve Wozniak showed up to a meeting with a circuit board he created, aiming to give away the plans.

Steve Jobs talked him out of it, and the two went on to start Apple.

And the rest is history, Paul.


DUCK.  Well, it certainly is history, Doug!

Altair, eh?

Wow!

The computer that persuaded Bill Gates to drop out of Harvard.

And in true entrepreneurial fashion, together with Paul Allen and Monte Davidoff – I think that was the trio who wrote Altair BASIC – he decamped to New Mexico.

Go and work at the hardware vendor’s property in Albuquerque!


DOUG.  Perhaps something that’s maybe not going to make history…

…we’ll start the show off with an unsophisticated yet interesting scambaiting campaign, Paul.

NPM JavaScript packages abused to create scambait links in bulk


DUCK.  Yes, I wrote this up on Naked Security, Doug, under the headline NPM JavaScript packages abused to create scambait links in bulk (it’s a lot wordier to say than it seemed at the time when I wrote it)…

…because I felt it was an interesting angle on the sort of web property that we tend to associate directly, and only, with so-called supply-chain source code attacks.

And in this case, the crooks figured, “Hey, we don’t want to distribute poisoned source code. We’re not into that kind of supply-chain attack. What we’re looking for is just a series of links that people can click on that won’t arouse any suspicions.”

So, if you want a Web page that someone can visit that has a load of links to dodgy sites… like “Get your free Amazon bonus codes here” and “Get your free bingo spins” – there were literally tens of thousands of these…

…why not choose a site like the NPM Package Manager, and create a whole load of packages?

Then you don’t even need to learn HTML, Doug!

You could just use good old Markdown, and there you’ve got essentially a good-looking, trusted source of links you can click through to.

And those links that they were using, as far as I can make out, went off to essentially unsuspicious blog sites, community sites, whatever, that had unmoderated or poorly moderated comments, or where they were easily able to create accounts and then make comments that had links in.

So they’re basically building a chain of links that wouldn’t arouse suspicion.


DOUG.  So, we have some advice: Don’t click freebie links, even if you’re interested or intrigued.


DUCK.  That’s my advice, Doug.

Maybe there are some free codes, or maybe there’s some coupon stuff that I could get… maybe there’s no harm in having a look.

But if there’s some kind of affiliated ad revenue with that, that the crooks are making just by enticing you bogusly to a particular site?

No matter how minuscule the amount is that they’re making, why give them anything for nothing?

That’s my advice.

“Best way to avoid punch is no be there,” as always.


DOUG.  [LAUGHS] And then we have: Don’t fill in online surveys, no matter how harmless they seem.


DUCK.  Yes, we’ve said that many times on Naked Security.

For all you know, you might be giving your name here, your phone number there, maybe your date of birth somewhere else for a free gift, and you think, “What’s the harm?”

But if all that information is actually ending up in one giant bucket, then, over time, the crooks are just learning more and more about you, sometimes perhaps including data that it’s very difficult to change.

You can get a new credit card tomorrow, but it’s rather harder to get a new birthday or to move house!


DOUG.  And last, but certainly not least: Don’t run blogs or community sites that allow unmoderated posts or comments.

And if anyone’s ever run, say, a WordPress site, the thought of allowing unmoderated comments is just short of mind-blowing, because there will be thousands of them.

It is an epidemic.


DUCK.  Even if you’ve got an automated anti-spamming service on your comment system, that will do a great job…

…but don’t let the other stuff through and think, “Oh, well, I’ll go back and remove it, if I see that it looks dodgy afterwards,” because, like you said, it’s at epidemic proportions…


DOUG.  That’s a full time job, yes!


DUCK.  …and has been for ages.


DOUG.  And you were able, I’m delighted to see, to work in two of our favourite mantras around here.

At the end of the article: Think before you click, and: If in doubt…


DUCK.  …don’t give it out.

It really is as simple as that.


DOUG.  Speaking of giving things out, three youngsters allegedly made off with millions in extortion money:

Dutch police arrest three cyberextortion suspects who allegedly earned millions


DUCK.  Yes.

They were busted in the Netherlands for crimes that they are alleged to have started committing… I think it’s two years ago, Doug.

And they are 18 years, 21 years, and 21 years old now.

So they were pretty young when they started.

And the prime suspect, who is 21 years old… the cops allege he has made about two-and-a-half-million Euros.

That is a lot of money for a youngster, Doug.

It’s a lot of money for anybody!


DOUG.  I don’t know what you were making at 21, but I was not making that much, not even close. [LAUGHS]


DUCK.  Maybe two Euros fifty an hour? [LAUGHTER]

It seems that their modus operandi was not to end up with ransomware, but to leave you with the *threat* of ransomware because they were already in.

So they’d come in, they’d do all the data theft, and then instead of actually bothering to encrypt your files, it sounds as though what they’d do is they’d say, “Look, we’ve got the data; we can come back and ruin everything, or you can pay.”

And the demands were somewhere between €100,000 and €700,000 per victim.

And if it’s true that one of them made €2,500,000 in the past two years out of his cybercriminality, you can imagine that they probably blackmailed quite a few victims into paying up, for fear of what might get revealed…


DOUG.  We’ve said around here, “We’re not going to judge, but we urge people not to pay up in instances like this, or in instances like ransomware.”

And for good reason!

Because, in this case, the police note that paying the blackmail didn’t always work out.

They said:

In many cases, stolen data was leaked online even after the affected companies had paid up.


DUCK.  So, if you ever thought, “I wonder if I can trust those guys not to leak the data, or for it not to appear online?”…

…I think you’ve got your answer there!

And bear in mind that it may not be that these particular crooks were just ultra-duplicitous, and that they took the money and leaked it anyway.

We don’t know that *they* were necessarily the people who leaked it.

They could have just been so bad at security themselves that they stole it; they had to put it somewhere; and while they were negotiating, telling you, “We’ll delete the data”…

…for all we know, someone else could have stolen it in the meantime.

And that’s always a risk, so paying for silence rarely works out well.


DOUG.  And we’ve seen more and more attacks like this where ransomware actually looks a little bit more straightforward: “Pay me for the decryption key; you pay me; I’ll give it to you; you can unlock your files.”

Well, now they’re going in and saying, “We’re not going to lock anything up, or we’re going to lock it up but we’re also going to leak it online if you don’t pay…”


DUCK.  Yes, it’s three sorts of extortion, isn’t it?

There’s, “We locked up your files, pay the money or your business will stay derailed.”

There’s, “We stole your files. Pay up or we’ll leak them, and then we might come back and ransomware you anyway.”

And there’s the double-pronged approach that some crooks seem to like, where they steal your data *and* they scramble the files, and they say, “You might as well pay up to decrypt your files, and no extra charge, Doug, we’ll delete the data as well!”

So, can you trust them?

Well, here’s your answer…

Probably not!


DOUG.  All right, head over and read about that.

There’s further insight and context at the bottom of that article… Paul, you did an interview with our own Peter Mackenzie, who is the Director of Incident Response here at Sophos. (Full transcript available.)

No audio player below? Listen directly on Soundcloud.

And, as we always say in cases like these, if you’re affected by this, report the activity to the police so that they have as much information as they can get in order to put their case together.

I’m happy to report that we said we’d keep an eye on it; we did; and we’ve got a LastPass update:

LastPass: Keylogger on home PC led to cracked corporate password vault


DUCK.  We have indeed, Doug!

This is indicating how the breach of their corporate passwords allowed the attack to go from being a “little thing” where they got source code to something rather more dramatic.

LastPass seem to have figured out how that actually happened… and in this report, there are effectively, if not words of wisdom, at least words of warning.

And I did repeat, in the article I wrote about this, what we said on last week’s podcast promo video, Doug, namely:

Sadly, it seems that one of the developers, who just happened to have the password to unlock the corporate password vault, was running some kind of media-related software that they hadn’t patched.

And the crooks were able to use an exploit against it… to install a keylogger, Doug!

From which, of course, they got that super-secret password that opened the next stage of the equation.

If you’ve ever heard the term lateral movement – that’s a jargon term you’ll hear a lot.

The analogy you have with conventional criminality is…

…get into the lobby of the building; hang around a little bit; then sneak into a corner of the security office; wait in the shadows so nobody sees you until the guards go and make a cup of tea; then go to the shelf next to the desk and grab one of those access cards; that gets you into the secure area next to the bathroom; and in there, you’ll find the key to the safe.

You see how far you can get, and then you work out probably what you need, or what you’ll do, to get you the next step, and so on.

Beware the keylogger, Doug! [LAUGHS]


DOUG.  Yes!


DUCK.  Good, old-school, non-ransomware malware is [A] alive and well, and [B] can be just as harmful to your business.


DOUG.  Yes!

And we’ve got some advice, of course.

Patch early, patch often, and patch everywhere.


DUCK.  Yes.

LastPass were very polite, and they didn’t blurt out, “It was XYZ software that had the vulnerability.”

If they’d said, “Oh, the software that was hacked was X”…

…then people who didn’t have X would go, “I can stand down from blue alert; I don’t use that software.”

In fact, that’s why we say not just patch early, patch often… but patch *everywhere*.

Just patching the software that affected LastPass is not going to be enough in your network.

It does need to be something you do all the time.


DOUG.  And then we’ve said this before, and we’ll continue to say it until the sun burns out: Enable 2FA wherever you can.


DUCK.  Yes.

It is *not* a panacea, but at least it means that passwords alone are not enough.

So it doesn’t raise the bar all the way, but it definitely doesn’t make it easier for the crooks.


DOUG.  And I believe we’ve said this recently: Don’t wait to change credentials or reset 2FA seeds after a successful attack.


DUCK.  As we’ve said before, a rule that says, “You have to change your password – change for change’s sake, do it every two months regardless”…

…we don’t agree with that.

We just think that is getting everybody into the habit of a bad habit.

But if you think there might be a good reason to change your passwords, even though it’s a real pain in the neck to do it…

…if you think it might help, why not just do it anyway?

If you’ve got a reason to start the change process, then just go through with the whole thing.

Don’t delay/Do it today.

[QUIETLY] See what I did there, Doug?


DOUG.  Perfect!

Alright, let’s stay on the subject of 2FA.

We are seeing a spike in rogue 2FA apps in both app stores.

Could this be because of the Twitter 2FA kerfuffle, or some other reason?

Beware rogue 2FA apps in App Store and Google Play – don’t get hacked!


DUCK.  I don’t know that it’s specifically due to the Twitter 2FA kerfuffle, where Twitter have said, for whatever reasons they have, “Ooh, we’re not going to use SMS two-factor authentication anymore, unless you pay us money!”

And since the majority of people aren’t going to be Twitter Blue badge holders, they’re going to have to switch.

So I don’t know that that’s caused a surge in rogue apps in App Store and Google Play, but it certainly drew the attention of some researchers who are good friends to Naked Security: @mysk_co, if you want to find them on Twitter.

They thought, “I bet lots of people are actually looking for 2FA authenticator apps right now. I wonder what happens if you go to the App Store or Google Play and just type in Authenticator app?”

And if you go to the article on Naked Security, entitled “Beware rogue 2FA apps”, you will see a screenshot that those researchers prepared.

It’s just row after row after row of identical-looking authenticators. [LAUGHS]


DOUG.  [LAUGHS] They’re all called Authenticator, all with a lock and a shield!


DUCK.  Some of them are legit, and some of them aren’t.

Annoyingly, when I went to the App Store – even after this had got into the news – the top app that came up was, as far as I could see, one of these rogue apps.

And I was really surprised!

I thought, “Crikey – this app is signed in the name of a very well known Chinese mobile phone company.”

Luckily, the app looked rather unprofessional (the wording was very bad), so I didn’t for a moment believe that it really was this mobile phone company.

But I thought, “How on earth did they manage to get a code-signing certificate in the name of a legitimate company, when clearly they wouldn’t have had any documentation to prove that they were that company?” (I won’t mention its name.)

Then I read the name really carefully… and it was, in fact, a typosquat, Doug!

One of the letters in the middle of the word had, how can I say, a very similar shape and size to the one belonging to the real company.

And so, presumably, it had passed automated tests.

It didn’t match any known brand name that somebody already had a code signing certificate for.

And even I had to read it twice… even though I knew that I was looking at a rogue app, because I’d been told to go there!

On Google Play, I also came across an app that I was alerted to by the chaps who did this research…

…which is one that doesn’t just ask you to pay $40 a year for something you could get for free, built into iOS or directly from the Play Store with Google’s name on it.

It also stole the starting seeds for your 2FA accounts, and uploaded them to the developer’s analytics account.

How about that, Doug?

So that’s at best extreme incompetence.

And, at worst, it’s just outright malevolent.

And yet, there it was… top result when the researchers went looking in the Play Store, presumably because they splashed a little bit of ad love on it.

Remember, if someone gets that starting seed, that magic thing that’s in the QR code when you set up app-based 2FA…

…they can generate the right code for you, for any 30-second login window in the future, forever and ever, Doug.

It’s as simple as that.

That shared secret is *literally* the key to all your future one-time codes.


DOUG.  And we’ve got a reader comment on this rogue 2FA story.

Naked Security reader LR comments, in part:

I dumped Twitter and Facebook ages ago.

Since I am not using them, do I need to be concerned about the two-factor situation?


DUCK.  Yes, that’s an intriguing question, and the answer is, as usual, “It depends.”

Certainly if you’re not using Twitter, you could still choose badly when it comes to installing a 2FA app…

…and you might be more inclined to go and get one, now 2FA has been in the news because of the Twitter story, than you would have weeks, months, or years ago.

And if you *are* going to go and opt for 2FA, just make sure you do it as safely as you can.

Don’t just go and search, and download what seems like the most obvious app, because here is strong evidence that you could put yourself very much in harm’s way.

Even if you’re on the App Store or on Google Play, and not sideloading some made-up app that you got from somewhere else!

So, if you are using SMS-based 2FA but you don’t have Twitter, then you don’t need to switch away from it.

If you choose to do so, however, make sure you pick your app wisely.


DOUG.  Alright, great advice, and thank you very much, LR, for sending that in.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure!

[MUSICAL MODEM]


LastPass: The crooks used a keylogger to crack a corporate password vault

There’s no date on the update, but as far as we can make out, LastPass just [2023-02-27] published a short document entitled Incident 2 – Additional details of the attack.

As you probably remember, because the bad news broke just before the Christmas holiday season in December 2022, LastPass suffered what’s known in the jargon as a lateral movement attack.

Simply put, lateral movement is just a fancy way of saying, “Once you get into the lobby, you can sneak into a dark corner of the security office, where you can wait in the shadows until the guards get up to make tea, when you can grab an access card from the shelf next to where they usually sit, which will get you into the secure area next to the cloakroom, where you’ll find the keys to the safe.”

The unknown unknowns

As we’ve previously described, LastPass spotted, in August 2022, that someone had broken into their DevOps (development operations) network and run off with proprietary information, including source code.

But that’s a bit like coming back from vacation to find a side window smashed and your favourite games console missing… with nothing else obviously amiss.

You know what you know, because there’s broken glass on the kitchen floor and a console-shaped gap where your beloved PlayBox-5/360 games device used to be.

But you don’t know, and you can’t easily figure out, what you don’t know, such as whether the crooks diligently scanned-but-replaced all the personal documents in your desk drawer, or took good-quality photos of the educational certificates on the wall, or found copies of your front door key that you’d forgotten you had, or went into your bathroom and used your toothbrush to…

…well, you simply can’t be sure what they didn’t do with it.

Threat actor pivots

In LastPass’s case, the initial breach was immediately followed, as the company now says, by an extended period of attackers poking around elsewhere looking for additional cyberbooty:

The threat actor pivoted from the first incident, which ended on 2022-08-12, but was actively engaged in a new series of reconnaissance, enumeration, and exfiltration activities aligned to the cloud storage environment spanning from 2022-08-12 to 2022-10-26.

The burning question, it seems, was, “How was that pivoting possible, given that the needed access credentials were locked up in a secure password vault to which only four developers had access?”

(The word pivot in this context is just a jargon way of saying, “Where the crooks went next.”)

LastPass now thinks it has the answer, and though it’s a bad look for the company to get pwned in this way, we’ll repeat what we said in last week’s podcast promo video, in respect of the recent Coinbase breach, where source code was also stolen:

Coinbase’s luckless employee got phished, but LastPass’s luckless developer apparently got keylogged, with the crooks exploiting an unpatched vulnerability to get their foothold:

[Access to the vault password] was accomplished by targeting the DevOps engineer’s home computer and exploiting a vulnerable third-party media software package, which enabled remote code execution capability and allowed the threat actor to implant keylogger malware. The threat actor was able to capture the employee’s master password as it was entered, after the employee authenticated with MFA, and gain access to the DevOps engineer’s LastPass corporate vault.

Sadly, it doesn’t matter how complex, long, random or unguessable your password is if your attackers can simply record you typing it in.

(No, we’re not sure why there was apparently no requirement for 2FA for opening up the corporate vault, in addition to the 2FA used when the employee first authenticated.)

What to do?

  • Patch early, patch often, patch everywhere. This doesn’t always help, for example if your attackers have access to a zero-day exploit for which no patch yet exists. But most vulnerabilities never get turned into zero-days, which means that if you patch promptly you will very frequently be ahead of the crooks. Anyway, especially in the case of a zero-day, why leave yourself exposed for a moment longer than you need to?
  • Enable 2FA wherever you can. This doesn’t always help, for example if you’re attacked via a phishing site that tricks you into handing over your regular password and your current one-time code at the same time. But it often stops stolen passwords alone being enough to mount further attacks.
  • Don’t wait to change credentials or reset 2FA seeds after a successful attack. We’re not fans of regular, forced password changes when there’s no obvious need, just for the sake of change. But we are fans of a change early, change everywhere approach when you know that crooks have got in somewhere.

That rotten thief who stole your games console probably just grabbed it and ran, so as not to get caught, and didn’t waste time going into your bathroom, let alone picking up your toothbrush…

…but we reckon you’re going to replace it anyway.

Now we’ve mentioned it.


Dutch police arrest three cyberextortion suspects who allegedly earned millions

Dutch police announced late last week that they’d arrested three young men, aged between 18 and 21, suspected of cybercrimes involving breaking in, stealing data, and then demanding hush money.

The charges include: computer intrusion, data theft, extortion, blackmail, and money laundering.

The trio were actually arrested a month earlier, back in January 2023, but the details of the arrest were kept secret until now, presumably to allow undercover investigations to continue.

Undercover cyberoperations

Legally authorised undercover operations by cybercops can bring surprising results, even if those operations don’t ultimately lead to suspects being identified, or to actual servers and data being seized.

Late last year, for example, we wrote about a trick that the Dutch police used for some time against the DEADBOLT ransomware gang, who scramble unpatched QNAP network storage devices over the internet, and demand payment in Bitcoins to decrypt the ruined files.

The Dutch cops didn’t know who was behind the ransom demands, but they were able to “cheat the crooks back” by buying decryption keys for 155 victims, but then pulling the rug out from under the crooks before the payment went through.

The cops figured out a lawfully approved way to disown their payments on the blockchain (and thus to retain their Bitcoins) immediately after getting the decryption keys but before the criminals could claim the cryptocash.

Loosely speaking, the cops deliberately did a double-spend when buying the decryption keys, paying the very same Bitcoinage both to the crooks and, soon afterwards, to themselves. By carefully choosing the transaction fees they offered in each case, the cops were able to lure the crooks into assuming that the original payment was certain to go through, and thus to release the decryption keys quickly. The cops then jumped in with a duplicate transaction with a better fee, thus gazumping the crooks and clawing the funds back. Sadly, the DEADBOLT crooks have now learned simply to wait “for the cheque to clear” before shipping their “product”.

No honour amongst thieves

Intriguingly, these latest Dutch arrests relate to cybercriminality going back to March 2021, when the suspects would have been two years younger still.

Despite their youth, the police claim that the suspects were blackmailing victims for more-than-grown-up sums of money:

As far as we can ascertain, the blackmail money demanded in each incident ranged from €100,000 to more than €700,000. … In the past few years, the prime suspect, [now 21], appears to have had a criminal income of €2,500,000.

Even worse, the police note that paying the blackmail didn’t always work out:

In many cases, stolen data was leaked online even after the affected companies had paid up.

Simply put, if you’ve ever wondered how far you can trust the crooks who just broke into your network, assuming you pay them for their silence…

…the answer might very well be, “Not a bit.” (Pun intended.)

What to do?

For advice on how network intruders typically get in, how to detect them if they do, and how to keep them out in the first place, listen to this insightful interview with Peter Mackenzie, Director of Incident Response at Sophos.

This is a cybersecurity session from the Sophos Security SOS Week 2022 that will alarm, amuse and educate you, all in equal measure. (Full transcript available.)

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.


Another way to help yourself, and everyone else, is to report cybercriminal activity to the police.

The Dutch police would love to hear from you, especially if you have any information about recent cybercriminality that might relate to the suspects above (the Dutch generally don’t name suspects, and haven’t done so here) – for example because you were blackmailed with the threat of stolen data being leaked online or of further, more destructive, attacks.

You can find out more about how Dutch law enforcement is taking on cybercrime on the police website, and read a short briefing document for IT specialists that gives tips not only on how to keep cybercrooks out in the first place, but also how to preserve useful evidence for police and the courts if attackers do get into your network.



Beware rogue 2FA apps in App Store and Google Play – don’t get hacked!

Thanks to Tommy Mysk and Talal Haj Bakry of @mysk_co for the impetus and information behind this article. The duo describe themselves as “two iOS developers and occasional security researchers on two continents.” In other words, although cybersecurity isn’t their core business, they’re doing what we wish all programmers would do: not taking application or operating system security features for granted, but keeping their own eyes on how those features work in real life, in order to avoid tripping over other people’s mistakes and assumptions.
The featured image above is based on one of their tweets, which you can see in full below.

Twitter recently announced that it doesn’t think SMS-based two-factor authentication (2FA) is secure enough any more.

Ironically, as we explained last week, the very users for whom you’d think this change would be most important are the “top tier” Twitter users – those who pay for a Twitter Blue badge to give them more reach and to allow them to send longer tweets…

…but those pay-to-play users will be allowed to keep using text messages (SMSes) to receive their 2FA codes.

The rest of us need to switch over to a different sort of 2FA system within the next three weeks (before Friday 2023-03-17).

That means using an app that generates a secret “seeded” sequence of one-time codes, or using a hardware token, such as a Yubikey, that does the cryptographic part of proving your identity.

Hardware keys or app-based codes?

Hardware security keys cost about $100 each (we’re going by Yubikey’s approximate price for a device with biometric protection based on your fingerprint), or $50 if you’re willing to go for the less-secure sort that can be activated by the touch of anyone’s finger.

We’re therefore willing to assume that anyone who has already invested in a hardware security token will have done so on purpose, and won’t have bought one to leave it sitting idly around at home.

Those users will therefore already have switched away from SMS-based or app-based 2FA.

But everyone else, we’re guessing, falls into one of three camps:

  • Those who don’t use 2FA at all, because they consider it an unnecessary additional hassle when logging in.
  • Those who turned on SMS-based 2FA, because it’s simple, easy to use, and works with any mobile phone.
  • Those who went for app-based 2FA, because they were reluctant to hand over their phone number, or had already decided to move on from text-message 2FA.

If you’re in the second camp, we’re hoping you won’t just give up on 2FA and let it lapse on your Twitter account, but will switch to an app to generate those six-digit codes instead.

And if you’re in the first camp, we’re hoping that the publicity and debate around Twitter’s change (was it really done for security reasons, or simply to save money on sending so many SMSes?) will be the impetus you need to adopt 2FA yourself.

How to do app-based 2FA?

If you’re using an iPhone, the password manager built into iOS can generate 2FA codes for you, for as many websites as you like, so you don’t need to install any additional software.

On Android, Google offers its own authenticator app, unsurprisingly called Google Authenticator, that you can get from Google Play.

Google’s add-on app does the job of generating the needed one-time login code sequences, just like Apple’s Settings > Passwords utility on iOS.

But we’re going to assume that at least some people, and possibly many, will perfectly reasonably have asked themselves, “What other authenticator apps are out there, so I don’t have to put all my cybersecurity eggs into Apple’s (or Google’s) basket?”

Many reputable companies (including Sophos, by the way, for both iOS and Android) provide free, trustworthy, authenticator utilities that will do exactly what you need, without any frills, fees or ads, if you understandably feel like using a 2FA app that doesn’t come from the same vendor as your operating system.

Indeed, you can find an extensive, and tempting, range of authenticators just by searching for Authenticator app in Google Play or the App Store.

Spoilt for choice

The problem is that there is an improbable, perhaps even imponderable, number of such apps, all apparently endorsed for quality by their acceptance into Apple’s and Google’s official “walled gardens”.

In fact, friends of Naked Security @mysk_co just emailed us to say that they’d gone looking for authenticator apps themselves, and were somewhere between startled and shocked at what they found.

Tommy Mysk, co-founder of @mysk_co, put it plainly and simply in an email:

We analysed several authenticator apps after Twitter had stopped the SMS method for 2FA. We saw many scam apps looking almost the same. They all trick users to take out a yearly subscription for $40/year. We caught four that have near identical binaries. We also caught one app that sends every scanned QR code to the developer’s Google analytics account.

As Tommy invites you to ask yourself, in a series of tweets he’s posted, how is even a well-informed user supposed to know that their top search result for “Authenticator app” may in fact be the very one to avoid at all costs?

Imposter apps in this category, it seems, generally try to get you to pay them anywhere from $20 to $40 every year – about as much as it would cost to buy a reputable hardware 2FA token that would last for years and almost certainly be more secure.

When we tried searching on the App Store, for example, our top hit was an app with a description that bordered on the illiterate (we’re hoping that this level of unprofessionalism would put at least some people off right away), created by a company using the name of a well-known Chinese mobile phone brand.

Given the apparent poor quality of the app (though it had nevertheless made it into the App Store, don’t forget), our first thought was that we were looking at out-and-out company name infringement.

We were surprised that the presumed imposters had been able to acquire an Apple code signing certificate in a name we didn’t think they had the right to use.

We had to read the company name twice before we realised that one letter had been swapped for a lookalike character, and we were dealing with good old “typosquatting”, or what a lawyer might call passing off – deliberately picking a name that doesn’t literally match but is visually similar enough to mislead you at a glance.

When we searched on Google Play, the top hit was an app that @mysk_co had already tweeted about, warning that it not only demands money you don’t need to spend, but also steals the seeds or starting secrets of the accounts you set up for 2FA.

Remember the secret string 6QYW4P6KWALGCUWM in the QR code, and the TOTP number 660680 that you can see in the images below, because we’ll meet them again later on.

Why seeds are secrets

To explain.

Most app-based 2FA codes rely on a cryptographic protocol known as TOTP, short for time-based one-time password, specified in RFC 6238.

The algorithm is surprisingly simple. What follows is a minimal sketch of it in Lua (the helpers base32dec() and hmac_sha1() are assumptions standing in for a real crypto library, since standard Lua provides neither; the bracketed markers match the line references in the steps after the code):
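
  -- Minimal TOTP sketch (RFC 6238, 30-second steps, 6 digits); a reconstruction,
  -- not the original listing. base32dec() and hmac_sha1() are assumed helpers;
  -- markers like [4] correspond to the line references in the steps below.
  local function totp(seed_base32)
     local key  = base32dec(seed_base32)           -- [4] decode seed to raw bytes
     local t    = math.floor(os.time() / 30)       -- [5] half-minute counter
     local data = string.pack('>I8', t)            -- [6] 8-byte big-endian integer
     local hash = hmac_sha1(key, data)             -- [7] 20-byte HMAC-SHA1 digest
     local X    = hash:byte(20) % 16               -- [8] bottom 4 bits of last byte
     -- [13] take bytes X+1..X+4 as a 32-bit big-endian integer, zero the top bit
     local b1, b2, b3, b4 = hash:byte(X + 1, X + 4)
     local n = (b1 % 128) * 16777216 + b2 * 65536 + b3 * 256 + b4
     return string.format('%06d', n % 1000000)     -- [17] last 6 decimal digits
  end

  print(totp(arg[1]))                              -- seed given on the command line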

The process works like this:

A. Convert the seed, or “starting secret”, originally provided to you as a base32-encoded string (as text or via a QR code), into a string of bytes [line 4].

B. Divide the current “Unix epoch time” in seconds by 30, ignoring the fractional part. The Unix time is the number of seconds since 1970-01-01T00:00:00Z [5].

C. Save this number, which is effectively a half-minute counter that started in 1970, into a memory buffer as a 64-bit (8-byte) big-endian unsigned integer [6].

D. Hash that 8-byte buffer using one iteration of HMAC-SHA1 with the base32-decoded starting seed as the key [7].

E. Extract the last byte of the 160-bit HMAC-SHA1 digest (byte 20 of 20), and then take its bottom four bits (the remainder when divided by 16) to get a number X between 0 and 15 inclusive [8].

F. Extract bytes X+1,X+2,X+3,X+4 from the hash, i.e. 32 bits drawn anywhere from the first four bytes (1..4) to the last-four-but-one bytes (16..19) [13].

G. Convert to a 32-bit big-endian unsigned integer and zero out the most significant bit, so it works cleanly whether it’s later treated as signed or unsigned [13].

H. Take the last 6 decimal digits of that integer (calculate the remainder when divided by a million) and print it out with leading zeros to get the TOTP code [17].

In other words, the starting seed for any account, or the secret as you can see it labelled in @mysk_co’s tweet above, is quite literally the key to producing every TOTP code you will ever need for that account.

Codes are for using, seeds are for securing

There are three reasons why you only ever type in those weirdly-computed six-digit codes when you log in, and never use (or even need to see) the seed again directly:

  • You can’t work backwards from any of the codes to the key used to generate them. So intercepting TOTP codes, even in large numbers, doesn’t help you to reverse-engineer your way to any past or future logon codes.
  • You can’t work forwards from the current code to the next one in sequence. Each code is computed independently, based on the seed, so intercepting a code today won’t help you log on in the future. The codes therefore act as one-time passwords.
  • You never need to type the seed itself into a web page or password form. On a modern mobile phone, it can therefore be saved exactly once into the secure storage chip (sometimes called an enclave) on the device, where an attacker who steals your phone when it’s locked or turned off can’t extract it.

Simply put, a generated code is safe for one-time use, because the seed can’t be wrangled backwards from the code.

But the seed must be kept secret forever, because any code, from the start of 1970 until long after the likely heat death of the universe (2^63 seconds into the future, or about 0.3 trillion years), can be generated almost instantly from the seed.

Of course, the service you’re logging into needs a copy of your seed in order to verify that you’ve supplied a code that matches the time at which you’re trying to log on.

So you need to trust the servers at the other end to take extra care to keep your seeds secure, even (or perhaps especially) if the service gets breached.

You also need to trust the application you’re using at your end never to reveal your seeds.

That means not displaying those seeds to anyone (a properly-coded app won’t even show the seed to you after you’ve entered it or scanned it in, because you simply don’t need to see it again), not releasing seeds to any other apps, not writing them out to log files, adding them to backups or including them in debug output…

…and very, very definitely never transmitting any of your seeds over the network.

In fact, an app that uploads your seeds to a server anywhere in the world is either so incompetent that you should stop using it immediately, or so untrustworthy that you should treat it as cybercriminal malware.

What to do?

If you’ve grabbed an authenticator app recently, especially if you did it in a hurry as a result of Twitter’s recent announcement, review your choice in the light of what you now know.

If you were forced into paying a subscription for it; if the app is littered with ads; if the app comes with larger-than-life marketing and glowing reviews yet comes from a company you’ve never heard of; or if you’re simply having second thoughts, and something doesn’t feel right about it…

…consider switching to a mainstream app that your IT team has already approved, or that someone technical, whom you know and trust, can vouch for.

As mentioned above, Apple has a built-in 2FA code generator in Settings > Passwords, and Google has its own Google Authenticator app in the Play Store.

Your favourite security vendor probably has a free, no-ads, no-excitement code generator app that you can use, too. (Sophos has a standalone authenticator for iOS, and an authenticator component in the free Sophos Intercept X for Mobile app on both iOS and Android.)

If you do decide to switch authenticator app because you’re not sure about the one you’ve got, be sure to reset all the 2FA seeds for all the accounts you’ve entrusted to it.

(In fact, if the old app has an option to export your seeds so you can read them into a new app, you now know not only that you shouldn’t use that feature, but also that your decision to switch apps was a good one!)


QUANTIFYING THE RISK FOR YOURSELF

The risk of leaving your account protected by a 2FA seed that you think someone else might already know (or be able to figure out) is obvious.

You can prove this to yourself by using the TOTP algorithm we presented earlier, and feeding in [A] the “secret” string from Tommy Mysk’s tweet above and [B] the time he took the screenshot, which was 7:36pm Central European time on 2023-02-25, one hour ahead of UTC (Zulu time, denoted Z in the timestamp below).

The stolen seed is: 6QYW4P6KWALGCUWM
Zulu time was: 2023-02-25T18:36:00Z
Which is: 1,677,350,160 seconds into the Unix epoch
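
To reproduce the check yourself, pin the timestamp in the sketch above instead of calling os.time(); this variant assumes the same base32dec() and hmac_sha1() helpers:

  -- regenerate the tweeted code from the stolen seed and the fixed time
  local key  = base32dec('6QYW4P6KWALGCUWM')
  local data = string.pack('>I8', 1677350160 // 30)   -- counter = 55911672
  local hash = hmac_sha1(key, data)
  local X    = hash:byte(20) % 16
  local b1, b2, b3, b4 = hash:byte(X + 1, X + 4)
  local n = (b1 % 128) * 16777216 + b2 * 65536 + b3 * 256 + b4
  print(string.format('%06d', n % 1000000))           -- expect 660680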

As you might expect, and as you can match up with the images in the tweet above, the code produces the following output:

$ luax totp-mysk.lua
Tommy Mysk’s code was: 660680

As the famous videogame meme might put it: All his TOTP code are belong to us.


S3 Ep123: Crypto company compromise kerfuffle [Audio + Text]

LEARNING FROM OTHERS

The first search warrant for computer storage. GoDaddy breach. Twitter surprise. Coinbase kerfuffle. The hidden cost of success.

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG. Crypto company code captured, Twitter’s pay-for-2FA play, and GoDaddy breached.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

And it is episode 123, Paul.

We made it!


DUCK. We did!

Super, Doug!

I liked your alliteration at the beginning…


DOUG. Thank you for that.

And you’ve got a poem coming up later – we’ll wait with bated breath for that.


DUCK. I love it when you call them poems, Doug, even though they really are just doggerel.

But let’s call it a poem…


DOUG. Yes, let’s call it a poem.


DUCK. All two lines of it… [LAUGHS]


DOUG. Exactly, that’s all you need.

As long as it rhymes.

Let’s start with our Tech History segment.

This week, on 19 February 1971, what is believed to be the first warrant in the US to search a computer storage device was issued.

Evidence of theft of trade secrets led to the search of computer punch cards, computer printout sheets, and computer memory bank and other data storage devices magnetically imprinted with the proprietary computer program.

The program in question, a remote plotting program, was valued at $15,000, and it was ultimately determined that a former employee who still had access to the system had dialled in and usurped the code, Paul.


DUCK. I was amazed when I saw that, Doug, given that we’ve spoken recently on the podcast about intrusions and code thefts in many cases.

What was it… LastPass? GoDaddy? Reddit? GitHub?

It really is a case of plus ça change, plus c’est la même chose, isn’t it?

They even recognised, way back then, that it would be prudent to do the search (at least of the office space) at night, when they knew that the systems would be running but the suspect probably wouldn’t be there.

And the warrant actually states that “experts have made us aware that computer storage can be wiped within minutes”.


DOUG. Yes, it’s a fascinating case.

This guy went and worked for a different company, still had access to the previous company’s system, and dialled in, and then accidentally, it seems, printed out punch cards at his old company while he was printing out a paper copy of the code at his new company.

And the folks at the old company were like, “What’s going on around here?”

And then that’s what led to the warrant and ultimately the arrest.


DUCK. And the other thing I noticed, reading through the warrant, that the cop was able to put in there…

…is that he had found a witness at the old company who confirmed that this chap who’d moved to the new company had let slip, or bragged about, how he could still get in.

So it has all the hallmarks of a contemporary hack, Doug!

[A] the intruder made a blunder which led to the attack being spotted, [B] didn’t cover his tracks well enough, and [C] he’d been bragging about his haxxor skills beforehand. [LAUGHS]

As you say, that ultimately led to a conviction, didn’t it, for theft of trade secrets?

Oh, and the other thing of course, that the victim company didn’t do is…

…they forgot to close off access to former staff the day they left.

Which is still a mistake that companies make today, sadly.


DOUG. Yes.

Aside from the punch cards, this could be a modern day story.


DUCK. Yes!


DOUG. Well, let’s bring things into the modern, and talk about GoDaddy.

It has been hit with malware, and some of the customer sites have been poisoned.

This happened back in December 2022.

They didn’t come out and say in December, “Hey, this is happening.”

GoDaddy admits: Crooks hit us with malware, poisoned customer websites


DUCK. Yes, it did seem a bit late, although you could say, “Better late than never.”

And not so much to go into bat for GoDaddy, but at least to explain some of the complexity of looking into this…

… it seems that the malware that was implanted three months ago was designed to trigger intermittent changes to the behaviour of customers’ hosted web servers.

So it wasn’t as though the crooks came in, changed all the websites, made a whole load of changes that would show up in audit logs, got out, and then tried to profit.

It’s a little bit more like what we see in the case of malvertising, which is where you poison one of the ad networks that a website relies on, for some of the content that it sometimes produces.

That means every now and then someone gets hit up with malware when they visit the site.

But when researchers go back to have a look, it’s really hard for them to reproduce the behaviour.

[A] it doesn’t happen all the time, and [B] it can vary, depending on who you are, where you’re coming from, what browser you’re using…

…or even, of course, if the crooks recognise that you’re probably a malware researcher.

So I accept that it was tricky for GoDaddy, but as you say, it might have been nice if they had let people know back in December that there had been this “intermittent redirection” of their websites.


DOUG. Yes, they say the “malware intermittently redirected random customer websites to malicious sites”, which is hard to track down if it’s random.

But this wasn’t some sort of really advanced attack.

They were redirecting customer sites to other sites where the crooks were making money off of it…


DUCK. [CYNICAL] I don’t want to disagree with you, Doug, but according to GoDaddy, this may be part of a multi-year campaign by a “sophisticated threat actor”.


DOUG. [MOCK ASTONISHED] Sophisticated?


DUCK. So the S-word got dropped in there all over again.

All I’m hoping is that, given that there’s not much we can advise people about now because we have no indicators of compromise, and we don’t even know whether, at this remove, GoDaddy has been able to come up with what people could go and look for to see if this happened to them…

…let’s hope that when their investigation (which they’ve told the SEC, the Securities and Exchange Commission, they’re still conducting) finishes, there’ll be a bit more information, and that it won’t take another three months.

Given not only that the redirects happened three months ago, but also that it looks as though this may be down to essentially one cybergang that’s been messing around inside their network for as much as three years.


DOUG. I believe I say this every week, but, “We will keep an eye on that.”

All right, more changes afoot at Twitter.

If you want to use two-factor authentication, you can use text messaging, you can use an authenticator app on your phone, or you can use a hardware token like a Yubikey.

Twitter has decided to charge for text-messaging 2FA, saying that it’s not secure.

But as we also know, it costs a lot to send text messages to phones all over the world in order to authenticate users logging in, Paul.

Twitter tells users: Pay up if you want to keep using insecure 2FA


DUCK. Yes, I was a little mixed up by this.

The report, reasonably enough, says, “We’ve decided, essentially, that text-message based, SMS-based 2FA just isn’t secure enough”…

…because of what we’ve spoken about before: SIM swapping.

That’s where crooks go into a mobile phone shop and persuade an employee at the shop to give them a new SIM, but with your number on it.

So SIM swapping is a real problem, and it’s what caused the US government, via NIST (the National Institute of Standards and Technology), to say, “We’re not going to support this for government-based logins anymore, simply because we don’t feel we’ve got enough control over the issuing of SIM cards.”

Twitter, bless their hearts (Reddit did it five years ago), said it’s not secure enough.

But if you buy a Twitter Blue badge, which you’d imagine implies that you’re a more serious user, or that you want to be recognised as a major player…

…you can keep on using the insecure way of doing it.

Which sounds a little bit weird.

So I summarised it in the aforementioned poem, or doggerel, as follows:

 Using texts is insecure for doing 2FA.
 So if you want to keep it up, you’re going to have to pay.

DOUG. Bravo!


DUCK. I don’t quite follow that.

Surely if it’s so insecure that it’s dangerous for the majority of us, even lesser users whose accounts are perhaps not so valuable to crooks…

…surely the very people who should at least be discouraged from carrying on using SMS-based 2FA would be the Blue badge holders?

But apparently not…


DOUG. OK, we have some advice here, and it basically boils down to: Whether or not you pay for Twitter Blue, you should consider moving away from text-based 2FA.

Use a 2FA app instead.


DUCK. I’m not as vociferously against SMS-based 2FA as most cybersecurity people seem to be.

I quite like its simplicity.

I like the fact that it does not require a shared secret that could be leaked by the other end.

But I am aware of the SIM-swapping risk.

And my opinion is, if Twitter genuinely thinks that its ecosystem is better off without SMS-based 2FA for the vast majority of people, then it should really be working to get *everybody* off SMS-based 2FA…

…especially including Twitter Blue subscribers, not treating them as an exception.

That’s my opinion.

So whether you’re going to pay for Twitter Blue or not, whether you already pay for it or not, I suggest moving anyway, if indeed the risk is as big as Twitter makes it out to be.


DOUG. And just because you’re using app-based 2FA instead of SMS-based 2FA, that does not mean that you’re protected against phishing attacks.


DUCK. That’s correct.

It’s important to remember that the greatest defence you can get via 2FA against phishing attacks (where you go to a clone site and it says, “Now put in your username, your password, and your 2FA code”) is when you use a hardware token-based authenticator… like, as you said, a Yubikey, which you have to go and buy separately.

The idea there is that the authenticator doesn’t just print out a code that you then dutifully type in on your laptop, where it might be sent to the crooks anyway.

So, if you’re not using the hardware key-based authentication, then whether you get that magic six-digit code via SMS, or whether you look it up on your phone screen from an app…

…if all you’re going to do is type it into your laptop and potentially put it into a phishing site, then neither app-based nor SMS-based 2FA has any particular advantage over the other.


DOUG. Alright, be safe out there, people.

And our last story of the day is Coinbase.

Another day, another cryptocurrency exchange breached.

This time, by some good old fashioned social engineering, Paul?

Coinbase breached by social engineers, employee data stolen


DUCK. Yes.

Guess what came into the report, Doug?

I’ll give you a clue: “I spy, with my little eye, something beginning with S.”


DOUG. [IRONIC] Oh my gosh!

Was this another sophisticated attack?


DUCK. Sure was… apparently, Douglas.


DOUG. [MOCK SHOCKED] Oh, my!


DUCK. As I think we’ve spoken about before on the podcast, and as you can see written up in Naked Security comments, “‘Sophisticated’ usually translates as ‘better than us’.”

Not better than everybody, just better than us.

Because, as we pointed out in the video for last week’s podcast, no one wants to be seen as the person who fell for an unsophisticated attack.

But as we also mentioned, and as you explained very clearly in last week’s podcast, sometimes the unsophisticated attacks work…

…because they just seem so humdrum and normal that they don’t set off the alarm bells that something more diabolical might.

The nice thing Coinbase did is that they provided what you might call some indicators of compromise, or what are known as TTPs (tactics, techniques and procedures), that the crooks followed in this attack.

Just so you can learn from the bad things that happened to them, where the crooks got in and apparently had a look around and got some source code, but hopefully nothing further than that.

So firstly: SMS-based phishing.

You get a text message and it has a link in the text message and, of course, if you click it on your mobile phone, then it’s easier for the crooks to disguise that you’re on a fake site because the address bar is not so clear, et cetera, et cetera.

It seemed that that bit failed because they needed a two-factor authentication code that somehow the crooks weren’t able to get.

Now, we don’t know…

…did they forget to ask because they didn’t realise?

Did the employee who got phished ultimately realise, “This is suspicious. I’ll put in my password, but I’m not putting in the code.”

Or were they using hardware tokens, where the 2FA capture just didn’t work?

We don’t know… but that bit didn’t work.

Now, unfortunately, that employee didn’t, it seems, call it in and tell the security team, “Hey, I’ve just had this weird thing happen. I reckon someone was trying to get into my account.”

So, the crooks followed up with a phone call.

They called up this person (they had some contact details for them), and they got some information out of them that way.

The third telltale was that they were desperately trying to get this person to install a remote access program on their say-so.


DOUG. [GROAN]


DUCK. And, apparently, the programs suggested were AnyDesk and ISL Online.

It sounds as though the reason they tried both of those is that the person must have baulked, and in the end didn’t install either of them.

By the way, *don’t do that*… it’s a very, very bad idea.

A remote access tool basically bumps you out of your chair in front of your computer and screen, and plops the attacker right there, “from a distance.”

They move their mouse; it moves on your screen.

They type at their keyboard; it’s the same as if you were typing at your keyboard while logged in.

And then the last telltale that they had in all of this is presumably someone trying to be terribly helpful: “Oh, well, I need to investigate something in your browser. Could you please install this browser plugin?”

Whoa!

Alarm bells should go off there!

In this case, the plugin they wanted is a perfectly legitimate plugin for Chrome, I believe, called “Edit This Cookie”.

And it’s meant to be a way that you can go in and look at website cookies, and website storage, and delete the ones that you don’t want.

So if you go, “Oh, I didn’t realise I was still logged into Facebook, Twitter, YouTube, whatever, I want to delete that cookie”, that will stop your browser automatically reconnecting.

So it’s a good way of keeping track of how websites are keeping track of you.

But of course it’s designed so that you, the legitimate user of the browser, can basically spy on what websites are doing to try and spy on you.

But if a *crook* can get you to install that, when you don’t quite know what it’s all about, and they can then get you to open up that plugin, they can get a peek at your screen (and take a screenshot if they’ve got a remote access tool) of things like access tokens for websites.

Those cookies that are set because you logged in this morning, and the cookie will let you stay logged in for the whole day, or the whole week, sometimes even a whole month, so you don’t have to log in over and over again.

If the crook gets hold of one of those, then any username, password and two-factor authentication you have kind-of goes by the board.

And it sounds like Coinbase were doing some kind of XDR (extended detection and response).

At least, they claimed that someone in their security team noticed that there was a login for a legitimate user that came via a VPN (in other words, disguising your source) that they would not normally expect.

“That could be right, but it kind-of looks unusual. Let’s dig a bit further.”

And eventually they were actually able to get hold of the employee who’d fallen for the crooks *while they were being phished, while they were being socially engineered*.

The Coinbase team convinced the user, “Hey, look, *we’re* the good guys, they’re the bad guys. Break off all contact, and if they try and call you back, *don’t listen to them anymore*.”

And it seems that that actually worked.

So a little bit of intervention goes an awful long way!


DOUG. Alright, so some good news, a happy ending.

They made off with a little bit of employee data, but it could have been much, much worse, it sounds like?


DUCK. I think you’re right, Doug.

It could have been very much worse.

For example, if they got loads of access tokens, they could have stolen more source code; they could have got hold of things like code-signing keys; they could have got access to things that were beyond just the development network, maybe even customer account data.

They didn’t, and that’s good.


DOUG. Alright, well, let’s hear from one of our readers on this story.

Naked Security reader Richard writes:

Regularly and actively looking for hints that someone is up to no good in your network doesn’t convince senior management that your job is needed, necessary, or important.

Waiting for traditional cybersecurity detections is tangible, measurable and justifiable.

What say you, Paul?


DUCK. It’s that age-old problem that if you take precautions that are good enough (or better than good enough, and they do really, really well)…

…it kind-of starts undermining the arguments that you used for applying those precautions in the first place.

“Danger? What danger? Nobody’s fallen over this cliff for ten years. We never needed the fencing after all!”

I know it’s a big problem when people say, “Oh, X happened, then Y happened, so X must have caused Y.”

But it’s equally dangerous to say, “Hey, we did X because we thought it would prevent Y. Y stopped happening, so maybe we didn’t need X after all – maybe that’s all a red herring.”


DOUG. I mean, I think that XDR and MDR… those are becoming more popular.

The old “ounce of prevention is worth a pound of cure”… that might be catching on, and making its way upstairs to the higher levels of the corporation.

So we will hopefully keep fighting that good fight!


DUCK. I think you’re right, Doug.

And I think you could argue also that there may be regulatory pressures, as well, that make companies less willing to go, “You know what? Why don’t we just wait and see? And if we get a tiny little breach that we don’t have to tell anyone about, maybe we’ll get away with it.”

I think people are realising, “It’s much better to be ahead of the game, and not to get into trouble with the regulator if something goes wrong, than to take unnecessary risks for our own and our customers’ business.”

That’s what I hope, anyway!


DOUG. Indeed.

And thank you very much, Richard, for sending that in.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH. Stay secure!

[MUSICAL MODEM]

