
NPM JavaScript packages abused to create scambait links in bulk

Jonathan Swift is probably most famous for his novel Gulliver’s Travels, in which the narrator, Lemuel Gulliver, encounters a socio-political schism in Lilliputian society caused by unending arguments over whether you should open a boiled egg at the big end or the little end.

This satirical observation has flowed directly into modern computer science, with CPUs that represent integers with the least significant bytes at the lowest memory addresses called little-endian (that’s like writing the year AD 1984 as 4 8 9 1, in the sequence units-tens-hundreds-thousands), and those that put the most significant bytes first in memory (as numbers are conventionally written: 1 9 8 4) known as big-endian.
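If you want to see byte ordering in action for yourself, here’s a quick Python illustration (a minimal sketch of our own, not from the articles below) showing how the 32-bit integer 1984 is laid out under each convention:

import struct

year = 1984  # 0x000007C0 as a 32-bit integer

# Little-endian: least significant byte at the lowest address
print(struct.pack("<I", year).hex(" "))   # prints: c0 07 00 00

# Big-endian: most significant byte first, as numbers are written
print(struct.pack(">I", year).hex(" "))   # prints: 00 00 07 c0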

Swift, of course, gave us another satirical note that applies rather neatly to open-source supply chain attacks, where programmers decide to use project X, only to find that X depends on Y, which itself depends on Z, which depends on A, B and C, which in turn…

…you get the picture.

That observation came in a series of remarks about poets that appeared, appropriately enough, in a poem:

 So, Nat'ralists observe, a Flea
 Hath smaller Fleas that on him prey,
 And these have smaller yet to bite 'em,
 And so proceed ad infinitum

We’re not sure, but we’re guessing that the Great Vowel Shift was still not complete in the late 1600s and early 1700s, and that the -EA in Swift’s word Flea was pronounced then as we still, rather peculiarly, pronounce the -EY in prey today. Thus the poem would be read aloud with the sound flay to rhyme with pray. (This E-used-to-be-A business is why British people still say DARBY when they read the placename Derby, or BARKSHIRE when they visit Royal Berkshire.)

Flea stacks considered harmful

We’ve therefore got used to the idea that rogue content uploaded to open source package repositories generally aims to inject itself unnoticed into the “flea stacks” of code dependencies that some products inadvertently download when updating automatically.

But researchers at supply-chain security testing outfit Checkmarx recently warned about a much less sophisticated, yet potentially much more intrusive, abuse of popular repositories: as phishing link “redirectors”.

Researchers noticed hundreds of online properties such as WordPress blogging sites that had been littered with scammy-looking posts…

…that linked off to thousands of URLs hosted in the NPM package repository.

But those “packages” didn’t exist to publish source code.

They existed simply as placeholders for README files that included the final links that the crooks wanted people to click on.

These links typically included referral codes that would net the scammers a modest reward, even if the person clicking through was doing so simply to see what on earth was going on.

The NPM package names weren’t exactly subtle, so you ought to be able to spot them.

Fortunately, the crooks (inadvertently, we assume) managed to include their list of poisonous packages in one of their uploads.

Checkmarx has therefore published a list containing more than 17,000 unique bogus names, of which just a small sample (one each for the first few letters of the alphabet) shows you what sort of “goods and services” these crooks claim to offer:

active-amazon-promo-codes-list-that-work-updates-daily-106
bingo-bash-free-bingo-chips-and-daily-bonus-222
call-of-duty-warzone-2400-points-for-free-gamerhash-com778
dice-dream-free-rolls
evony-kings-return-upgrade-keep-level-35-without-spending-money779
fifa-mobile-23--new-toty-23-make-millions546
get-free-tiktok-followers505
how-can-i-get-my-snap-score-higher796
instagram_followers_bot_free_apk991
jackpot_world_free_coins_and_jewels307
king-of-avalon--tips-and-tricks-to-get-free-gold429
lakers-shirt-nba-jersey023
. . .

Checkmarx also published a list of close to 200 web pages on which posts had been published that promoted and linked to these bogus NPM packages.

It sounds as though the scammers already had usernames and passwords for some of these sites, which allowed them to post as named or otherwise “trusted” users and reviewers.

But any site with unmoderated or poorly-moderated comments could be peppered anonymously with this sort of rogue link, so just forcing all your community members to create an account on your site is not itself enough to control this sort of abuse.

Creating clickable links in many, if not most, online source code repositories is surprisingly easy, and automatically follows the look-and-feel of the site as a whole.

You don’t even need to create full-blown HTML layouts or CSS page styles – usually, you just create a file in the root directory of your project called README.md.

The extension .md is short for Markdown, a super-easy-to-use text markup language (see what they did there?) that replaces the complex angle-bracket tags and attributes of HTML with simple text annotations.

To make text bold in Markdown, just put stars round it, so that **this bit** would be bold. For paragraphs, you just leave blank lines. To create a link, just put some text in square brackets and follow it with a URL in round brackets. To display an image from a URL instead of creating clickable text, put an exclamation point in front of the link, and so on.
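For example, a README.md containing nothing more than the following (the URLs here are made-up placeholders, not real scam links) would render as styled, clickable content in the repository’s own look-and-feel:

This text is plain, but **this bit** would be bold.

A blank line, as above, starts a new paragraph.

[Free coins this way](https://example.com/promo?ref=12345)

![A banner image fetched from a URL](https://example.com/banner.png)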

What to do?

  • Don’t click “freebie” links, even if you find you are interested or intrigued. You don’t know where you’ll end up, but it will probably be in harm’s way. You may well also be creating bogus pay-per-click traffic for the crooks, and even though the amount for each click might be minuscule, why gift cybercriminals anything if you can help it?
  • Don’t fill in online surveys, no matter how harmless they seem. Checkmarx reported that many of these links end up with surveys and other “tests” to qualify you for “gifts” of some sort. The scale and breadth of this scamming exercise is a good reminder that fake “surveys” that each ask for small and apparently inconsequential gobbets of information about you aren’t collecting that data independently. It all ends up collated into one huge bucket of PII (personally identifiable information) that ultimately gives away much more about you than you might expect. Filling in surveys gives free assistance to the next wave of scammers, so why gift cybercriminals anything if you can help it?
  • Don’t run blogs or community sites that allow unmoderated posts or comments. You don’t have to force everyone to create a password if you don’t want to, but you should require a trusted human to approve every comment. If you can’t handle the volume of comment spam (which can be huge – though most blogging services have filtering tools that can help you get rid of most of it automatically), turn comments off. A bogus link in a comment is essentially a free service to scammers, so why gift cybercriminals anything if you can help it?

Remember…

think before you click, and if in doubt, don’t give it out!


Coinbase breached by social engineers, employee data stolen

Popular cryptocurrency exchange Coinbase is the latest well-known online brand name that’s admitted to getting breached.

The company decided to turn its breach report into an interesting mix of partial mea culpa and handy advice for others.

As in the recent case of Reddit, the company couldn’t resist throwing in the S-word (sophisticated), which once again seems to follow the definition offered by Naked Security reader Richard Pennington in a recent comment, where he noted that ‘Sophisticated’ usually translates as ‘better than our defences’.

We’re inclined to agree that in many, if not most, breach reports where threats and attackers are described as sophisticated or advanced, those words are indeed used relatively (i.e. too good for us) rather than absolutely (e.g. too good for everyone).

Coinbase confidently stated, in the executive summary at the start of its article:

Fortunately, Coinbase’s cyber controls prevented the attacker from gaining direct system access and prevented any loss of funds or compromise of customer information.

But that apparent certainty was undermined by the admission, in the very next sentence, that:

Only a limited amount of data from our corporate directory was exposed.

Unfortunately, one of the favourite TTPs (tools, techniques and procedures) used by cybercriminals is known in the jargon as lateral movement, which refers to the trick of parlaying information and access acquired in one part of a breach into ever-wider system access.

In other words, if a cybercriminal can abuse computer X belonging to user Y to retrieve confidential corporate data from database Z (in this case, fortunately, limited to employee names, e-mail addresses, and phone numbers)…

…then saying that the attacker didn’t “gain direct system access” sounds like a rather academic distinction, even if the sysadmins amongst us probably understand those words to imply that the criminals didn’t end up with a terminal prompt at which they could run any system command they wanted.

Tips for threat defenders

Nevertheless, Coinbase did list some of the cybercriminal tools, techniques and procedures that it experienced in this attack, and the list provides some useful tips for threat defenders and XDR teams.

XDR is a bit of a buzzword these days (it’s short for extended detection and response), but we think that the simplest way of describing it is:

Extended detection and response means regularly and actively looking for hints that someone is up to no good in your network, instead of waiting for traditional cybersecurity detections in your threat response dashboard to trigger a response.

Obviously, XDR doesn’t mean turning off your existing cybersecurity alerting and blocking tools, but it does mean extending the range and nature of your threat hunting, so that you’re not only searching for cybercriminals once you’re fairly certain they’ve already arrived, but also watching out for them while they’re still getting ready to attempt an attack.
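As a toy illustration only (the log format and the “expected” network ranges below are invented for the example, not taken from Coinbase’s report), an XDR-style hunt can be as simple as regularly sifting login records for access from sources you don’t expect, before any formal alert has fired:

import ipaddress

# Hypothetical list of source networks we consider normal
EXPECTED_NETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "203.0.113.0/24")]

def suspicious(logins):
    # Yield any (user, ip) pair whose source address falls outside expectations
    for user, ip in logins:
        addr = ipaddress.ip_address(ip)
        if not any(addr in net for net in EXPECTED_NETS):
            yield user, ip

# Sample data standing in for real authentication logs
events = [("alice", "10.1.2.3"), ("bob", "198.51.100.77")]

for user, ip in suspicious(events):
    print(f"Worth a human look: {user} logged in from {ip}")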

The Coinbase attack, reconstructed from the company’s somewhat staccato account, seems to have involved the following stages:

  • TELLTALE 1: An SMS-based phishing attempt.

Staff were urged via SMS to login to read an important corporate notification.

For convenience, the message included a login link, but that link went to a bogus site that captured usernames and passwords.

Apparently, the attackers didn’t know, or didn’t think, to get hold of the 2FA (two-factor authentication) code they’d need to go along with the username and password, so this part of the attack came to nothing.

We don’t know how 2FA protected the account. Perhaps Coinbase uses hardware tokens, such as Yubikeys, that don’t work simply by providing a six-digit code that you transcribe from your phone to your browser or login app? Perhaps the crooks failed to ask for the code at all? Perhaps the employee spotted the phish after giving away their password but before revealing the final one-time secret needed to complete the process?

From the wording in the Coinbase report, we suspect that the crooks either forgot or couldn’t find a believable way to capture the needed 2FA data in their fake login screens.

Don’t overestimate the strength of app-based or SMS-based 2FA. Any 2FA process that relies merely on typing a code displayed on your phone into a field on your laptop provides very little protection against attackers who are ready and willing to try out your phished credentials immediately.

Those SMS or app-generated codes are typically limited only by time, remaining valid for anywhere between 30 seconds and a few minutes, which generally gives attackers long enough to harvest them and use them before they expire.

  • TELLTALE 2: A phone call from someone who said they were from IT.

Remember that this attack ultimately resulted in the criminals acquiring a list of employee contact details, which we assume will end up sold or given away in the cybercrime underground for other crooks to abuse in future attacks.

Even if you have tried to keep your work contact details confidential, they may already be out there and widely-known anyway, thanks to an earlier breach you might not have detected, or to a historical attack against a secondary source, such as an outsourcing company to which you once entrusted your staff data.

  • TELLTALE 3: A request to install a remote-access program.

In the Coinbase breach, the social engineers who’d called up in the second phase of the attack apparently asked the victim to install AnyDesk, followed by ISL Online.

Never install any software, let alone remote access tools (which allow an outsider to view your screen and to control your mouse and keyboard remotely as if they were sitting in front of your computer) on the say-so of someone who just called you, even if you think they are from your own IT department.

If you didn’t call them, you’ll almost certainly never be sure who they are.

  • TELLTALE 4: A request to install a browser plugin.

In the Coinbase case, the tool that the crooks wanted the victim to use was called EditThisCookie (an ultra-simple way of retrieving secrets such as access tokens from a user’s browser), but you should refuse to install any browser plugin on the say-so of someone you don’t know and have never met.

Browser plugins get almost unfettered access to everything you type into your browser, including passwords, before they get encrypted, and to everything your browser displays, after it’s been decrypted.

Plugins can not only spy on your browsing, but also invisibly modify what you type in before it’s transmitted, and the content you get back before it appears on the screen.

What to do?

To repeat and develop the advice we’ve given so far:

  • Never login by clicking on links in messages. You should know where to go yourself, without needing “help” from a message that could have come from anywhere.
  • Never take IT advice from people who call you. You should know where to call up yourself, to reduce the risk of being contacted by a scammer who knows exactly the right time to jump in and appear to be “helping” you.
  • Never install software on the say-so of an IT staffer you haven’t verified. Don’t even install software that you yourself consider safe, because the caller will probably direct you to a booby-trapped download into which malware has already been added.
  • Never reply to a message or call by asking if it’s genuine. The sender or caller will simply tell you what you want to hear. Report suspicious contacts to your own security team as soon as you can.

In this case, Coinbase says its own security team was able to use XDR techniques, spotting unusual patterns of activity (for example, attempted logons via an unexpected VPN service), and to intervene within about 10 minutes.

This meant that the individual under attack not only broke off all contact with the criminals right away, before too much harm was done, but knew to be extra-careful in case the attackers came back with yet more ruses, cons and so-called active adversary trickery.

Make sure you’re a human part of your company’s XDR “sensor network”, too, along with any technological tools your security team has in place.

Giving your active defenders more to go on than just “VPN source address showed up in access logs” means they’ll be much better equipped to detect and respond to an active attack.


LEARN MORE ABOUT ACTIVE ADVERSARIES

In real life, what really works for the cybercrooks when they initiate an attack? How do you find and treat the underlying cause of an attack, instead of just dealing with the obvious symptoms?

LEARN MORE ABOUT XDR AND MDR

Short of time or expertise to take care of cybersecurity threat response? Worried that cybersecurity will end up distracting you from all the other things you need to do?

Take a look at Sophos Managed Detection and Response:
24/7 threat hunting, detection, and response  ▶

LEARN MORE ABOUT SOCIAL ENGINEERING

Join us for a fascinating interview with Rachel Tobac, DEFCON Social Engineering Capture the Flag champ, about how to detect and rebuff scammers, social engineers and other sleazy cybercriminals.

No podcast player showing below? Listen directly on Soundcloud.

Twitter tells users: Pay up if you want to keep using insecure 2FA

Twitter has announced an intriguing change to its 2FA (two-factor authentication) system.

The change will take effect in about a month’s time, and can be summarised very simply in the following short piece of doggerel:

 Using texts is insecure for doing 2FA,
 So if you want to keep it up you're going to have to pay.

We said “about a month’s time” above because Twitter’s announcement is somewhat ambiguous with its dates-and-days calculations.

The product announcement bulletin, dated 2023-02-15, says that users with text-message (SMS) based 2FA “have 30 days to disable this method and enroll in another”.

If you include the day of the announcement in that 30-day period, this implies that SMS-based 2FA will be discontinued on Thursday 2023-03-16.

If you assume that the 30-day window starts at the beginning of the next full day, you’d expect SMS 2FA to stop on Friday 2023-03-17.

However, the bulletin says that “after 20 March 2023, we will no longer permit non-Twitter Blue subscribers to use text messages as a 2FA method. At that time, accounts with text message 2FA still enabled will have it disabled.”

If that’s strictly correct, then SMS-based 2FA ends at the start of Tuesday 21 March 2023 (in an undisclosed timezone), though our advice is to take the shortest possible interpretation so you don’t get caught out.

SMS considered insecure

Simply put, Twitter has decided, as Reddit did a few years ago, that one-time security codes sent via SMS are no longer safe, because “unfortunately we have seen phone-number based 2FA be used – and abused – by bad actors.”

The primary objection to SMS-based 2FA codes is that determined cybercriminals have learned how to trick, cajole or simply bribe employees in mobile phone companies to give them replacement SIM cards programmed with someone else’s phone number.

Legitimately replacing a lost, broken or stolen SIM card is obviously a desirable feature of the mobile phone network, otherwise you’d have to get a new phone number every time you changed SIM.

But the apparent ease with which some crooks have learned the social engineering skills to “take over” other people’s numbers, usually with the very specific aim of getting at their 2FA login codes, has led to bad publicity for text messages as a source of 2FA secrets.

This sort of criminality is known in the jargon as SIM-swapping, but it’s not strictly any sort of swap, given that a phone number can only be programmed into one SIM card at a time.

So, when the mobile phone company “swaps” a SIM, it’s actually an outright replacement, because the old SIM goes dead and won’t work any more.

Of course, if you’re replacing your own SIM because your phone got stolen, that’s a great security feature, because it restores your number to you, and ensures that the thief can’t make calls on your dime, or listen in to your messages and calls.

But if the tables are turned, and the crooks are taking over your SIM card illegally, this “feature” turns into a double liability, because the criminals start receiving your messages, including your login codes, and you can’t use your own phone to report the problem!

Is this really about security?

Is this change really about security, or is it simply Twitter aiming to simplify its IT operations and save money by cutting down on the number of text messages it needs to send?

We suspect that if the company really were serious about retiring SMS-based login authentication, it would impel all its users to switch to what it considers more secure forms of 2FA.

Ironically, however, users who pay for the Twitter Blue service, a group that seems to include high-profile or popular users whose accounts we suspect are much more attractive targets for cybercriminals…

…will be allowed to keep using the very 2FA process that’s not considered secure enough for everyone else.

SIM-swapping attacks are difficult for criminals to pull off in bulk, because a SIM swap often involves sending a “mule” (a cybergang member or “affiliate” who is willing or desperate enough to risk showing up in person to conduct a cybercrime) into a mobile phone shop, perhaps with fake ID, to try to get hold of a specific number.

In other words, SIM-swapping attacks often seem to be premeditated, planned and targeted, based on an account for which the criminals already know the username and password, and where they think that the value of the account they’re going to take over is worth the time, effort and risk of getting caught in the act.

So, if you do decide to go for Twitter Blue, we suggest that you don’t carry on using SMS-based 2FA, even though you’ll be allowed to, because you’ll just be joining a smaller pool of tastier targets for SIM-swapping cybergangs to attack.

Another important aspect of Twitter’s announcement is that although the company is no longer willing to send you 2FA codes via SMS for free, and cites security concerns as a reason, it won’t be deleting your phone number once it stops texting you.

Even though Twitter will no longer need your number, and even though you may have originally provided it on the understanding that it would be used specifically for the purpose of improving login security, you’ll need to remember to go in and delete it yourself.

What to do?

  • If you already are, or plan to become, a Twitter Blue member, consider switching away from SMS-based 2FA anyway. As mentioned above, SIM-swapping attacks tend to be targeted, because they’re tricky to do in bulk. So, if SMS-based login codes aren’t safe enough for the rest of Twitter, they’ll be even less safe for you once you’re part of a smaller, more select group of users.
  • If you are a non-Blue Twitter user with SMS 2FA turned on, consider switching to app-based 2FA instead. Please don’t simply let your 2FA lapse and go back to plain old password authentication if you’re one of the security-conscious minority who has already decided to accept the modest inconvenience of 2FA into your digital life. Stay out in front as a cybersecurity trend-setter!
  • If you gave Twitter your phone number specifically for 2FA messages, don’t forget to go and remove it. Twitter won’t be deleting any stored phone numbers automatically.
  • If you’re already using app-based authentication, remember that your 2FA codes are no more secure than SMS messages against phishing. App-based 2FA codes are generally protected by your phone’s lock code (because the code sequence is based on a “seed” number stored securely on your phone), and can’t be calculated on someone else’s phone, even if they put your SIM into their device. But if you accidentally reveal your latest login code by typing it into a fake website along with your password, you’ve given the crooks all they need anyway, whether that code came from an app or via a text message.
  • If your phone loses mobile service unexpectedly, investigate promptly in case you’ve been SIM-swapped. Even if you aren’t using your phone for 2FA codes, a crook who’s got control over your number can nevertheless send and receive messages in your name, and can make and answer calls while pretending to be you. Be prepared to show up at a mobile phone store in person, and take your ID and account receipts with you if you can.
  • If you haven’t set a PIN code on your phone SIM, consider doing so now. A thief who steals your phone probably won’t be able to unlock it, assuming you’ve set a decent lock code. Don’t make it easy for them simply to eject your SIM and insert it into another device to take over your calls and messages. You’ll only need to enter the PIN when you reboot your phone or power it up after turning it off, so the effort involved is minimal.

By the way, if you’re comfortable with SMS-based 2FA, and are worried that app-based 2FA is sufficiently “different” that it will be hard to master, remember that app-based 2FA codes generally require a phone too, so your login workflow doesn’t change much at all.

Instead of unlocking your phone, waiting for a code to arrive in a text message, and then typing that code into your browser…

…you unlock your phone, open your authenticator app, read off the code from there, and type that into your browser instead. (The numbers typically change every 30 seconds so they can’t be re-used.)
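If you’re curious how those app-generated numbers actually work, here’s a minimal Python sketch of the algorithm that most authenticator apps implement (TOTP, defined in RFC 6238). The Base32 secret below is a made-up example, not a real account seed:

import base64, hashlib, hmac, struct, time

def totp(secret_b32, step=30, digits=6):
    # Decode the shared "seed" that both the service and your app store
    key = base64.b32decode(secret_b32, casefold=True)
    # Count how many 30-second steps have elapsed since the Unix epoch
    counter = int(time.time()) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): select 4 bytes based on the last nibble
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset+4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # same secret and clock -> same code on any device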


PS. The free Sophos Intercept X for Mobile security app (available for iOS and Android) includes an authenticator component that works with almost all online services that support app-based 2FA. (The system generally used is called TOTP, short for time-based one-time password.)

Sophos Authenticator with one account added. (Add as many as you want.)
The countdown timer shows you how long the current code is still valid for.


GoDaddy admits: Crooks hit us with malware, poisoned customer websites

Late last week [2023-02-16], popular web hosting company GoDaddy filed its compulsory annual 10-K report with the US Securities and Exchange Commission (SEC).

Under the sub-heading Operational Risks, GoDaddy revealed that:

In December 2022, an unauthorized third party gained access to and installed malware on our cPanel hosting servers. The malware intermittently redirected random customer websites to malicious sites. We continue to investigate the root cause of the incident.

URL redirection, also known as URL forwarding, is an unexceptionable feature of HTTP (the hypertext transfer protocol), and is commonly used for a wide variety of reasons.

For example, you might decide to change your company’s main domain name, but want to keep all your old links alive; your company might get acquired and need to shift its web content to the new owner’s servers; or you might simply want to take your current website offline for maintenance, and redirect visitors to a temporary site in the meantime.

Another important use of URL redirection is to tell visitors who arrive at your website via plain old unencrypted HTTP that they should visit using HTTPS (secure HTTP) instead.

Then, once they have reconnected over an encrypted connection, you can include a special header to tell their browser to start with HTTPS in future, even if they click on an old http://... link, or mistakenly type in http://... by hand.

In fact, redirects are so common that if you hang around web developers at all, you’ll hear them referring to them by their numeric HTTP codes, in much the same way that the rest of us talk about “getting a 404” when we try to visit a page that no longer exists, simply because 404 is HTTP’s Not Found error code.

There are actually several different redirect codes, but the one you’ll probably hear most frequently referred to by number is a 301 redirect, also known as Moved Permanently. That’s when you know that the old URL has been retired and is unlikely ever to reappear as a directly reachable link. Others include 303 and 307 redirects, commonly known as See Other and Temporary Redirect, used when you expect that the old URL will ultimately come back into active service.

Here are two typical examples of 301-style redirects, as used at Sophos.

The first tells visitors using HTTP to reconnect right away using HTTPS instead, and the second exists so that we can accept URLs that start with just sophos.com by redirecting them to our more conventional web server name www.sophos.com.

In each case, the header entry labelled Location: tells the web client where to go next, which browsers generally do automatically:

$ curl -D - --http1.1 http://sophos.com
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://sophos.com/ <--reconnect here (same place, but using TLS)
. . .

$ curl -D - --http1.1 https://sophos.com
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://www.sophos.com/ <--redirect to our web server for actual content
Strict-Transport-Security: . . . <--next time, please use HTTPS to start with
. . .

The command line option -D - above tells the curl program to print out the HTTP headers in the replies, which are what matters here. Both these replies are simple redirects, meaning that they don’t have any content of their own to send back, which they denote with the header entry Content-Length: 0. Note that browsers generally have built-in limits on how many redirects they will follow from any starting URL, as a simple precaution against getting caught up in a never-ending redirect cycle.
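If you’d rather watch a redirect chain programmatically than eyeball raw headers, here’s a small sketch using Python’s requests library (the 10-hop cap is our own choice, mimicking the sort of limit browsers impose):

import requests

sess = requests.Session()
sess.max_redirects = 10   # cap the chain, as browsers do, to avoid loops

resp = sess.get("http://sophos.com", timeout=10)

# resp.history holds each intermediate redirect response, in order
for hop in resp.history:
    print(hop.status_code, "->", hop.headers.get("Location"))

print("Final:", resp.status_code, resp.url)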

Redirect control considered harmful

As you can imagine, having insider access to a company’s web redirection settings effectively means that you can hack their web servers without modifying the contents of those servers directly.

Instead, you can sneakily redirect those server requests to content you’ve set up elsewhere, leaving the server data itself unchanged.

Anyone checking their access and upload logs for evidence of unauthorised logins or unexpected changes to the HTML, CSS, PHP and JavaScript files that make up the official content of their site…

…will see nothing untoward, because their own data won’t actually have been touched.

Worse still, if attackers trigger malicious redirects only every now and then, the subterfuge can be hard to spot.

That seems to have been what happened to GoDaddy, given that the company wrote in a statement on its own site that:

In early December 2022, we started receiving a small number of customer complaints about their websites being intermittently redirected. Upon receiving these complaints, we investigated and found that the intermittent redirects were happening on seemingly random websites hosted on our cPanel shared hosting servers and were not easily reproducible by GoDaddy, even on the same website.

Tracking down transient takeovers

This is the same sort of problem that cybersecurity researchers encounter when dealing with poisoned internet ads served up by third-party ad servers – what’s known in the jargon as malvertising.



Obviously, malicious content that appears only intermittently doesn’t show up every time you visit an affected site, so that even just refreshing a page that you aren’t sure about is likely to destroy the evidence.

You might even perfectly reasonably accept that what you just saw wasn’t an attempted attack, but merely a transient error.

This uncertainty and unreproducibility typically delays the first report of the problem, which plays into the hands of the crooks.

Likewise, researchers who follow up on reports of “intermittent malevolence” can’t be sure they’re going to be able to grab a copy of the bad stuff either, even if they know where to look.

Indeed, when criminals use server-side malware to alter the behaviour of web services dynamically (making changes at run-time, to use the jargon term), they can use a wide range of external factors to confuse researchers even further.

For example, they can change their redirects, or even suppress them entirely, based on the time of day, the country you’re visiting from, whether you’re on a laptop or a phone, which browser you’re using…

…and whether they think you’re a cybersecurity researcher or not.



What to do?

Unfortunately, GoDaddy took nearly three months to tell the world about this breach, and even now there’s not a lot to go on.

Whether you’re a web user who’s visited a GoDaddy-hosted site since December 2022 (which probably includes most of us, whether we realise it or not), or a website operator who uses GoDaddy as a hosting company…

…we aren’t aware of any indicators of compromise (IoCs), or “signs of attack”, that you might have noticed at the time or that we can advise you to search for now.

Worse still, even though GoDaddy describes the breach on its website under the headline Statement on recent website redirect issues, it states in its 10-K filing that this may be a much longer-running onslaught than the word “recent” seems to imply:

Based on our investigation, we believe [that this and other incidents dating back to at least March 2020] are part of a multi-year campaign by a sophisticated threat actor group that, among other things, installed malware on our systems and obtained pieces of code related to some services within GoDaddy.

As mentioned above, GoDaddy has assured the SEC that “we continue to investigate the root cause of the incident”.

Let’s hope that it doesn’t take another three months for the company to tell us what it uncovers in the course of this investigation, which appears to stretch back three years or more…


S3 Ep122: Stop calling every breach “sophisticated”! [Audio + Text]

CAN WE STOP WITH THE “SOPHISTICATED” ALREADY?

The birth of ENIAC. A “sophisticated attack” (someone got phished). A cryptographic hack enabled by a security warning. Valentine’s Day Patch Tuesday. Apple closes spyware-sized 0-day hole.

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT


DOUG.  Patching bugs, hacking Reddit, and the early days of computing.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth.

He is Paul Ducklin.

Paul, how do you do?


DUCK.  Very well, Douglas.


DOUG.  Alright, I have an exciting This Week in Tech History segment for you today.

If this were a place in the world, it would be Rome, from where all civilisation began.

Sort of.

It’s arguable.

Anyhow…


DUCK.  Yes, that is definitely arguable! [LAUGHS]


DOUG.  [LAUGHS] This week, on 14 February 1946, ENIAC, or Electronic Numerical Integrator and Computer, was unveiled.

One of the earliest electronic general purpose computers, ENIAC filled an entire room, weighed 30 tonnes and contained 18,000 vacuum tubes, 70,000 resistors, 10,000 capacitors, and around 5 million hand-soldered joints.

ENIAC was used for a variety of calculations, including artillery shell trajectories, weather predictions, and thermonuclear weapons research.

It paved the way for commercially viable electronic computers, Paul.


DUCK.  Yes, it did!

The huge irony, of course, is that we British got there first, with the Colossus during the Second World War, at Bletchley Park.

And then, in a fit of amazing governmental wisdom, we decided to: [A] smash them all into tiny pieces, [B] burn all the documentation ([QUIETLY] though some of it survived), and [C] keep the fact that we had used thermionic valves to build fast electronic digital computers secret.

[PAUSE] What a silly thing to do… [LAUGHS]

Colossus – the first electronic digital computer


DOUG.  [AMAZED] Why would they do that?


DUCK.  [TRAGIC] Aaaaargh, I don’t know.

In the US, I believe, at the time of ENIAC, it was still not clear whether electromechanical relays or thermionic valves (vacuum tubes) would win out, because vacuum tubes were zillions of times faster…

…but they were hot, they used vast amounts of power, and they tended to blow randomly, which stopped the computer working, et cetera, et cetera.

But I think it was ENIAC that finally sealed the fate of all the electromechanical computers.


DOUG.  Speaking of things that have been around for a while…

…Reddit says that it was hacked thanks to a sophisticated phishing attack that, it turns out, wasn’t all that sophisticated.

Which might be the reason it works so well, ironically.

Reddit admits it was hacked and data stolen, says “Don’t panic”


DUCK.  [LAUGHS] I’m glad you said that rather than me, Doug!

But, yes, I think you’re right.

Why is it that so many senior execs who write breach notifications feel obliged to sneak the word “sophisticated” in there? [LAUGHS]

The whole thing about phishing attacks is that they’re *not* sophisticated.

They *aren’t* something that automatically sets alarm bells ringing.


DOUG.  Reddit says:

As in most phishing campaigns, the attacker sent out plausible-sounding prompts pointing employees to a website that cloned the behavior of our intranet gateway in an attempt to steal credentials and second-factor tokens. After successfully obtaining a single employee’s credentials, the attacker gained access to internal docs, code…

So that’s where it gets simple: trick one person into clicking on a link, getting taken to a page that looks like one of your systems, and handing over a 2FA code.


DUCK.  And then they were able to jump in, grab the stuff and get out.

And so, like in the LastPass breach and the recent GitHub breach, source code got stolen, along with a bit of other stuff.

Although that’s a good sign, inasmuch as it’s Reddit’s stuff that got stolen and not its users’ stuff (so it’s their problem to wrestle with, if you know what I mean)… we do know that in amongst that stuff, even if you only get source code, let alone internal documentation, there may be hints, scripts, tokens, server names, RESTy API endpoints, et cetera, that an attacker could use later.

But it does look as though the Reddit service itself, in other words the infrastructure behind the service, was not directly affected by this.

So, the crooks got in and they got some stuff and they got out, but it wasn’t like they broke into the network and then were able to wander around all the other places.


DOUG.  Reddit does offer three pieces of advice, two-thirds of which we agree with.

We’ve said countless times on the show before: Protect against phishing by using a password manager, because it makes it harder to put the right password into the wrong site.

Turn on 2FA if you can, so you have a second factor of authentication.

This one, though, is up for debate: Change your passwords every two months.

That might be a bridge too far, Paul?


DUCK.  Yes, Chester Wisniewski and I did a podcast (when was it? 2012?) where we busted that myth.

And NIST, the US National Institute of Standards and Technology, agrees with us.

It *is* a bridge too far, because it’s change for change’s sake.

And I think there are several problems with just, “Every two months, I’ll change my password.”

Firstly, why change your password if you genuinely don’t think there’s any reason to?

You’re just wasting your time – you could spend that time doing something that directly and genuinely improves your cybersecurity.

Secondly, as Chester put it in that old podcast (which we’ve put in the article, so you can go and listen to it), “It kind-of gets people into the habit of a bad habit,” because you’re trying to program their attitudes to passwords instead of embracing randomness and entropy.

And, thirdly, I think it leads people to thinking, “You know what, I should change my password, but I’m going to change them all in six weeks’ time anyway, so I’ll leave it until then.”

I would rather have an approach that says, “When you think you need to change your password, *do it in five minutes*.”


BUSTING PASSWORD MYTHS

Even though we recorded this podcast more than a decade ago, the advice it contains is still relevant and thoughtful today. We haven’t hit the passwordless future yet, so password-related cybersecurity advice will be valuable for a good while yet. Listen here, or click through for a full transcript.


DOUG.  There is a certain irony here with recommending the use of a password manager…

…when it’s pretty clear that this employee wouldn’t have been able to log into the fake site had he or she been using a password manager.


DUCK.  Yes, you’d think so, wouldn’t you?

Because it would just go, “Never heard of the site, can’t do it, don’t have a password.”

And you’d be going, “But it looks so right.”

Computer: “No, never heard of it.”


DOUG.  And then, once you’ve logged into a bogus site, 2FA does no good if you’re just going to enter the code into a form on the bogus site that gets sent to the crook!


DUCK.  If you’re planning to use 2FA as an excuse for being more casual about security, either [A] don’t do that, or [B] choose a two-factor authentication system that doesn’t rely simply on transcribing digits from your phone onto your laptop.

Use a token-based system like OAuth, or something like that, that is more sophisticated and somewhat harder for the crooks to subvert simply by getting you to tell them the magic digits.


DOUG.  Let’s stay on the irony theme.

GnuTLS had a timing flaw in the code that was supposed to log timing attack errors.

How do you like that?

Serious Security: GnuTLS follows OpenSSL, fixes timing attack bug


DUCK.  [LAUGHS] They checked to see whether something went wrong during the RSA session setup process by getting this variable called ok.

It’s TRUE if it’s OK, and it’s FALSE if it’s not.

And then they have this code that goes, “If it’s not OK, then report it, if the person’s got debugging turned on.”

You can see the programmer has thought about this (there’s even a comment)…

If there’s no error, then do a pretend logging exercise that isn’t really logging, but let’s try and use up exactly the same amount of time, completely redundantly.

Else if there was an error, go and actually do the logging.

But it turns out that either there wasn’t sufficient similarity between the execution of the two paths, or it could have been that the part where the actual logging was happening responded in a different amount of time depending on the type of error that you deliberately provoked.

It turns out that by doing a million or more deliberately booby-trapped, “Hey, I want to set up a session request,” you could basically dig into the session setup in order to retrieve a key that would be used later for future stuff.

And, in theory, that might let you decrypt sessions.


DOUG.  And that’s where we get the term “oracle bug” (lowercase oracle, not to be confused with the company Oracle).

You’re able to see things that you shouldn’t be able to see, right?


DUCK.  You essentially get the code to give you back an answer that doesn’t directly answer the question, but gives you some hints about what the answer might be.

You’re letting the encryption process give away a little bit about itself each time.

And although it sounds like, “Who could ever do a million extra session setup requests without being spotted?”…

…well, on modern networks, a million network packets is not actually that much, Doug.

And, at the end of it, you’ve actually learned something about the other end, because its behaviour has just not been quite consistent enough.

Every now and then, the oracle has given away something that it was supposed to keep secret.


DOUG.  Alright, we’ve got some advice about how to update if you’re a GnuTLS user, so you can head over to the article to check that out.

Let’s talk about “Happy Patch Tuesday”, everybody.

We’ve got a lot of bugs from Microsoft Patch Tuesday, including three zero-days.

Microsoft Patch Tuesday: 36 RCE bugs, 3 zero-days, 75 CVEs


DUCK.  Yes, indeed, Doug.

75 CVEs, and, as you say, three of them are zero-days.

But they’re only rated Important, not Critical.

In fact, the critical bugs, fortunately, were, it seems, fixed responsibly.

So it wasn’t that there’s an exploit already out there in the wild.

I think what’s more important about this list of 75 CVEs is that almost half of them are remote code execution bugs.

Those are generally considered the most serious sorts of bug to worry about, because that’s how crooks get in in the first place.

Then comes EoP (elevation of privilege), of which there are several, including one that is a zero-day… in the Windows Common Log File System driver.

Of course, RCEs, remote code executions, are often paired up by cybercriminals with elevation of privilege bugs.

They use the first one to break in without needing a password or without having to authenticate.

They get to implant code that then triggers the elevation of privilege bug, so not only do they go *in*, they go *up*.

And typically they end up either as a sysadmin (very bad, because then they’re basically free to roam the network), or they end up with the same privilege as the local operating system… on Windows, what’s called the SYSTEM account (which pretty much means they can do anything on that computer).


DOUG.  There are so many bugs in this Patch Tuesday that it forced your hand to devote a section of this article called Security Bug Classes Explained

…which I would deem to be required reading if you’re just getting into cybersecurity and want to know what types of bugs are out there.

So we talked about an RCE (remote code execution), and we talked about EoP (elevation of privilege).

You next explained what a Leak is…


DUCK.  Indeed.

Now, in particular, memory leaks can obviously be bad if what’s leaking is, say, a password or the entire contents of a super-secret document.

But the problem is that some leaks, to someone who’s not familiar with cybersecurity, sound really unimportant.

OK, so you leaked a memory address of where such-and-such a DLL or such-and-such a kernel driver just happened to be loaded in memory?

How bad is that?

But the problem is that remote code execution exploits are generally much easier if you know exactly where to poke your knitting needle in memory on that particular server or that particular laptop.

Because modern operating systems almost all use a thing called ASLR (address space layout randomisation), where they deliberately load programs, and DLLs, and shared libraries, and kernel drivers and stuff at randomly chosen memory addresses…

…so that your memory layout on your test computer, where your exploit worked perfectly, will not be the same as mine.

And it’s much harder to get an exploit to work generically when you have this randomness built into the system than when you don’t.

So there are some tiny little memory leaks, where you might just leak eight bytes of memory (or even just four bytes if it’s a 32-bit system) where you give away a memory address.

And that is all the crooks need to turn an exploit that might just work, if they’re really lucky, into one which they can abuse every single time, reliably.

So be careful of leaks!


DOUG.  Please tell us what a Bypass means.


DUCK.  It sort-of means exactly what it says.

You’ve got a security precaution that you expect the operating system or your software to kick in with.

For example, “Hey, are you really sure that you want to open this dastardly attachment that arrived in an email from someone you don’t know?”

If the crooks can find a way to do that bad behaviour but to bypass the security check that’s supposed to kick in and give you a fighting chance to be a well-informed user doing the right thing…

…believe me, they will take it.

So, security bypasses can be quite problematic.


DOUG.  And then along those lines, we talked about Spoofing.

In the Reddit story, luring someone to a website that looks like a legit website but isn’t – it’s a spoof site.

And then, finally, we’ve got DoS, or denial of service.


DUCK.  Well, that’s exactly what it says.

It’s where you stop something that is supposed to work on the victim’s computer from doing its job.

You kind-of think, “Denial of service, it should be at the bottom of the list of concerns, because who really cares? We’ve got auto-restart.”

But if the crooks can pick the right time to do it (say, 30 seconds after your server that crashed two minutes ago has just come back up), then they may actually be able to use a denial of service bug surprisingly infrequently to cause what amounts to almost a continuous outage for you.

And you can imagine: [A] that could actually cost you business if you rely on your online services being up, and [B] it can make a fascinating smokescreen for the crooks, by creating this disruption that lets the crooks come steaming in somewhere else.


DOUG.  And not content to be left out of the fun, Apple has come along to fix a zero-day remote code execution bug.

Apple fixes zero-day spyware implant bug – patch now!


DUCK.  This bug, and I’ll read out the CVE just for reference: it is CVE-2023-23529

…is a zero-day remote code execution hole in WebKit, which I, for one, and I think many other people, infer to mean, “Browser bug that can be triggered by code that’s supplied remotely.”

And of course, particularly in iPhones and iPads, as we’ve spoken about many times, WebKit is required code for every single browser, even ones that don’t use WebKit on other platforms.

So it kind-of smells like, “We found out about this because there’s some spyware going around,” or, “There’s a bug that can be used to jailbreak your phone and remove all the strictures,” which lets the crooks in and lets them wander around at will.

Obviously, on a phone, that’s something you definitely don’t want.


DOUG.  Alright, and on this story, Naked Security reader Peter writes:

I try to update as soon as I’ve seen your update alerts in my inbox. While I know little to nothing about the technical issues involved, I do know it’s important to keep software updated, and it’s why I have the automatic software update option selected on all my devices. But it’s seldom, if ever, that I receive software alerts on my iPhone, iPad or MacBook before receiving them from Sophos.

So, thanks, guys!

That’s nice!


DUCK.  It is!

And I can only reply by saying, “Glad to be of assistance.”

I quite like writing those articles, because I think they provide a decent service.

Better to know and be prepared than to be caught unawares… that is my opinion.


DOUG.  And not to show how the sausage is made around here too much, but the reason Paul is able to jump on these Apple updates so quickly is because he has a big red siren in his living room that’s connected via USB cable to his computer, and checks the Apple security update page every six seconds.

So it starts blaring the second that page has been updated, and then he goes and writes it up for Naked Security.


DUCK.  [LAUGHS] I think the reason is probably just that I tend to go to bed quite late.


DOUG.  [LAUGHS] Exactly, you don’t sleep…


DUCK.  Now I’m big, I don’t have a fixed bedtime.

I can stay up as late as I want! [LAUGHTER]


DOUG.  Alright, thank you, Peter, for sending that in.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure.

[MUSICAL MODEM]


Featured image of ENIAC licensed under CC BY-SA 3.0.

