
World Password Day: 2 + 2 = 4

World Password Day is always hard to write tips for, because the primary advice you’ll hear has been the same for many years.

That’s because the “passwordless future” that we’ve all been promised is still some time away, even if some services already support it.

Simply put, we’re stuck with the old, while at the same time preparing for the new.

That’s why we’ve come up with four tips for 2023, but split them into two halves.

Thus the headline: 2 + 2 = 4.

We’ve got two Timeless Tips that you already know (but might still be putting off), plus two Tips To Think About Today.


TIMELESS TIP 1. PASSWORD MANAGEMENT

Use a password manager if you can.

Password managers help you choose a completely different password for every site. They can come up with 20 random characters as easily as you can remember your cat’s name. And they make it hard to put the right password into the wrong site, because they can’t be tricked by what a site looks like. They always check the URL of the website instead.
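
In case you’re wondering what “20 random characters” involves under the hood, here’s a minimal Python sketch using the standard library’s secrets module. (This is just our illustration of the idea, not how any particular password manager is implemented, and the character set is one we picked ourselves.)

import secrets
import string

# A character set we chose for illustration; real password managers let you
# tune the length and decide which symbols are allowed.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"

def random_password(length: int = 20) -> str:
    # secrets.choice() uses a cryptographically strong random source,
    # unlike random.choice(), which is fine for games but not for secrets.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())   # different every time, and nothing for you to memorise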

TIMELESS TIP 2. GO TWO-FACTOR

Use 2FA when you can.

2FA is short for two-factor authentication, where a password alone is not enough. 2FA often relies on one-time codes, typically six digits long, that you have to put in as well as your same-every-time password. So it’s a minor inconvenience for you, but it makes things harder for the crooks, because they can’t jump straight in with just a stolen password.
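
Those six-digit codes are usually generated with the TOTP algorithm (RFC 6238), which mixes a shared secret with the current time so that each code only works for a short window. Here’s a minimal Python sketch of the idea, using a made-up Base32 secret purely for illustration; real authenticator apps get their secret from the QR code the service shows you when you enrol.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    # Count 30-second intervals since the Unix epoch, then take an HMAC of
    # that counter with the shared secret (this is HOTP, from RFC 4226).
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // time_step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # "Dynamic truncation": pick 4 bytes from the digest and keep 6 decimal digits.
    offset = digest[-1] & 0x0F
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # made-up secret, so don't use it for anything real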


TIP FOR TODAY 1. LESS IS MORE

Get rid of accounts you aren’t using.

Lots of sites force you to create a permanent account even if you only want to use them once. That leaves them holding personal data that they don’t need, but that they could leak at any time. (If sites can’t or won’t close your account and delete your data when asked, consider reporting them to the regulator in your country.)

TIP FOR TODAY 2. REVISIT RECOVERY

Revisit your account recovery settings.

You may have old accounts with recovery settings such as phone numbers or email addresses that are no longer valid, or that you no longer use. That means you can’t recover the account if ever you need to, but someone else might be able to. Fix the recovery settings if you can, or consider closing your account (see previous tip).


And with that, Happy World Password Day, everybody 🌻


Tracked by hidden tags? Apple and Google unite to propose safety and security standards…

Apple’s AirTag system has famously been subjected to firmware hacking, used as a free low-bandwidth community radio network, and involved in a stalking incident that tragically ended in a murder charge.

To be fair to Apple, the company has introduced various tricks and techniques to make AirTags harder for stalkers and criminals to exploit, given how easily the devices can be hidden in luggage, stuffed into the upholstery of a car, or squeezed into the gap under a bicycle saddle.

But with lots of similar devices already on the market, and Google said to be working on a product of its own to take advantage of the zillions of Bluetooth-enabled phones that are out and about running Google Android…

…surely there should be safety and security standards that are encouraged, or perhaps even demanded and expected, throughout the “smart tag” market?

Apple and Google seem to think so, because experts from both companies have been working together to propose an internet standard they’re calling Detecting Unwanted Location Trackers:

Internet standards, to this day, retain their original, conciliatory designation Request For Comments, almost universally written simply as RFC. But when you want to ask for comments on a proposed new standard, it would be unwieldy to call it an RFC RFC, so proposals are just known as Internet Drafts, or I-Ds, and have document names and URL slugs starting draft-. Each draft is typically published with a six-month commentary period, after which it may be abandoned, modified and re-proposed, or accepted into the fold and given a new, unique number in the RFC sequence, which is currently up to RFC 9411 [2023-05-03T19:47:00Z].

How big is too big to conceal?

The document introduces the term UT, short for Unwanted Tracking, and the authors hope that well-designed and correctly implemented tracking devices will take steps to make UT hard (though we suspect this risk can never be eliminated entirely).

Apple and Google’s proposal starts by splitting trackers into exactly two classes: small ones, and large ones.

Large devices are considered “easily discoverable”, which means that they’re hard to hide, and although they are urged to implement UT protection, they’re not obliged to do so.

Small devices, on the other hand, are considered easily concealed, and the proposal demands that they provide at least a basic level of UT protection.

In case you’re wondering, the authors tried to nail down the difference between small and large, and their attempt to do so reveals just how hard it can be to create unarguable, universal definitions of this sort:

Accessories are considered easily discoverable if they meet one of the following criteria:

  • The item is larger than 30 cm in at least one dimension.
  • The item is larger than 18 cm x 13 cm in two of its dimensions.
  • The item is larger than 250 cm^3 in three-dimensional space.

While we all probably agree that an AirTag is small and easily concealed, this definition also, probably very reasonably, considers our iPhone “small”, along with the Garmin we use on our bicycle, and our GoPro camera.

Our MacBook Pro, however, comes in as “large” on all three counts: it’s more than 30cm wide; it’s more than 13cm deep; and it’s well over 250cc in volume (or three-dimensional space, as the document puts it, which presumably includes the extra overall “straight line” volume added by bits that stick out).

You can try measuring some of your own portable electronic devices; you might be pleasantly surprised how chunky and apparently obvious a product can be, and yet still be considered small and “easily concealed” by the specifications.
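
If you’d rather let a computer do the measuring maths for you, here’s a quick Python sketch of the draft’s three criteria as we read them. (The dimensions below are rough figures we’ve plugged in ourselves, just for illustration.)

def easily_discoverable(dims_cm) -> bool:
    # Sort the three dimensions largest-first so the comparisons are easy to write.
    a, b, c = sorted(dims_cm, reverse=True)
    return (
        a > 30                    # larger than 30 cm in at least one dimension
        or (a > 18 and b > 13)    # larger than 18 cm x 13 cm in two dimensions
        or a * b * c > 250        # larger than 250 cm^3 in volume
    )

# Approximate sizes, in cm, that we guessed for a laptop and a coin-sized tag:
print(easily_discoverable([31.3, 22.1, 1.6]))   # True  -> "large", hard to hide
print(easily_discoverable([3.2, 3.2, 0.8]))     # False -> "small", easily concealed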

To bleat, or not to bleat?

Loosely speaking, the proposed standards expect that all concealable devices:

  • MUST NOT BROADCAST their identity and trackability when they know they’re near their registered owner. This helps ensure that a device that’s officially with you can’t easily be used by someone else to keep track of your every twist and turn as they follow you around in person.
  • MUST BROADCAST a “Hey, I’m a trackable Bluetooth thingy” notification every 0.5 to 2 seconds when they know they’re away from their owner. This helps to ensure that you have a way of spotting that someone else has slipped a tag into your bag in order to follow you around.

As you can see, these devices present two very different security risks: one where the tag shouldn’t bleat about itself when it’s with you and is supposed to be there; and the other where the tag needs to bleat about itself because it’s sticking with you suspiciously even though it’s not yours.

Tags must switch from “I am keeping quiet because I am with my real owner” mode into “Here I am, in case anyone is suspicious of me” mode after no more than 30 minutes of not syncing with their owner.

Likewise, they must switch back into “I’m holding my peace” mode within 30 minutes of realising they’re back in safe hands.

When with you, they need to change their machine identifier (known in the jargon as their MAC address, where MAC is short for media access control) every 15 minutes at most, so they don’t give you away for too long.

But they must hang onto their MAC address for 24 hours at a time when they’re parted from you, so they give everyone else plenty of chance to notice that the same unaccompanied tag keeps showing up nearby.
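
To make those timing rules a little more concrete, here’s a toy Python model of what a compliant tag is supposed to do, based purely on the intervals quoted above. (This is our own simplification, not code from the proposal or from any real tag firmware.)

# Timing values taken from the proposal; everything else here is our invention.
MODE_SWITCH_MAX_S  = 30 * 60         # change mode within 30 minutes of separation
MAC_ROTATE_NEAR_S  = 15 * 60         # new MAC address every 15 minutes when near the owner
MAC_ROTATE_APART_S = 24 * 60 * 60    # keep one MAC address for 24 hours when separated
ADVERT_INTERVAL_S  = (0.5, 2.0)      # advertise every 0.5 to 2 seconds when separated

def expected_behaviour(seconds_since_owner_sync: float) -> dict:
    separated = seconds_since_owner_sync > MODE_SWITCH_MAX_S
    return {
        "mode": "separated" if separated else "near_owner",
        "advertises_itself": separated,     # bleat only when away from the owner
        "advert_interval_s": ADVERT_INTERVAL_S if separated else None,
        "mac_rotation_s": MAC_ROTATE_APART_S if separated else MAC_ROTATE_NEAR_S,
    }

print(expected_behaviour(10 * 60))   # still with its owner: quiet, MAC changes every 15 min
print(expected_behaviour(45 * 60))   # 45 min apart: advertising, MAC held for a full day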

And if you do spot any unwanted tags in your vicinity, they must respond to any “reveal yourself” probes you send them by bleeping 10 times, and vibrating or flashing if they can, at a sound level laid down very specifically:

The [bleeper] MUST emit a sound with minimum 60 Phon peak loudness as defined by ISO 532-1:2017. The loudness MUST be measured in free acoustic space substantially free of obstacles that would affect the pressure measurement. The loudness MUST be measured by a calibrated (to the Pascal) free field microphone 25 cm from the accessory suspended in free space.

To track, or not to track?

Very importantly, any tag you find must not only provide a way for you to stop it calling home with its location to its owner, but also provide clear instructions on how to do this:

The accessory SHALL have a way to [be] disabled such that its future locations cannot be seen by its owner. Disablement SHALL be done via some physical action (e.g., button press, gesture, removal of battery, etc.).

The accessory manufacturer SHALL provide both a text description of how to disable the accessory as well as a visual depiction (e.g. image, diagram, animation, etc.) that MUST be available when the platform is online and OPTIONALLY when offline.

In other words, when you think you’ve busted someone who’s trying to track you, you need a way to throw your stalker off the scent, while also being able to retain the suspicious device safely as evidence, instead of resorting to smashing it or flinging it in a lake to keep it quiet.

If you wanted to, assuming that the device wasn’t jury-rigged to turn tracking on just when you thought you’d turned it off, we guess you could even go off-track somewhere before turning it off, then backtrack to your original location and carry on from there, thus setting a false trail.

What to do?

If you’re interested in mobile device security; if you’re into privacy; if you’re worried about how tracking devices could be abused…

…we recommend reading through these proposed standards.

Although some of the specifications dig into technical details such as how to encrypt serial number data, others are as much social and cultural as they are technical, such as when, how and for whom such encrypted data should be unscrambled.

There are also aspects of the proposal you might not agree with, such as the specification that “obfuscated owner information” must be emitted by the device on demand.

For example, the proposal insists that this “obfuscated” data needs to include at least a partial phone number (the last four digits), or a hollowed-out email address (where tips@sophos.com would become t***@s*****.com, which obfuscates older, shorter email addresses much less usefully than newer, longer ones).
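
Going by the single example above, the masking rule seems to be: keep the first character of the user name and of the domain name, star out the rest, and leave the top-level domain visible. Here’s a Python sketch of that guess, which also shows why short addresses come out barely obfuscated at all:

def obfuscate_email(addr: str) -> str:
    # Our guess at the scheme, based only on the tips@sophos.com example.
    local, domain = addr.split("@", 1)
    label, _, rest = domain.partition(".")
    return (local[0] + "*" * (len(local) - 1) + "@"
            + label[0] + "*" * (len(label) - 1)
            + ("." + rest if rest else ""))

print(obfuscate_email("tips@sophos.com"))   # t***@s*****.com
print(obfuscate_email("jo@ab.io"))          # j*@a*.io - not much left to hide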

The current draft only came out yesterday [2023-05-02], so there are still six months open for comment and feedback…


Apple delivers first-ever Rapid Security Response “cyberattack” patch – leaves some users confused

We’ve written about the uncertainty of Apple’s security update process many times before.

We’ve had urgent updates accompanied by email notifications that warned us of zero-day bugs that needed fixing right away, because crooks were already onto them…

…but without even the vaguest description of what sort of criminals, and what they were up to, which would at least help to round out the story.

Our approach has therefore been simply to assume the worst, and to infer that the story that Apple wasn’t telling ran something like this: “Devices analysed in the wild found to have hidden spyware implanted by unknown threat actors.”

And we’ve therefore followed our own rhyming advice of: Do not delay/Simply do it today.

We’ve had updates arrive for the very latest macOS and iOS versions, but with nothing for earlier supported versions, with no mention of whether those devices were immune by good fortune, at risk but left in limbo for a while, or at risk but never going to be fixed.

Sometimes, those older versions have received their own patches for exactly the same zero-day holes, without explanation, days or weeks later.

At other times, the next updates for those older versions have at least implied that the zero-day holes didn’t affect them after all.

Enter the Rapid Security Response

Well, today (which just happens to be a public holiday in the UK, as we celebrate Beltane and the approximate halfway point between vernal equinox and summer solstice), we received a brand new sort of update notification for both our Mac and our iPhone.

This one announced what Apple calls a Security Response, tagged not with a new version number, but with a letter in round brackets after the existing version number.

For macOS Ventura, we were offered version 13.3.1 (a) and for our iPhone, we were offered 16.4.1 (a).
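
If you like checking this sort of thing from a script rather than from the About screen, Apple’s sw_vers command-line tool prints the product and build versions, and on Macs with a Security Response applied the response letter reportedly shows up in that output as well. A trivial Python wrapper (our own convenience, nothing official):

import subprocess

# Run Apple's sw_vers tool and show whatever version details it reports;
# on a Mac with a Rapid Security Response installed, the output should
# include the letter suffix alongside the usual product version.
result = subprocess.run(["sw_vers"], capture_output=True, text=True, check=True)
print(result.stdout.strip())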

On both devices, there was a brand new URL that linked not to Apple’s usual HT201222 Security Updates portal (which hasn’t been updated since 2023-04-12 – we checked), but to a brand new page named HT201224, entitled Rapid Security Responses:

Rapid Security Responses are a new type of software release for iPhone, iPad, and Mac. They deliver important security improvements between software updates — for example, improvements to the Safari web browser, the WebKit framework stack, or other critical system libraries. They may also be used to mitigate some security issues more quickly, such as issues that might have been exploited or reported to exist “in the wild.”

We couldn’t help but smile at the choice of words, as we suspect you will too.

The well-known and widely-understood phrase in the wild is stuck between air-quotes; the phrase zero-day is avoided entirely, and any possible in-the-wildness is waved away as might have been exploited, and left unadmitted with the words reported to exist.

Who gets these patches?

As Apple notes, this sort of rapid patch is the first of its kind: New Rapid Security Responses are delivered only for the latest version of iOS, iPadOS and macOS — beginning with iOS 16.4.1, iPadOS 16.4.1, and macOS 13.3.1.

So, at least we know that there aren’t supposed to be updates right now for iOS and iPadOS 15, or for macOS 11 and 12 (Big Sur and Monterey), because those versions don’t support this new rapid-patching system.

But that’s all we know, because what you see above is, as the saying goes, all she wrote.

What to do?

There are no release notes to go with the 13.3.1 (a) and 16.4.1 (a) patches for macOS and iOS/iPadOS, so the parts of the system that needed patching, and the nature of the vulnerabilities that were fixed, are left unsaid.

The HT201224 web page invites us to assume that this sort of emergency fix will be used to patch serious WebKit or kernel-level bugs (the very sort that malware implanters and spyware operators love to exploit), but just how dangerous and exploitable the unknown bugs are in this case is, obviously, unknown.

Nevertheless, given that these Rapid Security Responses sound very much like zero-day anti-spyware fixes, and that Apple is at least clear that they relate to “important security improvements”, we went ahead with them, forcing an update of our devices right away.

  • On our Mac, the process was quick – much, much quicker than a typical system update, taking about two minutes altogether, including waiting 60 seconds for a reboot to start. Our system now indeed reports that it’s running macOS 13.3.1 (a).
  • On our iPhone, we weren’t so fortunate. As reported by some commenters on Naked Security, our update downloaded OK, but failed with a notification and a popup saying, “iOS Security Response 16.4.1 (a) failed verification because you are no longer connected to the internet.” Ironically, we were happily browsing and emailing at the time, so the apps on our device didn’t seem to have any trouble connecting to the internet.

We tried logging into our App Store account (we normally log in only to get app updates, which do require an authenticated connection, as explicitly noted by the App Store app), but that made no difference.

Retrying didn’t help either.

Have you updated yet, and if so, how did you get along with the process?


Update. About an hour after we first tried installing the update on our phone, we had another go. This time the update verification succeeded, our phone rebooted right away, and the Rapid Security Response was installed with the whole process over in a few tens of seconds, rather than the usual tens of minutes or longer. [2023-05-01T20:00:00Z]


Mac malware-for-hire steals passwords and cryptocoins, sends “crime logs” via Telegram

Researchers at dark web monitoring company Cyble recently wrote about a data-stealing-as-a-service toolkit that they found being advertised in an underground Telegram channel.

One somewhat unusual aspect of this “service” (and in this context, we don’t mean that word in any sort of positive sense!) is that it was specifically built to help would-be cybercriminals target Mac users.

The malware peddlers’ focus on Apple fans was clearly reflected in the name they gave their “product”: Atomic macOS Stealer, or AMOS for short.

They’re after passwords, cryptocoins and files

According to Cyble, the crooks are explicitly advertising that their malware can do all of these things:

  • Rip off passwords and authentication information from your macOS Keychain (Apple’s internal storage system for passwords and authentication credentials).
  • Steal files from your Desktop and Documents directories.
  • Retrieve comprehensive information about your system.
  • Plunder secret data from eight different browsers.
  • Slurp the contents of dozens of different cryptowallets.

Ironically, the one browser that doesn’t show up on the list is Apple’s own Safari, but the sellers claim to be able to exfiltrate data from Chrome, Firefox, Brave, Edge, Vivaldi, Yandex, Opera, and Opera’s gamer-centric browser, Opera GX.

As an AMOS “customer”, you also get an account on the cybergang’s online AMOS cloud portal, and a feature to send “crime logs” and stolen data directly to your Telegram account, so you don’t even need to log in to the portal to check for successful attacks.

As well as that, you get what the crooks describe as a beautiful DMG installer, presumably to improve the likelihood that you can lure prospective victims into installing the software in the first place.

DMGs are Apple Disk Image files, commonly used by legitimate software developers as a well-known, good-looking, easy-to-use way of delivering Mac applications.

All this for $1000 a month.

Watch out for password prompts

As you can imagine, attackers who want to access your macOS Keychain can’t do so simply by tricking you into running a program while you’re already logged in.

Running an app under your account is enough to read many or most of your files, but actions such as viewing and changing system settings, and viewing Keychain items, require you to put in your password every time, as an extra layer of safety and security.

In this case, Cyble researchers noted that the malware lures you into giving away your account password by popping up a dialog with the title System Preferences (in macOS Ventura, it’s actually now called System Settings), and claiming that macOS itself “wants to access System Preferences”.

Well-informed Mac users should spot that the popup produced clearly belongs to the malware app itself, which is simply called Setup.

Password dialogs that are requested by the System Preferences (or System Settings) app itself come up as an integral part of the Preferences application window.

So, they can only be accessed when the System Preferences app itself has focus and thus shows up as the active application in your Mac’s menu bar.

What to do?

Malware that specifically targets Mac users is rare compared to malware aimed at Windows users, but this find by Cyble’s dark web diggers is a reminder that “unusual” is not the same as “non-existent”.

If you’re one of those Mac users who tends to treat cybersecurity as a curiosity instead of building it into your digital lifestyle, perhaps because a friend or family member once assured you that “Macs don’t get viruses”…

…please treat this article as a gentle reminder that malware attacks aren’t just things that happen to other people.

  • Stick to reputable download sites. Apple’s own App Store isn’t perfect, but it’s less of a free-for-all than sites and services you’ve never heard of. You can control the source of apps you install via the System Settings > Privacy & Security page, accessible directly from the Apple menu. If you need off-market apps, you can always give yourself access temporarily, and then lock your system down again immediately afterwards.
  • Don’t be fooled by what these crooks refer to as the “beauty” of an app. Modern software development tools make it easier than ever to produce professional-looking applications and installers, so malware doesn’t inevitably give itself away by looking sub-standard.
  • Consider running real-time malware blocking tools that not only scan downloads, but also proactively prevent you from reaching dangerous download servers in the first place. Sophos Home is free for up to three users (Mac and/or Windows), or modestly priced for up to 10 users. You can invite friends and family to share your licence, and help them by looking after their devices remotely via our cloud-based console, so you don’t need to run a server at home.

Note. Sophos products detect and block the malware in Cyble’s report under the name OSX/InfoStl-CP, if you are a Sophos user and would like to check your logs.



Google wins court order to force ISPs to filter botnet traffic

A US court has recently unsealed a restraining order against a gang of alleged cybercrooks operating outside the country, based on a formal legal complaint from internet giant Google.

Google, it seems, decided to use its size, influence and network data to say, “No more!”, based on evidence it had collected about a cybergang known loosely as the CryptBot crew, whom Google claimed were:

  • Ripping off Google product names, icons and trademarks to shill their rogue software distribution services.
  • Running “pay-per-install” services for alleged software bundles that deliberately injected malware onto victims’ computers.
  • Operating a botnet (a robot or zombie network) to steal, collect and collate personal data from hundreds of thousands of victims in the US.

You can read a PDF of the court document online.
Thanks to our chums at online pub The Register for posting this.

Plunder at will

Data that these CryptBot criminals are alleged to have plundered includes browser passwords, illicitly-snapped screenshots, cryptocurrency account data, and other PII (personally identifiable information).

As the court order puts it:

The Defendants are responsible for distributing a botnet that has infected approximately 672,220 CryptBot victim devices in the US in the last year. At any moment, the botnet’s extraordinary computing power could be harnessed for other criminal schemes.

Defendants could, for example, enable large ransomware or distributed denial-of-service attacks on legitimate businesses and other targets. Defendants could themselves perpetrate such a harmful attack, or they could sell access to the botnet to a third party for that purpose.

Because the defendants are apparently operating out of Pakistan, and unsurprisingly didn’t show up in court to argue their case, the court decided its outcome without hearing their side of the story.

Nevertheless, the court concluded that Google had shown “a likelihood of success” in respect of charges including violating the Computer Fraud and Abuse Act, trademark rules, and racketeering laws (which deal, loosely speaking, with so-called organised crime – committing crimes as if you were running a business):

[The court favors] a temporary restraining order. The criminal enterprise is defrauding users and injuring Google. There is no countervailing factor weighing against a temporary restraining order: there is no legitimate reason why Defendants should be permitted to continue to disseminate malware and cracked software and manipulate infected computers to carry out criminal schemes. […]

Every day that passes, the Defendants infect new computers, steal more account information, and deceive more unsuspecting victims. Protection from malicious cyberattacks and other cybercrimes is strongly in the public interest.

As you can imagine, some aspects of the restraining order follow the sort of legalisms that strike non-lawyers as tautological outcomes, namely officially demanding that the criminals stop committing crimes, including: no longer distributing malware, no longer running a botnet, no longer stealing victims’ data and no longer selling that stolen data on to other crooks.

Block that traffic

Interestingly, however, the court order also authorises Google to identify network providers whose services directly or indirectly make this criminality possible, and to “[request] that those persons and entities take reasonable best efforts” to stop the malware and the data theft in its tracks.

That intervention doesn’t just apply to companies such as domain name registrars and hosting providers. (Court orders often demand that server names get taken away from criminals and handed over to law enforcement or to the company being harmed, and that websites or web servers get taken down.)

Presumably to make it harder for these alleged crooks simply to shift their servers to hosting providers that either can’t be identified at all, or that will happily ignore US takedown requests, this court order even covers blocking network traffic that is known to be going to or coming from domains associated with the CryptBot crew.

The final network hops taken by any malicious traffic that reaches US victims are almost certain to pass through ISPs that are under US jurisdiction, so we’re assuming that those providers may end up with legal responsibility for actively filtering out any malicious traffic.

To be clear, the court order doesn’t demand, or even mention, any sort of snooping on, sniffing out or saving of any data that’s transferred; it merely covers taking “reasonable steps to identify” and “reasonable steps to block” traffic to and from a list of identified domains and IP numbers.

Additionally, the order covers blocking traffic “to and/or from any other IP addresses or domains to which Defendants may move the botnet infrastructure,” and gives Google the right to “amend [its list of network locations to block] if it identifies other domains, or similar identifiers, used by Defendants in connection with the Malware Distribution Enterprise.”

Finally, the restraining order states, in a single, mighty sentence:

Defendants and their agents, representatives, successors or assigns, and all persons acting in concert or in participation with any of them, and any banks, savings and loan associations, credit card companies, credit card processing agencies, merchant acquiring banks, financial institutions, or other companies or agencies that engage in the processing or transfer of money and/or real or personal property, who receive actual notice of this order by personal service or otherwise, are, without prior approval of the Court, temporarily restrained and enjoined from transferring, disposing of, or secreting any money, stocks, bonds, real or personal property, or other assets of Defendants or otherwise paying or transferring any money, stocks, bonds, real or personal property, or other assets to any of the Defendants, or into or out of any accounts associated with or utilized by any of the Defendants.

In plain English: if you try to help this lot to cash out their ill-gotten gains, whether you accept thirty pieces of silver from them in payment or not, expect to be in trouble!

Will it work?

Will this have any large-scale effect on CryptBot operations, or will their activities simply pop up under a new name, using new malware, distributed from new servers, to build a new botnet?

We don’t know.

But these alleged criminals have now been publicly named, and with more than two-thirds of a million computers said to have been infected with CryptBot zombie malware in the last year in the US alone…

…even a tiny dent in their activities will surely help.

What to do?

To reduce your own risk of zombie malware compromise:

  • Stay away from sites offering unofficial downloads of popular software. Even apparently legitimate download sites sometimes can’t resist adding their own extra “secret sauce” to downloads you could just as easily get via the vendor’s own official channels. Beware of assuming that the first result from a search engine is the official site for any product and simply clicking through to it. If in doubt, ask someone you know and trust to help you find the real vendor and the right download location.
  • Consider running real-time malware blocking tools that not only scan downloads, but also proactively prevent you from reaching risky or outright dangerous download servers in the first place. Sophos Home is free for up to three users (Windows and/or Mac), or modestly priced for up to 10 users. You can invite friends and family to share your licence, and help them look after their devices remotely, via our cloud-based console. (You don’t need to run a server at home!)
  • Never be tempted to go for a pirated or cracked program, no matter how valid you think your own justification might be for not paying for or licensing it correctly. If you can’t or won’t pay for a commercial product, find a free or open-source alternative that you can use instead, even if it means learning a new product or giving up some features you like, and get it from a genuine download server.

