Category Archives: News

Apple AirTag hacked again – free internet with no mobile data plan!

Earlier this week we wrote about a jailbreak hack against Apple’s newly introduced AirTag product.

In that story, the researcher @ghidraninja was able to modify the firmware on the AirTag itself, despite the anti-tampering protection implemented by Apple’s own AirTag firmware programming.

But this “attack” (if that is the right word) is different, because it doesn’t involve modifying or cracking the AirTag itself.

Instead, it involves using the AirTag protocol on a Bluetooth device that doesn’t have internet connectivity in order to “trick” (if that is the right word) nearby Apple devices into sending data over the internet on its behalf.

Very loosely put: free internet access!

(But with some spectacular limitations on bandwidth and latency, as we shall see below.)

In the paper describing the hack, the device used was a cheap and easily programmable ESP32 Bluetooth/Wi-Fi chip commonly used in IoT devices and readily available from hobby electronics websites.

Fabian Bräunlein, the researcher who came up with this proof of concept, has dubbed it Send My.

That’s a pun on Apple’s own Find My service by which AirTags “call home” when they’re lost, even though they don’t have internet connections of their own.

What your AirTag tells the world

Given that AirTags can call home all the way to Apple even though the AirTag has no internet connectivity of its own…

…Bräunlein wondered if the process could be subverted by a Bluetooth-based non-Apple chip, using Apple Find My reports as Send My transmissions instead.

Greatly simplified, AirTags let themselves get tracked something like this:

  • When you pair an AirTag with an Apple ID, your computer and the AirTag agree on a cryptographic “seed”. This is used to generate a random data string every 15 minutes. This is a bit like the seed used in a 2FA authenticator app, which calculates a new pseudorandom 6-digit code every 30 seconds. (The AirTag seed is not shared with Apple.)
  • Every 2 seconds, the AirTag sends out a Bluetooth Low Energy broadcast beacon that contains a cryptographic public key. This is the public part of an Elliptic Curve keypair that was generated using the random data string derived from the original seed and corresponding to the current 15-minute time window.
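To make that key rotation concrete, here is a minimal Python sketch of the idea: the shared seed plus the index of the current 15-minute window yields a fresh pseudorandom value from which the keypair for that window can be generated. (The derivation shown is a stand-in for illustration only; Apple’s real scheme uses its own key-derivation construction and elliptic-curve keypairs rather than a bare SHA-256 hash.)

    import hashlib
    import time

    ROTATION_PERIOD = 15 * 60   # AirTag keys rotate every 15 minutes

    def window_index(now=None):
        """Number of complete 15-minute windows since the Unix epoch."""
        return int(now if now is not None else time.time()) // ROTATION_PERIOD

    def rotating_secret(seed: bytes, index: int) -> bytes:
        """Pseudorandom per-window value derived from the shared seed.
        (Illustrative only -- the real derivation function differs.)"""
        return hashlib.sha256(seed + index.to_bytes(8, "big")).digest()

    # The AirTag and its owner both know the seed, so both can compute the
    # same per-window value, and hence the same elliptic-curve keypair.
    seed = b"example-seed-agreed-at-pairing-time"
    print(rotating_secret(seed, window_index()).hex())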

That’s all the AirTag does: spray-and-pray.

If any internet-connected Apple device such as an iPhone or MacBook is in range and just happens to receive the AirTag’s HERE-I-AM messages, it acts as a relay and completes the delivery of each message as follows.

The Apple device:

  • Computes its own location using GPS, Bluetooth, Wi-Fi or other available sources.
  • Encrypts the location data using the Elliptic Curve public key in the AirTag message.
  • Uploads the encrypted data to Apple’s Find My service.
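In code, the relay step might be pictured roughly as below. This is only a hedged sketch of a generic ECIES-style flow (ephemeral ECDH plus symmetric encryption) using the third-party cryptography package; Apple’s real construction differs in its curve, key derivation and wire format.

    import json
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def relay_report(tag_public_key, my_location: dict) -> bytes:
        """Encrypt our own location to the public key heard in the beacon."""
        ephemeral = ec.generate_private_key(tag_public_key.curve)
        shared = ephemeral.exchange(ec.ECDH(), tag_public_key)
        key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"find-my-sketch").derive(shared)
        nonce = b"\x00" * 12                  # demo value only; never reuse nonces
        ciphertext = AESGCM(key).encrypt(nonce, json.dumps(my_location).encode(), None)
        eph_bytes = ephemeral.public_key().public_bytes(
            serialization.Encoding.X962, serialization.PublicFormat.CompressedPoint)
        return eph_bytes + ciphertext         # this blob is what gets uploaded

    # Demo with a locally generated keypair standing in for a real AirTag key:
    tag_key = ec.generate_private_key(ec.SECP256R1())
    blob = relay_report(tag_key.public_key(), {"lat": 51.5074, "lon": -0.1278})
    print(len(blob), "bytes ready to upload")

Only someone holding the matching private key (in other words, the AirTag’s owner) can later recover the location from that blob.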

An elegant design

Bräunlein describes this as an “elegant design” with some useful privacy and security properties.

Firstly, AirTags don’t need unique identifiers that get transmitted every time, because the ID they use is simply one half of an ever-changing cryptographic public-private keypair.

Secondly, neither the Apple device that relays the message for free on the AirTag’s behalf nor Apple itself ever know any of the private keys used.

In other words:

  • The AirTag doesn’t know which Apple device picked up and relayed its messages, thus preserving the privacy of the person whose device helped out by providing internet access to deliver the Find My report.
  • Apple knows which device sent in the Find My message but can’t decrypt it, so the location of the relay is kept private.
  • The owner of the AirTag that called home can decrypt the location in the Find My message, but has no idea which relay device passed the message on.

    How to find a lost AirTag?

    At this point, you are probably wondering how you query Apple’s service to track down a lost AirTag, given that all Apple keeps is a giant and anonymous list of location messages encrypted by randomly generated public keys.

    The answer is that you, as the owner of the AirTag, know the secret cryptographic seed from which your AirTag generates its public-private keypairs every 15 minutes.

    So if you want to track down your AirTag over, say, a two-hour period, you simply calculate the list of eight public keys that your AirTag would have used during that period (one 2-hour window = 8 x 15-minute windows), hash them up with SHA256, and ask Apple, “Are any of these hashes on your list?”
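    A minimal Python sketch of that lookup, assuming a hypothetical public_key_for_window() helper that stands in for the real seed-based key derivation, might look like this:

        import hashlib

        def public_key_for_window(seed: bytes, index: int) -> bytes:
            """Stand-in for the real seed-based keypair derivation."""
            return hashlib.sha256(b"pubkey" + seed + index.to_bytes(8, "big")).digest()

        def hashes_for_period(seed: bytes, start_index: int, windows: int = 8):
            """SHA256 hashes of the keys the tag would have used in those windows."""
            return [hashlib.sha256(public_key_for_window(seed, i)).hexdigest()
                    for i in range(start_index, start_index + windows)]

        # Two hours = eight 15-minute windows, so eight hashes to ask Apple about.
        print(hashes_for_period(b"example-seed", start_index=0))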

    In theory, you might be able to retrieve messages from other AirTags simply by lying about the hashes you send in, but there’s not really much point.

    Firstly, the chance that you’ll guess a valid hash (out of 2^256 possible choices) is vanishingly small; and secondly, if you did get a reply you wouldn’t be able to do anything with it because you couldn’t decrypt it or tell which AirTag sent it.

    Public keys as unexpected data

    Now, the question is, “Can you use these public keys not as cryptographic objects used to scramble the data you want to send, but to encode the data you want to send instead?”

    Bräunlein came up with an effective way to do just that.

    He programmed a Bluetooth device to transmit AirTag public keys that weren’t actually keys at all: his “public keys” were in fact a series of encoded message packets that contained his hidden data.

    Sure, many or even most of the messages would probably get lost in the Bluetooth ether, and those Bluetooth broadcasts that did get picked up by nearby iDevices and Macs might never get forwarded onwards to Apple, or might take ages to arrive…

    …but by limiting the length of the hidden message and repeating the same Bluetooth “public keys” over and over again, Bräunlein’s hope was that eventually a complete copy of all the data packets containing the hidden data might make it to Apple.

    At this point, the recipient, knowing what to expect, could query Apple’s Find My servers to see which messages had arrived, and thus decode the message.

    Intriguingly, the location data encrypted into the actual Find My message by the relaying device is completely irrelevant to Bräunlein’s system – in fact, it’s useless for his purpose, because he has no control over what that location data will be, given that it is injected by the intermediate relay device.

    In the end, it’s simply the list of Find My message “public keys” that arrives at Apple that tells the recipient what hidden data got sent.

    How to find a message if you need its hash first?

    Right now, you are no doubt wondering how these “public keys” convey any data if the recipient needs to know the hash of each “public key” in order to retrieve it.

    For example, if you send a fake “public key” that consists of the bytes THE DATA IS 42, then surely, in order to recover that message, I would need to know the text THE DATA IS 42 in advance, so that I could calculate the hash I’d need to check whether the message had been delivered?

    Actually, you can be a bit trickier than that.

    Imagine that you want to send me a two-digit number, and we agree that you will do so by using one, and only one, of these possible “public keys”:

     THE NUMBER IS 00
     THE NUMBER IS 01
     . . .
     THE NUMBER IS 98
     THE NUMBER IS 99
    

    If you mock up AirTag broadcasts using one and only one of those messages as a public key, and you send it many times to improve its chance of getting through, I can figure out which hidden message you sent by working out the SHA256 hashes of all possible 100 messages…

     SHA256('THE NUMBER IS 00') --> 0b1c1677579e...373350bd8cd1
     SHA256('THE NUMBER IS 01') --> 3193afed4ac6...de3b0a207c12
     . . .
     SHA256('THE NUMBER IS 98') --> 5ecfe2a3bfb3...04a6267c88f1
     SHA256('THE NUMBER IS 99') --> d32c873c52f5...d5b48be249f8
    

    …and asking Apple about all of them.

    Bingo!

    Apple would know nothing about 99 of the 100 messages (the ones that didn’t get transmitted), but the one that did show up in Apple’s database would uniquely identify the hidden data you sent in the first place.
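    In Python, the receiver’s side of this toy scheme might look like the hedged sketch below, where find_my_lookup() is a hypothetical stand-in for querying Apple’s Find My service by hash:

        import hashlib

        # Precompute the SHA256 hashes of all 100 possible hidden messages.
        candidates = {hashlib.sha256(f"THE NUMBER IS {n:02d}".encode()).hexdigest(): n
                      for n in range(100)}

        def recover_number(find_my_lookup):
            """find_my_lookup(hash_hex) -> True if Apple holds a report for that hash."""
            for digest, number in candidates.items():
                if find_my_lookup(digest):
                    return number      # the one message that was actually broadcast
            return None                # nothing has been delivered (yet)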

    Sending more data

    Bräunlein’s system was rather more sophisticated and generalised than the process above: he used longer “public keys” for encoding his data and followed a predetermined pattern.

    Each “public key” included a message ID, a message index, and a single “hidden data” bit that was either set or clear. (There was a bit more to it than that: we have simplified things slightly here to save space, but the principle is the same.)

    For example, if the recipient were expecting a 16-bit message with, say, an ID of 0xCAFEF00D, the “public keys” might look like this:

    CA FE F0 0D 00 00 00 00 0v -- msg 0xCAFEF00D, counter #0, value of bit 0 = v (0 or 1)
    CA FE F0 0D 00 00 00 01 0w -- msg 0xCAFEF00D, counter #1, value of bit 1 = w (0 or 1)
    . . .
    CA FE F0 0D 00 00 00 0E 0x -- msg 0xCAFEF00D, counter #14 (0x0E), value of bit 14 = x (0 or 1)
    CA FE F0 0D 00 00 00 0F 0y -- msg 0xCAFEF00D, counter #15 (0x0F), value of bit 15 = y (0 or 1)
    

    Sixteen different “public keys” would be transmitted, typically repeated many times each to improve the chance of them being picked up and delivered.
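    The sender’s side of this simplified scheme could be sketched in Python as follows (the 9-byte layout mirrors the listing above; real Find My “public keys” are longer, and the bit numbering here, least significant bit first, is our own assumption):

        MSG_ID = bytes.fromhex("CAFEF00D")

        def fake_keys(message: int, bits: int = 16):
            """One fake 'public key' per bit: message ID + counter + bit value."""
            keys = []
            for counter in range(bits):
                bit = (message >> counter) & 1     # bit 0 = least significant bit
                keys.append(MSG_ID + counter.to_bytes(4, "big") + bytes([bit]))
            return keys                            # broadcast each one repeatedly

        for key in fake_keys(0b1010110011110000):
            print(key.hex())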

    The recipient would then query Apple’s Find My servers for 32 different “public keys”, like this:

    CA FE F0 0D 00 00 00 00 00 -- msg 0xCAFEF00D, counter #0, guess that v = 0
    CA FE F0 0D 00 00 00 00 01 -- msg 0xCAFEF00D, counter #0, guess that v = 1
    CA FE F0 0D 00 00 00 01 00 -- msg 0xCAFEF00D, counter #1, guess that w = 0
    CA FE F0 0D 00 00 00 01 01 -- msg 0xCAFEF00D, counter #1, guess that w = 1
    . . .
    CA FE F0 0D 00 00 00 0E 00 -- msg 0xCAFEF00D, counter #14 (0x0E), guess that x = 0
    CA FE F0 0D 00 00 00 0E 01 -- msg 0xCAFEF00D, counter #14 (0x0E), guess that x = 1
    CA FE F0 0D 00 00 00 0F 00 -- msg 0xCAFEF00D, counter #15 (0x0F), guess that y = 0
    CA FE F0 0D 00 00 00 0F 01 -- msg 0xCAFEF00D, counter #15 (0x0F), guess that y = 1
    

    Half of these “public keys” would be missing from Apple’s list, corresponding to the messages that never got sent; the other half would be reported as “found”, corresponding to the individual bits in the hidden data.

    Simply put, I will only ever receive one of the two messages CA FE F0 0D 00 00 00 0E 00 and CA FE F0 0D 00 00 00 0E 01, and the one that does arrive will surreptitiously tell me the value of bit 14 (0x0E) in the hidden data.

    The counter field in each “public key” message means that the bits can be stitched back together in the right order no matter when they arrive, and also that partial data can be reconstructed even if some of the bits never make it through.
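    And the receiver’s side, again with a hypothetical find_my_lookup() standing in for the hash query against Apple’s servers, might be sketched like this:

        import hashlib

        MSG_ID = bytes.fromhex("CAFEF00D")

        def recover_message(find_my_lookup, bits: int = 16):
            """Rebuild the hidden message from whichever bit-guesses were reported."""
            message = 0
            for counter in range(bits):
                for guess in (0, 1):
                    key = MSG_ID + counter.to_bytes(4, "big") + bytes([guess])
                    if find_my_lookup(hashlib.sha256(key).hexdigest()):
                        message |= guess << counter   # bit order matches the sender
                        break
            return message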

    Free (and stealthy) internet access!

    However, as you’ve probably already figured out, this system may be “free”, but it’s not fast or efficient.

    Bräunlein reported that he could send at about 20 bits/second and receive at about 25 bits/second, but that his hidden data “messages” took anywhere from a minute to an hour to arrive.
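    To put that into perspective with some rough arithmetic: at about 20 bits/second of send capacity, even a modest 1000-byte (8000-bit) payload needs something like 400 seconds of transmission time, before you add the minute-to-hour wait for the relayed reports to show up in Apple’s database.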

    What to do?

    Is this a risk?

    Not really.

    As Bräunlein points out, Apple may not easily be able to prevent this sort of misuse of its Find My system, and may not even want to, given that it designed the system to be anonymous and private.

    (We suspect that Apple will introduce some sort of rate limiting to reduce the already limited send-and-receive bandwidth of Send My even further, but that would reduce rather than eliminate this technique.)

    Bräunlein speculates, however, that his Send My technique could be used for exfiltrating data from semi-secure environments in which trusted mobile phones containing only trusted apps are allowed, and all internet-connected devices are monitored and controlled.

    That’s because this trick (we’ve decided that, yes, that is the right word!) gives untrusted, anonymous Bluetooth devices a way to transmit data over the internet via nearby trusted phones without ever authenticating to those phones or any of their apps.

    The hidden Send My data gets exfiltrated as an apparently anonymous and unimportant part of Apple’s own system software.

    If you’re worried about that sort of risk, then you probably shouldn’t be allowing users to take their mobile phones into your secure areas anyway, or you should be insisting that they are switched into Airplane Mode first.


Gamers beware! Crooks take advantage of MSI download outage…

Well-known computer gaming hardware vendor MSI is warning of fake download sites ripping off its brand.

The company doesn’t just sell high-end graphics cards and gaming rigs, it also offers a free software product called Afterburner that it trumpets as “the gold standard of overclocking utilities.”

Overclocking is how enthusiasts describe the act of squeezing maximum performance out of their hardware by running it up to, at or even beyond the limits usually recommended by the component manufacturers.

For example, you might decide to run your processor faster than usual so it can perform calculations more quickly.

But that might cause it to overheat and shut down, so you might then try tweaking the operating voltage slightly to adjust the current draw and reduce the heating effect.

Then you might ramp up the speed of the fan to improve cooling, or try any number of other trial-and-error tweaks in an edgy combination to eke out the best performance you can get without crashing the computer.

You can see why an overclocking tool that is not only endorsed by a hardware vendor but also developed and supplied by that vendor is a must-have for any avid gamer…

…but just right now, MSI’s “Afterburner Software download link is currently closed due to routine maintenance,” according to the company’s warning.

We verified the outage by visiting the download page: the [Download Afterburner] button is still there, but it doesn’t do anything. [2021-05-13T22:15Z]

The HTML behind the button doesn’t specify any download link, so clicking the button confusingly makes it look as though the site is broken rather than merely offline for maintenance.

According to MSI, a gang of cybercrooks stepped in to fill the temporary download vacuum, setting up a fake site that looked like an alternative download location, but serving up malware instead of the real deal:

MSI is informing the public of a malicious software being disguised as the official MSI Afterburner software. The malicious software is being unlawfully hosted on a suspicious website impersonating as MSI’s official website with the domain name [afterburner-msi DOT space]. MSI has no relation with this website or the aforementioned domain.

The good news is that when we last checked [2021-05-13T22:25Z], the malicious server named above was offline and therefore didn’t pose any immediate risk.

The bad news, of course, is that the crooks could easily move to another site, where over-keen gaming enthusiasts might encounter the same (or other) malware offered under similar false pretences.

The other bad news is that MSI hasn’t yet put a warning on the download page itself, which is what we would have done: we’d have replaced the dud-and-dysfunctional download button with a warning not to go hunting on third-party websites for alternative sources of the download.

Impetuous users might go searching elsewhere even in the face of a clear explanation of the situation, but well-informed users almost certainly wouldn’t.

What to do?

This is a timely reminder of the risks associated with trawling the internet to find unofficial versions of software that you can’t get directly from the usual source.

Even if you’re not a software pirate who’s explicitly looking for an “unofficial” (read: unlawful) download of software that isn’t free, it’s tempting to go “off market” when the vendor’s own website isn’t working.

The problem, of course, is that unofficial download sources are just that: unofficial.

Even if an unofficial installer isn’t overtly malicious, it could nevertheless include some added “secret sauce“, such as an unwanted browser plugin or an advertising addon that the vendor’s own download doesn’t have.

Or you might be tempted to sidestep the temporary unavailability of a paid software product by using a cracked version instead.

It’s not legal to use pirated software, but it might feel morally acceptable (or, perhaps, not entirely unacceptable) to use a cracked version of a software package temporarily if you have paid for a licence, but can’t lay your hands on a legitimate installer just at the moment.

For an eye-opening description of what can go wrong if you decide to cut cybersecurity corners by trusting software that you shouldn’t, read the fascinating article MTR in Real Time: Pirates pave way for Ryuk ransomware on our sister site Sophos News.

Our advice is simple:

  • If you’re in a hurry and can’t get hold of software that you really need, don’t go snooping around where angels fear to tread. If it’s that important and it’s a work computer, speak to your IT department instead of trying to go it alone.
  • If you’re in a hurry and can’t get hold of software that you’d really like, consider waiting until it is available and managing without it until then.

In a recent SophosLabs report, we noted that the DarkSide ransomware gang (the crooks behind the recent Colonial Pipeline attack, along with many others) spent anywhere from just over six weeks (44 days) to just under three months (88 days) inside their victims’ networks, watching, waiting, planning and finally unleashing each attack.

If the crooks can show that kind of patience while they line up all the malicious components they need to destroy your network…

…we think it’s worth having a bit of patience yourself so that you don’t accidentally give those very same crooks a helping hand.

One last thing

While we’re here: if you find yourself in MSI’s position, with a download site that’s offline, please don’t leave a broken download button behind on your download page and publish a warning somewhere else.

Put the explanation and the warning right there on the download page itself, because a little bit of clarity goes an awful long way!


S3 Ep32: AirTag jailbreak, Dell vulns, and a never-ending scam [Podcast]

Apple’s brand new AirTag product got hacked already. Things you can learn from Colonial Pipeline’s ransomware misfortune. Why Dell patched a bunch of driver bugs going back more than a decade. And the “Is it you in the video?” scam just keeps on coming back.

With Kimberly Truong, Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point in the podcast. You can also listen directly on Soundcloud.


WHERE TO FIND THE PODCAST ONLINE

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher, Overcast and anywhere that good podcasts are found.

Or just drop the URL of our RSS feed into your favourite podcatcher software.

If you have any questions that you’d like us to answer on the podcast, you can contact us at tips@sophos.com, or simply leave us a comment below.

Tempted by cryptocoins? Fake trading apps get personal…

Remember how ransomware started?

It was all about volume.

The CryptoLocker gang, for example, raked in millions of dollars, perhaps even hundreds of millions, by scrambling your files and then extorting you for $300 to unscramble them again.

These days, however, the big-money ransomware gangs take a very different approach.

They typically go after companies one by one, so they can rake in similar amounts of money by focusing their attention on one victim at a time, whom they then blackmail for hundreds of thousands or millions of dollars each.

The crooks, sadly, get a threefold benefit out of this approach: they get to play their cards closer to their chests; they get to squeeze their victims for bigger amounts each time; and they can put much more effort into each attack.

Lure, love and leech

Romance scammers, who prey on vulnerable people online and lure them into long-term, long-distance relationships that are really just a pack of lies, take a similar approach.

They play the field, as it were, on dating sites, identifying numerous possible targets at first before targeting those victims whom the crooks can see have fallen for their “charms” the hardest.

Like modern ransomware gangs, romance scammers have sufficient operational patience that they aren’t out to scam hundreds of dollars each out of thousands of victims, but to scam hundreds of victims out of hundreds of thousands of dollars each.

They might not set out to target any particular individual up front, but once they’ve won a victim’s trust and loyalty, they’ll focus on that person for as long as the scam keeps working.

Trading scammers love you, too

Well, SophosLabs researchers have just published a report entitled Fake Android and iOS apps disguise as trading and cryptocurrency apps, and it seems that some investment scammers are taking a similar sort of approach.

These trading scammers get you to fall in love with them too, or at least with the money they promise you.

After all, if you’ve gone to all the trouble of building an imposter website that looks like a genuine online currency trading business, and a fake app that is believable enough to pass muster as belonging to someone else’s brand…

…why spam out links to that site, or draw attention to your app, so that millions of people who aren’t going to be fooled, and who will never fall into your evil clutches, might see what you are up to and raise the alarm?

If your app’s already in Google Play, you risk having it chucked out, which means you’re then faced with starting over.

So why not start “off market”, and parlay that into something special, for selected users only, not available in the Play Store, right from the start?

And if your victim has an iPhone, there are no app markets for Apple users other than the App Store, so you need to follow a “you’re smart and special and so is this app” approach anyway.

Super Signature services

Technically, it’s possible to install iPhone apps that didn’t come from the App Store, but it’s a complex and closed process designed so that developers can test apps before releasing them, or so that companies can produce in-house apps that are used only inside the organisation rather than offered commercially to the public.

So, if you’re not a legitimate software creator but you want to build an iPhone app to scam other people, you need someone who will pretend to be the “developer” of your app, and who will submit it for one-off signing to Apple.

Then, your victims need to jump through special hoops by which their devices get registered into the “development process” so their phones are authorised by Apple to run your “special” app.

Apple carefully limits the number of test apps that it will sign for any development team, and keeps track of the number of phones that are using those apps, specifically to discourage commercial coders from misusing the process as a way of sidestepping the App Store.

In other words, a crook who sets out to game this system really can’t afford to have hundreds of people installing the app but then realising it’s a scam and getting rid of it.

Indeed, Apple’s own guidelines warn developers as follows:

You’re allowed to register a fixed number of devices per product family per year, and disabling a device in your developer account won’t decrease the count of registered devices.

Love comes first, the app comes later

So, online trading scammers who have iPhone users in their sights might as well take the trouble to get potential victims to fall in love with the scam first, before tempting them with their bogus apps.

The new SophosLabs report takes you through the fascinating tale of how the crooks do it, including:

  • How the crooks identify potential victims and lure them into a trusting relationship. (They use social media and dating sites, just like romance scammers.)
  • How the crooks get their iPhone apps digitally signed without engaging directly with Apple. (They use online proxy companies, offering what are known in the jargon as Super Signature services to take care of that side of things.)
  • How the crooks talk their victims into installing the fake apps without using the App Store. (They use the same sort of provisioning system that a company might use with its own employees, essentially “managing” the victim’s phone for them so that they can install a “special” app.)
  • How the crooks keep the investment myth alive once the victim has started making deposits. (They use fake feedback that makes it look as though deposits really went through, and that gives the impression that your “investment” can be withdrawn in the future, even though it’s gone for ever.)

As if that isn’t bad enough on its own, one of the scams that SophosLabs investigated reminded us, yet again, that cybercriminals often aren’t very good at cybersecurity themselves.

The criminals’ server had a wide-open directory that contained all the genuine customer data that they had collected under the guise of “know your customer” regulations, such as scans of passports, ID cards, driving licences and more.

What to do?

  • If it sounds too good to be true, it is too good to be true. Even if you think of all your social media and dating site connections as friends, you have no idea what their motivation is for talking up any investment scheme they recommend. For all you know, they could already have fallen for a scam themselves and be unknowingly dragging you in after them, or their account could have been hacked.
  • Find your own way to investment websites you want to investigate. In these scams, the crooks are hoping you won’t check the links they send you too closely because they’re coming from a “friend” and so can trust the links implicitly. But even if a link does come from a true friend, they could have made a mistake, so do your own searches anyway. (And see bullet point #1 above.)
  • Never install iPhone apps that don’t come from the App Store unless you know for sure that they were built, tested and delivered by your own employer for a legitimate purpose that’s specific to your business. Be especially wary if the person trying to pitch the app to you comes up with a bunch of excuses such as “you’re an early adopter so you get the app before its release to the App Store”, or other tall stories that try to justify why they are unable to deliver the app in the regular way. (And see bullet point #1 above.)

Apple AirTag jailbroken already – hacked in rickroll attack

Apple recently announced a tracking device that it calls the AirTag, a new competitor in the “smart label” product category.

The AirTag is a round button about the size of a key fob that you can attach to a suitcase, laptop or, indeed, to your keys, to help you find said item if you misplace it.

If you remember those whistle-and-they-bleep-back-at-you keyrings that were all the rage for a while in the 1990s, well, this is the 21st century version of one of those.

Unlike their last-millennium sonic counterparts, however, modern tracking tags come with loads more functionality, and therefore present a correspondingly greater privacy risk.

Armed with wireless connectivity in the form of Bluetooth and NFC, modern tags don’t just respond neutrally with a beep-beep-beep when you send them an audio signal and they’re within range.

Products like the AirTag also announce themselves with regular Bluetooth beaconing transmissions, just like your phone does when it’s in discoverable mode.

To stop your tags being used as a permanent tracking tool for anyone who’s stalking you, the Bluetooth identifier swaps itself around every few minutes, like the Bluetooth beacons used in the Apple-and-Google privacy-preserving “exposure notification” interface that was introduced for coronavirus infection tracking.

If someone else swipes an NFC-enabled phone near an AirTag, it presents them with a supposedly anonymous URL pointing to the Apple server found.apple.com, where they can report the misplaced item.

(We don’t have an AirTag to practise with, but apparently you can choose to reveal personal information such as a phone number via the tracking URL; we assume, though, that nothing about your identity is revealed by default, so that lost items can be reported anonymously.)

Power glitching

As you probably expected to hear, AirTags are meant to be resilient against hacking, or jailbreaking as it is commonly called on Apple devices.

Notably, the firmware (the miniature operating system and software programmed into the device) is supposed to be locked down so it can’t be peeked at in the first place, let alone modified to run alternative code.

In particular, the hardware used in the AirTag, an nRF52832 microcontroller, can be set during bootup into a special mode that prevents any of the real-time chip-control features, such as debugging, being used.

In the nRF52xxx series of chips, an additional anti-hacking feature known as APPROTECT, short for Access Port Protection, can also be activated at startup to prevent the contents of the firmware from being read out.

Last year, however, an intrepid cybersecurity researcher known only as LimitedResults figured out (and wrote up a fascinating description of) a way to stop the chip turning off its built-in debugger by injecting a carefully-chosen burst of electrical interference into the power supply during startup.

Too little interference, and nothing would happen; too much electrical tampering, or the right amount of tampering at the wrong time, and the chip would simply fail to boot at all.

But with just the right sort of microsecond-sized power glitch supplied at just the right time, LimitedResults was effectively able to “blank out” the chip commands that were supposed to suppress debugging, while leaving everything else unaffected so that the system nevertheless continued running.

LimitedResults was then able to connect a debugger to the debug port (which ought to have been unresponsive) and dump a copy of the firmware that was supposed to be shielded from prying eyes.

Additionally, because the two-way debug port was now active, the unlocked device could be controlled as well as snooped upon.

AirTag attacked

According to reports, another researcher who goes by @ghidraninja on Twitter (Ghidra is a well-known reverse engineering toolkit from the US National Security Agency) has now used this power glitch trick to “jailbreak” an AirTag.

Apparently, @ghidraninja was able to dump the AirTag’s copy-protected firmware, modify it slightly, and write it back undetected by the device.

The hack, so far, is a proof of concept (PoC) rather than a dangerous attack: @ghidraninja modified the server name found.apple.com inside the firmware so that a “lost” AirTag would misdirect an inquisitive iPhone not to Apple’s legitimate site…

…but to a YouTube video, you guessed it, of Rick Astley performing Never Gonna Give You Up:

What to do?

Right now, we don’t think there’s much to worry about, unless you’re in the cybersecurity team (or the PR crew) at Apple and you are trying to figure out how to harden the next generation of AirTags against this trick.

Crooks who wanted to abuse a tracking tag to stalk you or to keep your property under surveillance could, after all, simply use a booby-trapped tracking tag of their own devising and hide it somewhere you would probably not notice it.

If they wanted it to resemble an AirTag so that they could “hide” it in plain sight, they could simply enclose it in a look-alike package.

Of course, there’s still the risk of someone using a booby-trapped AirTag as a lure to trick Good Samaritan iPhone users into visting a fake URL and giving themselves away…

…so, as the video says, “Be careful when scanning untrusted AirTags.”

Our recommendation, if you find someone else’s stuff and want to help to reunite it with its real owner, is simply to hand it in old-school style, for example at a police station or an official lost property office.

And, as cynical as it sounds, be wary of people you don’t know who are apparently filled with gratitude for an unsolicited “favour” they claim you did for them.

Listen to our special-episode podcast with Rachel Tobac, a renowned social engineering expert, and give yourself the confidence and understanding not to get sucked into saying, doing or accepting online “gifts” that might be anything but:

