
Naked Security Live – Just how (un)safe is AirDrop?

Researchers in Germany say they reported what they consider to be an AirDrop privacy hole to Apple in 2019, but never heard back.

So, they went away and worked on what they consider an improved version, dubbed PrivateDrop, and recently announced it to the world.

Does this mean AirDrop is dangerous and you should stop using it?

We investigate:


Watch directly on YouTube if the video won’t play here.
Click the on-screen Settings cog to speed up playback or show subtitles.

Why not join us live next time?

Don’t forget that these talks are streamed weekly on our Facebook page, where you can catch us live every Friday.

We’re normally on air some time between 18:00 and 19:00 in the UK (late morning/early afternoon in North America).

Just keep an eye on the @NakedSecurity Twitter feed or check our Facebook page on Fridays to find out the time we’ll be live.

Apple AirDrop has “significant privacy leak”, say German researchers

Security researchers at the Technical University of Darmstadt in Germany have just put out a press release about an academic paper they’ll be presenting at a Usenix conference later in 2021.

(If the end of the last sentence gives you a sense of déjà vu, that’s because it seems to be “pre-announce your Usenix research” month: we wrote earlier this week about Dutch academics who had come up with a new memory-flipping trick based on rowhammering for subverting your computer via a browser.)

The paper itself has a neutrally worded title that simply states the algorithm that it introduces, namely: PrivateDrop: Practical Privacy-Preserving Authentication for Apple AirDrop.

But the press release is more dramatic, insisting that:

Apple AirDrop shares more than files. [We] discover significant privacy leak in Apple’s file-sharing service.

For those who don’t have iPhones or Macs, AirDrop is a surprisingly handy but proprietary Apple protocol that lets you share files directly but wirelessly with other Apple users nearby.

Instead of sharing files via the cloud, where the sender uploads to a central server from where the recipient then downloads the file, AirDrop works even when both users are offline, using a combination of Bluetooth and peer-to-peer Wi-Fi for fast, simple, local wireless sharing.

The problem, according to the researchers, comes in the form of AirDrop’s Contacts only mode, where you tell AirDrop not to accept connections from just anyone, but only from users already in your own contact list.

AirDrop setting choices.

To be clear, opening up AirDrop to Everyone doesn’t mean that anyone can access your phone without you knowing, because you get a pop-up first that requests permission, and the sender can’t bypass that.

But one problem with Everyone mode is that if someone tries to send you a file, the pop-up includes a tiny thumbnail of the file on offer, so you can check not only that the sender is someone you trust but also that the content is something you want.

That means you can easily be bluejacked, the slang term for someone sending you an unsolicited pic that you are forced to see in order to decide whether you want to see it.

AirDrop requests show you a thumbnail first,
so you have to see it to Decline it.

Locking things down with Contacts only therefore seems a good choice.

However, there’s a different sort of problem if you use the Contacts only mode, say the Darmstadt researchers.

Simply put, the two ends of an AirDrop connection agree on whether they consider each other a contact by exchanging network packets that don’t properly protect the privacy of the contact data.

The researchers claim that the contact identifiers, which are based on phone numbers and email addresses, are exchanged as SHA-256 cryptographic hashes, in order to protect the original data.

Each end converts their own contact data into hashes and compares those against the data sent over from the other, rather than sharing and comparing the original phone numbers and email addresses.

This means that each end doesn’t have to reveal its raw contact data up front to the other just to see which entries they have in common.
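
In rough outline, that naive hash-based matching looks something like the sketch below. (This is a simplified illustration in Python, not Apple’s code, and the identifier formats are assumptions for demonstration purposes.)

```python
import hashlib

# Simplified sketch of naive hash-based contact matching: each side
# hashes its own identifiers and compares them against the hashes
# received from the peer, so no raw numbers or addresses cross the wire.

def hashed(contacts: set[str]) -> set[bytes]:
    return {hashlib.sha256(c.encode()).digest() for c in contacts}

my_contacts = {"+447700900123", "alice@example.invalid"}

# Pretend these hashes arrived over the network from the other device:
received = hashed({"+447700900123", "bob@example.invalid"})

shared = hashed(my_contacts) & received
print(len(shared), "contact(s) in common")  # prints: 1 contact(s) in common
```

The catch, as we explain below, is that a hash of a low-entropy identifier such as a phone number is only as private as the effort needed to enumerate every possible input.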

Apparently, the hashes exchanged are just that, straight hashes, with no salt involved, so that if you had a precomputed list of all possible hashes for all possible phone numbers, you’d be able to look them up in your hash list and thus “reverse” the cryptography by sheer brute force.

UK mobile numbers, for example, have the form +44.7xxx.xxx.xxx, because all UK numbers start +44 and all mobiles in the UK start with a 7, leaving just nine digits for the rest.

Therefore there are only 10⁹ possible numbers, or 1 billion in total, and most modern laptops could compute all the corresponding hashes in hours or even minutes.

Each SHA-256 hash is 32 bytes long (256 bits), if you choose to store the whole thing instead of approximating it by keeping only the first half, for a total of just 32GB of disk space to save the lot.

For email addresses, computing an exhaustive “reverse list” is clearly impossible, but by using a list of known email addresses, such as those dumped in various data breaches over the years, you could build and save a “reverse dictionary” of likely candidates in the same sort of time and space as you’d need for the phone number list.
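
Here’s a minimal sketch of that brute-force lookup, using UK-style mobile numbers. (The exact string format that gets hashed, e.g. E.164 text such as +447700900123, is an assumption here for illustration; the researchers’ paper documents the real identifier encoding.)

```python
import hashlib

def hash_number(number: str) -> bytes:
    # Straight SHA-256, no salt: exactly the weakness described above.
    return hashlib.sha256(number.encode()).digest()

def build_table(prefix: str = "+447", digits: int = 9,
                limit: int = 1_000_000) -> dict[bytes, str]:
    # The full UK table would have 10**9 entries (about 32GB of raw
    # hashes); we cap it at a million so this demo runs in seconds.
    table = {}
    for n in range(limit):
        number = prefix + str(n).zfill(digits)
        table[hash_number(number)] = number
    return table

table = build_table()

# Pretend this 32-byte hash was recovered from an AirDrop exchange:
captured = hash_number("+447000123456")
print(table.get(captured, "not in (truncated) table"))  # +447000123456
```

In practice, a Python dictionary with a billion entries wouldn’t fit comfortably in memory, so a real attacker would write the hashes out as a sorted file on disk (the 32GB mentioned above) and binary-search it, but the arithmetic is the same.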

Doesn’t TLS stop the leak?

Of course, the contact agreement stage of the AirDrop process happens over TLS, so a third-party attacker can’t just sniff the hashed contact data wirelessly.

And even with a jailbroken phone, the attacker couldn’t easily set up a modified iOS kernel that would reliably extract the contents of the AirDrop packets for subsequent cracking.

However, the Darmstadt team already “solved” the TLS problem in a Usenix paper from 2019, by figuring out a way to run a Manipulator-in-the-Middle attack, or MitM for short, against the AirDrop connection setup process.

A MitM is where X thinks they’re talking directly to Y, which we’ll denote as X<->Y, but the traffic is actually being proxied, or relayed, through the M in the middle, like this: X<->M<->Y.
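
In plain TCP terms, M’s job is just to accept X’s connection, open an onward connection to Y, and pump bytes in both directions. Here’s a generic sketch (nothing AirDrop-specific, and without the certificate trickery discussed below) of how little code that takes:

```python
import socket
import threading

def pump(src: socket.socket, dst: socket.socket) -> None:
    # Forward everything from src to dst; M could log or even
    # rewrite each chunk here before passing it on.
    while chunk := src.recv(4096):
        dst.sendall(chunk)
    dst.close()

def mitm(listen_port: int, target: tuple[str, int]) -> None:
    server = socket.socket()
    server.bind(("0.0.0.0", listen_port))
    server.listen(1)
    x_side, _ = server.accept()                 # X thinks this is Y...
    y_side = socket.create_connection(target)   # ...but M relays to Y
    threading.Thread(target=pump, args=(x_side, y_side), daemon=True).start()
    pump(y_side, x_side)
```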

TLS, of course, is supposed to help prevent MitM attacks by allowing each end, if it wishes, to request a digital certificate from the other, and to verify that the other’s certificate was digitally attested by someone it trusts.

And in Contacts only mode, AirDrop apparently insists on each end coming up with a certificate that’s ultimately signed by Apple itself.

Self-signed certificates

According to the 2019 paper, however, if the recipient is using Everyone mode in AirDrop, then self-signed certificates are allowed, so even iPhones that have never called home to Apple to register for an Apple account can vouch for themselves and use AirDrop anyway.
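
In conventional TLS terms, the difference between the two modes boils down to how strictly each end verifies the peer’s certificate. As a loose analogy, here’s how that choice looks using Python’s ssl module (generic client-side code, not Apple’s implementation):

```python
import ssl

def strict_context(ca_file: str) -> ssl.SSLContext:
    # "Contacts only"-style strictness: the peer must present a
    # certificate that chains to a CA we trust (for AirDrop, Apple's).
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False     # AirDrop peers aren't named web hosts
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.load_verify_locations(cafile=ca_file)
    return ctx

def lax_context() -> ssl.SSLContext:
    # "Everyone"-style laxity: any certificate is accepted, including
    # a self-signed one, so a MitM device can vouch for itself.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

With the lax context, the TLS tunnel is still encrypted, but you have no idea who is at the other end of it, which is precisely the opening a MitM needs.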

In the 2019 paper, the authors also figured out a way to spot that two users were trying to start up an AirDrop connection and to prevent it from working by jamming the network traffic setup with fake connection resets…

…and they also figured out that, if you get in the way often enough when two people are trying to share files, many recipients will eventually say to the sender, “Hang on a minute, I’ll temporarily switch to Everyone mode and see if that works.”

At which point, an attacker can start up an AirDrop service that looks like the real recipient’s by picking a device name similar to the real one (e.g. using John instead of John’s iPhone), trick the sender into connecting to the MitM device, connect onwards to the now-open-to-everyone recipient, and end up as the MitM.

Bingo, a working MitM attack!

At this point, say the researchers, you can read out the SHA-256 hashes of the sender’s contact list, and have a go at cracking the hashes of the contact data in the list against the tables you calculated earlier.

The researchers claim that:

[We] informed Apple about the privacy vulnerability already in May 2019 via responsible disclosure. So far, Apple has neither acknowledged the problem nor indicated that they are working on a solution.

Presumably that’s why they’ve written this year’s Usenix paper, which presents a modified AirDrop-contact matching protocol that they claim solves all these problems, in the hope that Apple might adopt it in future.

What to do?

As you have probably figured out, there are a lot of moving parts in this attack, so there are plenty of places where attackers need to get lucky.

In particular, as far as we can tell, the attackers need recipients to get frustrated enough at not being able to connect that they revert to Everyone mode; and the attackers then also need senders to misrecognise the recipient’s device in the list when they try to reconnect.

So, if you are worried about this attack:

  • Turn AirDrop off if you aren’t using it. That’s good security practice anyway. There’s no need to be discoverable to other AirDrop users all the time.
  • Don’t blindly fall back to Everyone mode if Contacts only mode keeps failing. If you’re in a private place with a sender you trust, it’s probably OK, but if you’re in a busy coffee shop or shopping mall, remember that Everyone mode opens you up to, well, everyone else around.
  • Be careful whom you connect to. The researchers relied on using obvious variants of the recipient’s device name, using a sort of “namesquatting” trick. If you can, get the recipient to show you their device name on-screen first so you don’t get suckered into picking a similar but bogus name.

S3 Ep29: Anti-tracking, rowhammer problems and IoT vulns [Podcast]

How Firefox showed the hand to a widely abused online tracking trick. Why reading from one part of your computer’s memory can paradoxically (and sneakily) let you write to another part. And yet more IoT bugs, this time a whole slew of them that go by the moniker “name:wreck”.

With Kimberly Truong, Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point in the podcast. You can also listen directly on Soundcloud.


WHERE TO FIND THE PODCAST ONLINE

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher, Overcast and anywhere that good podcasts are found.

Or just drop the URL of our RSS feed into your favourite podcatcher software.

If you have any questions that you’d like us to answer on the podcast, you can contact us at tips@sophos.com, or simply leave us a comment below.

Linux team in public bust-up over fake “patches” to introduce bugs

One of the hot new jargon terms in cybersecurity is supply chain attack.

The phrase itself isn’t new, of course, given that the idea of attacking someone indirectly via someone they get their supplies from, or via one of their supplier’s suppliers, and so on, has been around for as long as supply chains themselves.

Perhaps the best-known example of a software-based supply chain attack in the past year is the notorious SolarWinds hack.

SolarWinds is a supplier of widely-used IT monitoring products, and was infiltrated by cybercriminals who deliberately poisoned the company’s product development process.

As a result, the company ended up inadvertently serving up malware bundled in with its official product updates, and therefore indirectly infecting some of its customers.

More recently, but fortunately less disastrously, the official code repository of the popular web programming language PHP was hacked, via a bogus patch, to include a webshell backdoor.

This backdoor would have allowed a crook to run any command they liked on your server simply by including a special header in an otherwise innocent web request.

The PHP team noticed the hack very quickly and managed to remove the malicious code in a few hours, so it was never included in an official release and (as far as we can tell) no harm was ultimately done in the real world.

A job worth doing

As you can imagine, it’s difficult to conduct what you might call a “penetration test” to judge a software project’s resistance to malevolent bug patches.

You’d have to submit fake bug fixes and then wait to see if they got accepted into the codebase, by which time the damage would already have been done, even if you quickly submitted a followup report to admit your treachery and to urge that the bug fix be reverted.

Indeed, by that time, it might be too late to prevent your fake patch from making it into real life, especially in open source projects that have a public code repository and a rapid release cycle.

In other words, it’s a tricky process to test a project’s ability to handle malevolent “fixes” in the form of unsolicited and malicious patches, and by some measures it’s an ultimately pointless one.

You might even compare the purposeful, undercover submission of known-bad code to the act of anonymously flinging a stone through a householder’s window to “prove” that they are at risk from anti-social vandals, which is surely the sort of “test” that benefits neither party.

Of course, that hasn’t stopped apparently well-meaning but sententious researchers from trying anyway.

For example, we recently wrote about a coder going by the grammatically curious name of Remind Supply Chain Risks who deliberately submitted bogus packages to the Python community to, well, to remind us about supply chain risks…

…not just once or twice but 3951 times in quick succession.

A job worth doing, it seems, was worth overdoing.

Social engineering gone awry?

In 2020, something similar but potentially more harmful was done in the name of research by academics at the University of Minnesota.

A student called Qiushi Wu and his professor, Kangjie Lu, published a paper entitled On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software [OSS] via Hypocrite Commits.

Unfortunately, the paper included what the authors described as a “proof of concept”:

We [took] the Linux kernel as target OSS and safely demonstrate[d] that it is practical for a malicious committer to introduce use-after-free bugs.

The Linux kernel team was unsurprisingly unamused at being used as part of an unannounced experiment, especially one that was aimed at delivering a research paper about supply chain attacks by actually setting out to perpetrate them under cover.

After all, given that the researchers themselves came up with the name Hypocrite Commits, and then deliberately submitted some under false pretences and without the sort of official permission that professional penetration testers always negotiate up front…

…didn’t that make them into exactly what their paper title suggested, namely hypocrites?

Fortunately, it looked as though that brouhaha was resolved late in 2020.

The authors of the paper published a clarification in which they admitted that:

We respect OSS volunteers and honor their efforts. We have never intended to hurt any OSS or OSS users. […]

Does this project waste certain efforts of maintainers? Unfortunately, yes. We would like to sincerely apologize to the maintainers involved in the corresponding patch review process; this work indeed wasted their precious time.

Despite the apology, however, the researchers insisted in their clarification that this wasn’t what a Computer Science ethics committee might call “human research”, or social engineering as it is often known.

Sure, some officially-endorsed tests that IT departments conduct do indeed carry out what amounts to social engineering, such as phishing tests in which unsuspecting users are lured in to click a bogus web link and then confronted with a warning, along with advice on how to avoid getting caught out next time.

But you can argue that this “hypocrite commit” research goes much further than that, and is more like getting a penetration testing team to call up users on the phone and then talking them into actually revealing their passwords, or setting up fraudulent bank payment instructions on the company’s account.

That sort of behaviour is almost always expressly excluded from penetration testing work, for much the same reason that fire alarm tests rarely involve getting a real employee in a real office to start a real fire in their real trash basket.

Once more unto the breach

Well, the war of words between the University and the Linux kernel team has just re-intensified, after it transpired that a doctoral student in the same research group has apparently been submitting bogus patches again.

This prompted one of the Head Honchos of the Linux world (not that one, we mean Greg Kroah-Hartman, aka Greg KH) to declare:

Please stop submitting known-invalid patches. Your professor is playing around with the review process in order to achieve a paper in some strange and bizarre way.

This is not ok, it is wasting our time, and we will have to report this, AGAIN, to your university…

Even if you excuse the researcher because you think the kernel team is over-reacting out of embarrassment, given that a number of these fake patches had already been accepted into the codebase, it’s hard not to feel sympathy with Greg KH’s personal tweet on the subject.

Let slip the hounds

A real war of words has now erupted.

Apparently, the researcher in this case admitted that what he did was wrong, but in an unrepentant way, saying:

I respectfully ask you to cease and desist from making wild accusations that are bordering on slander.

These patches were sent as part of a new static analyzer that I wrote and it’s sensitivity is obviously not great. I sent patches on the hopes to get feedback. We are not experts in the linux kernel and repeatedly making these statements is disgusting to hear.

Obviously, it is a wrong step but your preconceived biases are so strong that you make allegations without merit nor give us any benefit of doubt.

I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies and non experts.

Which provoked Greg KH to respond with:

You, and your group, have publicly admitted to sending known-buggy patches to see how the kernel community would react to them, and published a paper based on that work.

Now you submit a new series of obviously-incorrect patches again, so what am I supposed to think of such a thing? […]

Because of this, I will now have to ban all future contributions from your University and rip out your previous contributions as they were obviously submitted in bad-faith with the intent to cause problems.

*plonk*

We assume that the word *plonk* is an onomatopoeic description of the ball (a hardball, by the sound and volume of it) landing back in the other player’s court.

And the University is officially involved now, pledging to investigate and to consider its position.

What to do?

We’re not sure how the University is likely to respond, or how long the “ban” is likely to be upheld, but we are waiting with interest for the next instalment in the saga.

In the meantime, we have two suggestions:

  • Let us know in the comments whose side you are on. Is this a question of wounded pride in the Linux team? Is it righteous indignation at being used as pawns in academic research? Or is it simply something that should be sucked up as part of the rich tapestry of life as an OSS programmer? Community opinion on this is important, given that any rift between academia and the Linux community is in no one’s interest and needs to be avoided in future.
  • If you’re thinking that actual supply chain attacks that introduce actual bugs make cool research projects, our own recommendation is: “Please don’t do that.” You can see, based on this case, just how much ill-will you might create and how much time you might waste.

There are plenty of real bugs to find and fix (and many companies will pay you for doing so).

So, if you’re genuinely interested in bugs, we urge you to focus your time on finding and fixing them, whichever side you support in this specific case.




When cryptography attacks – how TLS helps malware hide in plain sight

Lots of things that we rely on, and that are generally regarded as bringing value, convenience and benefit to our lives…

…can be used for harm as well as good.

Even the proverbial double-edged sword, which theoretically gave ancient warriors twice as much fighting power by having twice as much attack surface, turned out to be, well, a double-edged sword.

With no “safe edge” at the rear, a double-edged sword that was mishandled, or driven back by an assailant’s counter-attack, became a direct threat to the person wielding it instead of to their opponent.

Sadly, there are lots of metaphorically double-edged swords amidst modern technology.

And no IT technology feels quite as double-edged as encryption, the process of scrambling data securely in such a way that only the intended recipient can ever unscramble it later on.

Almost everything about encryption makes it feel as though it is both immeasurably useful and dispiritingly dangerous at the same time.

The encryption dilemma

Consider some of these dilemmas:

  • You work out how to crack your enemy’s “invincible” cipher in wartime. (The Poles, Swedes, British and others famously and almost unbelievably pulled this off against several Nazi encryption systems during World War 2.) But you daren’t let anyone find out how well you’re doing, and you can’t even use all of the information you decrypt, in case the enemy cottons on and changes the system.
  • You encrypt all the critical data on your computer to protect it from thieves and hackers. But you’d better not lose the decryption key, or you won’t be able to access the information yourself. (Ironically, the stronger and safer the encryption technology you use, the less likely you’ll be able to crack it yourself if you ever forget the password.)
  • You implement an encryption system that gives you an advantage over the hackers who keep trying to attack you. But it’s so useful at keeping the hackers out of your business that the hackers start using exactly the same technology themselves, and suddenly you can’t keep track of their business, either.

This last dilemma is one that has been creeping up on us steadily over the last few years on the web.

TLS (transport layer security), the protocol used to encrypt the majority of today’s web and email traffic, is what puts the padlock in your browser’s address bar.

By doing so, TLS makes it very much harder for crooks to do three things:

  1. The crooks can’t easily snoop on the data you’re sending to a website, such as your login password or credit card number.
  2. They can’t easily tamper with the data that’s coming back, such as altering the bank balance to stop you noticing a fraud, or replacing an innocent download with dangerous malware.
  3. They can’t easily spoof you into thinking that their fraudulent, cloned website belongs to a brand or product you trust, such as your bank or a social network.

TLS takes off everywhere

Ten years ago, even the biggest and most popular online services in the world, such as Facebook, Gmail and Hotmail (now Outlook.com) didn’t use TLS all the time – it was thought to be too complicated, too slow, and not always necessary.

Sure, social media sites or online stores would encrypt the important stuff, such as when you actually logged in, or paid for something, or edited your private profile.

But the rest of the time, they’d often just use unencrypted web pages, figuring that you didn’t really need protection against snooping, tampering and spoofing when you were “just looking”.

Well, that sort of simplification won’t wash any more, because we give away more than enough to put us in harm’s way just during regular browsing.

These days, therefore, we expect our web browsing to be protected by TLS all the time.

And most of the time these days, it is.

Everything looks the same

Guess what?

The crooks have fallen in love with TLS as well.

By using TLS to conceal their malware machinations inside an encrypted layer, cybercriminals can make it harder for us to figure out what they’re up to.

That’s because one stream of encrypted data looks much the same as any other.

Given a file that contains properly-encrypted data, you have no way of telling whether the original input was the complete text of the Holy Bible, or the compiled code of the world’s most dangerous ransomware.

After they’re encrypted, you simply can’t tell them apart – indeed, a well-designed encryption algorithm should convert any input plaintext into an output ciphertext that is indistinguishable from the sort of data you get by repeatedly rolling a die.

Paradoxically, then, as more and more of the internet gets encrypted, thus keeping us more secure…

…it also gets harder and harder to keep track of anomalous, unwanted and dangerous content.

When data is properly encrypted, you can’t differentiate between ciphertexts even if you know what the plaintexts were.
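
If you want to see that property for yourself, here’s a toy demonstration using a one-time pad, the simplest cipher with this characteristic: XOR any plaintext with a truly random key of the same length and the ciphertext is itself uniformly random, whatever went in. (Real-world protocols such as TLS use ciphers like AES and ChaCha20 rather than one-time pads, but well-designed modes aim for the same statistical outcome.)

```python
import secrets
from collections import Counter

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # One-time pad: XOR with a fresh random key as long as the message.
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

scripture = b"In the beginning..." * 1000   # highly structured text
malware   = bytes(range(256)) * 75          # very different structure

for label, data in (("scripture", scripture), ("malware", malware)):
    _, ct = otp_encrypt(data)
    # Both ciphertexts show a near-flat byte histogram: statistically
    # indistinguishable from random noise, and from each other.
    print(label, len(ct), "bytes,", len(Counter(ct)), "distinct byte values")
```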

Keeping on top of it all

At this point, you’re probably wondering just exactly what the crooks are getting up to these days with TLS, and how much they’re using it.

And the excellent news is that Sean Gallagher of SophosLabs has just completed an extensive survey, based on data gathered from all around the world via our own software, to answer exactly those questions.

In his paper, published today, entitled Nearly half of malware now use TLS to conceal communications, he takes you through the tricks used by today’s cybercriminals to help them hide in plain sight, simply by making their bad traffic look much the same as our good traffic.

With malware-related traffic using TLS rising from just under a quarter a year ago to just under half today, this is definitely an issue you should be aware of.

As Sean writes:

The most concerning trend we’ve noted is the use of commercial cloud and web services as part of malware deployment, command and control. Malware authors’ abuse of legitimate communication platforms gives them the benefit of encrypted communications provided by Google Docs, Discord, Telegram, Pastebin and others—and, in some cases, they also benefit from the “safe” reputation of those platforms.

We also see the use of off-the-shelf offensive security tools and other ready-made tools and application programming interfaces that make using TLS-based communications more accessible continuing to grow.

Learn how these attacks work, and how SophosLabs is able to keep on top of them even though they’re encrypted.

