Category Archives: News

No password required! “Sign in with Apple” account takeover flaw patched

A security researcher from Delhi in India is a tidy $100,000 richer thanks to a bug bounty payout from Apple for an account takeover flaw that he discovered in the Sign in with Apple system.

Bhavuk Jain, a serial bug bounty hunter, has described how he found the sort of bug that leaves you thinking, “It can’t have been that simple!”

Apparently, however, it was.

When we say “simple”, of course, we don’t mean that the bug itself was glaringly obvious to find and that anyone could have done it in 60 seconds.

Fortunately, a lot of security holes that leave you with a facepalm feeling after you hear about them depend on a researcher knowing where to look in the first place.

Finding “simple” bugs is often an intangible mixture of skill, experience, doggedness, intuition and – we have to be honest here – at least a bit of luck.

What’s simple about this one was the theoretical ease with which anyone who knew how to trigger it could have exploited it.

Sign in with Apple, like similar services offered by companies such as Facebook and Google, is a way for users to authenticate against your site or service by putting in their Apple credentials instead of a username and password specific to your site.

That’s nowhere near as crazy as it sounds: you’re not asking people to share their actual Apple (or Facebook, or Google) passwords with you, which would not only be dangerous but also against Apple’s (or Facebook’s, or Google’s) terms of service.

What you are doing is outsourcing the task of verifying a user’s identity to a large, well-known and trusted brand so that you don’t have to knit your own authentication software, or maintain a user account database of your own.

Your users don’t actually sign in on your site – they sign in via the third party’s system and acquire an authentication token specific to your site that they use to access their account on your server.

You can then verify for yourself, via the authentication provider, that the token they provide – think of it as a temporary ID badge specific to your site for that user – is both genuine and current.

The benefits are as follows: you get top-quality cryptography and authentication “for free”; your users can use login credentials they already have; and Apple gets to encourage users to have Apple accounts in the first place.

On that basis, the concept of using a major player’s existing and presumably secure login system sounds like a win-win-win situation.

Apple, or whoever is the authentication broker, doesn’t get access to your users’ accounts via this process, and likewise you don’t get access to their Apple accounts.

So this approach seems like a great way for you, if you’re a boutique website operator, to offer your users the sort of super-duper protection against password breaches that Apple and its ilk simply can’t afford not to have in place, but that would be a huge business distraction (and expense) if you were to try to do it yourself.

What went wrong?

The hole that Jain found has already been shut down by Apple, which is why he’s able to talk about it now.

He’s cautiously not given all the details away (presumably to stop copycats from trying to exploit the hole anyway, which would be a fruitless waste of everyone’s time at this point), but exploiting the flaw seems to start off like this:

  • You (via app or browser) tell Apple’s site that you want to start logging in.
  • Apple sends back a reply with an identifier to use during authentication.

Apparently, you can choose either to let Apple share your login name – the email address associated with your Apple ID – directly with the site that is using Sign in with Apple, which is convenient if you want to use the same email for logging in on both, or to get Apple to generate a temporary email identifier to use during the login if you have a different login name on the third-party site.

Either way, the pre-authentication reply comes back to you as a chunk of JSON data that includes an email address that acts as your moniker for logging into the third party site.

There’s a bunch of other data in there, too, such as the time it was issued, the time it expires, and more, but it’s the email address that’s important here.

One thing’s for sure – that JSON reply isn’t meant to be enough on its own for you to log in, merely enough for you to complete the rest of the login process, including proving your identity in some secure way, such as providing a valid Apple ID password and doing any necessary two-step verification dance.

At the end of the whole process, once Apple knows it’s you, you’ll get back a current, valid authentication token that you add into your future traffic to the third party site to prove you’ve logged in. (To be clear, the third party site will itself validate that token with Apple behind the scenes, so you can’t just make up a token code of your own.)
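In practice, tokens of this sort are typically issued as JSON Web Tokens (JWTs): base64url-encoded JSON with a cryptographic signature. As an illustration only – not Apple’s actual server-side code – here’s a minimal Python sketch of how a third-party site might decode and sanity-check the claims in such a token. A real implementation must also verify the RS256 signature against Apple’s published public keys, which this sketch deliberately omits:

```python
import base64
import json
import time

def decode_jwt_claims(token: str) -> dict:
    # A JWT is three base64url segments: header.payload.signature.
    # This sketch decodes the payload only; production code MUST
    # also verify the signature against the provider's public keys.
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def claims_look_current(claims: dict, audience: str) -> bool:
    # Basic checks: issued by Apple, intended for this site, not expired.
    now = time.time()
    return (claims.get("iss") == "https://appleid.apple.com"
            and claims.get("aud") == audience
            and claims.get("exp", 0) > now)
```

The point of the `aud` (audience) check is exactly the “temporary ID badge specific to your site” idea above: a token minted for one site shouldn’t be accepted by another.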

Unfortunately, Jain found an unexpected URL that was accessible on Apple’s login servers (he has redacted it to https://appleid.apple.com/XXXX/XXXX) to which he could send just the email address from the reply described above…

…and he’d get back a current, valid authentication token to use with the third party site, just as though he’d gone through the entire login process and proved who he was.

Just like that – no password required!

Simply put, he vaulted over the bit where regular users would need to identify themselves, so just knowing someone’s email address could have been enough to get access to one of their Sign in with Apple accounts.

What to do?

In theory, any online service that supports Sign in with Apple, and that didn’t have any additional login checks of its own, could have been vulnerable to this “login sidestep” flaw.

Although the Sign in with Apple service is relatively new, and isn’t yet ubiquitous, the Mac Observer website has a lengthy list of sites where it can be used, apparently including Adobe, Airbnb, Dropbox, eBay, Grindr, Medium, Strava, TikTok and WordPress.

But the good news is that because Jain practised what’s known as responsible disclosure – where he agreed to give the vendor exclusive access to his findings and wait until after it was fixed to say anything about it – you don’t need to update or to patch any software of your own.

This bug existed on Apple’s own servers and could therefore, in a happy ending, be fixed unilaterally.

Now that he has gone public, Jain says that:

Apple […] did an investigation of their logs and determined there was no misuse or account compromise due to this vulnerability.

And that’s a good result for everyone.

Jain is $100k better off for his work, and this issue never became what’s called a zero-day, where a flaw is figured out and exploited before a fix is available.


Latest Naked Security podcast

GitHub uncovers malicious ‘Octopus Scanner’ targeting developers

GitHub has uncovered a form of malware that spreads via infected repositories on its system. It has spent the last ten weeks unpicking what it describes as a form of “virulent digital life”.

The malware is called the Octopus Scanner, and it targets Apache NetBeans, which is an integrated development environment used to write Java software. In its write-up of the attack, the GitHub Security Labs team explains how the malware lurks in source code repositories uploaded to its site, activating when a developer downloads an infected repository and uses it to create a software program.

Following a tip from a security researcher on 9 March, the Microsoft-owned site analysed the software to find out how it worked.

GitHub is an online service based on Git, a code versioning system developed by Linux creator Linus Torvalds. Git lets developers take snapshots of files in their software development projects, enabling them to revert their changes later or create different branches of a project for different people to work on. GitHub lets them ‘push’ copies of those repositories to its online service so that other developers can download (clone) and work on them too.

Here’s how Octopus Scanner works its dastardly magic. A developer downloads a project from a repository infected by the software and builds it, which means using the source code to create a working program. The build process activates the malware. It scans their computer to see if they have a NetBeans IDE installed. If they don’t, it takes no further action. But if they do, it infects the built files with a dropper that delivers its final payload: a remote access trojan (RAT) that gives the perpetrators control over the user’s machine. The malware also tries to block any new project builds to replace the infected one, thereby preserving itself on the infected system.

Octopus Scanner doesn’t just infect the built files though. Most of the variants that GitHub found in its scans also infect a project’s source code, meaning that any other newly-infected projects mirrored to remote repositories would spread the malware further on GitHub.

GitHub Security Labs scanned the site’s repositories and found 26 of them containing the malware. The team matched the malware that it found with software hashes on VirusTotal and found a low detection rate, enabling it to spread without easily being caught.
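The hash-matching approach GitHub used can be reproduced locally if you want to check your own build output against published indicators. A minimal Python sketch – note that the hash set below is a placeholder, not a real Octopus Scanner hash; substitute the SHA-256 values from GitHub Security Lab’s advisory:

```python
import hashlib
from pathlib import Path

# Placeholder value -- substitute the SHA-256 hashes published
# in GitHub Security Lab's Octopus Scanner advisory.
KNOWN_BAD_SHA256 = {
    "0" * 64,
}

def sha256_of(path: Path) -> str:
    # Hash the file in chunks so large build artifacts don't fill memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_build_output(build_dir: str):
    """Return build artifacts whose hash matches a known-bad sample."""
    suspects = []
    for path in Path(build_dir).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            suspects.append(path)
    return suspects
```

Hash matching only catches known samples, of course – which is exactly why the low VirusTotal detection rate let Octopus Scanner spread quietly for so long.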

GitHub regularly grapples with people using its repositories to deliberately distribute malware. Usually GitHub can just shut those repositories down and delete the accounts, but Octopus Scanner was trickier because the developers owning the repositories (known as maintainers) didn’t know they were infected. They were running legitimate projects, so blocking those accounts or repositories could affect businesses. GitHub couldn’t merely delete the infected files in a compromised repository either, because the files would be crucial to the legitimate software project.

GitHub said that it was surprised to see the malware targeting NetBeans, because this isn’t the most popular Java IDE. It concluded:

Since the primary-infected users are developers, the access that is gained is of high interest to attackers since developers generally have access to additional projects, production environments, database passwords, and other critical assets. There is a huge potential for escalation of access, which is a core attacker objective in most cases.

We may never know who was behind Octopus Scanner but according to GitHub’s research it has been in circulation since as far back as 2018. It’s a sneaky example of code that targets a specific group of people covertly and efficiently.

Sophos products identify the malware samples listed in the GitHub Security Lab’s article by the names Java/Agent-BERX and Java/Agent-BERZ. If you are a NetBeans programmer, you can search for those names in your logs for evidence of Octopus Scanner files in your own build environment.
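Searching your logs for those detection names is easy to script. Here’s a minimal Python sketch, assuming plain-text .log files in a directory of your choosing:

```python
import re
from pathlib import Path

# Detection names Sophos uses for the Octopus Scanner samples.
DETECTION_NAMES = ("Java/Agent-BERX", "Java/Agent-BERZ")

def grep_logs_for_detections(log_dir: str):
    """Yield (filename, line_number, line) for every log line that
    mentions one of the Octopus Scanner detection names."""
    pattern = re.compile("|".join(re.escape(name) for name in DETECTION_NAMES))
    for log in Path(log_dir).glob("*.log"):
        with open(log, errors="replace") as f:
            for line_no, line in enumerate(f, 1):
                if pattern.search(line):
                    yield log.name, line_no, line.rstrip()
```

A hit means Octopus Scanner files were detected in your build environment and your source tree and build outputs deserve a closer look.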



Clearview AI facial recognition sued again – this time by ACLU

The facial recognition company that everyone – or at least a large chunk of everyone – loves to hate, Clearview AI, is to get yet another day, and perhaps very much longer than that, in a Chicago courtroom.

The American Civil Liberties Union (ACLU), together with four community organisations based in the US state of Illinois, has brought a civil suit that states as its purpose to “put a stop to [Clearview’s] unlawful surreptitious capture and storage of millions of Illinoisans’ sensitive biometric identifiers”, and “to remedy an extraordinary and unprecedented violation of Illinois residents’ privacy rights”.

Clearview already faced a class-action lawsuit filed in January, also in Illinois, for collecting biometric identifiers without consent.

Illinois is home not only to Chicago, the third-biggest metropolis in the US, but also to a state law called BIPA, short for Biometric Information Privacy Act, which imposes the USA’s strictest legal controls over the collection and use of biometric data.

In case you missed it, Clearview AI pitches itself, on its corporate website, as offering “computer vision for a safer world”, and describes its services as follows:

Clearview AI is a new research tool used by law enforcement agencies to identify perpetrators and victims of crimes.

Clearview AI’s technology has helped law enforcement track down hundreds of at-large criminals, including pedophiles, terrorists and sex traffickers. It is also used to help exonerate the innocent and identify the victims of crimes including child sex abuse and financial fraud.

Using Clearview AI, law enforcement is able to catch the most dangerous criminals, solve the toughest cold cases and make communities safer, especially the most vulnerable among us.

In simple terms, Clearview trawls the internet for publicly available images of people, notably images that are already tagged in some way that identifies the people in the picture, and builds an ever-burgeoning index that can map faces back to names.

Loosely speaking, it tracks down pictures of you that are already available online, for example as snapshots on a social network, then analyses your face in those images to create what amounts to a faceprint.

It then combines that faceprint with existing data against your name to refine and improve its accuracy in matching untagged images against existing, tagged ones.
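To make the matching step concrete: a faceprint is essentially a numeric vector, and matching boils down to measuring how close two vectors are. Here’s a toy Python sketch using cosine similarity on made-up four-dimensional vectors – real systems use embeddings with hundreds of dimensions, and this is emphatically not Clearview’s actual algorithm:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a))
            * math.sqrt(sum(y * y for y in b)))
    return dot / norm

def best_match(query, database, threshold=0.9):
    """Return the name whose stored faceprint is most similar to the
    query faceprint, or None if nothing clears the threshold."""
    name = max(database, key=lambda n: cosine_similarity(query, database[n]))
    score = cosine_similarity(query, database[name])
    return name if score > threshold else None
```

The threshold is the crucial tuning knob: set it too low and you misidentify strangers as matches; set it too high and you miss genuine ones.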

For example, if you’re a police officer, you might put in a picture of a “person of interest” whom you snapped during a lawful surveillance operation, and Clearview would try to find out who it was and to point you in the direction of useful intelligence about them.

Interestingly, the company wasn’t created specifically to address law enforcement needs or desires, but rather to create a massively scalable facial recognition system and then find a market for it.

In a CNN interview with Clearview founder Hoan Ton-That published earlier this year, Ton-That talks about how the company…

…spent about two years really perfecting the technology and the accuracy and the raw facial recognition technology accuracy. […] We have to search out of billions and billions of photos and still provide an accurate match. [… T]he second part of it was, ‘What’s the best application of this technology?’ And we found that law enforcement was such a great use case.

Ton-That admitted, in that interview from March 2020, that the company did have customers in the private sector, but claimed that all of them were “trained investigators.”

But that stance has changed recently, due to the class-action lawsuit mentioned above, with the company apparently agreeing in May 2020 that it would not sell to private entities any more, and would not sell its services at all in Illinois.

Is it legal?

As you can probably imagine, Clearview’s attitude is that the data it is searching and acquiring is already public – it doesn’t use pictures that are uploaded for private access only – and so all it is doing is creating its own index of images that anyone who wanted to could find and peruse on their own.

Following that argument, you could choose to treat Clearview as a special case of a search engine, many of which provide their own reverse image search features, apparently without the same level of legal controversy.

On that basis, you might conclude that Clearview is being penalised simply for being technologically successful and “building a better mousetrap.”

But the ACLU and its fellow plaintiffs in this latest legal suit disagree, arguing that Clearview isn’t just collating public images but scraping them in vast quantities and then converting them into faceprints rather than merely storing them as images.

Faceprints, says the ACLU, are not only biometrics as far as the public is concerned, but also identified explicitly as biometric identifiers in the Illinois BIPA act.

Interestingly, BIPA excludes photographs themselves from its list of biometric identifiers – it’s the processing of the photographs to extract a “recogniser” that makes the data biometric:

In [the Biometric Information Privacy] Act:

“Biometric identifier” means a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry. Biometric identifiers do not include writing samples, written signatures, photographs, human biological samples used for valid scientific testing or screening, demographic data, tattoo descriptions, or physical descriptions such as height, weight, hair color, or eye color. […]

BIPA requires not only that anyone whose photos get “biometricised” be asked for written consent up front, but also that they have a right to know how their data will be used, and by whom, and when it will be deleted.

It’s hard to see how Clearview could ever practicably comply with the consent part of this law, if the court deems that the ACLU is right. (It also feels likely that more and more US states, and even the Federal Government, might enact similar legislation in the near future.)

It also seems unlikely that enough people would ever give their consent to make the service work usefully if the court rules that consent is always and explicitly necessary. (Would you consent? Let us know, and why, in the comments below.)

You’d also think that purging faceprints from the database – and, presumably, retraining the entire system every time an entry is deleted to ensure that the withdrawn faceprint no longer affects the search results – might end up taking more effort than adding new faceprints that were acquired with consent.

Yet bigger foes

What makes life even tougher for Clearview right now is that it also faces a group of opponents that are perhaps an even bigger foe to its business model than the Circuit Court of Cook County, Illinois.

Earlier this year, Twitter, Facebook (which owns Instagram) and Google (which owns YouTube) told Clearview that images on their sites weren’t there for scraping.

It seems that those three social media behemoths aren’t just objecting to the fact that their terms and conditions don’t allow third parties to scrape their data for commercial purposes.

Even if Clearview were to offer to pay a fair market price for the right to scrape images from Facebook, Twitter and Google in order to square away the contractual issue, it looks as though those companies would refuse anyway.

We’d imagine that images from those three companies alone account for a lot of the accuracy in Clearview’s service.

What to do?

There’s an elephant in the room in this story, of course.

The problem here is that even if Clearview loses this case and ends up not using scraped images at all, those images nevertheless remain publicly accessible, and are prone to scraping anyway by companies or groups that, unlike Clearview, will not seek to publicise their work.

Simply put, even if these lawsuits end up establishing that you have an implicit and irrevocable expectation of privacy when you upload pictures online, and even if there are legal precedents that successfully inhibit companies openly using your published images for facial recognition purposes…

…those legalisms won’t actually stop your photos getting scraped, in just the same way that laws criminalising the use of malware almost everywhere in the world haven’t put an end to malware attacks.

So we have to end with two pieces of advice that, admittedly, make today’s internet a bit less fun than you might like it to be:

  • If in doubt, don’t give it out. By all means publish photos of yourself, but be thoughtful and sparing about quite how much you give away about yourself and your lifestyle when you do. Assume they will get scraped whatever the law says, and assume someone will try to misuse that data if they can.
  • Don’t upload data about your friends without permission. It feels a bit boring, but it’s the right thing to do – ask everyone in the photo if they mind you uploading it, ideally before you even take it. Even if you’re legally in the right to upload the photo because you took it, respect others’ privacy as you hope they’ll respect yours.

Let’s aim for a truly opt-in online future, where nothing to do with privacy is taken for granted, and every picture that’s uploaded has the consent of those in it.



COVID-19 tests, PPE and antiviral drugs find a home on the dark web

Criminals have been quick to adapt to the global coronavirus pandemic. Sophos threat researchers have shown how cybercriminals have taken advantage of COVID-19 in myriad ways, and the FBI has warned us about criminals profiteering with advance fee and business email compromise scams.

But what’s happening on the dark web, the scene of so much illegal trade?

Empire Market is one of the most popular places to buy illegal goods on the dark web, transacting a little over $1,000,000 a week. It is also one of the few prominent dark web markets that hasn’t banned the sale of pandemic-related goods.

I went there to see what impact the coronavirus is having.

Antiviral drugs

Empire Market has over 52 thousand listings across 11 categories, but the Drugs & Chemicals category dwarfs the others by an order of magnitude.

While recreational and common prescription drugs are a mainstay on dark web markets, there are a number of medicinal drugs that have been touted as prospective treatments for COVID-19. I wanted to see if drugs like hydroxychloroquine, remdesivir, favipiravir, lopinavir and ritonavir were on sale.

In total, I found 49 listings for chloroquine or hydroxychloroquine on Empire Market.

Hydroxychloroquine for sale on the dark web

(Three vendors have subsequently been banned from the market, and one vendor’s listing is currently unavailable.)

The vendor who posted the most ads for hydroxychloroquine on Empire, a total of 33, claims to have an unlimited supply and will sell you a whopping 9,000 pills for $1,194.

There was one listing for favipiravir, by a vendor who also offered hand sanitizer and an ‘asbestos protection kit’ re-purposed to protect against COVID-19.

The same vendor also had a listing for lopinavir and ritonavir, a combination antiviral commonly used for the treatment and prevention of HIV/AIDS, now being researched as a possible treatment for COVID-19.

I was only able to find two additional listings for the drugs lopinavir and ritonavir and only one verified sale.

Finally, there was one vendor who could allegedly provide this COVID-19-fighting mega-pack!

COVID-19 'mega pack' for sale on the dark web

Most of the drugs on offer appeared to be legitimate products manufactured by genuine pharmaceutical companies, but some were clearly scams.

Recreational drugs

On 30 May 2019, there were 24,569 listings in Empire Market’s Drugs & Chemicals category. A year later there are over 34,000. This amounts to a year-over-year increase of roughly 42%. Since markets make their money by taking a cut of every transaction, and for Empire this cut is 4%, that’s some healthy revenue growth!

With growth like that, the illicit drugs trade on the most popular dark web market doesn’t seem to have suffered during the pandemic.

What is new, however, is pandemic-themed sales.

Empire Market currently has many listings that feature either Coronavirus or COVID-19 as a reason for discounts in what looks like a once-in-a-lifetime Black Friday sale.

Coronavirus-themed sales on Empire Market

What’s unclear is whether these discounts were being offered because the vendors were afraid that business might dry up, or because they anticipated a dramatic surge in orders as a result of movement restrictions.

Diagnostics

Next, I started looking into other items that are only on the market as a direct result of the pandemic, starting with at-home COVID-19 rapid test kits.

COVID-19 testing kits for sale on the dark web

I found several examples of COVID-19 rapid test kits being offered on Empire and was able to identify that six of the nine different kits found are being legitimately manufactured (of course what – if anything – is actually delivered to you may be different).

One of the pictures provided an opportunity to do a bit more digging – the packaging in the photo featured a logo and a domain name.

The site it linked to seems to have existed under at least three different domain names since 9 March 2020. When I first visited the site it was also selling testing kits, and the same products were also available on a sister site with a similar name. All four domains associated with this seller were registered in March and three of them are a variation on the phrase “corona safe”.

I contacted the site for details about the testing kits, such as proof of certification and origin but did not receive a response.

Of course there’s nothing wrong with registering domains with the words corona or coronavirus in them, but it is a red flag. There has been an explosion in coronavirus-themed domain name registrations since the start of the pandemic, many of which are being used for malicious purposes.
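That sort of red-flagging is easy to automate. Here’s a hedged Python sketch that flags pandemic-themed domains registered since the start of the outbreak – the term list and cutoff date are illustrative choices, and a hit is a prompt for closer scrutiny, not proof of malice:

```python
import re
from datetime import date

# Illustrative term list and cutoff date -- tune both to taste.
SUSPICIOUS_TERMS = re.compile(r"corona|covid|pandemic|vaccine", re.IGNORECASE)
PANDEMIC_START = date(2020, 1, 1)

def flag_domains(registrations):
    """registrations: iterable of (domain, registration_date) pairs.
    Returns the domains that are both pandemic-themed AND newly
    registered -- a red flag worth investigating, not a verdict."""
    return [domain for domain, registered in registrations
            if SUSPICIOUS_TERMS.search(domain)
            and registered >= PANDEMIC_START]
```

A long-established coronavirus-themed domain (a virology lab, say) is far less suspicious than one registered in March 2020, which is why the registration date matters as much as the name.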

The sites have since stopped offering at-home testing kits for sale, but their Facebook, Instagram and Twitter pages are still promoting them through a prize draw.

PPE

There is a smattering of listings for PPE (personal protective equipment) on Empire Market, most of which are for face masks. The masks varied in protectiveness from simple surgical masks to N99 masks. Prices ranged from tens of dollars for one mask to thousands for boxes of masks.

PPE for sale on the dark web

I also found two offers for hand sanitizer and one for a commercial disinfectant, but couldn’t find any gloves or gowns, except for the asbestos protection kit I mentioned above.

Whilst some of these may be scams, there is no doubt that many of them are real. Like many of the prescription drugs on offer, they will likely have been stolen from warehouses or diverted during shipping. And of course, any mask that’s for sale on Empire Market isn’t available for resource-strapped healthcare workers, and is driving up the price of what’s left of the limited supply of PPE.

And of course, if these products are bogus, or sub-standard, the protection they offer may not be adequate and may put the wearer at increased risk.

Fraud

The pandemic-themed sales don’t only apply to physical goods like drugs, testing kits or PPE. Vendors of digital goods have also been getting on the coronavirus bandwagon. These vendors are selling access to guides on how to commit fraud.

Pandemic-themed digital goods on the dark web

Some of the offers are for products that will help you profit directly from the pandemic, rather than because of it.

These guides have always been available, but now that everyone is spending more time at home, perhaps the vendors are hoping to cash in on the opportunity to enlist more remote workers. We cannot overlook the fact that many people have lost their primary income due to the pandemic, and some may turn to running these scams simply as a means to pay their mounting bills.

Conclusion

The impact of COVID-19 on Empire Market has been limited, in terms of the number of products being offered. However, it is one of the few prominent markets that has not banned the sale of pandemic-related goods outright and is apparently happy to profit from the coronavirus. As other markets are setting restrictions on the sale of many of the products featured above, for Empire Market, it’s business as usual.


Windows 10 adds new security and privacy features in May update

Windows 10 release 2004 is out, with a slew of new features, including several for security and privacy. Here’s what you get when you download it, as outlined in the company’s blog post.

Microsoft has updated its System Guard Firmware Measurement. This feature, launched in Windows 10 1903, helps guarantee the integrity of a system when it starts by checking system firmware, and it’s part of a broader System Guard protection feature.

This system now checks more things when launching Windows (specifically IO ports and memory-mapped IO, which is a computing feature that uses the same address register to access both main memory and peripheral controllers). It provides more evidence that the system hasn’t been tampered with during bootup. You’ll need newer hardware to use this latest enhancement though, warns Microsoft, adding that it will be along shortly.

Also on the menu is Chromium-based Edge support for Application Guard, which is a Defender feature that allowlists trusted websites and puts everything else in a container using the Hyper-V hypervisor technology built into Windows 10. That stops malicious sites from snooping on your enterprise data. Microsoft switched its Edge browser to the open-source Chromium engine in April 2019, so this is a welcome addition.

Application Guard isn’t the only tool that Microsoft uses to shield the rest of the system from your activities. In Windows 10 1903, it launched the Windows Sandbox, which is a lightweight desktop environment that isolates anything you run in it and wipes all its files when you close it down. Think of it like a temporary scratchpad for running Windows programs, offering a good way to test applications or to run them once.

In Windows 10 2004, the Sandbox now supports configuration files, enabling you to customise your virtual environment. You can use a microphone with it now, along with full screen mode. You can also set apps to restart automatically when you sign in.
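Sandbox configuration files are plain XML saved with a .wsb extension. A minimal illustrative example – the folder path is a placeholder, and the element names follow Microsoft’s documented schema – might look like this:

```
<Configuration>
  <!-- Microphone support, new in Windows 10 2004 -->
  <AudioInput>Enable</AudioInput>
  <MappedFolders>
    <MappedFolder>
      <!-- Share a host folder into the sandbox, read-only -->
      <HostFolder>C:\Users\Me\Downloads</HostFolder>
      <ReadOnly>true</ReadOnly>
    </MappedFolder>
  </MappedFolders>
  <LogonCommand>
    <!-- Command to run automatically when the sandbox starts -->
    <Command>explorer.exe C:\Users\WDAGUtilityAccount\Desktop</Command>
  </LogonCommand>
</Configuration>
```

Double-clicking the .wsb file launches a sandbox with those settings applied, instead of the default blank environment.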

The latest Windows release also introduces broader support for FIDO2 security keys. Microsoft won its FIDO2 certification a year ago, folding it into Windows Hello. It now supports devices that are joined to the Azure Active Directory, which is the identity management and access control system that fronts Office 365 and everything else in the Azure cloud.

The company also added easier settings for passwordless access to Microsoft accounts directly in the OS. Now, you can access Sign-in options in the Accounts section of the Settings area and set ‘Require Windows Hello sign-in for Microsoft accounts’ to ‘On’. You can also set up PIN-based Windows Hello access in Safe mode (which boots into Windows with many features and hardware devices turned off for troubleshooting).

Being signed into a Microsoft account is now vital for users who rely on Microsoft’s Cortana virtual assistant. As of this Windows 10 release you must be signed into a Microsoft account to use Cortana.

Microsoft says that it is shifting the assistant to enterprise productivity and is abandoning music, connected home, and third-party skills in Cortana as of this release. The new incarnation of Cortana is called Cortana enterprise services, and it falls under the Online Services Terms (OST) that the company updated in November 2019 after pressure from the Dutch privacy regulator.

Microsoft explains that it is redefining itself as the data processor for customer data collected by Cortana enterprise services, as opposed to the data controller. This is a GDPR distinction. Under that EU regulation, a data processor has less responsibility for user data. It only processes the data that it receives from the data controller (in this case the Cortana user), which calls the shots on what is collected, how it is changed, and where and how it is used.

