Category Archives: News

Congress wants to know who is using spyware against the US

On 1 May 2018, the richest man in the world was having a seemingly friendly WhatsApp conversation with Saudi Arabia’s Crown Prince Mohammed bin Salman when an unsolicited file was sent from the crown prince’s phone.

Within hours, a trove of data was exfiltrated from Amazon CEO Jeff Bezos’s phone: a data theft likely triggered by NSO Group’s notorious Pegasus mobile spyware, according to a United Nations report released earlier this year.

That one piece of commercial spyware alone has been linked to at least one assassination and multiple human rights abuses, including allegedly playing a part in the 2018 murder of Washington Post journalist Jamal Khashoggi; a June 2018 spearphishing attack on an Amnesty International staff member; and use by the Mexican government against prominent human rights lawyers, journalists and anti-corruption activists.

Finally, after years of states’ use of this kind of powerful spyware against their rivals and political enemies, the US Congress is planning to order its Director of National Intelligence (DNI) to keep track of the threat this malware poses to the nation, which foreign governments are using it, and for what.

John Scott-Railton, a senior researcher for Citizen Lab, last week spotted a powerful bit of legislation tucked into a draft of the intelligence funding bill for 2021. The Senate bill – which lays out funding for the government’s intelligence operations for next year – would require the DNI to submit a report to Congress on the threat posed by commercial spyware. Scott-Railton called it a “clear signal that [the] Senate is taking [the] National Security threat of commercial spyware very seriously.”

You can read the relevant language in Section 503 of the draft version of the Intelligence Authorization Act for Fiscal Year 2021.

(IMAGE: Section 503. SOURCE: Intelligence Authorization Act for Fiscal Year 2021)

Researchers at the University of Toronto’s Citizen Lab cybersecurity research laboratory are intimately familiar with Pegasus and other spyware. They’ve been tracking Pegasus for years. In fact, Citizen Lab first revealed Pegasus in August 2016. They also consulted on a New York Times report that found that “Mexico’s most prominent human rights lawyers, journalists and anti-corruption activists have been targeted by advanced spyware sold to the Mexican government” by NSO Group, an Israeli company that claims it made “an explicit agreement that it be used only to battle terrorists or the drug cartels and criminal groups that have long kidnapped and killed Mexicans”.

Scott-Railton said that for years, every major US tech company has grappled with the threats posed by commercial spyware. The same goes for the nation’s intelligence community, the State Department and elected officials. Now, in a push led by Senator Ron Wyden, “the issue is going primetime for Congress,” Scott-Railton said.

Section 503 would require inquiry into, and reporting on, the companies that sell commercial spyware, including whether it’s coming from US companies. It also seeks details on which spyware buyers – be they foreign governments or other entities – pose the biggest threat to the US and to government employees based at home or overseas.

(IMAGE: Who’s making it and who’s using it. SOURCE: Intelligence Authorization Act for Fiscal Year 2021)

Section 503 requires the government to work with technology companies and telecoms to figure out how to beef up the security of the consumer software and hardware used in the US: technology that’s targeted by intrusion and surveillance software. It suggests actively blocking threat actors by using multiple tools: export controls, diplomatic pressure and trade agreements.

Scott-Railton provided this TLDR translation:

Commercial spyware has always been a NATSEC threat for the US. This language helps gov move towards action.

It’s “very bad news for habitual bad actors like NSO Group & quieter peers around the world,” he said.

Maybe so, but those “habitual bad actors” are habitually making an enormous amount of money selling this malware. Don’t expect them to give up without a fight, Scott-Railton said:

That sound you hear? That’s shady spyware firms trying to figure out how much more $$ to throw at lobbying, lawyers & influence ops to mitigate the damage.

Earlier this month, the current draft of the funding bill sailed through the Senate Select Committee on Intelligence with a 14-1 vote. It will be subject to a Senate vote later this summer.

Microsoft Azure users leave front door open for cryptomining crooks

Remember when, as a server operator, all you had to worry about was people scanning for open ports and then stealing secrets via telnet shells? Those were the days, eh?

Things got a lot more complicated when the cloud got popular. Now, hackers are gaining access to cloud-based systems via the web, and they’re using them to mine for cryptocurrency. Microsoft just found a campaign that exploits Kubernetes to install cryptomining software in its Azure cloud. That could generate some mad coin for attackers – and cost legitimate cloud users dear.

Software containers are small collections of software that run in isolation from each other, making it easier for lots of them to coexist on the same system. Kubernetes is an open source project that lets administrators manage software containers en masse, and it runs in cloud infrastructures like Microsoft’s Azure. Kubeflow is an open source framework for running machine learning workloads – notably Tensorflow jobs – on top of Kubernetes, and Tensorflow is a system originally developed by Google for training AI systems.

AI training jobs need lots of computing power, so they generally use graphics processing units (GPUs), which can chew through floating point calculations very quickly. That’s also great for mining cryptocurrencies that use proof of work algorithms, which likewise rely on lots of computing power. While GPUs aren’t appropriate for mining all proof of work-based cryptocurrencies, they’re great for some, like Monero and (for the time being, until a long-planned algorithmic changeover kicks in) Ethereum.
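
If you haven’t met proof of work before, here’s a toy sketch in C of the loop that soaks up all that computing power. Everything in it is made up for illustration – FNV-1a stands in for a real cryptographic hash (real coins use much heavier, often memory-hard functions), and the block data and difficulty target are arbitrary – but the structure is the essence of mining: keep changing a nonce until the hash of your data falls below a target.

    /* Toy proof-of-work sketch. FNV-1a is a stand-in for a real
     * cryptographic hash; the "burn compute until you get lucky"
     * structure is the point. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    static uint64_t fnv1a(const void *buf, size_t len) {
        const uint8_t *p = buf;
        uint64_t h = 0xcbf29ce484222325ULL;      /* FNV offset basis */
        for (size_t i = 0; i < len; i++) {
            h ^= p[i];
            h *= 0x100000001b3ULL;               /* FNV prime */
        }
        return h;
    }

    int main(void) {
        const char *block = "example block data";
        uint64_t target = UINT64_MAX >> 20;      /* smaller target = harder puzzle */
        uint8_t work[64];
        size_t len = strlen(block);
        memcpy(work, block, len);

        for (uint64_t nonce = 0; ; nonce++) {
            memcpy(work + len, &nonce, sizeof nonce);
            if (fnv1a(work, len + sizeof nonce) < target) {
                printf("found nonce %llu\n", (unsigned long long)nonce);
                break;                           /* "block" solved */
            }
        }
        return 0;
    }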

The Azure Security Center found a malicious container running as part of a Kubeflow implementation. The container was running a cryptominer to use the same computing power that Kubeflow was using to train AI. Sneaky. So how did it get there?

As is often the case, user misconfiguration was the culprit. Kubeflow relies on something called Istio, a framework that connects container-based software services, and uses it to expose an administrative dashboard through a component called Istio-ingressgateway. For security, that service is only accessible internally – and this is key – because the only way to reach it is via port-forwarding over the Kubernetes API.

That should make the management interface for Kubeflow secure, but some admins apparently modified Istio to make Istio-ingressgateway directly accessible from the public internet. That’s convenient, but not a good idea from a security perspective, because it exposes the management interface to anyone online. From there, attackers could manipulate the system to install their malicious container on the Kubernetes cluster.

This isn’t the first time that people have hacked Kubernetes or used it to mine for cryptocurrency. Someone pwned Tesla’s Kubernetes deployment on Amazon Web Services in 2018, exploiting an administrative console that wasn’t password protected and then installing a miner on the system.

More recently in April this year, Microsoft identified a large-scale attack in which the attacker installed tens of malicious pods (collections of containers) on tens of clusters (groups of machines running containers).

Earlier this month, Sophos also documented a cryptomining campaign called Kingminer that attacked servers using techniques including brute-forcing RDP (Remote Desktop Protocol), the mechanism used to access Windows machines remotely.

Intel patches chip flaw that could leak your cryptographic secrets

This week, Intel patched a CPU security bug that hasn’t attracted a funky name, even though the bug itself is admittedly pretty funky.

Known as CVE-2020-0543 for short, or Special Register Buffer Data Sampling in its full title, it serves as one more reminder that as we expect processor makers to produce ever-faster chips that can churn through ever more code and data in ever less time…

…we sometimes pay a cybersecurity price, at least in theoretical terms.

If you’re a regular Naked Security reader, you’re probably familiar with the term speculative execution, which refers to the fact that modern CPUs often race ahead of themselves by performing internal calculations, or partial calculations, that might nevertheless turn out to be redundant.

The idea isn’t as weird as it sounds, because modern chips typically break down operations that look to the programmer like one machine code instruction into numerous subinstructions, and they can work on many of these so-called microarchitectural operations on multiple CPU cores at the same time.

If, for example, your program is reading through an array of data to perform a complex calculation based on all the values in it, the processor needs to make sure that you don’t read past the end of your memory buffer, because that could allow someone else’s private data to leak into your computation.

In theory, the CPU should freeze your program every time you peek at the next byte in the array, perform a security check that you are authorised to see it, and only then allow your program to proceed.
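
In code, the risky pattern looks something like this minimal C sketch, modelled on published Spectre-style proofs of concept. All the names and sizes here are made up for illustration, and the leak only matters on an unpatched CPU that speculates past the check:

    #include <stdint.h>
    #include <stddef.h>

    uint8_t array[16];              /* the buffer the program may legally read */
    size_t  array_size = 16;
    uint8_t probe[256 * 4096];      /* one cache line per possible byte value */

    uint8_t victim(size_t index) {
        if (index < array_size) {            /* the security check */
            uint8_t value = array[index];    /* may run speculatively anyway... */
            return probe[value * 4096];      /* ...touching a cache line whose
                                                address depends on `value` */
        }
        return 0;
    }

    int main(void) {
        return victim(0);   /* an in-bounds call; the danger is the speculative
                               path the CPU may take for out-of-range indexes */
    }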

But every time there’s a delay in finishing the security check, all the microarchitectural calculation units that your program would otherwise have been using to keep the computation flying along would be sitting idle – even though the outcome of their calculations would not be visible outside the chip core.

Speculative execution says, amongst other things, “Let’s allow internal calculations to carry on ahead of the security checks, on the grounds that if the checks ultimately pass, we’re ahead in the race and can release the final output quickly.”

The theory is that if the checks fail, the chip can just discard the internal data that it now knows is tainted by insecurity, so there’s a possible performance boost without a security risk, given that the security checks will ultimately prevent secret data from being disclosed anyway.

The vast majority of code that churns through arrays doesn’t read off the end of its allotted memory, so the typical performance boost is huge, and there doesn’t seem to be a downside.

Except for the inconvenient fact that the tainted data sometimes leaves behind ghostly echoes of its presence that are detectable outside the chip, even though the data itself was never officially emitted as the output of a machine code instruction.

Notably, memory addresses that have been accessed recently typically end up cached inside the chip, to speed up access in case they’re needed again soon – a trick that improves performance a lot. As a side effect, the speed with which memory locations can be accessed generally gives away information about how recently they were peeked at – and thus what memory address values were used – even if that “peeking” was speculative and was retrospectively cancelled internally for security reasons.
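
Here’s a rough sketch of how such a cache-timing measurement can be done, using x86 intrinsics available in GCC and Clang. The cycle counts are machine-dependent and the hit/miss threshold needs calibrating in practice – this is illustrative, not an attack:

    /* Flush+reload-style timing probe (x86, GCC/Clang). After flushing a
     * cache line and letting some other code run, a fast reload suggests
     * the other code touched that line; a slow reload suggests it didn't. */
    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>

    static uint8_t target[4096];

    static uint64_t time_read(volatile uint8_t *addr) {
        unsigned int aux;
        uint64_t t0 = __rdtscp(&aux);   /* serialising timestamp-counter read */
        (void)*addr;                    /* the memory access being timed */
        uint64_t t1 = __rdtscp(&aux);
        return t1 - t0;
    }

    int main(void) {
        volatile uint8_t *p = target;

        _mm_clflush((const void *)p);   /* evict the line: next read is slow */
        uint64_t cold = time_read(p);
        uint64_t warm = time_read(p);   /* line is now cached: read is fast */

        printf("cold (uncached) read: %llu cycles\n", (unsigned long long)cold);
        printf("warm (cached) read:   %llu cycles\n", (unsigned long long)warm);
        return 0;
    }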

Discernible traces

Unfortunately, any security shortcuts taken inside the core of the chip may inadvertently leave discernible traces that could allow untrusted software to make later inferences about some of that data.

Even if all an attacker can do is guess, say, that the first and last bits of your secret decryption key must be zero, or that the very last cell in your spreadsheet has a value that is larger than 32,767 but smaller than 1,048,576, there’s still a serious security risk there.

That risk is often compounded in cases like this because attackers may be able to refine those guesses by making millions or billions of inferences and greatly improving their reckoning over time.

Imagine, for instance, that your decryption key is rotated leftwards by one bit every so often, and that the attacker gets to “re-infer” the value of its first and last bits every time that rotation happens.

Given enough time and a sufficiently accurate series of inferences, the attackers may gradually figure out more and more about your secret key until they are well-placed enough to guess it successfully.

(If you recover 16 bits of a decryption key that was supposed to withstand 10 years of concerted cracking, you can probably break it 2^16 or 65,536 times faster than before, which means you now only need a few hours.)
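
As a rough sanity check on that arithmetic, assuming the attack really does speed up by the full factor of 2^16:

    \[
    \frac{10\ \text{years}}{2^{16}} \approx \frac{87{,}600\ \text{hours}}{65{,}536} \approx 1.3\ \text{hours}
    \]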

What about CVE-2020-0543?

In the case of the Special Register Buffer Data Sampling bug, or CVE-2020-0543, the internal data that might accidentally leak out – or, more precisely, be coaxed out – of the processor chip includes recent output values from the following machine code instructions:

  • RDRAND. This instruction code is short for ReaD secure hardware RANDom number. Ironically, RDRAND was designed to produce top-quality hardware random numbers, based on the physics of electronic thermal noise, which is generally regarded as impossible to model realistically. This makes it a more trusted source of random data than software-derived sources such as keystroke and mouse timing (which doesn’t exist on servers), network latency (which depends on software that itself follows pre-programmed patterns), and so on. If another program running on the same CPU as yours can figure out or guess some of the random numbers you’ve knitted into your recent cryptographic calculations, they might get a handy head start at cracking your keys. (There’s a minimal code sketch of RDRAND and RDSEED after this list.)
  • RDSEED. This is short for ReaD random number SEED, an instruction that operates more slowly and relies on more thermal noise than RDRAND. It’s designed for cases where you want to use a software random number generator but would like to initialise it with what’s known as a “seed” to kickstart its randomness or entropy. An attacker who knows your software random generator seed could reconstruct the entire sequence, which might enable or at least greatly assist future cryptographic cracking.
  • EGETKEY. This stands for Enclave GET encryption KEY. Enclave means it’s part of Intel’s much-vaunted SGX set of instructions, which are supposed to provide a sealed-off block of memory that even the operating system kernel can’t look inside. This means an SGX enclave acts as a sort of tamper-proof security module, like the specialised chips used in smart cards or mobile phones for storing lock codes and other secrets. In theory, only software that is already running in the enclave can read data stored inside it, and even that software can’t write the data outside the enclave, so encryption keys generated inside the enclave can’t escape – neither by accident nor by design. An attacker who could make inferences about random cryptographic keys inside an enclave of yours could end up with access to secret data that even you aren’t supposed to be able to read out!
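
For the curious, here’s roughly what using the first two of those instructions looks like from C, via the compiler intrinsics that wrap them (GCC or Clang on a 64-bit x86 CPU that supports them, compiled with -mrdrnd -mrdseed):

    /* Minimal sketch of calling RDRAND and RDSEED via intrinsics. Both
     * return 0 if no random value was available, so real callers are
     * expected to retry rather than give up. */
    #include <stdio.h>
    #include <immintrin.h>

    int main(void) {
        unsigned long long r;

        if (_rdrand64_step(&r))                  /* hardware random number */
            printf("RDRAND: %016llx\n", r);

        if (_rdseed64_step(&r))                  /* slower, seed-grade entropy */
            printf("RDSEED: %016llx\n", r);

        return 0;
    }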

How bad is this?

The good news is that guessing someone else’s most recent RDRAND values doesn’t automatically and instantly give you the power to decrypt all their files and network traffic.

The bad news, as Intel itself admits:

RDRAND and RDSEED may be used in methods that rely on the data returned being kept secret from potentially malicious actors on other physical cores. For example, random numbers from RDRAND or RDSEED may be used as the basis for a session encryption key. If these values are leaked, an adversary potentially may be able to derive the encryption key.

And researchers at the Vrije Universiteit Amsterdam and ETH Zurich have published a paper called CROSSTALK: Speculative data leaks across cores are real (they did come up with a funky name!) which explains how the CVE-2020-0543 flaw could be exploited, concluding that:

The cryptographically-secure RDRAND and RDSEED instructions turn out to leak their output to attackers […] on many Intel CPUs, and we have demonstrated that this is a realistic attack. We have also seen that […] it is almost trivial to apply these attacks to break code running in Intel’s secure SGX enclaves.

What to do?

Intel has released a series of microcode updates for affected chips that dial back speed in favour of security to mitigate these “CROSSTALK” attacks.

Simply put, secret data generated inside the chip as part of the random generator circuitry will be aggressively purged after use so it doesn’t leave behind those “ghostly echoes” that might be detected thanks to speculative execution.

Also, access to the random data generated for RDRAND and RDSEED (and consumed by EGETKEY) will be more strictly regulated so that the random numbers generated for multiple programs running in parallel will only be made available in the order that those programs made their requests.

That may reduce performance slightly – every program wanting RDRAND numbers will have to wait its turn instead of going in parallel – but it ensures that the internal “secret data” used to generate process X’s random numbers will have been purged from the chip before process X+1 gets a look in.

Where to get your microcode updates depends on your computer and your operating system.

Linux distros will typically bundle and distribute the fixes as part of a kernel update (mine turned up yesterday, for example); for other operating systems you may need to download a BIOS update from the vendor of your computer or its motherboard – so please consult your computer maker for advice.

(Intel says that, “in general, Intel Core-family […] and Intel Xeon E3 processors […] may be affected”, and has published a list of at-risk processor chips if you happen to know which chip is in your computer.)


Facebook paid for a 0-day to help FBI unmask child predator

Facebook paid a cybersecurity firm six figures to develop a zero-day in a Tor-reliant operating system in order to unmask a man who spent years sextorting hundreds of young girls, threatening to shoot or blow up their schools if they didn’t comply, Vice’s Motherboard has learned.

We already knew from court documents that the FBI tricked the man into opening a booby-trapped video – purportedly of child sexual abuse, though it held no such thing – that exposed his IP address. What we didn’t know until now is that the exploit was custom-crafted at Facebook’s behest and at its expense.

Facebook had skin in this game. The predator, a Californian by the name of Buster Hernandez, used the platform and its messaging apps as his hunting grounds for years before he was caught.

Hernandez was such a persistent threat, and he was so good at hiding his real identity, that Facebook took the “unprecedented” step of working with a third-party firm to develop an exploit, Vice reports. According to the publication’s sources within Facebook, it was “the first and only time” that Facebook has helped law enforcement hack a target.

It’s an ethically thorny discovery. On one hand, we’ve got the deeply troubling implications of Facebook paying for a company to drill a hole into a privacy-protecting technology so as to strip away the anonymity of a user – this, coming from a platform that’s promised to slather end-to-end encryption across all of its messaging apps.

On the other hand, it’s easy to cheer for the results, given the nature of the target.

Arrested in 2017 at the age of 26, Hernandez went by the name Brian Kil (among 14 other aliases) online. Between 2012 and 2017, he terrorized children, threatening to murder, rape, kidnap, or otherwise brutalize them if they didn’t send nude images, encouraging some of them to kill themselves and threatening mass shootings at their schools or a mall bombing. In February 2020, he pleaded guilty to 41 counts of terrorizing girls aged 12 to 15.

Although Facebook reportedly hired an unnamed third party to come up with a zero-day that would lead to the discovery of Hernandez’s IP address and eventual arrest, it didn’t actually hand that exploit over to the FBI. It’s not even clear that the FBI knew that Facebook was behind the development of the zero-day.

The FBI has, of course, done the same thing itself. One case was the Playpen takedown, when the bureau infamously took over a worldwide child exploitation enterprise and ran it for 13 days, planting a so-called network investigative technique (NIT) – what’s also known as police malware – onto the computers of those who visited.

In the hunt for Hernandez, a zero-day exploit was developed to target a privacy-focused operating system called Tails. Also known as the Amnesic Incognito Live System, Tails routes all incoming and outgoing connections through the Tor anonymity network, masking users’ real IP addresses and, hence, their identities and locations. The Tails zero-day was used to strip away the anonymizing layers of Tor to get at Hernandez’s real IP address, which ultimately led to his arrest.

Facebook: We had no choice

A Facebook spokesperson told Motherboard that the publication got it right: the platform had indeed worked with security experts to help the FBI hack Hernandez. The spokesperson provided this statement:

The only acceptable outcome to us was Buster Hernandez facing accountability for his abuse of young girls. This was a unique case, because he was using such sophisticated methods to hide his identity, that we took the extraordinary steps of working with security experts to help the FBI bring him to justice.

A former Facebook employee with knowledge of the case said that this was an extremely targeted hit that didn’t affect other users’ privacy:

In this case, there was absolutely no risk to users other than this one person for which there was much more than probable cause. We never would have made a change that affected anybody else, like an encryption backdoor.

Since there were no other privacy risks, and the human impact was so large, I don’t feel like we had another choice.

The human impact was not only large: it was vicious and unrelenting. Hernandez lied to victims about having explicit images of them and demanded more, lest he send photos to their friends and family. He did, in fact, publish some victims’ intimate imagery. For one victim – identified as Victim 1 in the criminal complaint – he doctored videos she’d taken of herself dancing. She thought she’d deleted them, Hernandez said in one of his many braggart’s posts. He got the videos anyway, he said, having hacked her cloud account to get the imagery, which he edited to appear explicit.

He lied about having weapons, he lied about plans to shoot up a high school, he lied about a bomb at a mall. His rape threats were long and graphic, describing how he’d slit girls’ throats or kill their families. Sometimes, he encouraged his victims to kill themselves. If they did, he’d post their nude photos on memorial pages, he said.

In December 2015, multiple high schools and shops in the towns of Plainfield and Danville, Indiana, were shut down due to Kil’s terrorist threats. The following month, the community, along with police, held a forum to discuss the threats.

After the forum, Kil posted notes about who attended, what they wore, and what was said, as reported to him by a victim whom he’d coerced into attending and reporting back to him.

(IMAGE: Criminal complaint)

What he wrote in 2015, after telling victims he “wants to be the worst cyberterrorist who ever lived”:

I want to leave a trail of death and fire [at your high school]. I will simply WALK RIGHT IN UNDETECTED TOMORROW … I will slaughter your entire class and save you for last. I will lean over you as you scream and cry and beg for mercy before I slit your f**king throat from ear to ear.

Not all Facebook employees agreed

Several employees, both current and former, told Vice that the decision to hack Brian Kil was more controversial than the company’s statement would indicate. You can see why they’d have qualms: the same operating system that hid Hernandez for years as he contacted and harassed hundreds of victims is also widely used by those whose work – or whose very lives – depend on the privacy and anonymity of Tor, including journalists, dissidents, activists and survivors of domestic abuse.

A spokesperson for Tails told Vice that the operating system is used daily by more than 30,000 such people, all of whom seek the shelter of Tor to avoid persecution, surveillance and/or the chance of falling back into the hands of their abusers. The flaw that was exploited in order to catch Hernandez – a bug in Tails’ video player that revealed the real IP address of the person viewing a video – was never disclosed to Tails. If the flaw hadn’t eventually been removed in a patch, it could have been used against innocent people.

Besides protecting monsters like Hernandez, anonymizing technologies such as Tails, Tor and encryption protect the privacy of others who deserve products that don’t have holes drilled into them. That’s why we and other encryption supporters have always pledged our support for #NoBackdoors.

But what does a company like Facebook do when it feels it has no other choice but to penetrate Tor in order to stop a menace to society?

Coming to the aid of the FBI

Both the FBI and Facebook were trying to get Hernandez. He was considered public enemy No. 1 at Facebook, which took extraordinary measures to track what employees considered to be the worst criminal to ever use the platform.

The company dedicated one employee to tracking Hernandez for two years. Hernandez’s reign of terror also inspired the platform to develop a new machine learning system: one that could detect users who create new accounts to contact kids in order to exploit them. According to former employees, that system detected Hernandez and tied him to a number of pseudonymous accounts and their victims.

The FBI tried to hack Hernandez. But it didn’t go after him by exploiting Tails, and its attempts failed. Hernandez detected the attempt and ridiculed the bureau over it. It was at that point that Facebook decided to help.

Facebook engineers and security researchers felt they had no choice. Others aren’t so sure. Vice referred to a statement from Senator Ron Wyden that questioned the lack of transparency in how law enforcement handles vulnerabilities. From that statement:

Did the FBI re-use [the zero-day] in other cases? Did it share the vulnerability with other agencies? Did it submit the zero-day for review by the inter-agency Vulnerabilities Equities Process? It’s clear there needs to be much more sunlight on how the government uses hacking tools, and whether the rules in place provide adequate guardrails.

Some Facebook employees agree: if this is a precedent, it’s not a good one. Vice quoted one such employee:

The precedent of a private company buying a zero-day to go after a criminal. That entire concept is f**ked up.

It is f**ked up. Ethically, it’s about as problematic as you can get. But, understandably, what Facebook pulled off is also a great source of pride to the engineers who worked on getting this guy, such as this former employee:

I think they totally did the right thing here. They put a lot of effort into child safety. It’s hard to think of another company spending the amount of time and resources to try to limit damage caused by one evil guy.

Twitter wants to know if you meant to share that article

Just about to share an article with a sensational headline? Stop! Did you at least read it first?

Sharing clickbait containing spurious content without bothering to check it over is a perennial problem for attention-challenged social media users (hey! squirrel!) and now Twitter wants to help stop it. The company has launched a test feature that reminds you to read articles before retweeting them.

Reportedly launched on just a few US Android phones for now, the service will warn users if they try to retweet articles that they haven’t opened, the company announced in a tweet from its support channel.

So if you haven’t clicked on a link in Twitter and you try to retweet it, the service will cough politely and ask if you’re sure. You can go ahead and retweet it anyway, if you so choose, meaning that devoted readers of Tin Foil Hat Times, Conspiracy Monthly, or the National Shouty Review can still happily spread the crazy.

It’s a service for folks who want to do the right thing and just need a reminder now and then to hold back on the outrage long enough to collect the facts.

Reactions to the new feature were predictably split. On the one hand was the ‘good job for stopping the thoughtless spread of disinformation’ crowd.

On the other side was the ‘hands off my tweets’ crowd.

We’ll lump this group in with those who criticize Twitter for violating their First Amendment rights – a deluded belief, given that Twitter isn’t a publicly owned service but a private company that owns the platform, provides it for free, answers mainly to its shareholders, and can do as it pleases.

There’s a third group that looked at the feature in a broader context, suggesting that Twitter focus on solving other problems (like dealing with white supremacist tweets) before delving into this kind of thing.

There are indeed other issues facing the company, but they’re diverse ones that it’s trying to solve at scale.

The company has taken some other measures, including the introduction of a feature flagging misleading Tweets and pointing users to verified facts. It later famously used that feature on a tweet from President Trump.

It tried to address hateful conduct recently, updating its rules to cover language that dehumanizes people on the basis of age, disability, or disease. It also experimented with a service that would warn people if there was harmful content in their reply.

Amid the judgements and praise, there were some interesting suggestions. Some people wanted the service to flag a message to show when its author had retweeted an article without opening it. Others pointed out that they may already have read an article on another device even if they haven’t opened it on Twitter.

Twitter support describes this as an experiment, though, which is presumably why it’s running a canary test of its code. There are lots of ways that it could develop the feature if it ends up meeting its opaque success criteria.

In the meantime, if users don’t like it, they could always find a Mastodon instance with rules they do like – and donate to the operators, thereby taking a stake in the system they’re using.
