
TRRespass research reveals rowhammering is alive and well

We’re not sure quite how dangerous this problem is likely to be in real life, but it has the most piratical name for a bug that we’ve seen in quite some time, me hearties.

TRRespass is how it’s known (rrrroll those Rs if you can!) – or plain old CVE-2020-10255 to the landlubber types amongst us.

Trespass is the legal name for the offence of going onto or into someone else’s property when you aren’t supposed to.

And TRR is short for Target Row Refresh, a high-level term used to describe a series of hardware protections that the makers of memory chips (RAM) have been using in recent years to protect against rowhammering.

So TRRespass refers to a series of cybersecurity tricks that use rowhammering to fiddle with data in RAM that you’re not supposed to touch, despite the presence of low-level protections that are supposed to keep you out.

Rowhammering is a dramatically but aptly named problem whereby RAM storage cells – usually constructed as a grid of minuscule electrical capacitors in a silicon chip – are so tiny these days that they can be influenced by their neighbours or near neighbours.

It’s a bit like writing the address on an envelope in which you’ve sealed a letter – a ghostly impression of the words in the address is impinged onto the paper inside the envelope.

With a bit of care, you might figure out a way to write on the envelope in such a way that you alter the appearance of parts of the letter inside, making it hard to read, or even permanently altering critical parts (obscuring the decimal points in a list of numbers, for example).

The difference with rowhammering, however, is that you don’t need to write onto the envelope to impinge on the letter within – just reading it over and over again is enough.

In a rowhammering attack, then, the idea is to be able to modify RAM that you aren’t supposed to access at all (so you are writing to it, albeit in a somewhat haphazard way), merely by reading from RAM that you are allowed to look at, which means that write-protection alone isn’t enough to prevent the attack.

One row at a time

To cut down the otherwise enormous number of individual control connections that would be needed, most RAM chips don’t let you read just one bit at a time.

Instead, the cells storing the individual bits are arranged in a series of rows that can only be read out one full row at a time.

4×4 grid of memory cells representing a DRAM chip

To read cell C3 above, for example, you would tell the row-selection chip to apply power along row wire 3, which would discharge the capacitors A3, B3, C3 and D3 down column wires A, B, C and D, allowing their values to be determined. (Bits without any charge will read out as 0; bits that were storing a charge as 1.)

You’ll therefore get the value of four bits, even if you only need to know one of them.

Incidentally, reading out a row essentially wipes its value by discharging it, so immediately after any read, the row is refreshed by saving the extracted data back into it, where it’s ready to be accessed again.
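
If it helps to see that row-at-a-time behaviour spelled out, here’s a minimal C sketch of the 4×4 grid described above – the array, function and bit values are ours, invented purely for illustration, and real sense amplifiers are rather more subtle than this:

```c
/* Toy model of the 4x4 DRAM grid above: reading a row senses all four
   bits at once, wipes them, and immediately writes them back. */
#include <stdio.h>

#define ROWS 4
#define COLS 4

static int cells[ROWS][COLS] = {   /* 1 = charged, 0 = no charge */
    {0, 1, 1, 0},
    {1, 0, 0, 1},
    {0, 0, 1, 0},
    {1, 1, 0, 1},
};

static void read_row(int r, int out[COLS]) {
    for (int c = 0; c < COLS; c++) {
        out[c] = cells[r][c];      /* sense the charge down each column wire */
        cells[r][c] = 0;           /* the read itself empties the capacitor */
    }
    for (int c = 0; c < COLS; c++) {
        cells[r][c] = out[c];      /* refresh: save the extracted data back */
    }
}

int main(void) {
    int bits[COLS];
    read_row(2, bits);             /* row 3 in the 1-based labels above */
    printf("row bits: %d %d %d %d\n", bits[0], bits[1], bits[2], bits[3]);
    return 0;
}
```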

Also, because the charge in any cell leaks away over time anyway, every row needs regularly refreshing whether it is used or not.

The RAM circuitry does this automatically, by default every 64 milliseconds (that’s about 16 times a second, or just under 1,000 times a minute).

That’s why this sort of memory chip is known as DRAM, short for dynamic RAM: it won’t keep its value without regular external help.

(SRAM, or static RAM, holds its value as long as it’s connected to a power supply; Flash RAM will hold its value indefinitely, even when the power is turned off.)

Exploiting the refresh

One problem with this 64ms refresh cycle is that, if a RAM row loses its charge or otherwise gets corrupted between two refreshes, the corruption won’t be noticed – the “recharge” will kick in and refresh the row using the incorrect bits.

And that’s where rowhammering comes in.

In 64ms you can trigger an enormous number of memory reads along one memory row, and this may generate enough electromagnetic interference to flip some of the stored values in the rows on either side of it.

The general rule is that the more you hammer and the longer the cell has been leaking away its charge, the more likely you are to get a bitflip event.

You can even do what’s called double-sided rowhammering, assuming you can work out what memory addresses in your program are stored in which physical regions of the chip, and hammer away by provoking lots of electrical activity on both sides of your targeted row at the same time.

Think of it as if you were listening to a lecture on your headphones: if attackers could add a heap of audio noise into your left ear, you’d find it hard to hear what the lecturer was saying, and might even misunderstand some words; if they could add interference into both ears at the same time, you’d hear even less, and misunderstand even more.
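
In code, the core of an attack like this is surprisingly small. Here’s a sketch along the lines of published rowhammer proof-of-concept code – the function and parameter names are ours, and the genuinely hard part, namely finding virtual addresses that map to the physical rows either side of your victim, is glossed over entirely:

```c
/* Sketch of a double-sided hammer loop (a code fragment, not a full
   program).  The pointers `above` and `below` are assumed to map to
   the physical rows adjacent to the victim row -- working out that
   mapping is the hard part, and is not shown here. */
#include <emmintrin.h>   /* _mm_clflush(), x86 with SSE2 */
#include <stdint.h>

static void hammer(volatile uint8_t *above, volatile uint8_t *below,
                   long iterations) {
    for (long i = 0; i < iterations; i++) {
        (void)*above;                       /* activate the row above */
        (void)*below;                       /* activate the row below */
        _mm_clflush((const void *)above);   /* evict both from cache so    */
        _mm_clflush((const void *)below);   /* the next reads hit the DRAM */
    }
}
```

Note the clflush calls: without them, the CPU would quietly serve the repeated reads from its cache and the DRAM chip itself would never feel the hammering – which is why one of the mitigations below involves restricting that very instruction.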

Reducing the risk

Numerous ways have emerged, in recent years, to reduce the risk of rowhammering, and to make real-world memory-bodging attacks harder to pull off.

Anti-rowhammering techniques include:

  • Increasing the DRAM refresh rate. The longer a bit goes unrecharged, the more likely it is to flip due to on-chip interference. Recharging the cells in a DRAM row is done by reading their bit values out redundantly, thus forcing a refresh – but the time spent refreshing the chip is time during which regular software can’t use it, so increasing the refresh rate reduces performance.
  • Preventing unprivileged software from flushing cached data. If you read the same memory location over and over again, the processor is supposed to remember recently used values in an internal area of super-fast memory called a cache. This naturally reduces the risk of rowhammering, because repeatedly reading the same memory values doesn’t actually cause the chip itself to be accessed at all. So, blocking unauthorised programs from executing the clflush CPU instruction prevents them from bypassing the cache and getting direct access to the DRAM chip.
  • Reducing the accuracy of some system timers. Rowhammering attacks were invented that would run inside a browser, and could therefore be launched by JavaScript served up directly from a website. But these attacks required very accurate timekeeping, so browser makers deliberately added random inaccuracies to JavaScript timing functions to thwart these tricks. The timers remained accurate enough for games and other popular browser-based apps, but not quite precise enough for rowhammering attackers.
  • A Target Row Refresh (TRR) system in the chip itself. TRR is a simple idea: instead of ramping up the refresh rate of memory rows for the entire chip, the hardware tries to identify rows that are being accessed excessively, and quietly performs an early refresh on any nearby rows to reduce the chance of them suffering deliberately contrived bit-flips.

In other words, TRR pretty much does what the name suggests: if a DRAM memory row appears to be the target of a rowhammer attack, intervene automatically to refresh it earlier than usual.

That way, you don’t need to ramp up the DRAM refresh rate for every row, all the time, just in case a rowhammer happens to one row, some of the time.
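
As a thought experiment only – real TRR implementations are undocumented and vendor-specific, which is precisely what the TRRespass researchers set out to probe – you can imagine the logic as something like this toy C sketch, with a made-up activation threshold:

```c
/* Toy model of the TRR idea: count how often each row is activated
   within a refresh window, and refresh a row's neighbours early if
   it looks as though it is being hammered.  The threshold and the
   structure here are invented for illustration only. */
#define NUM_ROWS     1024
#define HAMMER_LIMIT 50000   /* made-up activations-per-window limit */

static long activations[NUM_ROWS];

static void refresh_row(int r) {
    (void)r;   /* stand-in for the chip's internal early-refresh action */
}

static void on_row_activate(int r) {
    if (++activations[r] > HAMMER_LIMIT) {
        if (r > 0)            refresh_row(r - 1);  /* neighbour above */
        if (r < NUM_ROWS - 1) refresh_row(r + 1);  /* neighbour below */
        activations[r] = 0;   /* start counting afresh */
    }
}
```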

So, the authors of the TRRespass paper set out to measure the effectiveness of the TRR mitigations in 42 different DRAM chips manufactured in the past five years.

They wanted to find out:

  • How different vendors actually implement TRR. (There’s no standard technique, and most of those used have not been officially documented by the chip vendor.)
  • How various TRR implementations might be tricked and bypassed by an attacker.
  • How effective rowhammering attacks might be these days, even with TRR in many chips.

We’ll leave you to work through the details of the report, if you wish to do so, though be warned that it’s quite heavy going – there’s a lot of jargon, some of which doesn’t get explained for quite a while, and the content and point-making is rather repetitive (perhaps a side-effect of having eight authors from three different organisations).

Nevertheless, the researchers found that they were able to provoke unauthorised and probably exploitable memory modifications on 13 of the 42 chips they tested, despite the presence of hardware-based TRR protections.

Fortunately, they didn’t find any common form of attack that worked against every vendor’s chip – each vulnerable chip typically needed a different pattern of memory accesses unleashed at a different rate.

Even though you can’t change the memory chips in your servers or laptops every few days, this suggests that any successful attack would require the crooks to get in and carry out a fair bit of “hardware reconnaissance and research” on your network first…

…in which case, they probably don’t need to use rowhammering, because they’ve already got a dangerous foothold in your network.

It also suggests that, in the event of attacks being seen in the wild, changes to various hardware settings in your own systems (admittedly with a possible drop in performance) might be an effective way to frustrate the crooks.

What to do?

Fortunately, rowhammering doesn’t seem to have become a practical problem in real-life attacks, even though it’s widely known and has been extensively researched.

So there’s no need to stop using your existing laptops, servers and mobile phones until memory manufacturers solve the problem entirely.

But at least part of the issue is down to the race to squeeze more and more performance out of the hardware we’ve already got, because faster processors mean we can hammer memory rows more rapidly than ever, while higher-capacity RAM modules give us more rows to hammer at any time.

As we said last time we reported on rowhammering:

[Whenever] you add features and performance – whether that’s [ramping up memory and processing power], building GPUs into mobile phone chips, or adding fancy graphics programming libraries into browsers – you run the risk of reducing security at the same time.

If that happens, IT’S OK TO BACK OFF A BIT, deliberately reducing performance to raise security back to acceptable levels.

Sometimes, if we may reduce our advice to just seven words, it’s OK to step off the treadmill.


Diagram of DRAM cells reworked from Wikimedia under CC BY-SA-3.0.


Trial for accused CIA leaker ends in hung jury

A Manhattan federal judge on Monday declared a mistrial in the case against ex-CIA employee Joshua Adam Schulte, who was accused of stealing a huge cache of classified hacking tools – dubbed Vault 7 – from the US Central Intelligence Agency and leaking it to WikiLeaks.

On 7 March 2017, WikiLeaks launched a new series of leaks, which it claimed would be the largest dump of confidential documents on the CIA in history.

WikiLeaks called the initial document dump – containing 8,761 documents and files – “Year Zero”. It included documents and files from an isolated, high-security network inside CIA headquarters in Langley, Virginia.

Year Zero painted an intimate picture of the US’s cyber-espionage efforts: Vault 7 included cyberattack tools – malware, viruses, Trojans and weaponized zero-day exploits – some targeting a wide range of big tech companies’ most popular products: iPhones, Wi-Fi routers, Android devices, and IoT gadgets. In fact, the dump made one thing clear: the CIA can use the Internet of Things (IoT) to hack anything, anywhere.

Schulte was working at the CIA’s Engineering Development Group at the time of the code theft. He was charged with 13 counts in connection with the alleged theft of national defense information from the CIA; giving the huge cache to WikiLeaks; criminal copyright infringement; and receiving, possessing and transporting about 10,000 child abuse images and videos.

The FBI claimed to have found an “encrypted container” with child abuse imagery files tucked beneath three layers of password protection on Schulte’s PC. The FBI accused Schulte of maintaining lousy security, saying that each layer was unlocked using passwords Schulte previously used on one of his cellphones. FBI agents also claimed to have identified internet chat logs in which Schulte and others discussed distributing child abuse imagery as well as a series of Google searches for such imagery that Schulte allegedly conducted.

Schulte pleaded not guilty to the charges, claiming that the images were on a server he’d maintained for years in order to share movies and other digital files. He argued that between 50 and 100 people had access to that server, and any one of them could have been responsible for the illegal content.

The jury found Schulte guilty of lying to the FBI and of contempt of court. But when it came to the far more serious charges of turning over the spy tools to WikiLeaks, the jury couldn’t reach consensus. Schulte, 31, still faces up to five years on the lesser counts.

On Monday, after US District Judge Paul Crotty declared a mistrial, he ordered both sides back to court on 26 March 2020, when the government is expected to push for a new trial.

The mistrial is embarrassing: prosecutors spent years pulling the case together, and they devoted four weeks of testimony in an effort to portray Schulte as a vindictive and disgruntled employee who put US security at risk by leaking information on how the CIA spied on foreign adversaries.

Prosecutors portrayed the Vault 7 leak as a well-planned theft orchestrated by Schulte, who they claim gave hackers access to the CIA’s top-secret hacking tools.

According to The Register, the CIA has had a rough time proving that it was Schulte who stole the tools from a secure server in the heart of CIA headquarters. The agency has come up with a convoluted explanation for how he might have pulled off the heist by saving a backup to a thumb drive and then reverting the system to a previous state to cover his tracks, but in the end, all it has is circumstantial evidence. The government hasn’t been able to show any direct proof that Schulte sent the files to WikiLeaks.

The CIA has tried to fill in the gaps by pointing to how Schulte acted before and after the confidential documents were stolen, including that he downloaded WikiLeaks’ cover-your-tracks software. Also, while in prison, Schulte had a contraband phone with which he opened a Twitter account – named @freejasonbourne, referring to the fictional CIA operative played by the actor Matt Damon – so that he could, as the prosecutors put it, launch an “information war” against the US.

Schulte’s defense lawyers have argued that the CIA’s computer network not only had crappy passwords – 123ABCdef and mysweetsummer among the main ones – but that those weak passwords were also published on the department’s intranet. The defense also argued that the network had widely known security vulnerabilities, the New York Times reports. Thus, it’s possible that other CIA employees, or foreign adversaries, could have breached the system.

On Monday, the jurors deadlocked on eight counts, including illegal gathering and transmission of national defense information. It’s no wonder they were unable to reach agreement on Schulte’s guilt or innocence – the “there’s more here than meets the eye” factor is strong with this one.

The Times’ description of the “scramble” inside CIA headquarters following the discovery of the leak includes this scene:

Sean Roche, a top CIA official at the time, said he got a call from another CIA director who was out of breath. ‘It was the equivalent of a digital Pearl Harbor,’ he testified.

Schulte’s defense called their client an easy scapegoat: somebody who, having filed complaints about prank-playing, Nerf-gun-shooting colleagues, just didn’t quite fit in. “He had antagonized virtually all of his co-workers at the CIA,” as the Times succinctly puts it.

The Register has yet more details about another suspicious character: one of Schulte’s colleagues, identified only as “Michael,” who was found to have a screen capture of “the very server the Vault 7 tools were stolen from at the time that they were allegedly being stolen.”

Hmm… that’s unusual, the government has admitted. Michael didn’t say he was actively monitoring the server at the time, and the screengrab only showed up months later in a forensic deep dive by the Feds, the Register reports.

When asked about it, Michael refused to cooperate, and the next day the CIA suspended him.

No wonder the jury was hung. This case is murky, which is particularly dismaying given the high stakes involved.



Watch out for Office 365 and G Suite scams, FBI warns businesses

The menace of Business Email Compromise (BEC) is often overshadowed by ransomware but it’s something small and medium-sized businesses shouldn’t lose sight of.

Bang on cue, the FBI Internet Crime Complaint Center (IC3) has alerted US businesses to ongoing attacks targeting organisations using Microsoft Office 365 and Google G Suite.

Warnings about BEC are ten-a-penny but this one refers specifically to those carried out against the two largest hosted email services, and the FBI believes that SMEs, with their limited IT resources, are most at risk of these types of scams:

Between January 2014 and October 2019, the Internet Crime Complaint Center (IC3) received complaints totaling over $2.1 billion in actual losses from BEC scams targeting Microsoft Office 365 and Google G Suite.

As organisations move to hosted email, criminals migrate to follow them.

As with all types of BEC, after breaking into the account, criminals look for evidence of financial transactions, later impersonating employees to redirect payments to themselves.

For good measure, they’ll often also launch phishing attacks on contacts to grab even more credentials, and so the crime feeds itself a steady supply of new victims.

The deeper question is why BEC scams continue to be such a problem when it’s well understood that they can be defended against using technologies such as multi-factor authentication (MFA).

One answer is that older email systems don’t support such technologies, a point Microsoft made recently when the company revealed that legacy protocols such as SMTP and IMAP correlated with a markedly higher chance of compromise.

Without MFA, such accounts are immediately vulnerable to password weaknesses such as re-use.

Turn on MFA

One takeaway is that, despite the rise in BEC attacks on hosted email, this type of email is still more secure than the alternatives, provided admins turn on the security features that come with it.

For organisations worried about BEC, the FBI has the following general advice:

  • Enable multi-factor authentication for all email accounts
  • Verify all payment changes via a known telephone number or in person

And for hosted email admins:

  • Prohibit automatic forwarding of email to external addresses
  • Add an email banner to messages coming from outside your organization
  • Ensure mailbox logon and settings changes are logged and retained for at least 90 days
  • Enable alerts for suspicious activity such as foreign logins
  • Enable security features that block malicious email such as anti-phishing and anti-spoofing policies
  • Configure Sender Policy Framework (SPF), DomainKeys Identified Mail (DKIM), and Domain-based Message Authentication Reporting and Conformance (DMARC) to prevent spoofing and to validate email (see the sketch after this list)
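
If those last three acronyms are new to you: all three boil down to publishing special DNS TXT records for your email domain. As a purely hypothetical sketch for a made-up domain (the selector, policy choices and included mail host below are placeholders – your email provider’s documentation will give the exact values to publish):

```
example.com.                       TXT  "v=spf1 include:spf.protection.outlook.com -all"
selector1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public key goes here>"
_dmarc.example.com.                TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```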

The FBI also recommends that you prohibit legacy protocols that can be used to circumvent multi-factor authentication, although this needs to be done with care as some older applications might still depend on these.

It’s a pity the IC3 sometimes puts out useful advice like this using Private Industry Notifications (PINs), a narrowcast version of the public warnings issued on the organisation’s website.

Report a BEC

Law enforcement agencies can’t fight what they don’t know about. To that end, please do make sure to report it if you’ve been targeted in one of these scams.

In the US, victims can file a complaint with the IC3. In the UK, BEC complaints should go to Action Fraud. If you’d like to know how Sophos can help protect you against BEC, read the Sophos News article Would you fall for a BEC attack?



Google data puts innocent man at the scene of a crime

You’ve assuredly heard this before about ubiquitous surveillance, or perhaps even said it yourself: “If you have nothing to hide, and you’ve done nothing wrong, why should you worry?”

Zachary McCoy, of Florida, offers this answer:

If you’re innocent, that doesn’t mean you can’t be in the wrong place at the wrong time, like going on a bike ride in which your GPS puts you in a position where police suspect you of a crime you didn’t commit.

As NBC News reports, McCoy, an avid cyclist, got an email from Google in January.

It was from Google’s legal investigations support team. They were writing to let the 30-year-old know that local police had demanded information related to his Google account. He had seven days in which to appear in court if he wanted to block the release of that data, Google told him.

He was, understandably, terrified, in spite of being one of those innocent people who should have nothing to hide. NBC News quotes him:

I was hit with a really deep fear.

I didn’t know what it was about, but I knew the police wanted to get something from me. I was afraid I was going to get charged with something, I don’t know what.

How is it that McCoy didn’t know what police were inquiring about? Because his Android phone had been swept up in a surveillance dragnet called a geofence warrant – a type of warrant done in secret.

McCoy’s device had been located near the scene of a burglary that had taken place near the route he takes to bicycle to his job. Investigators had used the geofence warrant to try to suss out the identities of people whose devices had been near the scene of the crime around the time it occurred.

As NBC News reports, police hadn’t discovered his identity. The first stage of data collection doesn’t return identifying information – only data about devices that might be of interest. It’s during the next stage, when police sift through the data looking for suspicious devices, that they turn to Google to ask that it identify users.

Like many of us, McCoy had an Android phone that was linked to his Google account, and he used plenty of apps that store location data: Gmail, YouTube, and an exercise-tracking app called RunKeeper that feeds off of Google location data and which helps users to track their workouts.

You can look up your location history to find out exactly what Google knows about you, by date. On the day of the burglary – 29 March 2019 – Google knew that McCoy had passed the scene of the crime three times within an hour as he looped through his neighborhood during his workout.

It was a “nightmare scenario,” McCoy said:

I was using an app to see how many miles I rode my bike and now it was putting me at the scene of the crime. And I was the lead suspect.

How McCoy fought his way out of the dragnet

When it receives a request about a user from a government agency, Google’s general policy is to email that user before disclosing information.

There wasn’t much of anything in that notice about why police were asking about him, McCoy said. However, there was one clue: a case number.

McCoy ran a search for that case number on the Gainesville, Florida, police department’s website. What he found was a one-page investigation report on the burglary of an elderly woman’s home 10 months earlier. She lived less than a mile from where McCoy was living.

He knew he had nothing to do with the break-in, but he had very little time – seven days – in which to prove it. So McCoy hired a lawyer, Caleb Kenyon, who did some research and learned that Google’s notice had been prompted by a geofence warrant: one that swept up the GPS, Bluetooth, Wi-Fi and cellular connections of everyone nearby.

After they figured out why police were trying to track McCoy down, Kenyon told NBC News that he called the detective on the case and told him, “You’re looking at the wrong guy.”

On 31 January, Kenyon filed a motion in civil court to render the warrant “null and void” and to block the release of any further information about McCoy, identifying him only as “John Doe.” If he hadn’t done so, Google would have turned over data that would have identified McCoy. In his motion, Kenyon argued that the warrant was unconstitutional because it allowed police to conduct sweeping searches of phone data from untold numbers of people in order to find a single suspect.

Kenyon’s motion gave investigators pause. Kenyon told NBC News that not long after he filed it, a lawyer in the state attorney’s office assigned to represent the Gainesville Police Department told him there were details in the motion that led them to believe that his client wasn’t the culprit. The state attorney’s office withdrew the warrant, saying in a court filing that it was no longer necessary.

Even after police acknowledged that McCoy wasn’t a suspect anymore, Kenyon wanted to make sure they wouldn’t harbor suspicions about his client, whom they still only knew as “John Doe.” So the lawyer met with the detective in order to show him screenshots of McCoy’s Google location history, including data recorded by RunKeeper. The maps showed months of bike rides past the burglarized home, NBC News reports.

McCoy was lucky. He and his family are also a bit poorer because of the incident. If his parents hadn’t helped him out by giving him thousands of dollars to hire a lawyer, things could have turned out differently, he says.

I’m definitely sorry [the burglary] happened to her, and I’m glad police were trying to solve it. But it just seems like a really broad net for them to cast. What’s the cost-benefit? How many innocent people do we have to harass?

Geolocation data: It’s hit or miss

Geolocation data sometimes gets it right when it comes to tracking down criminals. For example, last year, a homicidal cycling and running fanatic known for his meticulous nature in tracking his victims was undone by location data from his Garmin GPS watch.

Other convictions based on location data have included the pivotal Carpenter v. United States, which concerned a Radio Shack robbery – the legal arguments from this case have gone on to inform subsequent decisions, including one from January 2019 in which a judge ruled that in the US, the Feds can’t force you to unlock your phone with biometrics.

Geofence warrants, however, are a whole other thing.

Privacy and civil liberties advocates have voiced concerns about the warrants potentially violating constitutional protections against unreasonable search. Police have countered by insisting that they don’t charge somebody with a crime unless they have evidence to go on besides a device being co-located with a crime scene.

These searches are becoming increasingly widespread, however. In December 2019, Forbes reported that Google had complied with geofence warrants that, at that time, had resulted in what the magazine called an unprecedented data haul for law enforcement.

Google had combed through its gargantuan Sensorvault database to find 1,494 device identifiers for phones in the vicinities of multiple crimes. Sensorvault is where Google stores location data that flows from all its applications. If you’ve got the Location History setting turned on in your Google account, you’re feeding this ocean of data, which is stuffed with detailed location records from what The New York Times reports to be at least hundreds of millions of devices worldwide.

To investigators, this is gold: a geofence demand enables them to pore through location records as they seek devices that may be of interest to an investigation.

Geofence data demands are also known as ‘reverse location searches’. Investigators stipulate a timeframe and an area on Google Maps and ask Google to give them the record of each and every Google user who was in the area at the time.

When police find devices of interest, they’ll ask Google for more personal information about the device owner, such as name, address, when they signed up for Google services and which services – such as Google Maps – they used.

Google’s location history data is routinely shared with police. Detectives have used these warrants as they investigate a variety of crimes, including bank robberies, sexual assaults, arsons, murders, and bombings.

And it’s not just Google. As Fast Company reported last month, recently discovered court documents confirm that prosecutors have issued geofence warrants for data stored by Apple, Uber, Lyft, and Snapchat.

Fast Company reported that it didn’t know what data, if any, the companies had handed over (Apple, for one, has said that it doesn’t have the ability to perform this kind of search). All it knew was that the warrants had been served.

How to turn off Google’s location history

If you don’t like the notion of Google being able to track your every movement, you can turn off location history.

To do so, sign into your Google account, click on your profile picture and then the Google Account button. From there, go to Data & personalization and select Pause next to Location History. To turn off location tracking altogether, you have to do the same for Web & App Activity in the same section.


