Monday review – the hot 16 stories of the week

Get yourself up to date with everything we’ve written in the last seven days – it’s weekly roundup time.

The ransomware that attacks you from inside a virtual machine

Yesterday, SophosLabs published details of a sophisticated new ransomware attack that takes the popular tactic of “living off the land” to a new level.

To ensure their 49 kB Ragnar Locker ransomware ran undisturbed, the crooks behind the attack brought along a 280 MB Windows XP virtual machine to run it in (and a copy of Oracle VirtualBox to run that).

It’s almost funny, but it’s no joke.

The attack was carried out by the gang behind Ragnar Locker, who break into company networks, make themselves admins, conduct reconnaissance, delete backups and deploy ransomware manually, before demanding multi-million dollar ransoms.

Like a lot of criminals who conduct similar “targeted” or “big game” ransomware attacks, the Ragnar Locker gang try to avoid detection as they operate inside a victim’s network, using a tactic dubbed “living off the land”.

Living off the land entails using legitimate software administration tools that either already exist on the network the crooks have broken into, or that don’t look suspicious or out of place (PowerShell is a particular favourite).

SophosLabs reports that in the attack, the gang used a Windows GPO (Group Policy Object) task to execute the Microsoft Installer, which downloaded an MSI containing a number of files, including a copy of VirtualBox and a Windows XP virtual machine with the Ragnar Locker executable inside.

VirtualBox is hypervisor software that can run and administer one or more virtual guest computers inside a host computer. Typically, guests are sealed off from the host, and processes running inside the guest are unable to interact with the host’s operating system. This is to prevent hostile processes, like malware, from attacking the host or taking it over, in what’s known as a virtual machine escape.

However, the protections that separate the guests from their host assume a hostile guest inside a friendly host, and that wasn’t the case here, because the attackers had access to both guest and host.

In fact, from the attackers’ perspective they were trying to create the reverse of the normal situation – a friendly (to them) guest environment protected from a hostile host.

To the attackers, the victim’s network is a hostile environment. Living off the land is designed to allow them to work as stealthily as possible, without triggering any alarms in the network’s security software. When they start running malware they’ve broken cover and are at much greater risk of detection.

Running their malware inside a virtual machine allowed them to hide it from the prying eyes of security software on the host.

And because the attackers controlled the host they were easily able to weaken the wall between the host and the guest.

They did this by installing VirtualBox add-ons that allow files on the host to be shared with the guest, and then making every local disk, removable storage and mapped network drive on the host accessible to the guest virtual machine. With those drives mounted inside the guest, the ransomware could encrypt the files on them from inside the protective cocoon of the virtual machine.
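
To make the drive-sharing step concrete, here’s a minimal sketch using VirtualBox’s standard command-line tooling. This is our illustration, not the gang’s actual script: the VM name and drive list are hypothetical, though the VBoxManage subcommands shown (sharedfolder add and startvm) are genuine VirtualBox features.

```python
import subprocess

# Hypothetical VM name; the real attack shipped a stripped-down Windows XP
# image with the Ragnar Locker executable already inside it.
VM_NAME = "ragnar_vm"

# Illustrative drive list; the attackers shared every local disk, removable
# drive and mapped network drive they found on the host.
HOST_DRIVES = ["C:\\", "D:\\", "Z:\\"]

for drive in HOST_DRIVES:
    # VirtualBox's ordinary shared-folder mechanism: anything the host user
    # can read and write becomes readable and writable inside the guest.
    subprocess.run([
        "VBoxManage", "sharedfolder", "add", VM_NAME,
        "--name", drive.rstrip(":\\") + "_DRIVE",
        "--hostpath", drive,
        "--automount",  # mount the share automatically when the guest boots
    ], check=True)

# Start the guest with no visible window. From the host's point of view,
# the encryption is then done by a legitimate-looking VirtualBox process.
subprocess.run(["VBoxManage", "startvm", VM_NAME, "--type", "headless"],
               check=True)
```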

Meanwhile, as far as the security software on the host was concerned, data on the local network was being encrypted by legitimate software: VirtualBox’s VBoxHeadless.exe process.

So, from the perspective of the host, the attackers never broke cover and continued to “live off the land”, using legitimate software, until they dropped the ransom note.

For the technical details of this attack, read Mark Loman’s in-depth article on Ragnar Locker over on our sister site, Sophos News.

Signal secure messaging can now identify you without a phone number

Signal is a popular instant messaging (IM) app with a difference.

That difference – or at least its major difference – is simple: it’s not owned and operated by an industry behemoth.

WhatsApp belongs to Facebook, Skype is part of Microsoft, and iMessage is owned by Apple, but the open-source app Signal belongs, inasmuch as it belongs to anyone, to Signal.

Signal is a US-registered non-profit organisation that was founded entirely around making and supporting the messaging app.

As a result, Signal’s big selling point is, well, that it isn’t selling anything.

Sharing information about you with third parties isn’t part of Signal’s business model, so there’s actually no point in it figuring out how to do so…

…which means that there’s a much more compelling reason to believe the organisation when it claims to have an unbending focus on end-to-end encryption.

Signal not only has no desire, but also has no need, to take any interest in what you’re saying, or whom you’re saying it to.

Signal is also endorsed by a privacy celebrity that other IM service providers can’t match, namely Edward Snowden.

Snowden is quoted on Signal’s website with the five simple words, “I use Signal every day.”

(With apologies to well-known cryptographers Bruce Schneier and Matt Green, who are two of Signal’s other celebrity endorsers.)

Signal, however, has one curious aspect that puts some people off, this author included.

We’ve never bothered with Signal for the reason that signing up means handing over your phone number.

Conveniently, a phone number is all you need to sign up, but you can’t sign up with your name instead, or with an email address.

You need to use a working phone number that really is yours.

Basing the identity of accounts on a phone number makes a lot of sense, not least because a phone number is something you can easily and cheaply acquire in many countries, and it gives the service a quick and reliable way of verifying that the user really does control that number.

But in some countries, getting hold of a phone number isn’t an easy process, and may involve proving not only your identity but also your address.

Indeed, getting hold of an “anonymous” SIM card, or using an improperly registered one, is a criminal offence in some jurisdictions.

And there’s something unappealing about entrusting your identity on a secure online service (one that prides itself on immunity to surveillance) to a cryptographic chip that must by law be registered with a central authority, which can then keep tabs on you via that very chip.

There’s something even less appealing about the worry that you could be locked out of your own account simply by losing the right to the phone number you used for the account.

This irony isn’t lost on Signal, and it has just announced a new feature called Signal PINs that allows you to keep control of your account even if you lose your phone or are forced to switch numbers and can’t get your old one back.

Signal aims to be easy and safe to use for everyone, which is why it hasn’t insisted on using long and hard-to-remember “recovery codes”.

Signal PINs can be as long and complex as you like, including letters as well as digits, if that’s what you prefer, but you can safely use a short PIN if you want something that’s easy to remember and doesn’t need writing down, an act that could be a risk for some Signal users.

Secure value recovery

Signal is using a technique it announced late last year called SVR, short for Secure Value Recovery.

One obvious problem with short PINs is that, when they’re used as recovery codes for a database that isn’t stored in secure memory on your smartphone, they’re exposed to what’s called an “offline attack”.

For example, your iPhone can get away with a 6-digit PIN because you can only type in the PIN on the phone, and the only way to verify the PIN (unless there is a bug somewhere) is to communicate directly with a tamper-resistant chip inside the phone.

That chip can’t be opened up, modified or cloned, so the internal counter it maintains of how many guesses you’ve had at the PIN can’t be reset or bypassed – you get 10 goes and then it’s game over.

You can’t make 10,000 copies of the chip and have 9 guesses on each copy without getting locked out forever.

But a regular server database isn’t as easy to protect, because crooks who get hold of a copy of it aren’t hindered by any dedicated, tamper-resistant hardware.
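
For example, if the crooks steal a database protected only by a hash of a 6-digit PIN, they can try every possible PIN against their own copy almost instantly. Here’s a minimal sketch of such an offline attack, with a made-up PIN and a plain hash standing in for whatever key-derivation function a real system would use:

```python
import hashlib

# Made-up example: the attacker has stolen a digest of the victim's 6-digit
# PIN. (Real systems use slow, salted key-derivation functions, which raise
# the cost per guess but can't rescue a keyspace of only a million values.)
stolen_digest = hashlib.sha256(b"271828").hexdigest()

# Offline attack: with no tamper-resistant chip counting the guesses, the
# attacker simply tries all 10**6 possibilities against their own copy.
for guess in range(1_000_000):
    pin = f"{guess:06d}".encode()
    if hashlib.sha256(pin).hexdigest() == stolen_digest:
        print("PIN recovered:", pin.decode())
        break
```

On commodity hardware that loop finishes in about a second, which is exactly why the guess counting has to live somewhere the attacker can’t copy or reset.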

Signal has therefore put a lot of effort into developing hacker-resistant storage “enclaves” that the company can run on its own servers – using Intel’s Software Guard Extensions (SGX) – to keep your master secrets secure with a pass code that’s easy to remember.
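
Conceptually, and leaving out all the cryptographic detail, the enclave’s job looks something like the sketch below: it holds the master secret, enforces a guess limit that even the server’s operator can’t reset, and releases the secret only when the correct PIN arrives. This is an illustration of the idea, not Signal’s implementation, and the guess limit of 10 is our illustrative choice, not Signal’s policy.

```python
import hashlib

class RecoveryEnclaveSketch:
    """Conceptual model of an enclave guarding a master secret with a PIN.

    Illustration only: in a real SGX enclave these rules are enforced by
    tamper-resistant hardware, so the guess counter can't be reset or
    bypassed by whoever operates the server.
    """

    MAX_GUESSES = 10  # illustrative limit, not Signal's actual policy

    def __init__(self, pin: bytes, master_secret: bytes):
        self._pin_digest = hashlib.sha256(pin).digest()
        self._secret = master_secret
        self._guesses_left = self.MAX_GUESSES

    def recover(self, pin_attempt: bytes) -> bytes:
        if self._guesses_left == 0:
            raise PermissionError("guess limit reached; secret unrecoverable")
        self._guesses_left -= 1
        if hashlib.sha256(pin_attempt).digest() == self._pin_digest:
            self._guesses_left = self.MAX_GUESSES  # reset on success
            return self._secret
        raise ValueError(f"wrong PIN; {self._guesses_left} guesses left")
```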

As we mentioned, however, you don’t need to use a PIN to secure your Signal account – you can just use your phone number alone, as before, or choose a proper pass-phrase that’s as long as you like. (We recommend the latter, SVR or no SVR.)

No more phone numbers?

The disappointing news here, at least in our opinion, is that Signal isn’t yet announcing a way to use its product without handing over a phone number at all.

We’ve seen excitable reports in the media suggesting that this marks the beginning of the end of phone-based identity for Signal, but we don’t think it does.

You still can’t use the laptop versions of the app without setting Signal up on your phone first, and you can’t set it up on your phone without handing over a real, live phone number right at the start of the installation.

As Signal itself says, PINs aren’t a replacement for phone numbers but they do provide a safer way to recover your account in an emergency than a phone number alone.

In the latest version of our apps, we’re introducing Signal PINs. Signal PINs are based on Secure Value Recovery, which we previewed in December, to allow supporting data like your profile, settings, and who you’ve blocked to be securely recovered should you lose or switch devices. PINs will also help facilitate new features like addressing that isn’t based exclusively on phone numbers, since the system address book will no longer be a viable way to maintain your network of contacts.

It’s a start, not least because it means an interfering government or mobile phone company can’t lock you out of your account simply by cancelling your SIM card.

But you still need a phone to get onto Signal in the first place.


Apple and Google launch COVID-19 contact tracing API

Apple and Google have rolled out the first phase of their COVID-19 contact tracing framework. It makes it possible for public health authorities across the world to connect their apps with data that could help them identify people at risk from the virus.

This is the first phase in a two-part rollout of the Apple and Google framework originally announced on 10 April. It isn’t an app, but rather an Exposure Notification application programming interface (API) that apps can interact with. Those apps must be contact tracing apps from public health authorities, and users must download and authorise them in order to participate.

Here’s how it works: a phone running an app that uses the API will periodically use Bluetooth to ping other phones with a random beacon – a string of characters that isn’t connected to the user’s identity information. That beacon changes frequently to increase security, but the phone keeps a list of the beacons that it sends out. It also stores a list of all the beacons that it receives from phones nearby.

If a person tests positive for the virus, they can enter the test result into the public health authority’s app to show it that they’re infected, and give it permission to upload the last 14 days of beacons that their phone has transmitted. Those beacons are stored in the cloud, but they’re the phone’s own. It doesn’t send the beacons that it has collected from other phones.

Each day, phones running an app that uses the API will download a list of beacons from phones whose users have tested positive for the virus. It checks the beacons that it has collected locally from interacting with other phones against that downloaded list. If there’s a match, that’s a good indicator that the user has been in contact with an infected person. No one will know who that is, but the app will notify the user that they’re at risk and tell them what to do next.
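
The whole exchange can be condensed into a short sketch. This is a simplified model of the published design, not the real API: actual beacons are derived from rotating cryptographic keys and change every few minutes, and the matching is done by the operating system rather than by app code like this.

```python
import os
from datetime import date

class TracingPhoneSketch:
    """Simplified model of a phone in the Apple/Google exposure scheme."""

    def __init__(self):
        self.sent = []      # (day, beacon) pairs this phone has broadcast
        self.heard = set()  # beacons collected from nearby phones

    def broadcast(self) -> bytes:
        # Beacons are random identifiers with no link to the user; the real
        # design derives them from rotating keys, but random bytes suffice
        # to show the flow.
        beacon = os.urandom(16)
        self.sent.append((date.today(), beacon))
        return beacon

    def hear(self, beacon: bytes):
        self.heard.add(beacon)

    def report_positive(self, days: int = 14) -> list:
        # A positive user uploads only their OWN recent beacons, never the
        # ones collected from other phones.
        return [b for d, b in self.sent if (date.today() - d).days <= days]

    def check_exposure(self, published: list) -> bool:
        # Matching happens locally, so the cloud never learns who met whom.
        return bool(self.heard & set(published))

# Two phones meet; later one user tests positive and the other is notified.
alice, bob = TracingPhoneSketch(), TracingPhoneSketch()
bob.hear(alice.broadcast())
print(bob.check_exposure(alice.report_positive()))  # True: Bob is at risk
```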

Google and Apple worked together on the API so that phones using each of their operating systems can exchange beacons with each other.

The framework has some properties designed to preserve privacy while keeping exposure notifications accurate. First, it doesn’t use GPS data, meaning that the API won’t send users’ locations back to the cloud (this doesn’t apply to other apps or operating system features, though). Second, using Bluetooth rather than GPS to measure proximity is more accurate: Bluetooth signal strength lets the framework estimate whether two people came within about six feet of each other, a level of precision that GPS can’t deliver.
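
For the curious, proximity estimates from signal strength usually rest on a log-distance path-loss model along the lines sketched below. The constants (reference power at one metre, path-loss exponent) are illustrative textbook values; the real framework relies on calibrated, per-device attenuation data rather than this simple formula.

```python
def estimate_distance_m(rssi_dbm: float,
                        tx_power_1m_dbm: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Rough distance estimate from Bluetooth received signal strength.

    Log-distance path-loss model: signal strength falls off with the log of
    distance. Constants are illustrative; real deployments calibrate them.
    """
    return 10 ** ((tx_power_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(round(estimate_distance_m(-59), 1))  # ~1.0 m: strong, nearby signal
print(round(estimate_distance_m(-65), 1))  # ~2.0 m: near the six-foot line
```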

Some countries, such as Germany, have agreed to use the framework for their apps. Others, including the UK, have chosen to develop applications using their own data architecture in a more centralised approach. The NHS contact tracing app also uses Bluetooth proximity tracing. Unlike the Apple/Google framework, though, it sends a list of anonymous IDs that an infected user’s phone has collected from other phones. It also stores part of an infected person’s postcode in a central cloud database.

This week, the government delayed the launch of that app, according to the Guardian, following a warning of security flaws in the system. Researchers sounded an alarm about the transmission of more detailed interaction records and long storage times for information. They said:

Whilst we understand that more detailed records may be desirable for the epidemiological models, it must be balanced with privacy and trust if sufficient adoption of the app is to take place.

According to academics, widespread adoption is crucial to slowing or halting the spread of the virus: they have suggested that an app could stop the outbreak if 80% of all smartphone users adopted it.

Today, Apple and Google are leaving it up to public health authorities to build apps that can take advantage of their API. In the second phase, the companies will embed the app functionality directly into their operating systems, albeit with an opt-in requirement. It will support the same beaconing and notification functionality, but will then prompt at-risk users to install the appropriate public health authority app.
