It’s not a breach… it’s just that someone else has your data

UK telephone, TV and internet provider Virgin Media has suffered a data breach.

Or not, depending on whom you ask.

TurgenSec, the company that alerted Virgin Media to the breached information – or, at least, to the inadvertently disclosed database – says that it “included personal information corresponding to approximately 900,000 UK residents.”

We’re not exactly sure where or how TurgenSec found the errant data, but it sounds as though this was either a cloud blunder, a marketing partner plunder, or both of those at once.

Cloud blunders are, unfortunately, all too common these days – typically what happens is that a company extracts a subset of information from a key corporate database, perhaps so that a research or marketing team can dig into it without affecting the one, true, central copy. In the pre-internet days, you often heard this referred to as a “channel-off”.

In the modern era, channelled-off data seems to leak out in two main ways:

  • The copied data gets uploaded to a cloud service that isn’t properly secured. Crooks regularly trawl the internet looking for files that aren’t supposed to be there – this process can be automated – and are quick to pounce if they find access control blunders that let them download data that should clearly be private. (There’s a short sketch of how to audit for this sort of blunder just after this list.)
  • The data gets sent to an outside company, e.g. for a marketing campaign, and it gets stolen from there. Data breaches from partner companies could happen for exactly the reason given above – poor cloud management practices – or for a variety of other reasons that the company responsible for the data can’t control directly.
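
We can’t tell from the outside exactly how the Virgin Media data was exposed, but here’s a minimal sketch of how you might hunt for the first kind of blunder in your own cloud estate. It assumes your channelled-off data lives in AWS S3 and that you have the boto3 SDK installed with working credentials – both assumptions on our part; the same idea applies to any cloud storage service:

    # A minimal sketch: list your S3 buckets and flag any that lack a
    # full "block public access" configuration. (Assumes boto3 and
    # working AWS credentials; S3 is just an illustrative example.)
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)[
                "PublicAccessBlockConfiguration"]
            # All four flags should be True for a locked-down bucket.
            if not all(config.values()):
                print(f"WARNING: {name} only partially blocks public access: {config}")
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"WARNING: {name} has no public access block at all")
            else:
                raise

Public access settings are only one layer – bucket policies and object permissions need reviewing too – but a scheduled check along these lines catches the most common “oops, that was world-readable” mistakes.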

We’re assuming, in Virgin Media’s case, that what happened was along the lines of the first cause above, given that the company insists that:

No, this was not a cyber-attack. […] No, our database was not hacked. […] Certain sources are referring to this as a data breach. The precise situation is that information stored on one of our databases has been accessed without permission. The incident did not occur due to a hack but as a result of the database being incorrectly configured.

Virgin Media hasn’t done itself any favours with this statement. What it seems to be saying is that, because the crooks merely wandered in uninvited, without even needing to bypass any security measures or exploit any unpatched security holes, this doesn’t count as a “hack” or a “breach”.

We don’t know about you, but to us, this sounds a bit like wrecking your car by driving into a ditch and then claiming that you “didn’t actually have a crash”; instead, you simply didn’t drive with sufficient care and attention to stay safely on the road.

What data went walkabout?

Whether you think it’s a breach or not, it’s certainly a pretty big leak, even though the 900,000 users affected fall well short of Virgin Media’s full customer list.

TurgenSec has published a list of the fieldnames (database columns) that appeared in the exposed data, although not every field contained data for every user listed.

These apparently include: name, email address, home address, phone number and date of birth.

TurgenSec is also claiming that some of the fields reveal “requests to block or unblock various pornographic, gore related and gambling websites,” although a report last Friday by the BBC suggests that this block/unblock data was present only for about 1,100 of the customers affected by the breach – sorry, the leak.

What to do

Virgin Media secured the errant database pretty quickly, so it’s no longer open for any more crooks to find and steal.

The company has also set about contacting customers whose Virgin Media accounts were affected, meaning that there are probably millions of people in the UK who will be watching out for an email but ultimately won’t hear anything, because they weren’t affected.

As we know, this is the sort of vacuum into which cybercriminals love to step – sending phishing scams that pretend to be security notifications.

Our recommendations, therefore, are as follows:

  • If you receive an email claiming to be from Virgin Media, ignore the contact details in that email. Use an existing account or your original contract to find an official phone number or website, and get in touch that way. It’s slightly less convenient if the email turns out to be genuine, but it makes it very much harder for the crooks to trick you into contacting them instead if – as is more likely – the email is fake.
  • Read our article, What you sound like after a data breach. We wrote it a few years ago as a satirical piece, but there’s a lot in there you can learn from. As Mark Stockley put it back in 2015, “Hopefully you’ve never had anything stolen in a data breach, but if you have, I hope you’ve been spared the salted wound of the non-apology.”
  • Learn how to build a cybersecurity-aware culture in your own business. Sophos CISO Ross McKerchar has six tips to bolster the “human firewall” that makes it less likely you’ll let data leak out in the first place.

One billion Android smartphones racking up security flaws

How long do Android smartphones and tablets continue to receive security updates after they’re purchased?

The slightly shocking answer is barely two years, and that’s assuming you bought the handset when it was first released. Even Google’s own Pixel devices max out at three years.

Many millions of users hang on to their Android devices for much longer, which raises questions about their ongoing security as the number of serious vulnerabilities continues to grow.

Add up all the Android handsets no longer being updated and you get big numbers. According to Google’s developer dashboard last May, almost 40% of Android users still use handsets running versions 5.0 to 7.0, which haven’t been updated for between one and four years, and one in ten run something even older than that – equivalent, all told, to around one billion devices.

The point is brought home by new testing from consumer group Which?, which found it was possible to infect popular older handsets mainly running Android 7.0 – the Motorola X, Samsung Galaxy A5, Sony Xperia Z2, Google Nexus 5 (LG), and the Samsung Galaxy S6 – with mobile malware.

All of the above were vulnerable to a recently discovered Bluetooth flaw known as BlueFrag, and to the Joker strain of malware from 2017. The older the device, the more easily it could be infected – Sony’s Xperia Z2, running Android 4.4.2, was vulnerable to the Stagefright flaw from 2015.

Google recently had to remove 1,700 apps containing Joker (aka Bread) from its Play Store, only the latest in an increasingly desperate rearguard action against malware being hosted under its nose.

It’s not simply that these devices aren’t getting security fixes – older models also miss out on a bundle of security and privacy enhancements that Google has added to versions 9 and 10.

Kate Bevan, Which? Computing editor (and formerly of Naked Security), said:

It’s very concerning that expensive Android devices have such a short shelf life before they lose security support – leaving millions of users at risk of serious consequences if they fall victim to hackers.

Bevan raised the interesting point that most Android users will have no idea their device might only get security updates for two years:

Google and phone manufacturers need to be upfront about security updates, with clear information about how long they will last and what customers should do when they run out.

Google has issued the same statement to several media outlets in response to the report:

We’re dedicated to improving security for Android devices every day.

We provide security updates with bug fixes and other protections every month, and continually work with hardware and carrier partners to ensure that Android users have a fast, safe experience with their devices.

In truth, users are being squeezed between two forces. On the one hand, Google is determined to drive the evolution of Android for competitive reasons, releasing a new version every year.

On the other are manufacturers, eager to keep people upgrading to new models on the pretext that the older ones won’t run these updated versions (which is not always true).

Security sits somewhere between the two, and despite attempted reforms by Google in recent years to make security fixes happen on a monthly cycle, the reality is some way from that ideal.

Eventually, there comes a time to discard an old device – but for most users, that point arrives well after the two-year mark.

To ram home the point about flaws, the March 2020 Android security bulletin patched a MediaTek flaw, CVE-2020-0069, which had been actively exploited in the wild for several months.

And yet MediaTek says it has had a fix for the flaw since last May – device makers simply didn’t apply it. Even now that the flaw is namechecked in Google’s update, the fix could take months to percolate through to devices, because updates happen so slowly. And this is a flaw known to be exploited in the wild.

Android users can check their Android version and get security updates by following this advice from Google.
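
If you’re comfortable with a command line, you can also read the relevant details straight off a handset. Here’s a minimal sketch in Python, assuming you have adb (the Android Debug Bridge) installed and USB debugging enabled on the device:

    # A minimal sketch: query a connected Android device for its OS
    # version and its most recent security patch level.
    # (Assumes adb is installed and USB debugging is enabled.)
    import subprocess

    def getprop(prop: str) -> str:
        out = subprocess.run(["adb", "shell", "getprop", prop],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    release = getprop("ro.build.version.release")       # e.g. "7.0"
    patch = getprop("ro.build.version.security_patch")  # e.g. "2017-09-01"

    print(f"Android version: {release}")
    print(f"Security patch level: {patch}")

A patch level more than a few months old is a strong hint that your device has quietly dropped off the update treadmill.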


Latest podcast – special episode

Remote working due to coronavirus? Here’s how to do it securely…

Many if not most organisations have already crossed the “working from home”, or at least the “working while on the road” bridge.

If you’re on the IT team, you’re probably used to preparing laptops for staff to use remotely, and setting up mobile phones with access to company data.

But global concerns over the current coronavirus (Covid-19) outbreak, and the need to keep at-risk staff away from the office, means that lots of companies may soon and suddenly end up with lots more staff working from home…

…and it’s vital not to let the precautions intended to protect the physical health of your staff turn into a threat to their cybersecurity health at the same time.

Importantly, if you have a colleague who needs to work from home specifically to stay away from the office then you can no longer use the tried-and-tested approach of getting them to come in once to collect their new laptop and phone, and to receive the on-site training that you hope will make them a safer teleworker.

You may end up needing to set remote users up from scratch, entirely remotely, and that might be something you’ve not done a lot of in the past.

So here are our five tips for working from home safely.

1. Make sure it’s easy for your users to get started

Look for security products that offer what’s called an SSP, short for Self-Service Portal.

What you are looking for is a service to which a remote user can connect, perhaps with a brand new laptop they ordered themselves, and set it up safely and easily without needing to hand it over to the IT department first.

Many SSPs also allow the user to choose between different levels of access, so they can safely connect up either a personal device (albeit with less access to fewer company systems than they’d get with a dedicated device), or a device that will be used only for company work.

The three key things you want to be able to set up easily and correctly are: encryption, protection and patching.

Encryption means making sure that full-device encryption is turned on and activated, which protects any data on the device if it gets stolen; protection means that you start off with known security software, such as anti-virus, configured in the way you want; and patching means making sure that the user gets as many security updates as possible automatically, so they don’t get forgotten.
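
As an illustration of that first item, here’s a minimal sketch of the sort of check an IT team might script for remote endpoints, using each operating system’s own built-in tooling – fdesetup on macOS and manage-bde on Windows. A real endpoint-management product would do this centrally and automatically, so treat this purely as a sketch of the underlying idea:

    # A minimal sketch: report full-device encryption status using the
    # OS's built-in tools (FileVault via fdesetup on macOS, BitLocker
    # via manage-bde on Windows).
    import platform
    import subprocess

    def encryption_status() -> str:
        system = platform.system()
        if system == "Darwin":
            # fdesetup is Apple's built-in FileVault admin tool.
            out = subprocess.run(["fdesetup", "status"],
                                 capture_output=True, text=True, check=True)
            return out.stdout.strip()
        if system == "Windows":
            # manage-bde ships with Windows and reports BitLocker status.
            out = subprocess.run(["manage-bde", "-status", "C:"],
                                 capture_output=True, text=True, check=True)
            return out.stdout.strip()
        return f"No built-in check implemented for {system}"

    if __name__ == "__main__":
        print(encryption_status())

Output from a check like this is also exactly the sort of evidence you’ll be glad to have on file, as we explain next.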

Remember that if you do suffer a data breach, such as a lost laptop, you may well need to disclose the fact to the data protection regulator in your country.

If you want to be able to claim that you took the right precautions, and thus that the breach can be disregarded, you’ll need to produce evidence – the regulator won’t just take your word for it!

2. Make sure your users can do what they need

If users genuinely can’t do their job without access to server X or to system Y, then there’s no point in sending them off to work from home without access to X and Y.

Make sure you have got your chosen remote access solution working reliably first – force it on yourself! – before expecting your users to adopt it.

If there are any differences between what they might be used to and what they are going to get, explain the difference clearly – for example, if the emails they receive on their phone will be stripped of attachments, don’t leave them to find that out on their own.

They’ll not only be annoyed, but will probably also try to make up their own tricks for bypassing the problem, such as asking colleagues to upload the files to private accounts instead.

If you’re the user, try to be understanding if there are things you used to be able to do in the office that you have to manage without at home.

3. Make sure you can see what your users are doing

Don’t just leave your users to their own devices (literally or figuratively).

If you’ve set up automatic updating for them, make sure you also have a way to check that it’s working, and be prepared to spend time online helping them fix things if they go wrong.

If their security software produces warnings that you know they will have seen, make sure you review those warnings too, and let your users know what they mean and what you expect them to do about any issues that may arise.

Don’t patronise your users, because no one likes that; but don’t leave them to fend for themselves, either – show them a bit of cybersecurity love and you are very likely to find that they repay it.

4. Make sure they have somewhere to report security issues

If you haven’t already, set up an easily remembered email address, such as security911 @ yourcompany DOT example, where users can report security issues quickly and easily.

Remember that a lot of cyberattacks succeed because the crooks try over and over again until one user makes an innocent mistake – so if the first person to see a new threat has somewhere to report it where they know they won’t be judged or criticised (or, worse still, ignored), they’ll end up helping everyone else.

Teach your users – in fact, this goes for office-based staff as well as teleworkers – only to reach out to you for cybersecurity assistance by using the email address or phone number you gave them. (Consider snail-mailing them a card or a sticker with the details printed on it.)

If they never make contact using links or phone numbers supplied by email, then they are very much less likely to get scammed or phished.

5. Make sure you know about “shadow IT” solutions

Shadow IT is where non-IT staff find their own ways of solving technical problems, for convenience or speed.

If you have a bunch of colleagues who are used to working together in the office, but who end up flung apart and unable to meet up, it’s quite likely that they might come up with their own ways of collaborating online – using tools they’ve never tried before.

Sometimes, you might even be happy for them to do this, if it’s a cheap and cheerful way of boosting team dynamics.

For example, they might open an account with an online whiteboarding service – perhaps even one you trust perfectly well – on their own credit card and plan to claim it back later.

The first risk everyone thinks about in cases like this is, “What if they make a security blunder or leak data they shouldn’t?”

But there’s another problem that lots of companies forget about, namely: what if, instead of being a security disaster, it’s a conspicuous success?

A temporary solution put in place to deal with a public health issue might turn into a vibrant and important part of the company’s online presence.

So, make sure you know whose credit card it’s charged to, and make sure you can get access to the account if the person who originally created it forgets the password, or cancels their card.

So-called “shadow IT” isn’t just a risk if it goes wrong – it can turn into a complicated liability if it goes right!

Most of all…

Most of all, if you and your users suddenly need to get into teleworking, be prepared to meet each other halfway.

For example, if you’re the user, and your IT team suddenly insists that you start using a password manager and 2FA (those second-factor login codes you have to type in every time)…

…then just say “Sure,” even if you hate 2FA and have avoided it in your personal life because you find it inconvenient.
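
(In case you’re wondering what those codes actually are: most authenticator apps generate TOTPs, or time-based one-time passwords, computed from a secret you share with the server when you enrol plus the current time. Here’s a minimal sketch using the third-party pyotp Python library – an assumption on our part, because your company’s 2FA might use SMS codes, push approvals or hardware tokens instead.)

    # A minimal sketch of TOTP, the algorithm behind most authenticator
    # apps. (Uses the third-party pyotp library; the secret here is
    # generated on the spot purely for illustration.)
    import pyotp

    secret = pyotp.random_base32()   # normally delivered as a QR code at enrolment
    totp = pyotp.TOTP(secret)

    code = totp.now()                # a fresh 6-digit code every 30 seconds
    print(f"Current code: {code}")
    print(f"Verifies right now? {totp.verify(code)}")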

And if you’re the sysadmin, don’t ignore your users, even if they ask questions you think they should know the answer to by now, or if they ask for something you’ve already said “No” to…

…because it might very well be that they’re asking because you didn’t explain clearly the first time, or because the feature they need really is important to doing their job properly.

We’re living in tricky times, so try not to let matters of public health cause the sort of friction that gets in the way of doing cybersecurity properly!

Run ANDROID on an iPhone? Are you SERIOUS?!?

We did a double-take when we saw the tweet.

In hindsight, we’re not sure why, because the announcement was short, even for a tweet, and entirely unambiguous:

IT’S ANDROID. FOR THE IPHONE.

And it really is as simple as that.

Actually, if we’re honest, it’s not quite that simple, as you can see if you look at the “what works” matrix on the Project Sandcastle website.

The “what works by model” matrix shortly after the project was announced.
[Screenshot at 2020-03-05T18:30Z]

The green continents and islands denote the components in each device that work properly, while the pink oceans are the bits that you can’t use.

In other words, the phone part of your phone – the row labelled Cellular – won’t work anywhere, so the one thing you won’t be turning your iPhone into is, not to put too fine a point on it, a phone.

Likewise, no audio, even on an iPod; no camera; no Bluetooth; and on some devices, no display.

But the really bad news is the CPU row, which has only three green squares, and tells you that the Sandcastle builds will only work on iPhone 7 devices (and the iPod 7G) for now.

If you happen to have a surplus-to-requirements iPhone 7 lying around and you decide to give this Android thing a spin, please let us know in the comments how you got along. (Some users are reporting serious overheating issues, so take care out there!)

Jailbreaking revisited

Freeing up Apple iDevices to run alternative firmware builds has always divided the IT industry’s opinion – even if all you want to do is run an official iOS version configured in a non-standard way, for example with an SSH server running so you can log in on the command line from your laptop.

It’s known as jailbreaking, a loaded metaphor that different observers interpret in interestingly different ways.

To some, jailbreaking represents a righteous fight for digital freedom, assuming that you’re jailbreaking a device that you bought yourself with your own after-tax income.

To others, it’s evidence of a scofflaw attitude to digital society, typically carried out to get rid of lawfully implemented controls over intellectual property. (Meaning: people do it so they can pirate stuff.)

Indeed, Corellium, the company behind Project Sandcastle, has only two blog postings on its website, and they relate to legal action from Apple to do with “freeing up” iPhones.

But, as Corellium points out on the Sandcastle page:

Android for the iPhone has many exciting practical applications, from forensics research to dual-booting ephemeral devices to combatting e-waste. Our goal has always been to push mobile research forward, and we’re excited to see what the developer community builds from this foundation.

We’re particularly sympathetic to the idea of “combatting e-waste”, not least because, unless you jailbreak, the only way to keep using an iPhone after Apple stops supporting it is to run it indefinitely without any security updates.

In other words, if you prefer to repurpose rather than to recycle/replace old electronics (because we know you’d never dump old phones into landfill), then you’re on the horns of a dilemma.

Either you have to figure out your own security fixes and then jailbreak to apply them, running the risk of being called a scofflaw yourself.

Or you have to run the gauntlet of the scofflaw cybercriminals who already have access to a range of attacks that they know you won’t – can’t, in fact – have patched against.

What to do

For the record, we usually end any stories of this sort by advising against allowing jailbroken phones on your business network – indeed, our own Sophos Mobile product helps you to keep jailbroken and rooted devices at arm’s length if that’s what you want.

That’s for the uncomplicated reason that, for IT staff at work, “life’s already too short” without having to deal with mobile devices that are in an unknown and untested state. (In other words, while jailbreaking may allow you to improve security, it frequently, if inadvertently, does the opposite.)

In this case, we don’t think we need to add a “don’t try this at work” warning, given how limited the range and functionality of the current Sandcastle builds are.

If you do want to try it at home, however, you can indeed have Android on your iPhone, provided you don’t want to make any phone calls (although without audio you wouldn’t be able to hear them anyway), as long as you have an iPhone with a model number greater than 6 and smaller than 8.

As Corellium itself says:

Android for the iPhone is in beta and has only had limited testing. Any impact on battery, performance, or other components is unknown. Please use caution in installing and using this version.

