Category Archives: News

Thousands of Android apps contain undocumented backdoors, study finds

What might some Android apps be quietly doing behind the backs of their users?

The answer, according to a succession of studies, is quite a lot, probably more than some users would be comfortable with if they knew about it.

This isn’t necessarily about outright malicious apps so much as legitimate apps taking liberties or shipping with capabilities that users wouldn’t expect to exist.

For example, in March researchers reported that some apps pay a lot of attention to other apps installed on a device, which in theory could be used to gather data on a user’s behaviour and inclinations.

But a recently published study from researchers at Ohio State University, New York University, and the Helmholtz Center for Information Security (CISPA) offers hard evidence that undocumented and hidden behaviours often extend far beyond mere nosy snooping.

Using a sophisticated static analysis tool called InputScope developed for the purpose, the team analysed the behaviour of 150,000 apps, comprising the 100,000 most popular on Google Play in April 2019, plus 30,000 apps pre-installed on Samsung devices, and 20,000 taken from the alternative Chinese market Baidu.

The study examined two issues – what proportion of apps exhibited secret behaviours and how these might be used or abused.

Of the 150,000 apps, 12,706 exhibited a range of behaviours indicating the presence of backdoors (secret access keys, master passwords, and secret commands), while another 4,028 seemed to be checking user input against blacklisted terms covering topics such as political leaders’ names, incidents in the news, and racial discrimination.
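To make that concrete, here’s a minimal, hypothetical sketch in C of the two behaviour types the researchers looked for – a hardcoded “master password” that short-circuits the real login check, and a hidden blacklist applied to user input. The function names and strings below are our own inventions, purely for illustration; the study analysed compiled Android apps, not source code like this.

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* Assumed to exist elsewhere: the app's documented login path. */
    bool verify_with_server(const char *user, const char *pass);

    bool check_login(const char *user, const char *pass) {
        /* Backdoor: a hardcoded master password bypasses the real check. */
        if (strcmp(pass, "debug-2019-master") == 0) {
            return true;
        }
        return verify_with_server(user, pass);
    }

    static const char *blacklist[] = { "banned-phrase-1", "banned-phrase-2", NULL };

    bool is_input_allowed(const char *text) {
        /* Hidden blacklist: silently reject input containing banned terms. */
        for (size_t i = 0; blacklist[i] != NULL; i++) {
            if (strstr(text, blacklist[i]) != NULL) {
                return false;
            }
        }
        return true;
    }

Broadly speaking, a static analysis tool like InputScope hunts for this sort of pattern in compiled code: places where user-supplied input is compared against values hardcoded into the app itself.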

Looking at backdoors, Google Play and alternative app stores such as Baidu showed broadly similar percentages of apps falling into this category – 6.8% and 5.3% respectively.

Interestingly, for pre-installed ‘bloatware’ apps, the percentage showing this behaviour was around 16% – more than double the rate of either of the other sources.

This finding chimes with a public letter sent to Google CEO Sundar Pichai in January by Privacy International that criticised the way that pre-installed apps are often not scrutinised for privacy and security problems, creating a tempting workaround for surveillance.

As a separate 2019 Spanish study documented, the provenance of pre-installed apps is often shadowy, based on commercial tie-ups between phone makers and third parties that the end user would not be aware of.

The latest results would seem to confirm this, not only for behaviours that can be described as backdoors but for secret blacklisting.

That behaviour was uncovered in nearly 4.5% of apps from Baidu, but also in nearly 3.9% of pre-installed apps. The figure for Google Play apps was around 2%.

The important question is what dangers these backdoors and blacklists might pose in practice, beyond the fact that they sound like a bad thing.

The team took a closer look at 30 apps, picked at random from those with more than a million installs, finding that one shipped with the ability for someone to log into its admin interface remotely.

Others could reset user passwords, bypass payment interfaces, initiate hidden behaviours using secret commands, or just stop users from accessing specific, sometimes political content.

Backdoor is an emotive term that covers almost any secret, remote feature users don’t know about, some of which might be legitimate in some circumstances – remotely resetting a lost device, for example. Others looked downright deceptive.

But even if some are legitimate, the fact that they exist creates a potential security hazard should these interfaces become more widely known. That’s the simple reason why backdoors put there for programming convenience are never a good idea, period.

But perhaps the biggest consequence from the study is simply how many Google Play apps exhibit these behaviours. While the Play Store is large, the fact that several thousand apps have hidden backdoors hardly inspires confidence.

Worse, there is currently no easy way, short of the sort of weeks-long analysis carried out by the researchers using a dedicated tool, to know which apps operate in this way.

That’s not so much a backdoor as a blind spot, another problem Google’s sometimes chaotic Android platform could do without.

Will Apple’s “microphone switch” stop your iPad getting bugged?

There’s been a bit of a buzz in the news lately over an “epic new feature” in the next Apple iPad model – the one that’s supposed to come out this year.

A microphone switch!

A real-life, break-in-the-wire(ish) microphone switch so that you can be sure that your iPad really isn’t recording you while you’re in your car (less of a problem these days because few of us are commuting) or sitting around at home (more of an issue now because we’re living, working and teaching our kids in the same place).

Before you get too excited, we thought we’d add a few details to the story, and offer some tips for those of us who probably won’t be splashing out on new tablets this year, even if we wanted to.

The microphone switch isn’t a switch in the conventional sense – you don’t operate it like a regular light switch.

It’s built into the T2 Security chip, which has been part of Apple’s laptop hardware from about 2018 onwards, including recent MacBook Airs and MacBook Pros.

It’s the Security Chip that turns the microphone off, triggered by the laptop itself:

All Mac portables with the Apple T2 Security Chip feature a hardware disconnect that ensures the microphone is disabled whenever the lid is closed. On the 13-inch MacBook Pro and MacBook Air computers with the T2 chip, and on the 15-inch MacBook Pro portables from 2019 or later, this disconnect is implemented in hardware alone.

We’re assuming that the same detector that turns off the screen and triggers the software switch to put your Mac to sleep when you close the lid is what activates and deactivates the microphone.

By hooking that function up to the security chip instead of letting the regular software in the operating system take care of the microphone, Apple has effectively cut macOS out of the equation for detecting “should things be on or off”.

Apparently, Apple is extending the T2 Security Chip’s self-contained switching abilities to its new iPad range, activated by an external trigger that is MFi compliant.

Incidentally, the “Fi” in Apple’s MFi programme isn’t like the Fi in Wi-Fi, but originally meant “for iPod”.

MFi now stands for “made for iPad and iPhone”, and it encompasses physical connectors and charging devices as well as the technology and protocols used for close-proximity wireless connectivity.

According to Apple, this covers “technologies and components” all the way from AirPlay audio, CarPlay and GymKit to Lightning connectors and receptacles, magnetic charging module, and smart connector.

In future, so-called “smart cases” will officially be able to tell the T2 chip that they’re closed, and thus to trigger actions, including disconnecting the microphone, in a way that doesn’t rely on the correct behaviour of any apps, the operating system kernel itself, or even the main device firmware on which the operating system runs.

Of course, this is still a long way from turning off a physical switch, or from physically yanking out a jack from a socket.

A system of this sort also relies on the veracity of your smart case, which has got us speculating about a variant of the “evil cleaner” attack, where a malevolent and well-funded threat actor bribes a hotel cleaner to tamper with your laptop, your phone – or, who knows, your smart case – while you’re out of your room.

What to do?

You don’t need to do anything for this one – we thought it would be harmless fun to speculate about “smart case evil cleaner” attacks.

(We also suspect that few of us will be staying in hotels for a while, or even travelling at all, which makes the attack yet more fanciful still.)

Nevertheless, there is a pretty useful habit you can adopt right away if you want, namely actually powering off your phone (or your laptop) once in a while.

For example, if you want to have a truly private chat – and you may have no other reason than you simply want it to feel private – you can’t just leave your phone behind and head off to a remote location with a picnic basket these days.

So you may want to remember the old-school “power off” trick for your truly private times.

Sure, you have to trust that the phone really has turned itself off, but there are some ways you can be fairly certain it has.

There won’t be any detectable radiation coming from it, for a start, whether that’s electromagnetic in the form of visible light or radio frequencies, or heat dissipated by a running processor.

And if it’s not getting hot then you can safely bury it in a bag – or stash it in a cupboard in the basement with a sign saying “Beware of the Leopard.”

PS. Due to the coronavirus situation at the time of writing, some jurisdictions are requiring that at least some people leave their phones turned on and allow themselves to be tracked for health-related reasons. We are not advocating civil disobedience by turning off your phone if you aren’t supposed to. We’re just reminding you that the microphones and cameras in your phone already have a master switch.



Rights groups appeal to governments over COVID-19 surveillance

Digital and human rights groups have joined in a rare worldwide appeal to governments to respect privacy when handling the COVID-19 crisis.

As the number of known COVID-19 cases around the world exceeds 1.2m and the number of deaths reaches 70,000, more than 100 groups signed a letter to governments urging them to be measured in their response to the virus. They should consider human rights in their effort to track the potential spread of the disease among their populations, the letter said:

States’ efforts to contain the virus must not be used as a cover to usher in a new era of greatly expanded systems of invasive digital surveillance.

Signatories included technology-focused groups such as AI Now, Algorithm Watch, and the World Wide Web Foundation, along with human rights groups like Amnesty International and Human Rights Watch. Several country-specific groups like the Irish Council for Civil Liberties and the Swedish Consumers’ Association also signed up.

The letter explained:

These are extraordinary times, but human rights law still applies. Indeed, the human rights framework is designed to ensure that different rights can be carefully balanced to protect individuals and wider societies. States cannot simply disregard rights such as privacy and freedom of expression in the name of tackling a public health crisis. On the contrary, protecting human rights also promotes public health. Now more than ever, governments must rigorously ensure that any restrictions to these rights is in line with long-established human rights safeguards.

The letter called for governments to take the option of increased digital surveillance off the table unless they met eight conditions:

  • Keep surveillance measures lawful and transparent so that third parties can evaluate them.
  • Have an end date when the extra surveillance measures will cease.
  • Only use the data collected for responding to the pandemic, and for no other purpose.
  • Keep the data safe and explain how it has been anonymised.
  • Watch out for algorithmic bias in surveillance and big data systems that could discriminate against marginalised populations, including racial groups and those living in poverty.
  • Only share with third parties according to the law, and make those data sharing agreements public.
  • Give individuals the right to challenge the collection of their personal data.
  • Give all stakeholders the chance to contribute to policy discussions around surveillance, including public health groups.

As we reported in March, several governments have already kicked off surveillance measures to track the spread of the disease, occasionally without informing citizens directly. Initiatives have included gathering cellphone data covertly using anti-terrorism systems, and in some cases forcing people to prove self-isolation with GPS data and selfies, or face the possibility of police coming to their doors. The UK government is said to have tried sourcing location data directly from telcos to help with the public health effort.

Some other initiatives to use mobile phones as a lockdown enforcement mechanism mobilise people to tell on their neighbours. Police in Bellevue, Washington, have encouraged residents to report anyone seen violating the state’s stay-at-home order using MyBellevue, a municipal smartphone app originally launched to provide public service information.

Others have used cellphone data to take a more macroscopic view of public movements. Google has used anonymous location data from millions of Android phones that have location history enabled to determine how well people are abiding by shelter-in-place and similar orders. Last week, it released reports showing traffic in public spaces across 131 countries. The UK had seen visits to retail and recreational locations drop by 85%, the data revealed.



Firefox zero day in the wild: patch now!

Mozilla just pushed out an update for its Firefox browser to patch a security hole that was already being exploited in the wild.

If you’re on the regular version of Firefox, you’re looking to upgrade from 74.0 to 74.0.1 and if you’re using the Extended Support Release (ESR), you should upgrade from ESR 68.6.0 to ESR 68.6.1.

Given that the bug needed patching in both the latest and the ESR versions, we can assume either that the vulnerability has been in the Firefox codebase at least since version 68 first appeared, which was back in July 2019, or that it was introduced as a side effect of a security fix that came out after version 68.0 showed up.

(If you have ESR version X.Y.0, you essentially remain on the feature set of Firefox X.0, but with all the security fixes that have come out up to and including Firefox (X+Y).0 – here, 68+6 = 74, so ESR 68.6.1 carries the same security fixes as the brand-new 74.0.1. The ESR is therefore popular with IT departments who want to avoid frequent feature updates that might require changes in company workflow, but don’t want to lag behind on security patches.)

What we can’t tell you yet are any details about exactly how long ago the bug was found by the attackers, how they are exploiting it, what they’re doing with it, or who’s been attacked so far.

Right now, Mozilla is saying no more than this:

 CVE-2020-6819: Use-after-free while running the nsDocShell destructor. Under certain conditions, when running the nsDocShell destructor, a race condition can cause a use-after-free. We are aware of targeted attacks in the wild abusing this flaw.

 CVE-2020-6820: Use-after-free when handling a ReadableStream. Under certain conditions, when handling a ReadableStream, a race condition can cause a use-after-free. We are aware of targeted attacks in the wild abusing this flaw.

The bug details in Mozilla’s bug database aren’t open for public viewing yet [2020-04-04T14:30Z], presumably because the Mozilla coders who fixed the flaw have, of necessity, described and discussed it in sufficient detail to make additional exploits very much easier to create.

What does use-after-free mean?

A use-after-free is a class of bug caused by incautious use of memory blocks by a program.

Usually, a program returns blocks of memory to the operating system after it has finished with them, allowing the memory to be used again for something else.

Returning memory when you are done with it stops your program from hogging more and more RAM the longer it runs until the whole system bogs down.

The function call by which memory is returned to be used again is called free(), and once you’ve freed the memory, you rather obviously shouldn’t access it again.

Most importantly, if you read and trust data that now belongs to another part of the program – for example, memory that just got re-allocated as a place to store untrusted content that was downloaded from a web page or generated by JavaScript fetched from outside – then you may inadvertently put your code at the mercy of data that was carefully crafted by a crook and served up to trick you on purpose.
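Here’s a deliberately tiny sketch in C of the pattern – our own illustrative code, nothing to do with Firefox’s actual nsDocShell or ReadableStream implementations, which involve race conditions in far more complex C++ objects:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *buf = malloc(16);       /* grab a block of memory */
        if (buf == NULL) {
            return 1;
        }
        strcpy(buf, "trusted data");
        free(buf);                    /* hand the block back for reuse */

        /* BUG: buf still points at the freed block. If the allocator has
           already recycled that block - say, for attacker-supplied content
           from a web page - this line reads data a crook may control. */
        printf("%s\n", buf);          /* use-after-free: undefined behaviour */
        return 0;
    }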

Not all use-after-free bugs are exploitable, and not all exploits are made equal – for example, an attacker might only be able to change the content of an icon or a message you are about to display, which could be used to deceive users (for example by giving positive feedback when something actually failed), but not to implant malware directly.

But in some cases, use-after-free bugs can allow an attacker to change the flow of control inside your program, including diverting the CPU to run untrusted code that the attacker just poked into memory from outside, thereby sidestepping any of the browser’s usual security checks or “are you sure” dialogs.

That’s the most serious sort of exploit, known in the jargon as RCE, short for remote code execution, which means just what it says – that a crook can run code on your computer remotely, without warning, even if they’re on the other side of the world.

We’re assuming, because these bugs are dubbed critical, that they involve RCE.

What to do?

What one team of crooks has already found, others might find in turn, especially now they have at least a vague idea of where to start looking.

So, as always, patch early, patch often!

Most Firefox users should get the update automatically, but you might as well check to make sure it’s there – because the act of checking will itself trigger an update if you haven’t got it yet.

Click the three-bar (hamburger menu) icon at the top right, then choose Help > About Firefox.

[Screenshots: click the hamburger menu icon, then choose Help > About Firefox to bring up the version dialog and check for updates – shown here with an up-to-date regular install on Windows and an up-to-date ESR install on Linux.]


5 things you can do today to make Zooming safer

Work still means meetings, and meetings still mean people.

But with the coronavirus pandemic having caused many countries to define a “group” as a maximum of two people, and to prohibit face-to-face meetups anyway, even with friends and family, meeting with people now means an online meeting.

For very many of us, that means Zoom, not least because many of us were using Zoom already, and happily, and…

…or so we thought, safely.

But Zoom has had a bunch of security scares recently, as huge numbers of new users flock to it, and as crooks and miscreants try to take advantage of that.

Fortunately, a lot of the problems and risks people are having can be reduced enormously just by getting the basics right.

Unfortunately, a lot of the habits that existing Zoom users have fallen into need to change.

Insecure shortcuts – ways of using Zoom that the old-timers have inadvertently been teaching to the Zoom newcomers – didn’t seem to matter that much before, but they do now.

So here are our top 5 “things to get right first” – they shouldn’t take you long, and they are easy to do.

1. Patch early, patch often

Zoom’s own CEO just wrote a blog post announcing a “feature freeze” in the product so that the company can focus on security issues instead. It’s much easier to do that if you aren’t adding new code at the same time.

Why not get into the habit of checking you’re up-to-date every day, before your first meeting? Even if Zoom itself told you about an update the very last time you used it, get in the habit of checking by hand anyway, just to be sure. It doesn’t take long.

By the way, we recommend you do this with all your software – even if you have been using your operating system’s or an app’s autoupdating for years and it’s always been on time, a manual cross-check is quick and easy.

Zoom’s guide is here: Where do I download the latest version?

2. Use the Waiting Room option

Set up meetings so that participants can’t join in until you open the meeting up.

And if you suddenly find yourself “on hold until the organiser starts the meeting” when in the past you would have spent the time chatting to your colleagues and getting the smalltalk over with, don’t complain – those pre-meeting meetings are great for socialising but they do make it harder to control the meeting.

Zoom has a dedicated article on the Waiting Room feature.

3. Take control over screen sharing

Until recently, most Zoom meetings (or at least the ones we attended in the not-too-distant era before coronavirus) took a liberal approach to screen sharing.

But the term ZoomBombing entered our vocabulary very forcefully about two weeks ago, when a public “Happy Hour” meeting that was supposed to buoy everyone’s morale turned into an HR nightmare when one of the participants, who had entered under a false name, started sharing pornographic filth. (Unhappily for the organiser of the meeting, he’d chosen that day to invite his parents along as guests of honour.)

Actually, it’s not just screen sharing that can cause trouble. There are numerous controls you can apply to participants in meetings, including blocking file sharing and private chat, kicking out disruptive users, and stopping troublemakers coming back.

Zoom has a dedicated article on Managing participants in a meeting.

4. Use random meeting IDs and set meeting passwords

We know lots of Zoom users who memorised their own meeting ID long ago and had fallen into the habit of using it for every meeting they held – even back-to-back meetings with different groups – because they knew they’d never need to look it up.

But that convenience is handy for crooks, too, because they already have a list of known IDs that they can try automatically in the hope of wandering in where they aren’t supposed to be.

We recommend using a randomly generated meeting ID, and setting a password on any meeting that is not explicitly open to all. You can send the web link by one means, e.g. in an email or invitation request, and the password by another, e.g. in an instant message just before the meeting starts. (You can also lock meetings once they start, to keep unwanted visitors from joining after you’ve turned your attention to the meeting itself.)

Zoom has a dedicated article on Meeting and webinar passwords.

5. Make some rules of etiquette and stick to them

Etiquette may sound like a strange bedfellow for cybersecurity, and perhaps it is.

But respect for privacy, a sense of trust, and a feeling of social and business comfort are also important parts of a working life that’s now dominated by online meetings.

If you’re expected or you need to use video, pay attention to your appearance and the lighting. (In very blunt terms: try to avoid being a pain to watch.) Remember to use the mute button when you can.

And most importantly – especially if there are company outsiders in the meeting – be very clear up front if you will be recording the meeting, even if you are in a jurisdiction that does not require you to declare it. And make it clear if there are any restrictions, albeit informal ones, on what the participants are allowed to do with the information they learn in the meeting.

Etiquette isn’t about keeping the bad guys out. But respectful rules of engagement for remote meetings help to make it easy for everyone in the meeting to keep the good stuff in.


