Switch to Signal for encrypted messaging, EC tells staff

Imagine that you work in government or at an NGO – both places that want to keep their communications private.

Understandably, given that governments these days use powerful spyware to surveil political activists, NGOs, and each other, you and your colleagues use an encrypted messaging app.

There’s a good chance that you’ve gone with WhatsApp, which has been a trailblazer in end-to-end encrypted messaging. As early as 2016, The Guardian was referring to the app as a “vital tool” to conduct diplomacy – an app with which diplomats could “talk tactics, arrange huddles, tweak policy – and send Vladimir Putin emojis.”

But given recent events, you have to wonder: what happens if holes develop in that supposed cone of silence?

Like, say, the stupidly simple social engineering hack that the UN said was used – allegedly by the crown prince of Saudi Arabia – to infect Amazon CEO Jeff Bezos’s phone with personal-message-exfiltrating malware, with a single click?

Or the zero-day vulnerability in WhatsApp that allowed attackers to silently install spyware just by placing a video call to a target’s phone? Or, as happened this past weekend, the way that WhatsApp and parent company Facebook shrugged off responsibility for private groups being indexed by search engines, thereby rendering them easy to find and join by anybody who knew the simple search string?

What happens, at least in the case of the European Commission (EC), is that you tell your staff to move over to Signal. Last week, Politico reported that earlier this month, the EC took to internal messaging boards to recommend moving to the alternative end-to-end encrypted messaging app, which it said “has been selected as the recommended application for public instant messaging.”

The EC didn’t mention WhatsApp, per se. It didn’t have to. Security experts have been pointing out reasons why it’s a potential national security risk for a while. Besides its recent and not-so-recent security flubs, there are privacy issues that come with being swallowed up by Facebook. One of WhatsApp’s co-founders, Brian Acton, left the company after the Facebook acquisition, saying that Facebook wanted to do things with user privacy that made him squirm. In his words: “I sold my users’ privacy.”

As Politico notes, privacy activists favor Signal not just because of its end-to-end encryption. Bart Preneel, cryptography expert at the University of Leuven, told the news outlet that, unlike WhatsApp, Signal is open-source, which makes it easy to find security flaws and privacy-jeopardizing pitfalls:

It’s like Facebook’s WhatsApp and Apple’s iMessage, but it’s based on an encryption protocol that’s very innovative. Because it’s open-source, you can check what’s happening under the hood.

Signal is recommended by a who’s who of cybersecurity pros, including Edward Snowden, Laura Poitras, Bruce Schneier, and Matthew Green. “Use anything by [Signal’s developer] Open Whisper Systems,” as Snowden is quoted as saying on the app’s homepage, while Poitras praises its scalability.

Cryptographer Green says he literally started to drool when he looked at the code. While WhatsApp uses the Signal protocol developed by Open Whisper Systems, its code isn’t open-source, so it’s not as easy to spot when something goes awry. Another plus for Signal: unlike WhatsApp, it doesn’t store message metadata in data centers around the world, where it could expose users. Nor does it use the cloud to back up messages, which would further expose them to potential interception.

Sorry, WhatsApp, but you just don’t induce drooling among cryptographers.

Unlike WhatsApp, Signal is operated by a non-profit foundation – one that WhatsApp co-founder Brian Acton put $50 million into after he ditched Facebook – and is applauded for putting security above all else. Like, say, when it fixed a FaceTime-style eavesdropping bug in both Android and iOS on 27 September 2019 – the same day on which it was reported.

It’s not just Signal’s reputation and WhatsApp’s problems that have pushed the EC into recommending that Signal become the private messaging app of choice – also motivating the Commission are multiple high-profile security incidents that have rattled officials and diplomats.

EC officials are already required to use encrypted email when exchanging sensitive, non-classified information, an official told Politico. The recommendation to use Signal mainly pertains to communications between EC staff and people outside the organization, the news outlet reported, and is a sign that diplomats are trying to bolster security in the wake of recent breaches.

The EC isn’t the only governmental body to dump WhatsApp in favor of Signal. As The Guardian reported in December 2019, the UK’s Conservative party switched to Signal following years of leaks from WhatsApp groups.

What’s ironic, of course, is that governments have been hounding companies to put backdoors in all of these products. While law enforcement agencies in multiple countries have been demanding an end to encrypted messaging that they can’t penetrate, those same governments are increasingly turning to ever more reliable forms of encrypted messaging.

What’s good for the gander isn’t quite up to snuff for the goose, apparently.

But while WhatsApp suffers in comparison to Signal, and while at least two government outfits have shed it in favor of Signal, WhatsApp still matters. It’s one of the messaging apps that’s at the heart of the encryption debate. Facebook, alongside Apple, has stood up to the US Congress to defend end-to-end encryption, in the face of lawmakers telling the companies that they’d better put in backdoors – or else they’ll pass laws that force an end to end-to-end encryption.

As Politico reported, in June 2019, senior Trump administration officials met to discuss whether they should seek legislation to ban unbreakable encryption. They didn’t come to an agreement, but such laws are undeniably on the table.

That matters. Regardless of which messaging app the EC switches to, or the Tories, they’re all liable to be outlawed if the world’s superpowers get their way and legislate backdoors into existence. As goes encryption in WhatsApp and Apple’s products, so goes Signal, or Wickr, or any other flavor of secure IP messaging.

And, of course, so goes the stronger security that some government bodies are, ironically enough, moving to embrace.

Watch it, goose and gander, before you wind up cooking both yourself and your own sensitive communications.


Latest Naked Security podcast

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point in the podcast.

Mystery zero-day in Chrome – update now!

Google has issued an update for its widespread Chrome browser to fix three security holes.

Unfortunately, one of those holes is what’s known as a zero-day: a bug that was already being exploited by cybercrooks before Google tracked it down and fixed it.

Google, which is often vociferous about bugs and how they work, especially those found by its own Project Zero and Threat Analysis teams, is playing its cards close to its chest in this case.

As the company’s update notification says:

Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.

We’re guessing that Google is worried that giving away too much at this stage might encourage additional attackers – ones who haven’t figured this bug out yet – to try to get in on the act.

If those crooks know other Bad Guys have already figured out how to exploit this vulnerability for active attacks, then they know that there’s more than just a theoretical chance of abusing the bug if they happen to rediscover it themselves.

So far, then, Google has only offered this comment about the vulnerability:

CVE-2020-6418: Type confusion in V8. Reported by Clement Lecigne of Google’s Threat Analysis Group on 2020-02-18

Google is aware of reports that an exploit for CVE-2020-6418 exists in the wild.

Two researchers at a business called Exodus Intelligence have already published a proof-of-concept exploit, which they devised by studying recent changes in the V8 source code.

Fortunately, their example requires you to visit a web page using Chrome with its so-called sandbox protection turned off.

In regular use, however, Chrome runs with its protective sandbox enabled, so even if this proof-of-concept exploit were to trigger the bug, it couldn’t then grab control from the browser to run malware code of an attacker’s choosing.

We assume that Google’s statement about an exploit “in the wild” refers to an attack that works even if Chrome is run in the usual way.

To explain.

A type confusion bug is where you are able to trick a program into saving data for one purpose (data type A) but then using it later for a different purpose (data type B).

Imagine that a program is very careful about what values it allows you to store into memory when you are treating it as type B.

For example, if a ‘type B’ memory location keeps track of a memory address (a pointer, to use the jargon word), then the program will probably go to great lengths to stop you modifying it however you like.

Otherwise, you might end up with the power to read secret data from memory locations you aren’t supposed to access, or to execute unknown and untrusted program code such as malware.

On the other hand, a memory location that’s used to store something such as a colour you just chose from a menu might happily accept any value you like, such as 0x00000000 (meaning completely transparent) all the way to 0xFFFFFFFF (meaning bright white and totally opaque).

So if you can get the program to let you write to memory under the low-risk assumption that it is storing a colour, but later to use that “colour” as what it thinks is a trusted memory address in order to transfer program execution into your malware code…

…you just used type confusion to bypass the security checks that should have been applied to the memory pointer.

(For performance reasons, a lot of software verifies the safety of data when its value is modified, not every time it is used, on the grounds that if the data was safe when it was saved, it should remain safe until the next time it is modified.)
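That write-time-only check is the crux, and it can be modelled in a few lines. This Python sketch is purely illustrative – the names and structure are ours, not V8’s – but it shows how a value stored under the lax “colour” rules can later be dereferenced as if it were a vetted pointer:

```python
# Toy model of type confusion (illustrative names, not V8 internals).
# One storage slot is written under lax "colour" rules, then reused
# as a trusted "pointer" without being re-checked.

SECRET = "top-secret data"
safe_strings = ["red", "green", "blue"]
# Slot 3 lies past the vetted entries - it holds something sensitive.
memory = safe_strings + [SECRET]

class Cell:
    def __init__(self):
        self.value = 0

    def store_pointer(self, index):
        # Pointer writes are checked carefully at write time...
        if not 0 <= index < len(safe_strings):
            raise ValueError("bad pointer")
        self.value = index

    def store_colour(self, colour):
        # ...but colour writes accept any 32-bit value, unchecked.
        self.value = colour & 0xFFFFFFFF

    def deref(self):
        # The read path trusts that the value was vetted when stored.
        return memory[self.value]

cell = Cell()
try:
    cell.store_pointer(3)     # the honest route is blocked...
except ValueError:
    pass
cell.store_colour(3)          # ...but the "colour" route is not
print(cell.deref())           # prints the secret: the check was bypassed
```

The “fix” for a bug of this sort is either to re-validate at use time (slower) or to make sure the two code paths can never share a storage slot in the first place.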

What’s V8?

V8, in case you are wondering, is the JavaScript “engine” that is built into the Chrome browser.

Numerous other projects use V8, notably the node.js software development system, widely used these days for server programming, and Microsoft’s new-but-not-quite-official-yet variant of its Edge browser, which is based on Google’s V8 engine rather than Microsoft’s own ChakraCore JavaScript system.

We’re assuming that if other V8-based applications do turn out to share this bug, they will soon be patched too – but as far as we know now [2020-02-25T18:50Z], the in-the-wild exploit only applies to V8 as used in Chrome itself.

What to do?

As Google reports:

The [regular release version] has been updated to 80.0.3987.122 for Windows, Mac, and Linux, which will roll out over the coming days/weeks.

However, given what seems to be a clear and present danger in this case, we suggest that you don’t wait for your Chrome to get round to updating by itself – go and check for yourself if you’re up-to-date.

And remember, patch early, patch often, especially if the crooks are already ahead of you!

The “Cloud Snooper” malware that sneaks into your Linux servers

SophosLabs has just published a detailed report about a malware attack dubbed Cloud Snooper.

The reason for the name is not so much that the attack is cloud-specific (the technique could be used against pretty much any server, wherever it’s hosted), but that it’s a sneaky way for cybercrooks to open up your server to the cloud, in ways you very definitely don’t want, “from the inside out”.

The Cloud Snooper report covers a whole raft of related malware samples that our researchers found deployed in combination.

It’s a fascinating and highly recommended read if you’re responsible for running servers that are supposed to be both secure and yet accessible from the outside world – for example, websites, blogs, community forums, upload sites, file repositories, mail servers, jump hosts and so forth.

In this article, we’re going to focus on just one of the components in the Cloud Snooper menagerie, because it’s an excellent reminder of how devious crooks can be, and how sneakily they can stay hidden, once they’re inside your network in the first place.

If you’ve already downloaded the report, or have it open in another window, the component we’re going to be talking about here is the file called snd_floppy.

That’s a Linux kernel driver used by the Cloud Snooper crooks so that they can send command-and-control instructions right into your network, but hidden in plain sight.

If you’ve heard of steganography, which is where you hide snippets of data in otherwise innocent-looking files such as videos or images where a few “noise” pixels won’t attract any attention, then this is a similar sort of thing, but for network traffic.

As we say in the steganography video that we linked to in the previous paragraph:

You don’t try and scramble the message so nobody can read it, so much as deliver a message in a way that no one even realises you’ve sent a message in the first place.

In-band signalling

The jargon term for the trick that the snd_floppy driver uses is in-band signalling, which is where you use unexceptionable but unusual data patterns in regular network traffic to denote something special.

Readers whose IT careers date back to the modem era will remember – probably unfondly – that many modems would “helpfully” interpret three plus signs (+++) at any point in the incoming data as a signal to switch into command mode, so that the characters that came next would be sent to the modem itself, not to the user.

So if you were downloading a text file with the characters HELLO+HOWDY in it, you’d receive all those characters, as expected.

But if the joker at the other end deliberately sent HELLO+++ATH0 instead, you would receive the text HELLO, but the modem would receive the text ATH0, which is the command to hang up the phone – and so HELLO would be the last thing you’d see before the line went dead.
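That modem misfeature is easy to model. This Python sketch is a deliberate simplification (real modems also used guard times around the escape sequence), but it captures the idea: ordinary data flows through to the user, while anything after `+++` is diverted to the modem’s own command handler:

```python
# Simplified model of a modem's in-band "+++" escape: ordinary data
# flows through to the user, but anything after "+++" is treated as
# a command for the modem itself.

def modem_receive(stream: str):
    escape = stream.find("+++")
    if escape == -1:
        return stream, None               # plain data, no command
    user_data = stream[:escape]
    command = stream[escape + 3:]         # e.g. "ATH0" = hang up the line
    return user_data, command

print(modem_receive("HELLO+HOWDY"))       # a lone '+' is just data
print(modem_receive("HELLO+++ATH0"))      # '+++' hijacks the rest
```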

This malware uses a similar, but undocumented and unexpected, approach to embedding control information in regular-looking data.

The crooks can therefore hide commands where you simply wouldn’t think to watch for them – or know what to watch for anyway.

A sneaky name

In case you’re wondering, there isn’t a legitimate Linux driver called snd_floppy, but it’s a sneakily chosen name, because there are plenty of audio drivers called snd_somethingorother, as you can see from this list we extracted from our own Linux system:

# awk '/^snd_/ {print $1}' /proc/modules | sort
snd_hda_codec
snd_hda_codec_generic
snd_hda_codec_hdmi
snd_hda_codec_realtek
snd_hda_core
snd_hda_intel
snd_hwdep
snd_intel_nhlt
snd_pcm
snd_timer
#

In real life, the bogus snd_floppy driver has nothing to do with floppy disks, emulated or real, and nothing to do with sound or audio support.

What snd_floppy does is to monitor innocent-looking network traffic to look for “in-band” characteristics that act as secret signals.

There are lots of things that “sniffer-triggered” malware like this could look out for – slightly weird HTTP headers, for instance, or web requests of a very specific or unusual size, or emails with an unlikely but not-too-weird name in the MAIL FROM: line.

But snd_floppy has a much simpler and lower-level trick than that: it uses what’s called the network source port for its sneaky in-band signals.

You’re probably familiar with TCP destination ports – they’re effectively service identifiers that you use along with an IP address to denote the specific program you want to connect to on the server of your choice.

When you make an HTTP connection, for example, it’s usually sent to port 80, or 443 if it’s HTTPS, on the server you’re reaching out to, denoted in full as http://example.com:80 or https://example.com:443. (The numbers are typically omitted whenever the standard port is used.)

Because TCP supports multiple port numbers on every server, you can run multiple services at the same time on the same server – the IP number alone is like a street name, with the port number denoting the specific house you want to visit.

But every TCP packet also has a source port, which is set by the other end when it sends the packet, so that traffic coming back can be tracked and routed correctly, too.

Now, the destination port is almost always chosen to select a well-known service, which means that everyone sticks to a standard set of numbers: 80 for HTTP and 443 for HTTPS, as mentioned above, or 22 for SSH, 25 for email, and so on.

But TCP source ports only need to be unique for each outbound connection, so most programmers simply let the operating system choose a port number for them, known in the jargon as an ephemeral port.

Wireshark packet capture showing an ephemeral source port (53922)
chosen for a connection to a web server (port 80).

Ports are 16-bit numbers, so they can range from 0 to 65535 (port 0 is reserved, leaving 1 to 65535 usable); ephemeral ports are usually chosen (randomly or in sequence, wrapping around back to the start after the end of their range) from the set 49152 to 65535.

Windows and the BSD-based operating systems use this range; Linux does it slightly differently, usually starting at 32768 instead – you can check the range used on your Linux system as shown below.

On our Linux system, for example, ephemeral (also known as dynamic) ports vary between 32768 and 60999:

$ /sbin/sysctl net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768 60999

But there are no rules to say you can’t choose numbers outside the ephemeral range, and most firewalls and computers will accept any legal source port on incoming traffic – because it is, after all, legal traffic.
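You can see this for yourself: a client socket will happily bind a specific, non-ephemeral source port before connecting. This Python sketch picks port 6060 to echo the trigger ports discussed below, but any free port above 1023 works (lower ports need root):

```python
import socket

# Bind an explicit, non-ephemeral source port before connecting,
# instead of letting the OS pick an ephemeral one for us.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", 6060))      # source port chosen by us, not the OS

# sock.connect(("example.com", 80))   # traffic would now leave from port 6060

print(sock.getsockname()[1])          # confirms the source port we asked for
```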

You can see where this is going.

Secret source port signals

The devious driver snd_floppy uses the usually unimportant numeric value of the TCP source port to recognise “secret signals” that have come in from outside the firewall.

The source port – just 16 pesky bits in the entire packet – is what sneaks the message in through the firewall, whereupon snd_floppy will perform one of its secret functions based on the port number, including:

  • Extract and launch a malware program. The malware program is packaged up as data inside the driver and is only extracted and run when this command arrives. This means the malware program itself isn’t visible when it’s not in active use. (Source port=6060.)
  • Redirect this packet to the malware. This means that packets unexceptionably aimed at, say, a web server – traffic that the firewall will typically accept – can be sneakily diverted once inside to act as malware command-and-control signals. (Source port=7070.)
  • Terminate and remove the running malware. This not only kills the malware process but also gets rid of its program file when it is no longer active. You won’t find the malware file because it will no longer be there. (Source port=9999.)
  • Divert this packet to the internal SSH server. If SSH (typically used for remote logins) is blocked from the outside, the crooks can now sneak their SSH traffic in via, say, the web server’s TCP port and then have it diverted once it’s through. (Source port=1010.)
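Conceptually, the dispatch is just a lookup keyed on the source port. This Python fragment sketches the idea – the port numbers come from the list above, but the handlers are placeholders, and the real snd_floppy does all of this inside the kernel, not in userland:

```python
# Sketch of snd_floppy-style dispatch: the action taken depends only
# on the TCP source port of an otherwise ordinary-looking packet.
# Port numbers are from the report; the handlers are illustrative.

def launch_malware(pkt):  return "extracted and launched payload"
def feed_malware(pkt):    return "packet diverted to malware C2"
def kill_malware(pkt):    return "malware stopped and file removed"
def divert_to_ssh(pkt):   return "packet rerouted to local SSH daemon"

SECRET_PORTS = {
    6060: launch_malware,
    7070: feed_malware,
    9999: kill_malware,
    1010: divert_to_ssh,
}

def handle_packet(src_port, payload):
    handler = SECRET_PORTS.get(src_port)
    if handler is None:
        return "pass through untouched"   # normal traffic is left alone
    return handler(payload)

print(handle_packet(53922, b"GET / HTTP/1.1"))  # ordinary ephemeral port
print(handle_packet(6060, b"GET / HTTP/1.1"))   # same payload, secret port
```

Note that the payload itself is irrelevant to the trigger – which is exactly why this traffic looks so innocent on the wire.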

Sure, the crooks are taking a small risk that traffic that wasn’t specially crafted by them might accidentally trigger one of their secret functions, which could get in the way of their attack.

But most of the time it won’t, because the crooks use source port numbers below 10000, while conventional software and most modern operating systems stick to source port numbers of 32768 and above.

What to do?

  • If you’re worried about this particular malware, you could try setting special rules in your firewall to block the control packets specific to Cloud Snooper.

For details of the port numbers used and what they are for, please see the full Cloud Snooper report.

As suggested above, there is a small chance that source port filtering of this sort might block some legitimate traffic, because it’s not illegal, merely unusual, to use source port numbers below 32768.

Also, the crooks could easily change the “secret numbers” in future variants of the malware, so this would be a temporary measure only.

Traffic flow of the Cloud Snooper network filter showing the source port numbers used.
See the report for a full explanation.

There are five TCP source port numbers that the driver watches out for, and one UDP source port number. Ironically, leaving just TCP source port 9999 unblocked would allow any “kill payload” commands to get through, thus allowing the crooks to stop the malware but not to start it up again.
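If you do go the source-port-filtering route, the rule boils down to a predicate on one 16-bit field. This Python sketch uses the four TCP trigger ports from the bullet list above (the report lists the rest, and a real deployment would express this in your firewall’s own rule language):

```python
# Block known Cloud Snooper control ports, with the option of leaving
# the "kill" port open so the malware can be stopped but not restarted.
# Ports are from the bullet list above; see the report for the full set.

CLOUD_SNOOPER_PORTS = {6060, 7070, 9999, 1010}

def should_drop(src_port, allow_kill=True):
    if src_port == 9999 and allow_kill:
        return False                       # let "kill payload" through
    return src_port in CLOUD_SNOOPER_PORTS

print(should_drop(53922))   # ordinary ephemeral port: allowed
print(should_drop(6060))    # "launch malware" trigger: dropped
print(should_drop(9999))    # "kill" trigger: deliberately allowed
```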

  • If you aren’t already, consider using a Linux anti-virus that can detect and prevent malware files from launching.

This will help you to spot and stop dangerous files of many types, including rogue kernel drivers, unwanted userland programs, and malicious scripts.

  • Revisit your own remote access portals – pick proper passwords, and use 2FA.

Crooks need administrator-level access to your network to load their own kernel drivers, which means that by the time you are vulnerable to an attack like Cloud Snooper, the crooks are potentially in control of everything anyway.

Many network-level attacks where criminals need root or admin powers are made possible because the crooks find their way in through a legitimate remote access portal that wasn’t properly secured.

  • Review your system logs regularly.

Yes, crooks who already have root powers can tamper with your logging configuration, and even with the logs themselves, making it harder to spot malicious activity.

But it’s rare that crooks are able to take over your servers without leaving some trace of their actions – such as log entries showing unauthorised or unexpected kernel drivers being activated.

The only thing worse than being hacked is realising, after you’ve been hacked, that you could have spotted the attack before it unfolded – if only you’d taken the time to look.


KidsGuard stalkerware leaks data on secretly surveilled victims

“KidsGuard?”

What an inappropriate name. It should be called KidsStalk-N-Dox, given that the makers of this consumer-grade stalkerware left a server open and unprotected, regurgitating the private data it slurped up from thousands of victims’ devices after a parent or other surveillance-happy person stealthily installed it.

The spyware app’s unprotected Alibaba cloud storage bucket was found by Till Kottmann. He’s a developer who reverse-engineers apps to see how they tick (or leak, in this case). Kottmann shared a copy of the Android version of KidsGuard with TechCrunch, which first reported on the data breach on Thursday.

Kottmann’s findings amount to “Goodness, Grandma, what enormous bites you take out of victims’ privacy with those big, keyloggy teeth of yours.”

KidsGuard comes from a company called ClevGuard that promises that its “excellent products” will deliver “all the information” from a targeted device, including real-time location, text messages, browser history, photos, videos, recordings of phone calls, keylogger data for every keystroke entered and the app where it came from, and all the data from all the social apps – hopping over the end-to-end encryption of, for example, WhatsApp.


KidsGuard Pro keylogger capture of WhatsApp message. IMAGE: ClevGuard demo

According to TechCrunch’s Zack Whittaker, the Alibaba storage bucket was apparently set to public: a common mistake with cloud storage buckets. Another mistake: it was left wide open, without a password.

After TechCrunch contacted ClevGuard, it shut down the exposed cloud storage bucket. The news outlet also contacted Alibaba, which similarly alerted the company about the leak.

Here we go again

KidsGuard is like many other commercial-grade spyware apps in that the stalker needs to have physical access to a device in order to install it. It just takes a few minutes. Whittaker reports that after installation, there’s no rooting or jailbreaking required.

ClevGuard says the app can also be used for iPhones without access to the device (as long as the user doesn’t have 2FA on, in which case you would need to access the phone) if you give it the target’s iCloud credentials.

The Android version that TechCrunch and Kottmann checked out also requires that some security features be disabled, such as allowing non-Google approved apps to be installed and disabling Google Play Protect, Google’s built-in malware protection for Android.

After that, it runs in stealth mode, convincingly posing as an Android “system update” app. It’s tough for a victim to know that their device has been boobytrapped, given that there’s no app icon for them to spot.

That leaves KidsGuard to freely siphon photos, videos, recordings of phone calls, and to monitor activity on a slew of apps, including on dating apps such as Tinder. It also secretly takes screenshots of a victim’s conversations in apps such as Snapchat and Signal, which have supposedly ephemeral messages that disappear. As we’ve noted in the past with regards to Snapchat, those messages don’t disappear, KidsGuard being one of many ways for them to be captured.

Cooper Quintin, senior staff technologist at the Electronic Frontier Foundation (EFF), told TechCrunch that it’s “both alarming and sickening” that the exposed data includes not only that of adults, but also of children.

This is evidence that not only are spouseware and stalkerware companies morally bankrupt, they are also often failing to protect their stolen user data once they have it.

KidsGuard isn’t the first spyware maker that has fumbled victims’ data. It happened with MobiiSpy in March 2019. It happened twice with mSpy, which leaked millions of records in September 2018 and, before that, had its database leaked online in 2015.

For its part, Retina-X Studios, the company behind PhoneSheriff, TeenShield, SniperSpy and Mobile Spy, was repeatedly hacked, first in April 2017 and again in February 2018.

Retina-X finally threw in the towel on the surveillance business a month after that… and then had to settle charges brought by the Federal Trade Commission (FTC) for failing to keep its products from being used as illegal stalking apps.

What to do

Whittaker put together a “detect-and-destroy” guide for identifying and removing KidsGuard from your Android phone, but first, you need to check whether the app has been installed: Go to Settings > Apps, and see if “System Update Service” is listed. This is the name that ClevGuard has given the stalkerware to hide it from the user.

If you think your Android device has been infected with KidsGuard stalkerware, check out the rest of his guide for instructions on removing it.

For iPhone users, Paul Ducklin has the following advice:

If someone has full remote access to your iCloud then you’re in big trouble. They can find out loads about you, and can change it all, too, including resetting your own password and locking you out of your account. So don’t delay, use 2FA today.

If you suspect someone else has access to your iCloud but hasn’t locked you out, go in yourself, change your password and review your settings.



Google purges 600 Android apps for “disruptive” pop-up ads

You know those ads that obscure your whole screen when you’re trying to make a phone call, unlock your device or use your phone’s GPS?

Technically, they’re called disruptive or out-of-app ads, and they maddeningly pop up outside of the app that hosts them, sometimes causing users to mistakenly click them, thereby frustrating users and wasting advertisers’ money.

On Thursday, Google kicked nearly 600 of the offending apps off its Play store and banned them from its ad monetization platforms, Google AdMob and Google Ad Manager, for violating its disruptive ads policy and disallowed interstitial policy.

Disruptive ads are those that come at you in unexpected ways, including by getting in the way of a device’s functions. While they do occur in-app, Google has recently seen a rise in what it calls “out-of-context ads” – those created by malicious developers who program them to pop up when the user isn’t actually active in their app.

Per Bjorke, Google’s senior product manager for ad traffic quality, said in a Google security blog post that the developers behind these apps keep coming up with ways to deploy them and mask what they’re up to. But Google has been working on technology to detect them, and it’s led to Thursday’s purge:

We recently developed an innovative machine-learning based approach to detect when apps show out-of-context ads, which led to the enforcement we’re announcing today.

Also on Thursday, Google detailed a three-step plan to keep the Play Store and Android ad ecosystem from getting polluted by disruptive ads and other challenges.

One of those steps is doubling down on protecting advertisers from invalid traffic like that coming from disruptive, out-of-app ads. Sweeping the Play store of such apps on Thursday is one example, Google said, given that its investigations are ongoing and it plans to keep taking action against this kind of abuse.

Bjorke told BuzzFeed News that the apps removed on Thursday had been installed more than 4.5 billion times and that they primarily targeted English-speaking users. He also said that the apps mainly came from developers based in China, Hong Kong, Singapore, and India.

Bjorke declined to name specific apps or developers, but said that many were utilities or games. However, BuzzFeed News reporter Craig Silverman, who’s been reporting on Play Store fraud for a number of years, says that one of the developers banned on Thursday is Cheetah Mobile, which had about 45 apps removed.

Google says that it’s going to crack down harder on ad policy abusers in the future. It will also publish better tools for app makers to keep compliant with ad industry standards and not annoy Android users.

Finally, Google says it’s going to fundamentally change the Android platform in order to minimize interruptions in app experiences. However, it didn’t elaborate on how it plans to give the user more control over what’s shown on their screen.


