This home delivery scam arrives in an SMS that lures you to a website, but then instead of stealing your data directly via the phoney website, it sweet-talks you into installing an app…
…and the app steals your data later on.
Swiss cybersecurity researchers recently found security holes in Composer, the software tool that programming teams use to access Packagist, the PHP ecosystem’s major online repository of PHP software modules.
These bugs could have allowed cybercriminals to poison the Packagist system itself, thus tainting the very watering hole at which a large part of the PHP community comes to drink.
That sort of cyberassault is known, for obvious reasons, as a supply chain attack.
Fortunately the Composer team responded with a hotfix within just 12 hours, and an official patch within five days.
Even though the researchers reported that “[s]ome of the vulnerable code [was] present since the first versions of Composer, 10 years ago,” it seems that this was the first time these flaws were spotted.
In other words, it looks as though the Good Guys got to these bugs before any Bad Guys did.
Why use a common code supply hub?
If you’re surprised that so many software vendors, both open source and commercial, rely directly on central code repositories that they don’t themselves control, don’t be.
After all, few businesses (or hobbyists) make all their own components these days.
Most jobbing builders order bricks from a supply company rather than operating a miniature brickworks in their back yard, for example; even companies as big as Apple get their phones and computers made in other people’s factories, with many or most of the parts bought in from external suppliers.
Almost all modern software development communities have giant, online treasure troves of source code already packaged up and ready to slurp into your own software, as a way of promoting what’s known in the trade as code re-use.
The idea is to obviate the need for every programmer and every software company in the world to reinvent, redesign and reimplement core software components.
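For example, assuming you have Composer installed, pulling a popular logging module into your own PHP project is a one-liner (we’ve used the well-known monolog/monolog package here purely as an illustration):

$ composer require monolog/monolog
[ fetches the package and its dependencies from Packagist, ]
[ records the version constraint in your composer.json file, ]
[ and installs the code under your project's vendor/ directory ]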
Even companies that compete head-to-head in the marketplace often have programmers working informally with their counterparts from competitors, along with volunteers, hobbyists and other interested individuals, on software packages that everyone needs.
Simply put, last millennium’s attitude known as NIH (short for not invented here) has largely been stood on its head in the 21st century: it’s now often seen as more dangerous, or at least inefficient and even arrogant, to insist on reimplementing as much code as you can from scratch.
When it comes to cryptography, for instance, using well-known, public code that has had years of scrutiny from the community is generally considered much safer than trying to knit your own, unless you are a cryptographer yourself. Even though open source cryptography tools are not perfect (the infamous Heartbleed bug in OpenSSL springs to mind), they rarely turn out to contain the sort of disastrous “flawed by poor design” problems that regularly show up in home-made cryptographic programming.
Of course, when many or most of a programming community all “shop at the same store”, as it were, a dangerous bug in the store itself is likely to affect many more people very much more quickly than if everyone used different code of their own…
…but there is a good-news flipside to this, given that patches are usually devised, tested and published much more quickly in an active community that’s open to public scrutiny.
Better yet, any software suppliers who needlessly drag their heels in deploying those patches are likely to get noticed and pressured into doing the right thing by everyone else.
Infected rather than just affected
The Packagist problem that the Swiss researchers found was similar to, but more subtle than, the critical Packagist flaw that we reported on in 2018.
Back then, supply chain researcher Max Justicz noticed that he could upload new PHP packages that would trick the Packagist system into running commands of his choice, rather than simply downloading and publishing his submission.
This sort of bug constitutes an exploitable vulnerability dubbed RCE, short for remote code execution.
At this point, you may be wondering what all the fuss is about, given that by supplying the Packagist system with a rogue URL that links to a booby-trapped package, anyone with a Packagist account can abuse the repository by uploading malware anyway.
However, that sort of attack only affects those other users who decide for themselves to trust the new package, and to download and start using it before anyone spots the malware.
(Compare this situation to Android malware in Google Play, which is both regrettable and dangerous, but doesn’t directly affect the security of all of Google Play itself, or of other apps already in the Play Store.)
Justicz’s trick didn’t involve adding booby-trapped commands that would run on a victim’s computer if they chose to download his dodgy package.
Instead, his trick involved running booby-trapped commands inside the Packagist system itself right at the time his package was uploaded, thus potentially compromising the entire ecosystem, including other packages already hosted there.
Simply put, his booby-trapped uploads wouldn’t just passively affect Packagist and thereby potentially attack some of its users, but would actively infect Packagist itself, and from there possibly all its users.
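As a very rough sketch of this class of bug (this is our own illustration, not Packagist’s actual code), imagine server-side software that pastes an untrusted, user-supplied “URL” into a command string that is then handed to a shell:

$ URL='https://example.com/pkg.git; echo GOTCHA'
$ sh -c "git clone $URL"
[ git tries, and in this case fails, to clone the repo... ]
[ ...but the smuggled command runs afterwards anyway ]
GOTCHA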
Indirect attack
The bug fixes put into the Composer software after Justicz’s bug report made an identical attack unlikely in 2021.
The 2018 exploit involved simply swapping out a URL for a system command, and instead of Composer downloading data from a URL, it would inadvertently run the command inserted where the URL was supposed to be.
The Composer programmers added a step to their code to do what’s known as command-line sanitising, so that any URL containing sneaky system commands would no longer work as an attacker intended.
Notably, the programmers took extra care to ensure that supplied data such as $(value) in a Bash command-line argument would be treated as the literal text “$(value)”, rather than being processed as a special shell trick that means “run the command called value and use its output as the data instead”, a dangerous feature in Bash known as command substitution.
$ uname                 # Run the uname command explicitly
Linux
$ uname=whoami          # Set a Bash variable called uname
$ echo uname            # Prints the text uname directly
uname
$ echo $uname           # Print the value of the variable uname
whoami
$ echo $(uname)         # Run the command uname and pass its output to 'echo'
Linux
$ echo $($uname)        # Run the command stored in $uname and pass that output to 'echo'
duck
$ echo \$\(\$uname\)    # 'Escape' the chars $() so they get taken literally
$($uname)
This time, the Swiss researchers found a way of supplying a dangerous command-line option to the Composer process that was supposed to download their package into the Packagist ecosystem.
For example, one of the Composer functions they tried ultimately relied on calling out to the cURL software on the Packagist server itself to fetch the source code they’d specified.
Thanks to the command-line sanitising above, the researchers couldn’t supply a booby-trapped URL to mislead the remote cURL command, as Max Justicz did in 2018.
But they did figure out a way to add an extra command-line option to cURL by which they were able to instruct cURL to run a command of their choice.
That’s remote code execution (RCE) right there.
This time, the problem was that Composer didn’t check whether the URL supplied started with two dashes (“--”), which signify a command-line option used to configure the command itself, rather than the URL that the command is supposed to fetch.
Even though the researchers couldn’t embed a command directly inside the URL, they could nevertheless turn the URL, which should have been pure data consumed by cURL, into a command-line option, which is effectively metadata that controls cURL instead.
Fortunately, there was a quick fix for this problem, namely for the Composer code to insert the special command-line option consisting of just two dashes (in other words, “--” immediately followed by a space character) in front of the user-supplied URL.
The special double-dash option is supposed to tell the program being run that “this is the end of the options, and no arguments after this point are to be processed as options, no matter how enticing they look”.
The primary reason for having a standardised “there are no more options” option is so that you don’t get stuck if you have a filename that happens to look like an option when you put it on the command line.
It’s always a security problem when legal filenames can cause trouble by being passed to system commands and misinterpreted as command options rather than command arguments.
$ echo 'Hello' > '--help'
[ creates a file called '--help' ]
$ ls -l *
[ tries to list all files, but the filename '--help' in the ]
[ generated argument list accidentally turns into an option ]
Usage: /bin/ls [OPTION]... [FILE]...
List info about the FILEs (current directory by default).
Sort entries alphabetically if no sort options specified.
[. . .]
$ ls -l -- *
[ 'protects' the filenames after the double-dash ]
[ from being misinterpreted as options ]
-rw-r--r-- 1 duck duck 6 Apr 30 16:19 --help
$ cat --help
[ same problem as above, where '--help' gives help ]
Usage: cat [OPTION]... [FILE]...
Concatenate FILE(s) to standard output.
[. . .]
$ cat -- --help
[ Ensures '--help' is an argument, not an option ]
Hello
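The same double-dash trick protects cURL itself. Here’s a hedged sketch of the principle, using the harmless option --help in place of anything genuinely dangerous:

$ URL='--help'
$ curl $URL
[ the fake 'URL' is consumed as an option, ]
[ so curl prints its usage text instead of downloading anything ]
$ curl -- $URL
[ after the double-dash, the same text is treated as a URL to fetch, ]
[ so the worst that can happen is a failed download ]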
What to do?
If you are using the Composer tool yourself to manage your own repositories, make sure your full version number is 1.10.22 or 2.0.13, depending on which major version branch you are using. (Packagist itself has, of course, already updated the Composer code it relies on.)
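If you’re not sure what you’ve got, you can ask Composer directly:

$ composer --version
[ check that the full version reported is 1.10.22 or 2.0.13, or later ]
$ composer self-update
[ fetches the latest Composer release for your version branch ]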
If you are a web programmer and use system commands to help implement your server-side functionality, review all the places where you “shell out” to external programs. Make sure that dangerous character combinations that could appear in data from external, untrusted users never get fed directly into internal, trusted command invocations.
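As a minimal illustration of the principle (shown in shell syntax here, though the same idea applies in whatever server-side language you use), pass untrusted text as a standalone, option-proof argument rather than pasting it into a command string:

$ UNTRUSTED='http://example.com/x; echo GOTCHA'
$ sh -c "curl -sS $UNTRUSTED"
[ DANGEROUS: the shell re-parses the untrusted text, ]
[ so the smuggled command runs ]
$ curl -sS -- "$UNTRUSTED"
[ SAFER: the text arrives as a single argument after the double-dash, ]
[ so the worst outcome is a failed download ]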
If you have a PC or laptop with an Nvidia graphics card (colloquially known as a GPU, short for graphics processing unit), make sure you’ve installed any of the company’s April 2021 updates that you need.
GPU cards affected by the bugs include those branded GeForce, RTX, Quadro, NVS and Tesla, on both Windows and Linux.
Nvidia’s “virtual GPU” (vGPU) software packages, which support its GPUs inside virtual machines that use software such as VMware vSphere, Citrix Hypervisor, Nutanix AHV and Linux’s KVM (kernel-based virtual machine), also get updates on Windows and Linux.
The patches cover 13 different CVE numbers, running from CVE-2021-1074 to CVE-2021-1078 for the GPU driver fixes, and from CVE-2021-1080 to CVE-2021-1087 for the vGPU products.
For an explanation of the mysterious gap at the slot numbered CVE-2021-1079, please see the What to do? section below.
Local code execution
The GPU software bug that ended up with the highest “base score” using the well-known CVSS bug-rating system was CVE-2021-1074, described as a “vulnerability in the [GPU driver] installer where an attacker with local system access may replace an application resource with malicious files.”
Nvidia isn’t saying exactly what form this bug took, but when installer vulnerabilities of this sort appear, they are often down to one of two things:
A DLL that’s executed by the installer when it loads up. DLL is short for dynamic link library, a sort of additional program component that is delivered in a separate executable file with a .DLL extension instead of being built into the main .EXE file itself. If a crook can substitute a fake DLL file then the installer may use it instead of the proper one. (There’s a generic sketch of this sort of component planting after this list.)
An installation script that gives customised instructions to the installer. If the installation script language gives control over copying and replacing files or can specify external programs to run at install or update time, a crook may be able to trick the installer into performing malicious activities without needing a new and suspicious-looking .DLL or .EXE file at all.
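As a generic illustration of the planted-component problem (a shell-based analogy of our own, not Nvidia’s actual bug), consider what happens when a program launches a helper by bare name and therefore trusts whatever its search path serves up:

$ mkdir -p /tmp/planted
$ printf '#!/bin/sh\necho I am an impostor\n' > /tmp/planted/helper
$ chmod +x /tmp/planted/helper
$ export PATH=/tmp/planted:$PATH
$ helper
[ the bare name 'helper' resolves via the attacker-modified path, ]
[ so the planted file runs with the caller's implicit blessing ]
I am an impostor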
Even though vulnerabilities of this type are rightly considered serious, they’re actually hard to prevent altogether, given that attackers who want to exploit them typically need write access to your hard disk already.
In other words, an attacker who could introduce a booby-trapped DLL or script file for your installer to launch could probably just run the malicious file directly anyway, or replace the installer itself with malware.
Nevertheless, installation utilities should be hardened as much as possible against this sort of treachery, because:
When booby-trapped components are executed by a known-good top-level .EXE file, they are effectively hiding in “trusted sight”, with the implicit blessing of (and in the same memory space as) the official installation program.
The original installer is the most likely component that a well-informed user will check out, for example by looking at its digital signature before running it. So, if the main file is left unmodified in an attack of this type, it will still pass many of the static checks you might use to validate it.
Any UAC requests that ask you to approve additional runtime “update privileges” will come from the official installer. UAC is short for user account control, and it’s the behaviour in Windows that reports and requests permission for system-wide operations such as software installations and updates, even (especially!) if you are an administrator already.
Acquiring superpowers
Just as worrying, in our opinion, though with a lower CVSS score, is the CVE‑2021‑1075 bug in one of Nvidia’s kernel drivers.
This vulnerability is described with the words “the program dereferences a pointer that contains a location for memory that is no longer valid, which may lead to code execution, denial of service, or escalation of privileges.”
This sort of error is often referred to in the jargon as a use-after-free bug, because the C library function free() is commonly what you call when you want to invalidate and hand back memory that your program doesn’t intend to use again.
And unauthorised code execution in the kernel usually means big trouble because it often provides a way for a regular user to award themselves system-wide superpowers, without needing to know or guess the Administrator password first.
In other words, crooks who have already broken into your computer could probably do a lot more damage with CVE‑2021‑1075 than they could with CVE-2021-1074.
That’s because CVE-2021-1074 might allow the crooks to run commands indirectly that they could probably already run anyway, albeit more obviously, while CVE‑2021‑1075 might give them access to sysadmin utilities that would otherwise be off limits.
Virtual escapes
The vGPU bugs include a number of vulnerabilities that Nvidia says could “lead to information disclosure and tampering of data,” flaws that are definitely of concern.
A virtual machine, or VM, is a sort of simulated software computer known as a guest that may co-exist with several other VMs on the same physical hardware, known as the host.
One of the security promises that you rely upon when you use VMs is that the virtualisation software should keep the guest VMs apart from each other at least as effectively as running each VM on its own dedicated, standalone computer.
Likewise, although you want the host operating system to be able to control and manage the guest VMs, you expect that this flow of control won’t work in the other direction.
No guest VM should be able to mess with other guests, which could be running software for other departments inside your company or even hosting software for multiple different customers, and no guest should ever be able to make unauthorised changes to the host operating system that controls it.
(As you can imagine, a full-blown guest-to-host escape, as it’s known, also facilitates guest-to-guest tampering as well, given that once attackers have escaped to the host, they can misuse the host to mess with the other guests that are running on it.)
So it is always important to patch virtualisation bugs that could allow data to leak between parts of the system that are supposed to be kept strictly apart.
What to do?
As always, patch early, patch often!
Nvidia has a list of affected products plus the updated driver version numbers you want, as well as instructions on how to figure out which versions of its driver software are installed already.
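On Linux, for example, a quick way to see the installed driver version is via the nvidia-smi utility that ships with the driver:

$ nvidia-smi --query-gpu=driver_version --format=csv,noheader
[ prints just the installed driver version number ]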
By the way, if you were wondering where the missing bug number CVE-2021-1079 went from the sequences listed above, the answer is that it was allocated to a flaw in the Nvidia GeForce Experience software, not in any bugs in GPU drivers or vGPU packages.
If you use GeForce Experience, the bug that was patched could lead to code execution or to elevation of privilege, so you need to patch that software too, as explained in a separate Nvidia security bulletin.
When it comes to all the various types of malware out there, none has ever dominated the headlines quite as much as ransomware.
Sure, several individual malware outbreaks have turned into truly global stories over the years.
The LoveBug mass-mailing virus of 2000 springs to mind, which blasted itself into hundreds of millions of mailboxes within a few days; so does CodeRed in 2001, the truly fileless network worm that squeezed itself into a single network packet and spread worldwide literally within minutes.
There was Conficker, a globally widespread botnet attack from 2008 that was programmed to deliver an unknown warhead on April Fool’s Day, but never did. (Conficker remains a sort-of unsolved mystery: no one ever figured out what it was really for.)
And there was Stuxnet, discovered in 2010 but probably secretly active for years before that, carefully orchestrated to spread via hand-carried USB drives in the hope of making it across security airgaps and into undisclosed industrial plant rooms (allegedly Iran’s uranium enrichment facility at Natanz).
But none of these stories, as dramatic and as alarming as they were at the time, ever held the public’s attention as durably or as dramatically as ransomware has done since the early 2010s.
Send money, or else
Ransomware, of course, probably ought to be called “extortionware”, “blackmailware” or “menaceware”, because that’s precisely what it does: “Send money, OR ELSE.”
Interestingly, ransomware first raised its ugly head way back in 1989, when a software program that was supposed to provide advice and information about HIV and AIDS was sent to tens of thousands of unsuspecting victims all over the world…
…only to scramble their files 90 days later and demand the payment of $378 by international money order to an accommodation address in Panama.
If you paid up, said the malware, you would be sent an unscrambling program that would decrypt your ruined files and restore your computer to its pre-infection state.
Or so the malware author claimed.
Fortunately, due to the difficulty and expense of distributing the malware in the first place (the AIDS information Trojan was snail-mailed on floppy diskettes), collecting the money via the international banking network, and sending out the “fix” program, ransomware remained a rarity for the next 25 years.
Follow the money
Unfortunately, once cryptocurrencies such as Bitcoin became well-known and comparatively easy to use, cybercriminals adopted them enthusiastically as an ideal tool for collecting extortion payments.
That was back in about 2013, when the infamous CryptoLocker malware appeared, and the cyberunderground has thrown itself vigorously into creating and spreading ransomware ever since.
Boy, how the ransomware scene has changed since then.
Blackmail demands in 2013 were typically about $300 per PC, with ransomware attacks aimed broadly at everyone and anyone, one computer at a time, whether the victim was at work or at home.
Now, ransomware gangs typically go after entire networks, breaking into them one by one and preparing for a moment (typically timed for when the network IT team is asleep) when all the computers are scrambled simultaneously.
In attacks like this, where organisations may be brought to a complete operational halt, the extortion demand may be as high as millions of dollars in a single payment, in return for a “fix” for the entire network.
Even worse, many ransomware gangs take the time to upload (or to steal, to put it more bluntly) as much corporate data as they can before scrambling it, and they add this nasty detail into their blackmail notes.
Instead of simply, “Send us money, OR ELSE you won’t see your files again,” the criminals are saying, “Send us money, OR ELSE we’ll sell off all your trophy data to the highest bidder, or send it to your competitors, or upload it to the regulator, or taunt your customers with it, or dump it for everyone to see, or all of the above. Oh, and you won’t see your files again, either.”
In fact, many ransomware gangs run their very own “negative PR” portals on the dark web, where they publish the confidential data of victims who don’t pay, or blog about how bad their victims’ cybersecurity was, or both.
In other words, even if the encryption part of the attack fails, or if you have a backup from which you can recover your computers without paying up, the criminals can and will demand money with menaces anyway.
As we said above, ransomware really ought to be called “blackmailware”, not least because the crooks have figured out how to make their crime pay even when there’s no data that they’re actually holding to ransom.
Here’s the good news
The good news is that, as part of our ongoing efforts to track the evolving ransomware scene, we’ve just published our very latest State of Ransomware report for 2021.
And the percentage of respondents who said they did get hit, 37%, was noticeably lower than the 51% we saw in 2020, when we published our previous report, and lower still than in 2017, when we did our first.
The bad news, of course, is that “only 37% got hit” is the good news, because “more than a third” is still a disappointingly large proportion of those surveyed.
There are many fascinating, and probably quite surprising, facts revealed in the report, which is why we strongly recommend that you read it now.
For example, of companies that either decided to pay up (e.g. thinking it would be quicker), or were forced to do so (e.g. because their backups turned out to be useless)…
…about one-third of them got less than half their data back, and (in an intriguing flip of the numbers) about half of them lost more than a third of their data.
A truly unfortunate 4% of victims who paid up got nothing for their money at all, and only 8% claim to have recovered everything after submitting to the blackmail.
In blunter words: 92% of victims lost at least some data, and more than 50% of them lost at least a third of their precious files, despite paying up and expecting the crooks to keep their promise that the data would be restored.
Broken promises
Remember also that an additional “promise” you are paying for in many contemporary ransomware attacks is that the criminals will permanently and irrevocably delete any and all of the files they stole from your network while the attack was underway.
You’re not only paying for a positive, namely that the crooks will restore your files, but also for a negative, namely that the crooks won’t leak them to anyone else.
And unlike the “how much did you get back” figure, which can be measured objectively simply by running the decryption program offline and seeing which files get recovered, you have absolutely no way of measuring whether your already-stolen data has been properly deleted, if indeed the criminals have deleted it at all.
Indeed, many ransomware gangs handle the data stealing side of their attacks by running a series of upload scripts that copy your precious files to an online file-locker service, using an account that they created for the purpose.
Even if they insist that they deleted the account after receiving your money, how can you ever tell who else acquired the password to that file locker account while your files were up there?
If the crooks buried the upload password on a command line or in a configuration file that they copied around your network during the attack, any number of other threat actors could have stumbled upon it before, during or after the attack, even if the crooks didn’t intend to share it with anyone else.
What to do?
Read the report. The figures tell an interesting and important story about the scale and the nature of the danger posed by ransomware. By reading the report, you’re getting an insight into what victims are experiencing in real life, not merely what the cybersecurity industry is saying about the threat.
Assume you will be attacked. Ransomware remains highly prevalent, even though the relative numbers are down from 51% last year to 37% this year. No industry sector, country, or size of business is immune. It’s better to be prepared but not hit, than the other way round.
Make backups. Backups are still the most useful way of recovering scrambled data after a ransomware attack that runs its full course. Even if you pay the ransom, you rarely get all your data back, so you’ll need to rely on backups anyway. (And keep at least one backup offline, and ideally also offsite, where the crooks can’t get at it.)
Use layered protection. Given the considerable increase in extortion-based attacks, it’s more important than ever to keep the bad stuff out and the good stuff in.