British seaside resorts are famous for their piers, walkways that stretch out over the sea so that visitors can get the feeling of being “at sea” without actually boarding a boat and risking sea-sickness, and without even having to set foot on the shingles/gravel/mudflats/sand of the beach at all.
In their Victorian heyday, piers were quite the tourist attraction, featuring shops, fairground rides and even theatres suspended over the water, but the mixture of steel supports, corrosive seawater, winter storms, wooden decking and mains electricity made them prone to fires and collapse.
Nevertheless, those that survived and have been restored to their former glory have been enjoying a renaissance in popularity in recent years… at least until coronavirus lockdown.
Fortunately for the operators of the Palace Pier in Brighton, England, a relaxation in English lockdown rules from early April 2021 meant that visitors could return.
They brought their coronavirus-friendly credit cards with them to pay for admission fees, rides and – of course – the fairground staple known variously around the world as candy floss, cotton candy, ghost breath, fairy floss, Daddy’s beard and no doubt many other names that disguise the marketing-unfriendly fact that it is, in fact, 100% refined sugar.
English piers aren’t particularly cheap to visit – they do require a lot of maintenance, after all, as the numerous ruined examples around the British coast will remind you – but a trip to one, even for the whole family, certainly isn’t supposed to cost thousands of pounds.
However, as the UK’s Guardian newspaper reports, a few unlucky visitors who went to the Palace Pier shortly after lockdown restrictions lifted did indeed end up getting charged that much.
Intriguingly, the people affected by this SNAFU somehow didn’t get charged for their April 2021 visit back in April 2021, as you might expect.
Apparently, their payments were only put through two months later by the payment processor Worldpay, at the end of June 2021.
Unfortunately, the delay also brought with it another glitch, namely that the batch of payments put through were billed using the date as the amount.
One visitor to the pier, who told the Guardian she expected to be billed about £85, ended up getting billed £2104.08 for her visit on 08 April 2021 (2021-04-08).
In a small mercy, Worldpay seems to have gone with the rather Y2K-unfriendly date format YYMMDD rather than the more robust and reliable YYYYMMDD, or else she might have ended up paying £202,104.08.
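To see how easily that kind of blunder can creep in, here’s a hypothetical sketch in C (we have no idea what Worldpay’s real code actually looks like, so treat this as illustration only) in which a date serialised as six YYMMDD digits ends up being read back as an amount in pence:

/* Hypothetical illustration only: not Worldpay's actual code. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int yy = 21, mm = 4, dd = 8;                   /* 2021-04-08 */
    char field[7];
    snprintf(field, sizeof field, "%02d%02d%02d", yy, mm, dd);
    long pence = atol(field);                      /* "210408" misread as a number */
    printf("Billed: GBP %ld.%02ld\n", pence / 100, pence % 100);
    return 0;                                      /* prints: Billed: GBP 2104.08 */
}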
The good news is that now that the reason for the miscalculations is known, the batch of defective transactions has been identified.
As a result, anyone affected in this incident ought to receive a refund, although they may, of course, end up with their card frozen or overdrawn in the interim, which could have a knock-on effect on other payments.
What to do?
Check those statements. As we’ve suggested before, don’t just look out for transactions that shouldn’t be there. Be wary of outgoings that you expect to see on your statement that don’t appear in a timely fashion. It’s tempting to ignore missed payments in the hope that the vendor simply neglected to charge you and therefore that you “got a freebie”, but it’s more likely that something went wrong, and you might end up getting billed later on when you don’t expect it.
Don’t use YYMMDD when recording dates. In this case, the use of YYMMDD limited the maximum erroneous debit to under £2200, given that we’re not in AD 2022 yet (although debits of £202,100.00 and above would probably have exceeded the transaction limit and wouldn’t have gone through at all), but that’s not the point. Record dates and times unambiguously, ideally using a text-based format that cannot be misinterpreted as, or automatically converted into, a number at all. See RFC 3339 for advice.
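In case you’re wondering what that advice looks like in practice, here’s a minimal sketch in C, assuming a POSIX system, that records a timestamp in RFC 3339 style. The result is unambiguous text that can’t silently be mistaken for a money amount:

/* Minimal sketch: an RFC 3339 style timestamp as plain text (POSIX C). */
#include <stdio.h>
#include <time.h>

int main(void) {
    char stamp[sizeof "2021-06-25T14:30:00Z"];
    time_t now = time(NULL);
    struct tm utc;
    gmtime_r(&now, &utc);                      /* use UTC, so no timezone ambiguity */
    strftime(stamp, sizeof stamp, "%Y-%m-%dT%H:%M:%SZ", &utc);
    printf("%s\n", stamp);                     /* e.g. 2021-06-25T14:30:00Z */
    return 0;
}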
Use cash if you are uncertain about how your payment will be handled. Cash takes up a bit more space in your wallet than a credit card, but most local vendors we know readily accept cash even if they have signs out preferring card payments during the coronavirus pandemic. For small payments, cash is convenient enough, as well as being better for your privacy.
By the way, the Bank of England’s £50 banknote, featuring perhaps the most famous computer scientist of all time, Alan Turing, officially went into circulation this week.
So all English banknotes are now made of polymer, and can be cleaned with a sanitiser spray if you’re worried about infection.
[05’06”] Ukrainian cops bring out the BFG (Big Fearsome Grinder) and cut open some doors.
[10’23”] A repeated request for destructive Linux code enters its 15th year.
[19’39”] Peloton exercise bicycles found to be rootable.
[28’43”] What’s the point of paying ransomware money?
[33’53”] Oh! No! of the week.
With Kimberly Truong, Doug Aamoth and Paul Ducklin.
Governments and law enforcement hate it when ransomware victims pay the blackmail demands that almost always follow a ransomware attack, and you can understand why, given that today’s payments fund tomorrow’s cybercriminality.
Of course, no one needs to be told that.
Paying up hurts in any number of ways, whether you feel that hurt in your head, in your heart or even just in the pit of your stomach.
“I was happy to pay up for a job well done,” said no ransomware victim ever.
However, it’s easy for people who aren’t looking down the wrong end of the cybercrime barrel to say, “You should never, ever pay. You should let your entire business implode, and let everyone in the company lose their job, because that’s just the price of failure.”
So, if your back’s against the wall and you DO pay up in the hope that you’ll be able to restart a business that has ground to a total halt…
…how well will it all go?
Guess what? You can find out by tuning into a fun but informative talk that we’re giving twice this week.
You need to register, but both events are free to join. (They’re both 100% virtual, given that the UK is still in coronavirus lockdown, so feel free to attend from anywhere.)
We’ll give you a clue by sharing a key slide from the talk:
As you can see, paying up often doesn’t work out very well anyway, even if you have no ethical qualms about doing so, and enough money burning a hole in your pocket to pay without flinching.
And remember that if you lose 1/3 of your data, like 1/2 of our respondents said they did, you don’t get to choose which computers will decrypt OK and which will fail.
Murphy’s law warns you that the laptops you could have reimaged easily enough will probably decrypt just fine, while those servers you really meant to back up but didn’t… probably won’t.
We’re going to try to make the talk amusing (as amusing as we dare be when talking about such a treacherous subject), but with a serious yet not-too-technical side.
We’ll be giving some tips you can use both at work and at home to reduce the risk of getting ransomed in the first place.
Both talks are live, not pre-recorded, so we’d love you to bring along your questions: you can Ask Us Anything (about ransomware, that is) in the Q&A at the end of each session.
If you can’t make the talks, or even if you can, please take a look at the survey from which our data was drawn.
This report gives some fascinating insights into which countries and industry sectors are most at risk (spoiler alert: everywhere, and everyone):
We don’t often put out programming appeals on Naked Security, especially when the code that we’re looking for is dangerous and destructive.
But this time we’re prepared to make an exception, given that it’s a rainy Friday afternoon where we are, and that this issue is now in its fifteenth consecutive year.
Our attention was drawn to the problem by a tweet from well-known Google cybersecurity researcher Tavis Ormandy, who tweeted today to say:
The legend continues, the question was posted for the 15th consecutive year today! 👻 https://t.co/NkTngOopoY
With just one exception that we know of (an email that appeared in July 2008), the same person has emailed the Linux Kernel Mailing List (LKML) sometime in the month of June, ever since 2007, to ask the same question.
Every year for 15 years in a row, including 2021, the mysterious R.F. Burns (yes, we think it’s a pun, too) has wanted to know:
From: "R.F. Burns" To: linux-kernel@vger.kernel.org Subject: PC speaker Date: Mon, 14 Jun 2021 23:32:32 -0400 Is it possible to write a kernel module which, when loaded, will blow the PC speaker?
Despite many helpful and not-so-helpful answers each year, the mysterious questioner still doesn’t seem to have figured out how to do the job.
A tongue-in-cheek exchange at the very first time of asking explains the reason for the potential cybervandalism as follows:
I am helping a small school system with a number of Linux workstations. Previously, the students (middle and high schools) abused the sound cards in the systems. This was remedied by changing the permissions on sound devices so that non-root users would be denied access (something easily done remotely, and on an automated basis.)

At that point, the students started finding creative ways to abuse the PC speaker, which became rather distracting. We unloaded and disabled the PC speaker kernel module, which remedied the situation for a while.

So, the idea was raised about seeing if there was a way to blow the PC speaker by loading a kernel module. If so, a mass-deployment of a kernel module overnight would take care of the PC speaker problem once and for all.
Is a PC speaker the same as a laptop speaker?
Ironically, modern laptops don’t really have PC speakers any more.
Sure, they have speakers built in, but they’re connected up to the sound card that’s also built in, so they merely provide a low-quality version of the same sound output you’d hear if you plugged in headphones.
But those are just speakers, not specifically a PC speaker, which wasn’t connected to a sound circuit at all.
The original PC speaker was only ever intended to be used to make beeps to alert you to some sort of error, notably during startup when the screen might not be working and you wouldn’t be able to see any error messages that might have been displayed.
Back in the day, most PC components ran at 5 volts DC, and the speaker was no different: it was connected to a 5V supply on its positive terminal and earthed (grounded) on the other.
The 5V input wire could be turned on and off via an otherwise unused bit in the keyboard controller (bit 1 of port 0x61, in case you want to try writing your own PC speaker code).
If you wrote a value of 1 into the speaker control bit, the speaker magnet would actuate and the speaker would jump to its “energised” position.
Set the bit back to zero and the speaker cone would move back to its “silent” position.
Flip that magic bit on and off at a suitable frequency and you would effectively create a square wave of constant pitch and volume.
Vary the frequency every so often, and you could vary the pitch to play rudimentary tunes, and when we say rudimentary, we really mean it.
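If you fancy hearing this for yourself, here’s a minimal user-space sketch in C (assuming x86 Linux, run as root, and assuming your computer still has a real PC speaker wired up at all) that bit-bangs port 0x61 to produce about a second of 440Hz square wave:

/* Sketch only: bit-bang the PC speaker via bit 1 of port 0x61.
   Needs x86 Linux with legacy port I/O, and root for ioperm(). */
#include <stdio.h>
#include <unistd.h>
#include <sys/io.h>

int main(void) {
    if (ioperm(0x61, 1, 1) != 0) {       /* ask for access to port 0x61 */
        perror("ioperm (try running as root)");
        return 1;
    }
    outb(inb(0x61) & ~0x01, 0x61);       /* bit 0 off: take the timer out of the circuit */
    for (int i = 0; i < 880; i++) {      /* 880 half-cycles of 440Hz = about 1 second */
        outb(inb(0x61) ^ 0x02, 0x61);    /* flip bit 1: the cone jumps in or out */
        usleep(1136);                    /* half of a 440Hz cycle, in microseconds */
    }
    outb(inb(0x61) & ~0x02, 0x61);       /* leave the speaker in its silent position */
    return 0;
}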
Hacking PC speakers to speak
But rudimentary wasn’t good enough for gaming hackers.
As well as controlling the speaker directly via what’s known as bit-banging (where you directly program a control wire by writing a timed stream of 1s and 0s to it yourself), you could also connect the speaker’s voltage wire up to the PC’s programmable interval timer (PIT).
Then, you could vary the pitch of the sound that came out by reprogramming the PIT every so often, meaning that you had more precise control of the speaker’s frequency, and you didn’t need to have code running in a tight loop just to generate the bit-flips needed for a specific note.
Instead, you could dedicate what little CPU power you had at your disposal to tweaking the PIT continuously to drive the speaker at varying frequencies, including ones faster than it could actually handle, given that PC speakers were both tiny and tinny and could reproduce only a narrow frequency band.
Instead of producing a very high frequency at a constant volume, the electromechanical limitations of the speaker – basically, its inertia, or lag in starting to move when energised – meant that it wouldn’t have time to describe a full square wave at all.
In this way, you could produce controlled sounds at a lower volume than normal, so you could simulate a sound card that supported, say, 6-bit (64 different sound levels) or even 8-bit (256 different levels) output, instead of having a speaker that could only reproduce 1-bit sound (playing at full volume or totally silent).
By this method, a crude form of pulse width modulation, early PC games achieved astonishing results without sound cards.
Many games of the DOS era could not only play back music that sounded way better than the mere sequence of square-wave beeps that the speaker was designed to produce, but even reproduce human speech, though it was often hard to understand or sounded as if the narrator had a really weird and nasal accent.
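For comparison, here’s roughly what the PIT-driven approach looks like, again as a hedged sketch for x86 Linux run as root, not production code: you program channel 2 of the timer with a divisor of its 1,193,182Hz base clock, then gate its output through to the speaker:

/* Sketch only: let PIT channel 2 drive the speaker, no tight loop needed. */
#include <stdio.h>
#include <unistd.h>
#include <sys/io.h>

static void beep(unsigned hz) {
    unsigned divisor = 1193182 / hz;     /* PIT base clock / desired frequency */
    outb(0xB6, 0x43);                    /* channel 2, low-then-high byte, square wave mode */
    outb(divisor & 0xFF, 0x42);          /* low byte of the divisor */
    outb((divisor >> 8) & 0xFF, 0x42);   /* high byte of the divisor */
    outb(inb(0x61) | 0x03, 0x61);        /* bits 0 and 1 on: PIT output drives the speaker */
}

int main(void) {
    if (ioperm(0x42, 0x20, 1) != 0) {    /* access to ports 0x42 through 0x61 */
        perror("ioperm (try running as root)");
        return 1;
    }
    beep(440);                           /* the CPU is now free to do other work */
    sleep(1);
    outb(inb(0x61) & ~0x03, 0x61);       /* silence: disconnect the PIT from the speaker */
    return 0;
}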
What to do?
So, as our legendary questioner keeps asking, could you actually blow a PC speaker if you had the sort of precise control over it that you would get at Linux kernel level?
Volume alone isn’t going to do the trick, even though many a cheap, powerful-but-clippy amp, turned up too high for too long in student digs, has ruined many a set of not-quite-as-highly-rated-for-power-as-you-thought-they-were speakers.
The PC speaker is supposed to run at a constant volume, based on that on-or-off 5V input wire, so it’s intended to operate in a “turned up to 10” state all the time.
There’s no way to turn that 5V input to 5.5V, which would be the same percentage increase as turning it up from 10 to 11, and blow the speaker that way.
You can trick the speaker into running at a lower volume than it thinks, and therefore into producing better-sounding output, by effectively turning it down below 10, but you can’t turn it up above 10.
You could try to freak out the speaker by running it through a carefully-constructed cascade of frequencies that would tax its physical resilience, except that the PC speaker almost certainly isn’t good enough to notice, let alone to reproduce reliably enough, the complex and chaotic physical motion you had in mind.
One tongue-in-cheek but helpful responder to R.F. Burns (we’re now as good as certain that the name is part of the joke), in the first year of asking, suggested that it might be possible to find a specific frequency for each speaker at which you would cause resonance, and get it to shake itself to bits.
Resonance is the sort of constructive interference that old vehicles tended to experience at certain speeds, when body panels or window glass would start to vibrate in exaggerated, ever-increasing and brain-jarring sympathy with the engine until you sped up or slowed down a tiny bit.
Is it possible? Can it be done?
We’re pretty sure it can’t, or else R.F. Burns (now we know it’s a joke it’s not really funny any more) would surely have figured out the magic frequency in the past 14 years, and stopped asking how to do it.
So, if it can’t be done, this question must, surely, have a hidden meaning…
…but what is that hidden meaning? Answers below, please!
Riding a bicycle is very popular these days, even if many cyclists in the developed world seem to ride them as a way to get from A to A for exercise, rather than as a way to get from A to B to avoid driving a car.
In fact, if what you’re after is exercise, then many people don’t even ride from A to A, but instead stay in one spot while the bicycle wheels roll underneath them on a stationary rig.
It’s not really cycling, because there’s no air resistance, which is the biggest energy obstacle that a real-world cyclist has to overcome when riding at anything but the most modest and wobbly speeds.
But you can ride an exercise bicycle indoors, which is handy if you don’t like going out in the wind, the dark, the rain, or the traffic.
It also means you can vary the mechanical resistance artificially and almost infinitely, or get an attached computer to do it in an automated way, which is handy if you want to train for a race with lots of steep climbs but you live somewhere flat.
And during the coronavirus pandemic, of course, indoor “cycling” means that you can do long and strenuous rides even when you’re stuck with lockdown regulations about how long you can spend outdoors, and how far you can travel from home.
Additionally, connected products like Peloton Bikes, which are basically exercise bicycles with a built-in Android tablet, an internet connection and a cloud service that the “bikes” connect to, allow you to go out on virtual club rides, or take part in realistic road races, with any number of other “cyclists”.
POTUS in the peloton
Apparently, even keen cyclist Joe Biden has a Peloton Bike, which caused considerable cybersecurity commentary when he became President of the United States.
We were never quite sure why Biden’s online stationary bicycle was trumpeted as a huge national security risk, assuming that POTUS didn’t use the bike’s tablet to look at confidential briefings while racking up the kilometres. (Cycling always happens in kilometres, just as ships travel in knots.)
We assumed that the President’s cybersecurity advisors would be certain to hook up the Bidenbike to the internet – if they hooked it up at all – via a connection all of its own, used only for cycling and never interconnected to any other White House or governmental network.
Nevertheless, from a privacy and cybersecurity point of view, especially for anyone with an online bike hooked up to their regular home Wi-Fi network, a security flaw in the bike’s tablet or one of the apps it relies upon could cause serious trouble indeed.
We know people who not only work while exercising but even join and participate in online meetings at the same time (hint: don’t do it, except as a method to keep meetings nice and short, because it makes everyone else in the meeting feel slightly seasick).
As you can imagine, any sort of spyware with access to the audio feed of a meeting, or that could take surreptitious screenshots, would be sitting on a cybercrime goldmine.
Of course, that’s true for any mobile device that you use for meetings, or for doing work of any sort, so the risks here aren’t unique to the world of online bicycles…
…but it’s vital to remember, by the same token, that your online bicycle therefore needs to be designed, implemented and updated to be as secure as your regular mobile phone or tablet.
Simply put, your phone doesn’t have to be engineered like a bicycle so it can bear the full weight of your physical body, but your connected bicycle needs to be engineered at least as well as a phone so it can bear the full weight of keeping your digital life safe.
That’ll never work – let’s try it!
You can see, therefore, why researchers at McAfee were recently not only astonished to find a security hole in the latest Peloton Bike+ product, but also amused, and perhaps slightly amazed, at the way they came across it.
One problem with hacking on top-end specialised devices such as electric cars or fancy online bicycles, rather than on low-end devices such as light bulbs and webcams, is that budget and availability become an important issue.
As the McAfee researchers explain in their report, and this is sage advice:
One of the first things that we usually try do when starting a new project, especially when said projects involve large expenses like the Peloton, is to try to find a way to take a backup or a system dump that could be used if a recovery is ever needed. […] Having the ability to restore the device to its factory settings is a safety net that we try to implement on our targets.
In other words, if you’re hacking on a “device” that cost you serious money, your first step is likely to be to figure out if you can reliably restore it before you start fiddling with it, just in case something goes wrong.
After all, your boss isn’t going to be very happy if you brick the device by mistake early on and it’s no longer any good for anything at all. (A Peloton Bike without its computer doesn’t even revert to being a bicycle – it just becomes an impractical and rather uncomfortable chair.)
Greatly simplified, the researchers decided that they weren’t going to use any tricks like unlocking and rooting their Bike+ to extract its secrets or back up its factory state – at least, not if they could find a way to do the job “by the book”.
Avoiding the Orange State
The researchers decided to take a real-world approach for two main reasons: they didn’t have another bike handy, and they were keen to look for vulnerabilities that would work out of the box against stock products, rather than needing any “pre-hacking” to be carried out on the device.
In particular, a device that needs to remain unlocked in order to be compromised may well be spotted by a well-informed user because the device will produce a visible “Orange State” warning every time it starts up:
So the researchers decided, just as a start, to try booting the device using a standard, open-source, unofficial recovery kernel. (They used TWRP, short for TeamWin Recovery Project, a handy tool for Android research, especially on rooted devices.)
This is an important lesson in cybersecurity research, by the way: they knew this wouldn’t work, because they knew that the device was boot-locked, meaning that the only way to boot an unofficial Android kernel was to do a firmware unlock first…
…and modern versions of Android are programmed to wipe all the non-system content at the moment you unlock the firmware, and to wipe it again when you relock it to restore it to a factory state, as you see in the Google Pixel 4a warning below:
But even though they knew it wouldn’t work, they followed the cybersecurity mantra of never say never, and tried it anyway.
The method you usually use to boot your phone into a non-standard kernel is to restart the device in what’s called “fastboot” mode.
You can access your device’s fastboot mode either via a magic sequence of button presses while it’s powering up, or by using debugging mode while the phone is running:
$ adb reboot fastboot
error: device unauthorized.
This adb server's $ADB_VENDOR_KEYS is not set
Try 'adb kill-server' if that seems wrong.
Otherwise check for a confirmation dialog on your device.

# No good, debugging isn't enabled on this device. We turned
# on debug mode and authorised adb (Android Debug Bridge):

$ adb reboot fastboot

# Now we see the fastboot menu on the device, as shown below.
# We can now send commands over USB using the fastboot command:
Fastboot means that the phone stops short of a full Android bootup, and you can now send it special commands from a laptop via USB, like this:
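# The image filename below is just a placeholder – any
# alternative kernel image you want to try booting will do:

$ fastboot boot kernel.img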
On an unlocked device, this will probably work, but you won’t be able to back up the original device content first, because the unlocking process forces a device wipe, as shown in the warning screen above.
On a still-locked device, however, you’re wasting your time, because this is what happens:
$ fastboot boot /home/duck/twrp/latest/twrp-3.5.2-test.img
Sending 'boot.img' (65536 KB)
FAILED (remote: 'Download is not allowed on locked devices')
fastboot: error: Command failed
$
Guess what actually happened?
Well, the McAfee researchers didn’t get a FAILED error, as shown above.
Instead, they ended up with the device sort-of booting, but then hanging at a black screen.
As inauspicious as that might sound, given that they hadn’t actually got control of the device, the researchers immediately scented victory.
After all, they wouldn’t have expected their generic recovery kernel (one that was neither designed for nor tested on a Peloton Bike+) to work correctly, even on an unlocked device…
…but if the bootloader were properly locked, they wouldn’t expect it to boot an alternative kernel at all, let alone badly.
In other words, Peloton had apparently turned on all the security settings needed to protect a locked device from being rooted-and-booted, except for the one to suppress the use of the fastboot boot kernel.img command.
As you can imagine, the researchers still had loads more work to do to get their out-of-the-box Peloton Bike+ into a rooted-but-still-locked state, but as they wryly remark in their report, “[t]his is where our luck or maybe naïveté worked to our advantage.”
By trying what shouldn’t have worked, just in case it did, they got hold of a remarkable shortcut to victory.
They were able to come up with a process whereby anyone, technical or not, equipped with a laptop, a USB cable and physical access to a Peloton Bike+, whether in a gym or in your home office, could quickly and unobtrusively run a script to leave behind an apparently unmodified but actually completely compromised online bike.
This sort of hack is often referred to in the jargon as an evil cleaner attack, as a reminder that you should avoid leaving your laptop (which may not have the same level of boot security as your mobile phone) unattended in a hotel room, where corrupt staff with an excuse to enter while you’re out would have time to implant malware using little more than a poisoned USB drive.
A rooted Android device is open to having its system configuration changed, app permissions altered, security features overridden, and malicious apps installed.
That could leave your exercise sessions blighted with spyware that could surreptitiously access the camera and microphone, read out private data, take screenshots, sniff inside encrypted network packets – therefore effectively snooping on the entire device – and then exfiltrate the data quietly and unobtrusively.
What to do?
This bug was responsibly disclosed, and Peloton pushed out a “non-optional” update early this month, so owners of the Peloton Bike+ product should already be patched against this flaw, assuming they’ve gone online with the device in the past two weeks.
Check for software version PTX14A-290 or later.
For the record, McAfee researchers praised Peloton, saying that “[t]he Peloton vulnerability disclosure process was smooth, and the team were receptive and responsive with all communications.”
If you’re a software developer and someone reports a security flaw in your product, that’s the sort of response you should be aiming for.