Colonial Pipeline facing $1,000,000 fine for poor recovery plans

If you were in the US this time last year, you won’t have forgotten, and you may even have been affected by, the ransomware attack on fuel-pumping company Colonial Pipeline.

The organisation was hit by ransomware injected into its network by so-called affiliates of a cybercrime crew known as DarkSide.

DarkSide is an example of what’s known as RaaS, short for ransomware-as-a-service, where a small core team of criminals create the malware and handle any extortion payments from victims, but don’t perform the actual network attacks where the malware gets unleashed.

Teams of “affiliates” (field technicians, you might say) sign up to carry out the attacks, usually in return for the lion’s share of any blackmail money extracted from victims.

The core criminals lurk less visibly in the background, running what is effectively a franchise operation in which they typically pocket 30% (or so they say) of every payment, almost as though they looked to legitimate online services such as Apple’s iTunes or Google Play for a percentage that the market was familiar with.

The front-line attack teams typically:

  • Perform reconnaissance to find targets they think they can breach.
  • Break into selected companies with vulnerabilities they know how to exploit.
  • Wrangle their way to administrative powers so they are level with the official sysadmins.
  • Map out the network to find every desktop and server system they can.
  • Locate and often neutralise existing backups.
  • Exfiltrate confidential corporate data for extra blackmail leverage.
  • Open up network backdoors so they can sneak back quickly if they’re spotted this time.
  • Gently probe existing malware defences looking for weak or unprotected spots.
  • Pick a particularly troublesome time of day or night…

…and then they unleash the ransomware code they were supplied with by the core gang members, sometimes scrambling all (or almost all) computers on the network within just a few minutes.

Now it’s time to pay up

The idea behind this sort of attack, as you know, is that the computers aren’t wiped out completely.

Indeed, after most ransomware attacks, the Windows operating system still boots up and the primary applications on each computer will still load, almost as a taunt to remind you just how close you are to, yet how far away from, normal operation.

But all the files that you need to keep your business running – databases, documents, spreadsheets, system logs, calendar entries, customer lists, invoices, bank transactions, tax records, shift assignments, delivery schedules, support cases, and so on – end up encrypted.

You can boot your laptop, load up Word, see all your documents, and even try desperately to open them, only to find the digital equivalent of shredded cabbage everywhere.

Only one copy of the decryption key exists – and the ransomware attackers have it!
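
If you’re wondering why you can’t simply unscramble the files yourself, here’s a minimal sketch of the underlying idea – our own illustration, not the actual malware code, assuming Python 3 and the third-party cryptography module – showing that data locked up with a properly-generated symmetric key is as good as random noise to anyone without that exact key:

    # Illustrative sketch only (not real ransomware code): why one missing
    # key is enough to keep your own files out of your reach.
    # Assumes Python 3 with the third-party 'cryptography' package installed.

    from cryptography.fernet import Fernet, InvalidToken

    document = b"Q2 invoices, delivery schedules, customer list..."

    attacker_key = Fernet.generate_key()                 # the "only copy" of the key
    scrambled = Fernet(attacker_key).encrypt(document)   # your file, post-attack

    # With the right key, recovery is trivial...
    assert Fernet(attacker_key).decrypt(scrambled) == document

    # ...but with any other key, you get nothing back at all.
    wrong_key = Fernet.generate_key()
    try:
        Fernet(wrong_key).decrypt(scrambled)
    except InvalidToken:
        print("No key, no data: the scrambled file might as well be random noise.")

Real-world ransomware typically wraps per-file keys of this sort in public-key encryption as well, so that a usable copy of the unscrambling key never touches your disk, but the end result is the same: no key, no data.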

That’s when “negotiations” start, with the criminals hoping that your IT infrastructure will be so hamstrung by the scrambled data as to be dysfunctional.

“Pay us a ‘recovery fee’,” say the crooks, “and we’ll quietly provide you with the decryption tools you need to unscramble all your computers, thus saving you the time needed to restore all your backups. If you even have any working backups.”

Of course, they don’t put it quite that politely, as a chilling recording supplied to the Sophos Rapid Response team reveals.

That’s the sort of wall that Colonial Pipeline found itself up against about 12 months ago.

Even though law enforcement groups around the world urge ransomware victims not to pay up (as we know only too well, today’s ransomware payments directly fund tomorrow’s ransomware attacks), Colonial apparently decided to hand over what was then $4.4 million in Bitcoin anyway.

Sadly, as you’ll no doubt remember if you followed the story at the time, Colonial ended up in the same sorry state as 4% of the ransomware victims in the Sophos Ransomware Survey 2021: they paid the crooks in full, but were unable to recover the lost data with the decryption tool anyway.

Apparently, the decryptor was so slow as to be just about useless, and Colonial ended up restoring its systems in the same way it would have if it had turned its back on the crooks altogether and paid nothing.

In a fascinating “afterlude” to Colonial’s ransomware payment, the US FBI managed, surprisingly quickly, to infiltrate the criminal operation, to acquire the private key or keys for some of the bitcoins paid over to the criminals, to obtain a court warrant, and to “transfer back” about 85% of the criminals’ ill-gotten gains into the safe keeping of the US courts. If you are a ransomware victim yourself, however, remember that this sort of dramatic claw-back is the exception, not the rule.

More woes for Colonial Pipeline

Now, Colonial looks set to be hit by a further demand for money, this time in the form of a $986,400 civil penalty proposed by the US Department of Transportation.

Ironically, perhaps, it looks as though Colonial would have been in some trouble even without the ransomware attack, given that the proposed fine comes about as the result of an investigation by the Pipeline and Hazardous Materials Safety Administration (PHMSA).

That investigation actually took place from January 2020 to November 2020, the year before the ransomware attack occurred, so the problems that the PHMSA identified existed anyway.

As the PHMSA points out, the primary operational flaw, which accounts for more than 85% of the fine ($846,300 out of $986,400), was “a probable failure to adequately plan and prepare for manual shutdown and restart of its pipeline system.”

However, as the PHMSA alleges, these failures “contributed to the national impacts when the pipeline remained out of service after the May 2021 cyber-attack.”

What about the rest of us?

This may seem like a very special case, given that few of us operate pipelines at all, let alone pipelines of the size and scale of Colonial.

Nevertheless, the official Notice of Probable Violation lists several related problems from which we can all learn.

In Colonial Pipeline’s case, these problems were found in the so-called SCADA, ICS or OT part of the company, where those acronyms stand for supervisory control and data acquisition, industrial control systems, and operational technology.

You can think of OT as the industrial counterpart to IT, but the SecOps (security operations) challenges to both types of network are, unsurprisingly, very similar.

Indeed, as the PHMSA report suggests, even if your OT and IT functions look after two almost entirely separate networks, the consequences of SecOps flaws on one side of the business can directly, and even dangerously, affect the other.

Even more important, especially for many smaller businesses, is that even if you don’t operate a pipeline, or an electricity supply network, or a power plant…

…you probably have an OT network of sorts anyway, made up of IoT (Internet of Things) devices such as security cameras, door locks, motion sensors, and perhaps even a restful-looking computer-controlled aquarium in the reception area.

And if you do have IoT devices in use in your business, those devices are almost certainly sitting on exactly the same network as all your IT systems, so the cybersecurity postures of both types of device are inextricably intertwined.

(There is indeed, as we alluded to above, a famous anecdote about a US casino that suffered a cyberintrusion via a “connected thermometer” in a fishtank in the lobby.)
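
If you’re not sure what’s sharing your network in the first place, a simple inventory is a sensible starting point. Here’s a minimal sketch – ours, not an official tool, and assuming a Linux computer – that lists the devices your machine has recently exchanged traffic with by reading the kernel’s ARP cache, where IoT gadgets show up right alongside your regular IT kit:

    # Minimal sketch (our illustration): list recently-seen neighbours on the
    # local network by reading the kernel's ARP cache. Assumes a Linux host.

    def arp_neighbours(path="/proc/net/arp"):
        """Yield (ip_address, mac_address, interface) for each cached neighbour."""
        with open(path) as arp:
            next(arp)                           # skip the header line
            for line in arp:
                fields = line.split()
                ip, mac, iface = fields[0], fields[3], fields[5]
                if mac != "00:00:00:00:00:00":  # ignore incomplete entries
                    yield ip, mac, iface

    if __name__ == "__main__":
        for ip, mac, iface in arp_neighbours():
            print(f"{ip:<16} {mac}  via {iface}")

A dedicated asset-discovery tool will go much further, of course, but even a quick listing of this sort often reveals “things” you’d forgotten were plugged in at all.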

The PHMSA report lists seven problems, all falling under the broad heading of Control Room Management, which you can think of as the OT equivalent of an IT department’s Network Operations Centre (or just “the IT team” in a small business).

These problems distill, loosely speaking, into the following five items:

  • Failure to keep a proper record of operational tests that passed.
  • Failure to test and verify the operation of alarm and anomaly detectors.
  • No advance plan for manual recovery and operation in case of system failure.
  • Failure to test backup processes and procedures.
  • Poor reporting of missing or temporarily suppressed security checks.

What to do?

Any (or all) of the problem behaviours listed above are easy to fall into by mistake.

For example, in the Sophos Ransomware Survey 2022, about 2/3 of respondents admitted they’d been hit by ransomware attackers in the previous year.

About 2/3 of those ended up with their files actually scrambled (1/3 happily managed to head off the denouement of the attack), and about 1/2 of those ended up doing a deal with the crooks in an attempt to recover.

This suggests that a significant proportion (at least 2/3 × 2/3 × 1/2, or just over one-in-five) of IT or SecOps teams dropped the ball in one or more of the categories above.

Those include items 1 and 2 (are you sure the backup actually worked? did you formally record whether it did?); item 3 (what’s your Plan B if the crooks wipe out your primary backup?); item 4 (have you practised restoring as carefully as you’ve bothered backing up?); and item 5 (are you sure you haven’t missed anything that you should have drawn attention to at the time?).
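
If items 1 and 4 sound abstract, here’s a minimal sketch of what “test your restores and keep a formal record” might look like in practice – our illustration only, with the archive path, log path and canary file below as made-up placeholders, not anything from the Colonial case:

    # Minimal sketch (our illustration): restore last night's backup into a
    # scratch directory, check that a file you know must exist actually came
    # back, and append a timestamped PASS/FAIL record as a paper trail.
    # All paths below are placeholders for illustration only.

    import csv, datetime, tarfile, tempfile
    from pathlib import Path

    BACKUP_ARCHIVE = Path("/backups/nightly.tar.gz")      # placeholder path
    RESTORE_LOG    = Path("/var/log/restore-tests.csv")   # placeholder path
    CANARY_FILE    = "payroll/latest.db"                  # a file that must exist

    def test_restore() -> bool:
        """Unpack the backup somewhere disposable and look for the canary file."""
        with tempfile.TemporaryDirectory() as scratch:
            with tarfile.open(BACKUP_ARCHIVE) as archive:
                archive.extractall(scratch)
            return (Path(scratch) / CANARY_FILE).exists()

    def record(result: bool) -> None:
        """Append a timestamped pass/fail entry so the test leaves a record."""
        with RESTORE_LOG.open("a", newline="") as log:
            csv.writer(log).writerow(
                [datetime.datetime.now().isoformat(timespec="seconds"),
                 str(BACKUP_ARCHIVE), "PASS" if result else "FAIL"])

    if __name__ == "__main__":
        record(test_restore())

A real test would restore onto an isolated system and verify far more than one file, but even a scheduled script of this sort gives you both the test and the written record that the PHMSA found wanting.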

Likewise, when our Managed Threat Response (MTR) team get called in to mop up after a ransomware attack, part of their job is to find out how the crooks got in to start with, and how they kept their foothold in the network, lest they simply come back later and repeat the attack.

It’s not unusual for the MTR investigation to reveal numerous loopholes that aided the crooks, including item 5 (anti-malware products that would have stopped the attack turned off “as a temporary workaround” and then forgotten), item 2 (plentiful advance warnings of an impending attack either not recorded at all or simply ignored), and item 1 (accounts or servers that were supposed to be shut down, but with no records to reveal that the work didn’t get done).

We never tire of saying this on Naked Security, even though it’s become a bit of a cliché: Cybersecurity is a journey, not a destination.

Unfortunately for many IT and SecOps teams these days, or for small businesses where a dedicated SecOps team is a luxury that they simply can’t afford, it’s easy to take a “set-and-forget” approach to cybersecurity, with new settings or policies considered and implemented only occasionally.

If you’re stuck in a world of that sort, don’t be afraid to reach out for help.

Bringing in third-party MTR experts is not an admission of failure – think of it as a wise preparation for the future.

After all, if you do get attacked, but then remove only the end of the attack chain while leaving the entry point in place, then the crooks who broke in before will simply sell you out to the next cybergang that’s willing to pay their asking price for instructions on how to break in next time.


Not enough time or staff? Learn more about Sophos Managed Threat Response:
Sophos MTR – Expert Led Response  ▶
24/7 threat hunting, detection, and response  ▶

