
VMware flaw allows takeover of multiple private clouds

VMware’s Cloud Director has a security flaw that researchers believe could be exploited to compromise multiple customer accounts sharing the same cloud infrastructure.

Formerly known as vCloud Director, Cloud Director is a popular enterprise platform for managing virtual datacenters across multiple sites.

A few weeks back, security pen testing company Citadelo chanced upon what looks like a significant vulnerability while it was carrying out an audit for a VMware customer.

The vulnerability was a code injection flaw, now identified as CVE-2020-3956. The researchers developed a proof-of-concept exploit that worked through either the web-based interface or the platform’s Application Programming Interface (API), and was capable of taking over multiple private clouds on any vulnerable provider.

That would have allowed an attacker to modify the Cloud Director login page to capture credentials, take over account privileges for a provider, access some sensitive data such as IP addresses, email addresses, names, and password hashes, and tinker with virtual machines (VMs):

The vulnerability would enable a user to gain control over all customers within the cloud. It also grants access to an attacker to modify the login section of the entire infrastructure to capture the username and password of another customer.

VMware learned of the flaw in early April, issuing patches for affected versions of vCloud Director and Cloud Director during early May.

The updated, fixed versions are vCloud Director versions 9.7.0.5, 10.0.0.2, 9.1.0.4, and 9.5.0.6 (some older versions are not affected so it’s important to check the version matrix), with the patch alert going out on 19 May.
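A quick way to see whether a given installation is running a patched build is to compare its version string against the fixed releases from the advisory. This is an illustrative sketch, not a VMware tool: the version numbers below come from VMSA-2020-0010, but the parsing and comparison logic is our own assumption about how the builds are numbered.

```python
# Sketch: check a vCloud Director version string against the fixed
# releases listed in VMSA-2020-0010. Illustrative only - consult the
# advisory's version matrix for the authoritative answer.

FIXED = {
    "9.1": (9, 1, 0, 4),
    "9.5": (9, 5, 0, 6),
    "9.7": (9, 7, 0, 5),
    "10.0": (10, 0, 0, 2),
}

def parse(version):
    """Turn '9.7.0.5' into (9, 7, 0, 5) for tuple comparison."""
    return tuple(int(x) for x in version.split("."))

def is_patched(version):
    """True if `version` is at or above the fixed build for its
    release branch; unknown branches conservatively return False."""
    v = parse(version)
    branch = f"{v[0]}.{v[1]}"
    fixed = FIXED.get(branch)
    return fixed is not None and v >= fixed
```

For example, `is_patched("9.7.0.4")` returns `False`, while `is_patched("9.7.0.5")` returns `True`.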

Organisations that can’t update for whatever reason are offered suggestions for mitigating the issue.

It seems that the only reason that the flaw is rated ‘important’ (CVSS score 8.8) rather than ‘critical’ on VMware’s security advisory (VMSA-2020-0010) is that an attacker would require an authenticated account to start an attack.

But that might not be as hard to achieve as it sounds given that Citadelo says some providers offer free trial accounts.

Despite the many layers of encryption and segmentation in modern cloud platforms, virtualisation software still needs careful attention: VMware fixed a significant, lower-level VM flaw as recently as March.

The fact that Citadelo only discovered the flaw during pen-testing is a lucky break for VMware customers and an encouraging sign that large companies are not taking cloud platforms and tools for granted.

Amtrak breached, some customers’ logins and PII potentially exposed

Amtrak, the national rail service for the US, has suffered a data breach that may have exposed some customers’ logins and other personally identifiable information (PII), the service has disclosed.

The state-backed transportation company, which is also known as the National Railroad Passenger Corporation, says that a third party got unauthorized access to some Amtrak Guest Rewards accounts on the evening of 16 April. The rewards program enables customers to earn points – by spending on travel, hotels, car rentals and more – that they can then apply to Amtrak purchases.

Amtrak revealed the breach on Friday in a regulatory filing – namely, a sample letter to consumers about the breach – with the Office of the Vermont Attorney General.

The service said that it determined that the intruder used compromised usernames and passwords to access some reward accounts and that they may have also viewed customers’ personal information. However, the attacker didn’t access financial data such as credit card information or Social Security numbers.

Amtrak said that its security team immediately investigated the issue, stitching up the hole and blocking the unauthorized access within a few hours. Its security team also reset passwords on potentially affected accounts and pulled in outside cybersecurity expertise in order to ensure that the incident was in fact contained. Amtrak says it also implemented “additional safeguards to protect customers,” but it didn’t give any detail on what its new safeguards are.

To help protect customers from identity theft, Amtrak is offering consumers a free year of fraud monitoring from Experian. That’s all well and good, but do note that such a service only flags suspicious activity after it happens, not before.

Nor do such monitoring services work to prevent phishing attempts that exploit any PII attackers get their hands on. This should be of particular concern to the organizations whose employees travel via Amtrak: as of October 2018, phishing was cited as the most commonly used method in attacks, according to organizations surveyed for IDG’s 2018 US State of Cybercrime report.

Amtrak says that it hasn’t yet seen any indication of customers’ PII having been misused, but advised consumers to keep an eye out for fraud and ID theft by regularly reviewing their financial statements.

We don’t know how the attacker got hold of Amtrak Guest Rewards usernames and passwords. It’s quite possible that Amtrak wasn’t breached itself but that its customers reused their logins across multiple sites, services and accounts, one or more of which may have been breached. Lists of breached credentials are regularly offered for sale on the dark web. After a crook steals or buys them, the credentials can be fed into automated credential-stuffing tools that quickly try the same logins everywhere else they might work, from social media accounts to your bank account.

We’ve said it before, and we’ll keep saying it: password reuse is truly a bad idea!
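One practical defence is to check whether a password already appears in known breach corpora before using it. The Pwned Passwords service does this with a k-anonymity scheme: you send only the first five hex characters of the password’s SHA-1 hash and do the rest of the matching locally. A minimal sketch of the local half of that scheme (the network call itself is left as a comment; the endpoint shown is the service’s documented range API):

```python
import hashlib

def hibp_prefix_suffix(password):
    """Split the SHA-1 of a password into the 5-character prefix
    that is sent to the Pwned Passwords range API and the suffix
    that is matched locally against the returned list, so the
    full hash never leaves your machine (k-anonymity)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

# Only the prefix is transmitted; the API returns every known
# suffix for that prefix and you compare the rest locally, e.g.:
#   GET https://api.pwnedpasswords.com/range/<prefix>
```

If the suffix appears in the response, the password has been seen in a breach and should never be reused anywhere.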

We won! Naked Security scoops “Legends of security” award

We’re absolutely delighted – delighted and proud! – to report that we won not one but two awards at last night’s European Cybersecurity Blogger Awards 2020.

The awards usually take place alongside the Infosec show in London, England, but for rather obvious reasons both Infosec and the awards bash were cancelled this year.

So the cocktails were virtual this time, but the prizes weren’t – and we bagged two of them:

  • Best corporate blog: Naked Security
  • Legends of security – Best overall blog: Naked Security

Wow!

Thanks to all of you who took the trouble to vote for us – we really appreciate it, because you’re the Naked Security community that makes it all worthwhile.

Indeed, to everyone who reads our articles, watches our videos, listens to our podcasts, comments on Naked Security and joins in with us on social media: we couldn’t do what we do without you.

To earn your votes and come out on top in these awards really means a lot to us!

By the way, while we’re on the subject of “legends of security”, please join me in congratulating our Editor-in-Chief, Anna Brading, who really does make the Naked Security team greater than the sum of its parts.

Many of you will know Anna as the host of our Naked Security podcast, but that’s just one tiny part of what she does.

It’s thanks to Anna’s insightful editorial direction that we’re able to pick the right stories to cover, to maintain a consistent style and quality, and most importantly to ensure that our articles don’t just tell you the news but include “What to do” advice that is technically correct, useful, and – best of all – written in plain English.

Oh, and, if you don’t mind us mentioning it again, it’s thanks to Anna that we keep on winning awards!

Once more, to all of you who voted for us, thanks, and “Woo hoo!”


Latest Naked Security podcast

The mystery of the expiring Sectigo web certificate

There’s a bit of a kerfuffle in the web hosting community just at the moment over an expired web security certificate from a certificate authority called Sectigo, formerly Comodo Certificate Authority.

Expired certificates are a problem because they cause the web server that relies on them to show up as “invalid” to any program that tries to do the right thing and verify the validity of the site it’s connecting to.

But this problem isn’t Sectigo’s fault – indeed, the company has had a warning about the impending problem available for a while now, explaining what was about to happen and why.

The problem comes from what’s known as backwards compatibility, which is a jargon way of saying “trying to support old software reliably even though it really ought to have been upgraded to a newer and more reliable version”.

When your browser visits a website, it’s almost certain to be using HTTPS, short for secure HTTP, which means using the Transport Layer Security protocol (or TLS for short) to encrypt and validate the connection.

As you probably know, TLS doesn’t vouch for the content that’s ultimately served up by a web server – crooks can use TLS to deliver malware “securely” if they like – but it is nevertheless a vital part of everyday browsing.

Not only does it shroud your traffic from surveillance and snooping, it stops someone in between you and the server you’re visiting from tampering with the content on the way out or back. (Because rogues of this sort can be anywhere along the network path, it’s known colloquially as a MiTM attack, short for man-in-the-middle.)

Of course, if crooks could trivially issue certificates in the names of other websites, MiTM attacks would still be easy, even with TLS, because the crooks could put a fake site half way along the network path to the real one, and you would be unable to tell it from the real deal.

So, to make it harder for crooks to mint a web certificate in your name, you need to get your certificate vouched for by someone else, known as a certificate authority.

You then present your certificate and their certificate, and they vouch for you; if their certificate is, in turn, vouched for by your browser itself (i.e. is in a list of already-trusted certificates that can sign other certificates), then your browser will automatically accept your certificate, because it’s been signed by someone that the browser already trusts.

This forms a chain of trust.

What this means is that every browser (or every operating system on behalf of the browsers you might use) needs to have access to an up-to-date list of what are called root certificates, which is the name given to certificates that aren’t vouched for by anyone else, but that are explicitly trusted to vouch for others.
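On a typical system you can peek at that root-certificate list from Python’s standard `ssl` module. A minimal sketch (what it returns depends entirely on your operating system and OpenSSL configuration, and may be empty on some setups):

```python
import ssl

def trusted_root_subjects():
    """Return the commonName of each CA certificate in the default
    trust store that Python's ssl module loads on this machine."""
    ctx = ssl.create_default_context()  # loads the default CA certs
    names = []
    for cert in ctx.get_ca_certs():
        # 'subject' is a tuple of RDNs, each a tuple of (key, value) pairs
        for rdn in cert.get("subject", ()):
            for key, value in rdn:
                if key == "commonName":
                    names.append(value)
    return names

# e.g. check whether the long-expired AddTrust root is still present:
#   "AddTrust External CA Root" in trusted_root_subjects()
```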

Intermediate certificates

Obviously, the part of a root certificate that’s called the private key, which is used for signing purposes, needs to be kept extra-super-secure, because replacing or re-issuing root certificates is a much trickier exercise than updating or issuing so-called leaf certificates – the ones that go with your website and typically only last anywhere from 3 months to 2 years anyway.

To make it easier and safer to sign and distribute new keys, most leaf certificates use a chain of three links, not just two, to “prove” their validity.

There’s the leaf certificate that vouches for your website; there’s an intermediate certificate that vouches for your leaf; and then the intermediate certificate is vouched for by a root certificate that is itself magically imbued with vouching power because it is trusted directly by your browser or your operating system.

Root certificates therefore often have long lifetimes, typically 10 or 20 years, and the assumption is that everyone will have plenty of time to stop relying on old root certificates long before they expire.
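Expiry dates in the format Python’s `ssl` module reports (the `notAfter` field of a certificate) can be checked with the standard library alone. A small sketch; the timestamp used in the comment is the widely reported expiry of the Sectigo root at the heart of this story, which fell on 30 May 2020:

```python
import ssl, time

def days_left(not_after, now=None):
    """Days until a certificate's notAfter timestamp, given in the
    'Mon DD HH:MM:SS YYYY GMT' form the ssl module uses; a negative
    result means the certificate has already expired."""
    expires = ssl.cert_time_to_seconds(not_after)  # epoch seconds, UTC
    now = time.time() if now is None else now
    return (expires - now) / 86400.0

# The AddTrust External CA Root expired on 30 May 2020, so today:
#   days_left("May 30 10:48:38 2020 GMT")  -> a negative number
```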

But old software programs, and old operating systems, have long shelf-lives too, and old software programs, tied to an old database of trusted root certificates, often end up relying on ageing root certificates in their so-called “chain of trust” long after they should.

So, even if you do the right thing and ask your certificate authority – the company that’s vouching for you – to use their latest intermediates and their latest root certificates every time you renew your certificates, which is usually at least once a year, you might end up confusing customers with old software (possibly even with old software of your own manufacture).

That’s because old software that hasn’t yet been taught about the latest and greatest root certificates that are available – because it’s not getting reliably updated, for example – will keep on trusting the old root certificates you are keen to move away from, even as they edge towards expiry, yet will keep on rejecting the new ones as “untrusted” even though the new ones have years of life left in them.

Ironically, then, the newer and fresher your chain of trust, the less reliable your certificates will seem to old-timer programs out there.

Cross-signing

What many companies do, to support both ends of the equation, is what’s called cross-signing, where they provide two different intermediate certificates to vouch for your leaf certificate: one signed by an old root, the other by a new one.

The idea is to please most of the people most of the time.

Of course, that can make your security situation seem better than it is.

Old and possibly insecure web clients – which will include all sorts of software tools other than browsers, notably including autoupdate programs and licence-checking tools that are supposed to keep the software running correctly – will give you a false sense of being “up to scratch”.

When the tired old root certificate expires, software that has never heard of the all-new root certificate that replaced it will simply stop working. (Unless it isn’t checking the validity of your web certificate at all, but that’s increasingly rare because it’s easy for researchers to detect and will guarantee bad publicity if they do.)

It’s worse than that

But, as Andrew Ayer of SSLMate explains, the situation is worse than that.

Technically speaking, certificate chains where there’s a choice of cross-signed intermediate certificates can be “resolved” in more than one way.

You can follow the old-style intermediate certificate to the now-expired root certificate, or you can try the other way home, validating with the new-style intermediate and correctly determining that it is signed by a new and valid root.

Ideally, newer certificates should trump older ones, so that as long as one of the certificate chains checks out, the leaf certificate should be accepted.

But, as Ayer explains, some older TLS software (or some older versions of current TLS libraries) fail if the first certificate chain they try has expired, even though trying again with fresher data would find that the HTTPS connection was valid.
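Ayer’s point can be modelled in a few lines. This toy sketch is not a real X.509 verifier – the chains are just lists of (name, expiry) pairs – but it shows why a “first chain wins” verifier breaks where an “any chain wins” verifier doesn’t:

```python
# Toy model of certificate-chain selection (not real X.509 validation).
# A chain is a list of (name, expiry_epoch) pairs, leaf first.

def chain_valid(chain, now):
    """A chain checks out only if every certificate in it is unexpired."""
    return all(expiry > now for _name, expiry in chain)

def old_client_verify(chains, now):
    """Mimics older TLS code: builds one chain, and gives up
    if that first chain contains an expired certificate."""
    return chain_valid(chains[0], now)

def robust_client_verify(chains, now):
    """Mimics a modern verifier: accepts the leaf if *any*
    candidate chain validates."""
    return any(chain_valid(c, now) for c in chains)
```

With one chain running through the expired AddTrust root and a second through the still-valid replacement root, the old-style client rejects a perfectly good certificate while the robust one accepts it.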

That’s the trouble here: one of Sectigo’s backwards-compatible root certificates expired on 30 May 2020, yet some web software is still relying on that old root, even though it already knows about the new root certificate and should therefore be verifying the certificate chain as valid.

What to do?

If you are getting web connection errors on software that was working fine until the end of last month, where the error lists an invalid certificate called AddTrust External CA Root, you need to take action.

You may need to update the software that’s trying to make the connection, or its root certificate “trust store”, or both.

If you’re stuck, consult your vendor – and if you are the vendor because it’s your own software, you may need to consider upgrading to a more modern TLS programming library that handles web certificate verification in a more future-proof way.

Ayer has some advice in his blog article – notably, if you are using a TLS library that ought to validate Sectigo certificates but isn’t doing so, you may be able to fix the problem simply by deleting the now-expired AddTrust External CA Root certificate – which is no use anyway but may nevertheless get in the way – from the certificate database on your computer.

The expired certificate was replaced a decade ago (!) by one denoted USERTrust RSA Certification Authority, so many TLS libraries do know about the “new” root certificate perfectly well; the problem is that they still know about the old one too, and get hung up on it even though it serves no purpose any more.
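If you do need to remove the dead root from a PEM bundle by hand, the operation amounts to “split, fingerprint, drop”. Here’s a hedged, stdlib-only sketch: it works on the PEM text directly, never parses any X.509 fields, and assumes you have already looked up the SHA-256 fingerprint of the certificate you want removed, so double-check that fingerprint before trusting the result.

```python
import base64, hashlib, re

PEM_RE = re.compile(
    r"-----BEGIN CERTIFICATE-----\n(.*?)\n-----END CERTIFICATE-----",
    re.S,
)

def prune_bundle(pem_text, unwanted_sha256_hex):
    """Return pem_text with any certificate whose SHA-256 fingerprint
    (computed over the DER bytes) matches `unwanted_sha256_hex`
    removed. Text-level sketch only - verify the fingerprint of the
    certificate you intend to drop before using this in anger."""
    kept = []
    for match in PEM_RE.finditer(pem_text):
        der = base64.b64decode(match.group(1))  # newlines are ignored
        if hashlib.sha256(der).hexdigest() != unwanted_sha256_hex.lower():
            kept.append(match.group(0))
    return "\n".join(kept) + ("\n" if kept else "")
```

Back up the original bundle first; a slip here leaves your software trusting fewer roots than it did before.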



Hacker posts database stolen from Dark Net free hosting provider DH

In March, some 7,600 dark-web sites – about a third of all dark-web portals – were obliterated in an attack on Daniel’s Hosting (DH), the most popular provider of .onion free hosting services. Its portal was breached, its database was stolen, and its servers were wiped.

That was punch one. Punch two landed on Sunday, when a hacker going by the name KingNull or @null uploaded a copy of DH’s stolen database to a file-hosting portal and then gave ZDNet a heads-up about the leak.

ZDNet reports that a cursory analysis of the data dump shows that it includes 3,671 email addresses, 7,205 account passwords, and 8,580 private keys for .onion (dark web) domains.

Back in March, Daniel Winzen, the German software developer who runs DH, originally said that his portal was kaput, at least for the foreseeable future… which he also said, more or less, after DH suffered an earlier attack in September 2018. During the 2018 attack, hackers had rubbed 6,500 sites off the Dark Web in one fell swoop.

DarkOwl – a darknet intelligence, tools, and cybersecurity outfit that keeps an eye on DH and other Dark Web goings-on and which analyzed the September 2018 breach – had spotted Winzen’s post acknowledging the most recent attack and shared it on Twitter on 10 March. That’s the same day that DH’s hosting database got knocked out.

Who is KingNull – the hacker who went on to post DH’s database – and who else has it in for DH? Since they first spotted Winzen’s March tweet, DarkOwl analysts have looked for answers and published their take on the involved parties, which dark-net subcultures they can be traced to, and online chats about the attack. In one such discussion, an actor claimed that Winzen was compromised while accessing child abuse content.

DarkOwl connected the actor making the accusation, @Sebastian, to an anti-pedophilia hacking group formerly known as Ghost Security (#GhostSec) that was known for tracking and de-anonymizing criminals who harm children. However, the group tends to claim credit for attacks and hadn’t done so for the March attack, the firm said:

An organized hacking collective like GhostSec definitely has the capabilities and motivation to take down Winzen’s servers, especially if there was questionable content hosted and shared, but the group has not published any declaration or claim of responsibility for the hack, like they have with other groups and individuals they’ve targeted in the past.

Daniel’s was down for the count

After the March attack, Winzen said that he was fed up. He gives his time freely, he said, on top of his full-time job. It’s time-consuming work, particularly given the effort it takes to “keep the server clean from illegal and scammy sites.”

How clean were those servers, exactly? Not so much: after the 2018 attack, DarkOwl had analyzed the shuttered hidden services and found that hundreds contained content related to hacking and/or malware development, included drug-specific keywords, contained content related to counterfeiting, specifically mentioned carding, or referred to weapons and explosives.

No database backups, redux

Was Winzen really all that committed to his darknet projects, though? DarkOwl has monitored skepticism among darknet users regarding Winzen’s commitment. In fact, @null had referred to the DH chatroom as actually being a honeypot – a claim that might well be legitimate, one anonymous user suggested. Those suspicions are underscored by a server upgrade or move that happened mere weeks before the March attack, according to the darknet discussion.

If it were in fact a honeypot, that could explain why Winzen didn’t maintain backups, some have suggested. That’s how DH was wiped out so thoroughly, twice. DarkOwl:

Those who suspect that Daniel’s chatroom was actually a honey pot surmise that Daniel didn’t maintain backups of his data because they were being monitored (and probably managed) by international or German law officials. This was supported by the fact that a change in rule regarding sharing any pornographic content occurred in 2018, around the same time that Daniel was hacked and their databases disappeared.

There have been numerous pastes circulated around the darknet in the last year claiming many of the members, including [the chatroom’s controversial super administrator @Syntax] were Law Enforcement.

DarkOwl’s post includes transcriptions of many of the conversations it’s monitored and is well worth a read.

ZDNet asked threat intelligence firm Under the Breach to analyze the recent leak of DH’s database. The firm told the media outlet that the leaked database contains “sensitive information on the owners and users of several thousand darknet domains” – information such as email addresses that can be used to link their owners with certain dark-web portals, Under the Breach said:

This information could substantially help law enforcement track the individuals running or taking part in illegal activities on these darknet sites.

The darknet’s doing just fine without Daniel

DarkOwl reports that following the March attack, users of DH’s services spent several weeks scrambling to figure out where to congregate and how to communicate, with or without Winzen’s support. The darknet did just fine without DH, though: in fact, since the 11 March hack, DarkOwl said that it’s observed an average growth of 387 new domains per day across the entire darknet.

While many darknet site owners pulled up stakes and parked with new hosting providers, they could be vulnerable to hackers taking over the new accounts if they didn’t change their old passwords, ZDNet points out, if in fact their leaked, hashed passwords get cracked.
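To see why cracked hashes matter, consider how cheap a dictionary attack is against unsalted hashes: one hash of each candidate word unlocks every account that used it. This toy sketch is purely illustrative – we don’t know what hashing scheme DH used, and plain unsalted SHA-256 is assumed here only for demonstration:

```python
import hashlib

def crack_unsalted(leaked_hashes, wordlist):
    """Toy dictionary attack on unsalted hashes: hash each candidate
    word once, then match every leaked hash against that table.
    Assumes plain SHA-256 purely for illustration; real dumps vary."""
    lookup = {hashlib.sha256(w.encode()).hexdigest(): w for w in wordlist}
    return {h: lookup[h] for h in leaked_hashes if h in lookup}
```

Any account whose password appears in the attacker’s wordlist falls immediately – and, thanks to password reuse, so does every other account sharing that password. Salted, slow hashes (bcrypt, scrypt, Argon2) exist precisely to make this table-building trick impractical.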

While it might not seem like much of a crying shame when criminally inclined dark-web services, such as those devoted to child sexual abuse, get taken down, we can’t wholeheartedly cheer the leak. After all, besides shielding criminals, the hidden services of the darknet include outlets for those who are persecuted and/or living under repressive regimes.

ZDNet reports that IP addresses weren’t included in the leak. That will serve to protect both darknet criminals and those who are only looking to escape surveillance and prosecution.

In March, following the hack, Winzen told ZDNet that he was planning to relaunch the service in coming months, but only after several improvements, and that “this was not a priority.”

Will those improvements finally include database backups? … or, in keeping with the suspicion that DH is actually running a honeypot, will the relaunch include a way to penetrate the dark web in order to collect IP addresses of hidden services?

If so, we’ll be sure to bring you whatever news might be in the offing regarding law enforcement action on this huge slice of the darknet pie.
