
Dutch suspect locked up for alleged personal data megathefts

The Public Prosecution Service in the Netherlands [Dutch: Openbaar Ministerie] has just released information about an unnamed suspect arrested back in December 2022 for allegedly stealing and selling personal data about tens of millions of people.

The victims are said to live in countries as far apart as Austria, China, Colombia, the Netherlands itself, Thailand and the UK.

Apparently, the courts have taken a strict approach to this case, effectively keeping the arrest secret from late 2022 until now, and not allowing the suspect out on bail.

According to the Ministry’s report, a court order about custody was made in early December 2022, when the authorities were given permission to keep the suspect locked up for a further 90 days, meaning that they can hold him until at least March 2023 as work on his case continues.

The suspect is being investigated for multiple offences: possessing or publishing “non-public” data, possessing phishing software and hacking tools, computer hacking, and money laundering.

The prosecutors claim that he laundered close to half-a-million Euros’ worth of cryptocurrency during 2022, so we’re assuming that the court considered him a flight risk, decided that if released he might be able to destroy evidence and, presumably, thought that he might try to warn others in the cybercrime forums where he’d been active to start covering their tracks, too.

Governmental breach?

Intriguingly, the investigation was triggered by the appearance on a cybercrime forum of a multi-million record stash of personal data relating to Austrian residents.

Those data records, it seems, turned out to have a common source: the company responsible for collecting radio and TV licence fees in Austria.

Austrian cops apparently went undercover to buy up a copy of the stolen data for themselves, and in the process of doing so (their investigative methods, unsurprisingly, weren’t revealed) identified an IP number that was somehow connected to the username they’d dealt with on the dark web.

That IP number led to Amsterdam in the Netherlands, where the Dutch police took the investigation further.

As the Dutch Ministry writes:

The team has strong indications that the suspect was operating under that user name and that he had, for a long time, been offering non-public personal data – including patient data from medical records – on the forum for payment under that name. […]

With the theft of large amounts of digital data, combining different databases and trading access to this data, more and more criminals know where a person lives, performs bank transactions, what car they have, what their password is, what phone numbers they have, where they work, go to school etc. Where it used to be necessary to observe people for weeks to identify the right victim, now a push of a button suffices.

What next?

We’ll let you know if and when we learn more about this case.

We know for sure that the Dutch police and prosecutors are not going to lose interest, because the Ministry concludes its announcement with these words:

This kind of criminal activity not only grossly violates the privacy of millions of people but also causes financial damage to individuals and businesses. Police and prosecutors are committed to fighting this complex form of crime by detecting and prosecuting cybercriminals.

But we can’t help wondering whether the Austrian radio and TV licence fee collection company might attract the interest of investigators of a different sort, this time from the Austrian data protection regulators rather than the police.

Although companies that suffer breaches are undeniably cybercrime victims themselves, they sometimes end up in legal trouble of their own if the regulator forms the opinion that they could and should have done more to protect their customers.

After all, as the Dutch prosecutors point out, it is the individuals whose data actually gets stolen who are the primary victims here.

S3 Ep119: Breaches, patches, leaks and tweaks! [Audio + Text]

BREACHES, PATCHES, LEAKS AND TWEAKS

Latest episode – listen now.

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Breaches, breaches, patches, and typios.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Daul Pucklin…

…I’m sorry, Paul!


DUCK.  I think I’ve worked it out, Doug.

“Typios” is an audio typo.


DOUG.  Exactly!


DUCK.  Yes… well done, that man!


DOUG.  So, what do typos have to do with cybersecurity?

We’ll get into that…

But first – we like to start with our This Week in Tech History segment.

This week, 23 January 1996, version 1.0 of the Java Development Kit said, “Hello, world.”

Its mantra, “Write once, run anywhere”, and its release right as the web’s popularity was really reaching a fever pitch, made it an excellent platform for web-based apps.

Fast-forward to today, and we’re at version 19, Paul.


DUCK.  We are!

Java, eh?

Or “Oak”.

I believe that was its original name, because the person who invented the language had an oak tree growing outside his office.

Let us take this opportunity, Doug, to clear up, for once and for all, the confusion that lots of people have between Java and JavaScript.


DOUG.  Ooooooh…


DUCK.  A lot of people think that they are related.

They’re not related, Doug.

They’re *exactly the same* – one is just the shortened… NO, I’M COMPLETELY KIDDING YOU!

Java is not JavaScript – tell your friends!


DOUG.  I was, like, “Where is this going?” [LAUGHS]


DUCK.  JavaScript basically got that name because the word Java was cool…

…and programmers run on coffee, whether they’re programming in Java or JavaScript.


DOUG.  Alright, very good.

Thank you for clearing that up.

And on the subject of clearing things up, GoTo, the company behind such products as GoToMyPC, GoToWebinar, LogMeIn, and (cough, cough) others, says that they’ve “detected unusual activity within our development environment and third party cloud storage service.”

Paul, what do we know?

GoTo admits: Customer cloud backups stolen together with decryption key


DUCK.  That was back on the last day of November 2022.

And the (cough, cough) that you mentioned earlier, of course, is GoTo’s affiliate/subsidiary, or company that’s part of their group, LastPass.

Of course, the big story over Christmas was LastPass’s breach.

Now, this breach seems to be a different one, from what GoTo has come out and said now.

They admit that the cloud service that ultimately got breached is the same one that is shared with LastPass.

But the stuff that got breached, at least from the way they wrote it, sounds to have been breached differently.

And it took until this week – nearly two months later – for GoTo to come back with an assessment of what they found.

And the news is not good at all, Doug.

Because a whole load of products… I’ll read them out: Central, Pro, join.me, Hamachi and RemotelyAnywhere.

For all of those products, encrypted backups of customer stuff, including account data, got stolen.

And, unfortunately, the decryption key for at least some of those backups was stolen with them.

So that means they’re essentially *not* encrypted once they’re in the hands of the crooks.

And there were two other products, which were Rescue and GoToMyPC, where so-called “MFA settings” were stolen, but were not even encrypted.

So, in both cases we have, apparently: hashed-and-salted passwords missing, and we have these mysterious “MFA (multifactor authentication) settings”.

Given that this seems to be account-related data, it’s not clear what those “MFA settings” are, and it’s a pity that GoTo was not a little bit more explicit.

And my burning question is…

..do those settings include things like the phone number that SMS 2FA codes might be sent to?

The starting seed for app-based 2FA codes?

And/or those backup codes that many services let you create a few of, just in case you lose your phone or your SIM gets swapped?

SIM swapper sent to prison for 2FA cryptocurrency heist of over $20m


DOUG.  Oh, yes – good point!


DUCK.  Or your authenticator program fails.


DOUG.  Yes.


DUCK.  So, if they are any of those, then that could be big trouble.

Let’s hope those weren’t the “MFA settings”…

…but the omission of the details there means that it’s probably worth assuming that they were, or might have been, in amongst the data that was stolen.


DOUG.  And, speaking of possible omissions, we’ve got the requisite, “Your passwords have leaked. But don’t worry, they were salted and hashed.”

But not all salting-and-hashing-and-stretching is the same, is it?

Serious Security: How to store your users’ passwords safely


DUCK.  Well, they didn’t mention the stretching part!

That’s where you don’t just hash the password once.

You hash it, I don’t know… 100,100 times, or 5000 times, or 50 times, or a million times, just to make it a bit harder for the crooks.

And as you say… yes, not all salting-and-hashing is made equal.

I think you spoke fairly recently on the podcast about a breach where there were some salted-and-hashed passwords stolen, and it turned out, I think, that the salt was a two-digit code, “00” to “99”!

So, 100 different rainbow tables is all you need…

…a big ask, but it’s do-able.

And where the hash was *one round* of MD5, which you can do at billions of hashes a second, even on modest equipment.

So, just as an aside, if you’re ever unfortunate enough to suffer a breach of this sort yourself, where you lose customers’ hashed passwords, I recommend that you go out of your way to be definitive about what algorithm and parameter settings you are using.

Because it does give a little bit of comfort to your users about how long it might take crooks to do the cracking, and therefore how frenziedly you need to go about changing all your passwords!


DOUG.  Alright.

We’ve got some advice, of course, starting with: Change all passwords that relate to the services that we talked about earlier.


DUCK.  Yes, that is something that you should do.

It’s what we would normally recommend when hashed passwords are stolen, even if they’re super-strongly hashed.


DOUG.  OK.

And we’ve got: Reset any app-based 2FA code sequences that you’re using on your accounts.


DUCK.  Yes, I think you might as well do that.


DOUG.  OK.

And we’ve got: Regenerate new backup codes.


DUCK.  When you do that with most services, if backup codes are a feature, then the old ones are automatically thrown away, and the new ones replace them entirely.


DOUG.  And last, but certainly not least: Consider switching to app-based 2FA codes if you can.


DUCK.  SMS codes have the advantage that there’s no shared secret; there’s no seed.

It’s just a truly random number that the other end generates each time.

That’s the good thing about SMS-based stuff.

As we said, the bad thing is SIM-swapping.

And if you need to change either your app-based code sequence or where your SMS codes go…

…it’s much, much easier to start a new 2FA app sequence than it is to change your mobile phone number! [LAUGHS]


DOUG.  OK.

And, as I’ve been saying repeatedly (I might get this tattooed on my chest somewhere), we will keep an eye on this.

But, for now, we’ve got a leaky T-Mobile API responsible for the theft of…

(Let me check my notes here: [LOUD BELLOW OFF-MIC] THIRTY-SEVEN MILLION!?!??!)

37 million customer records:

T-Mobile admits to 37,000,000 customer records stolen by “bad actor”


DUCK.  Yes.

That’s a little bit annoying, isn’t it? [LAUGHTER]

Because 37 million is an incredibly large number… and, ironically, comes after 2022, the year in which T-Mobile paid out $500 million to settle issues relating to a data breach that T-Mobile had suffered in 2021.

Now, the good news, if you can call it that, is: last time, the data that got breached included things like Social Security Numbers [SSNs] and driving licence details.

So that’s really what you might call “high-grade” identity theft stuff.

This time, the breach is big, but my understanding is that it’s basic electronic contact details, including your phone number, along with date of birth.

That goes some way towards helping crooks with identity theft, but nowhere near as far as something like an SSN or a scanned photo of your driving licence.


DOUG.  OK, we’ve got some tips if you are affected by this, starting with: Don’t click “helpful” links in emails or other messages.

I’ve got to assume that a tonne of spam and phishing emails are going to be generated from this incident.


DUCK.  If you avoid the links, as we always say, and you find your own way there, then whether it’s a legitimate email or not, with a genuine link or a bogus one…

…if you don’t click the good links, then you won’t click the bad links either!


DOUG.  And that dovetails nicely with our second tip: Think before you click.

And then, of course, our last tip: Report those suspicious emails to your work IT team.


DUCK.  When crooks start phishing attacks, the crooks generally don’t send it to one person inside the company.

So, if the first person that sees a phish in your company happens to raise the alarm, then at least you have a chance of warning the other 49!


DOUG.  Excellent.

Well, for you iOS 12 users out there… if you were feeling left out from all the recent zero-day patches, have we got a story for you today!

Apple patches are out – old iPhones get an old zero-day fix at last!


DUCK.  We have, Doug!

I’m quite happy, because everyone knows I love my old iOS 12 phone.

We went through some excellent times, and on some lengthy and super-cool bicycle rides together until… [LAUGHTER]

…the fateful one where I got injured well enough to recover, and the phone got injured well enough that you can barely see through the cracks of the screen anymore, but it still works!

I love it when it gets an update!


DOUG.  I think this was when I learned the word prang.


DUCK.  [PAUSE] What?!

That’s not a word to you?


DOUG.  No!


DUCK.  I think it comes from the Royal Air Force in the Second World War… that was “pranging [crashing] a plane”.

So, there’s a ding, and then, well above a ding, comes a prang, although they both have the same sound.


DOUG.  OK, gotcha.


DUCK.  Surprise, surprise – after having no iOS 12 updates for ages, the pranged phone got an update…

…for a zero-day bug that was the mysterious bug fixed some time ago in iOS 16 only… [WHISPER] very secretively by Apple, if you remember that.


DOUG.  Oh, I remember that!

Apple pushes out iOS security update that’s more tight-lipped than ever


DUCK.  There was this iOS 16 update, and then some time later updates came out for all the other Apple platforms, including iOS 15.

And Apple said, “Oh, yes, actually, now we think about it, it was a zero-day. Now we’ve looked into it, although we rushed out the update for iOS 16 and didn’t do anything for iOS 15, it turns out that the bug only applies to iOS 15 and earlier.” [LAUGHS]

Apple patches everything, finally reveals mystery of iOS 16.1.2

So, wow, what a weird mystery it was!

But at least they patched everything in the end.

Now, it turns out, that old zero-day is now patched in iOS 12.

And this is one of those WebKit zero-days that sounds as though the way it’s been used in the wild is for malware implantation.

And that, as always, smells of something like spyware.

By the way, that was the only bug fixed in iOS 12 that was listed – just that one 0-day.

The other platforms got loads of fixes each.

Fortunately, those all seem to be proactive; none of them are listed by Apple as “actively being exploited.”

[PAUSE]

Right, let’s move on to something super-exciting, Doug!

I think we’re into the “typios”, aren’t we?


DOUG.  Yes!

The question I’ve been asking myself… [IRONIC] I can’t remember how long, and I’m sure other people are asking, “How can deliberate typos improve DNS security?”

Serious Security: How dEliBeRaTe tYpOs might imProVe DNS security


DUCK.  [LAUGHS]

Interestingly, this is an idea that first surfaced in 2008, around the time that the late Dan Kaminsky, who was a well-known security researcher in those days, figured out that there were some significant “reply guessing” risks to DNS servers that were perhaps much easier to exploit than people thought.

Where you simply poke replies at DNS servers, hoping that they just happen to match an outbound request that hasn’t had an official answer yet.

You just think, “Well, I’m sure somebody in your network must be interested in going to the domain naksec.test just about now. So let me send back a whole load of replies saying, ‘Hey, you asked about naksec.test; here it is’”…

…and they send you a completely fictitious server [IP] number.

That means that you come to my server instead of going to the real deal, so I basically hacked your server without going near your server at all!

And you think, “Well, how can you just send *any* reply? Surely there’s some kind of magic cryptographic cookie in the outbound DNS request?”

That means the server could notice that a subsequent reply was just someone making it up.

Well, you’d think that… but remember that DNS first saw the light of day in 1987, Doug.

And not only was security not such a big deal then, but there wasn’t room, given the network bandwidth of the day, for long-enough cryptographic cookies.

So DNS requests, if you go to RFC 1035, are protected (loosely speaking, Doug) by a unique identification number, hopefully randomly generated by the sender of the request.

Guess how long they are, Doug…


DOUG.  Not long enough?


DUCK.  16 bits.


DOUG.  Ohhhhhhhh.


DUCK.  That’s kind-of quite short… it was kind-of quite short, even in 1987!

But 16 bits is *two whole bytes*.

Typically the amount of entropy, as the jargon has it, that you would have in a DNS request (with no other cookie data added – a basic, original-style, old-school DNS request)…

…you have a 16-bit UDP source port number (although you don’t get to use all 16 bits, so let’s call it 15 bits).

And you have that 16-bit, randomly-chosen ID number… hopefully your server chooses randomly, and doesn’t use a guessable sequence.

So you have 31 bits of randomness.

And although 2^31 [just over 2 billion] is a lot of different requests that you’d have to send, it’s by no means out of the ordinary these days.

Even on my ancient laptop, Doug, sending 2^16 [65,536] different UDP requests to a DNS server takes an almost immeasurably short period of time.

So, 16 bits is almost instantaneous, and 31 bits is do-able.

So the idea, way back in 2008 was…

What if we take the domain name you’re looking up, say, naksec.test, and instead of doing what most DNS resolvers do and saying, “I want to look up n-a-k-s-e-c dot t-e-s-t,” all in lowercase because lowercase looks nice (or, if you want to be old-school, all in UPPERCASE, because DNS is case-insensitive, remember)?

What if we look up nAKseC.tESt, with a randomly chosen sequence of lowercase, UPPERCASE, UPPERCASE, lower, et cetera, and we remember what sequence we used, and we wait for the reply to come back?

Because DNS replies are mandated to have a copy of the original request in them.

What if we can use some of the data in that request as a kind of “secret signal”?

By mashing up the case, the crooks will have to guess that UDP source port; they will have to guess that 16-bit identification number in the reply; *and* they will have to guess how we chose to miS-sPEll nAKsEc.TeST.

And if they get any of those three things wrong, the attack fails.


DOUG.  Wow, OK!


DUCK.  And Google decided, “Hey, let’s try this.”

The only problem is that in really short domain names (so they’re cool, and easy to write, and easy to remember), like Twitter’s t.co, you only get three characters that can have their case changed.

It doesn’t always help, but loosely speaking, the longer your domain name, the safer you’ll be! [LAUGHS]

And I just thought that was a nice little story…


DOUG.  As the sun begins to set on our show for today, we have a reader comment.

Now, this comment came on the heels of last week’s podcast, S3 Ep118.

S3 Ep118: Guess your password? No need if it’s stolen already! [Audio + Text]

Reader Stephen writes… he basically says:

I’ve been hearing you guys talk about password managers a lot recently – I decided to roll my own.

I generate these secure passwords; I could store them on a memory stick or sticks, only connecting the stick when I need to extract and use a password.

Would the stick approach be reasonably low risk?

I guess I could become familiar with encryption techniques to encode and decode information on the stick, but I can’t help feeling that may take me way beyond the simple approach I am seeking.

So, what say you, Paul?


DUCK.  Well, if it takes you way beyond the “simple” approach, then that means it’s going to be complicated.

And if it’s complicated, then that’s a great learning exercise…

…but maybe password encryption is not the thing where you want to do those experiments. [LAUGHTER]


DOUG.  I do believe I’ve heard you say before on this very programme several different times: “No need to roll your own encryption; there are several good encryption libraries out there you can leverage.”


DUCK.  Yes… do not knit, crochet, needlepoint, or cross-stitch your own encryption if you can possibly help it!

The issue that Stephen is trying to solve is: “I want to dedicate a removable USB drive to have passwords on it – how do I go about encrypting the drive in a convenient way?”

And my recommendation is that you should go for something that does full-device encryption [FDE] *inside the operating system*.

That way, you’ve got a dedicated USB stick; you plug it in, and the operating system says, “That’s scrambled – I need the passcode.”

And the operating system deals with decrypting the whole drive.

Now, you can have encrypted *files* inside the encrypted *device*, but it means that, if you lose the device, the entire disk, while it’s unmounted and unplugged from your computer, is shredded cabbage.

And instead of trying to knit your own device driver to do that, why not use one built into the operating system?

That is my recommendation.

And this is where it gets both easy and very slightly complicated at the same time.

If you’re running Linux, then you use LUKS [Linux Unified Key Setup].

On Macs, it’s really easy: you have a technology called FileVault that’s built into the Mac.

On Windows, the equivalent of FileVault or LUKS is called BitLocker; you’ve probably heard of it.

The problem is that if you have one of the Home versions of Windows, you can’t do that full-disk encryption layer on removable drives.

You have to go and spend the extra to get the Pro version, or the business-type Windows, in order to be able to use the BitLocker full-disk encryption.

I think that’s a pity.

I wish Microsoft would just say, “We encourage you to use it as and where you can – on all your devices if you want to.”

Because even if most people don’t, at least some people will.

So that’s my advice.

The outlier is that if you have Windows, and you bought a laptop, say, at a consumer store with the Home version, you’re going to have to spend a little bit of extra money.

Because, apparently, encrypting removable drives, if you’re a Microsoft customer, isn’t important enough to build into the Home version of the operating system.


DOUG.  Alright, very good.

Thank you, Stephen, for sending that in.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH.  Stay secure!

[MUSICAL MODEM]


GoTo admits: Customer cloud backups stolen together with decryption key

GoTo is a well-known brand that owns a range of products, including technologies for teleconferencing and webinars, remote access, and password management.

If you’ve ever used GoTo Webinar (online meetings and seminars), GoToMyPC (connect and control someone else’s computer for management and support), or LastPass (a password management service), you’ve used a product from the GoTo stable.

You’ve probably not forgotten the big cybersecurity story over the 2022 Christmas holiday season, when LastPass admitted that it had suffered a breach that was much more serious than it had first thought.

The company first reported, back in August 2022, that crooks had stolen proprietary source code, following a break-in into the LastPass development network, but not customer data.

But the data grabbed in that source code robbery turned out to include enough information for attackers to follow up with a break-in at a LastPass cloud storage service, where customer data was indeed stolen, ironically including encrypted password vaults.

Now, unfortunately, it’s parent company GoTo’s turn to admit to a breach of its own – and this one also involves a development network break-in.

Security incident

On 2022-11-30, GoTo informed customers that it had suffered “a security incident”, summarising the situation as follows:

Based on the investigation to date, we have detected unusual activity within our development environment and third-party cloud storage service. The third-party cloud storage service is currently shared by both GoTo and its affiliate, LastPass.

This story, so briefly told at the time, sounds curiously similar to the one that unfolded from August 2022 to December 2022 at LastPass: development network breached; customer storage breached; investigation ongoing.

Nevertheless, given that the statement explicitly notes that the cloud service was shared between LastPass and GoTo, while implying that the development network mentioned here wasn’t, we have to assume that this breach didn’t start months earlier in LastPass’s development system.

The suggestion seems to be that, in the GoTo breach, the development network and cloud service intrusions happened at the same time, as though this was a single break-in that yielded two targets right away, unlike the LastPass scenario, where the cloud breach was a later consequence of the first.

Incident update

Two months later, GoTo has come back with an update, and the news isn’t great:

[A] threat actor exfiltrated encrypted backups from a third-party cloud storage service related to the following products: Central, Pro, join.me, Hamachi, and RemotelyAnywhere. We also have evidence that a threat actor exfiltrated an encryption key for a portion of the encrypted backups. The affected information, which varies by product, may include account usernames, salted and hashed passwords, a portion of Multi-Factor Authentication (MFA) settings, as well as some product settings and licensing information.

The company also noted that although MFA settings for some Rescue and GoToMyPC customers were stolen, their encrypted databases were not.

Two things are confusingly unclear here: firstly, why were MFA settings stored encrypted for one set of customers, but not for others; and secondly, what do the words “MFA settings” encompass anyway?

Several possible important “MFA settings” come to mind, including one or more of:

  • Phone numbers used for sending 2FA codes.
  • Starting seeds for app-based 2FA code sequences.
  • Stored recovery codes for use in emergencies.

SIM swaps and starting seeds

Clearly, leaked telephone numbers that are directly linked to the 2FA process represent handy targets for crooks who already know your username and password, but can’t get past your 2FA protection.

If the crooks are certain of the number to which your 2FA codes are being sent, they may be inclined to try for a SIM swap, where they trick, cajole or bribe a mobile phone company staffer into issuing them a “replacement” SIM card that has your number assigned to it.

If that happens, not only will they receive the very next 2FA code for your account on their phone, but your phone will go dead (because a number can only be assigned to one SIM at a time), so you are likely to miss any alerts or telltales that might otherwise have clued you in to the attack.

Starting seeds for app-based 2FA code generators are even more useful for attackers, because it’s the seed alone that determines the number sequence that appears on your phone.

Those magic six-digit numbers (they can be longer, but six is usual) are computed by hashing the current Unix-epoch time, rounded down to the start of the most recent 30-second window, using the seed value, typically a randomly-chosen 160-bit (20-byte) number, as a cryptographic key.

Anyone with a mobile phone or a GPS receiver can reliably determine the current time within a few milliseconds, let alone to the closest 30 seconds, so the starting seed is the only thing standing between a crook and your own personal code stream.

Lua code showing how a TOTP code (time-based one-time password) is generated from a 160-bit sequence seed.
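As a rough illustration of that calculation (a minimal Python sketch of the standard RFC 6238 recipe, not the Lua listing mentioned above), here is how a TOTP code can be derived from a seed: HMAC-SHA1 keyed with the 160-bit seed, computed over the current 30-second time window.

import hmac, hashlib, struct, time

def totp(seed: bytes, digits: int = 6, step: int = 30) -> str:
    # Round the current Unix time down to the start of the latest 30-second window
    counter = int(time.time()) // step
    # HMAC-SHA1 over the 64-bit big-endian counter, keyed with the secret seed
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): the low nibble of the last byte picks an offset,
    # take 31 bits from there, then reduce modulo 10^digits
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Example with a made-up 160-bit (20-byte) seed; real seeds are random secrets
print(totp(bytes(range(20))))

Anyone who holds the seed can run exactly this calculation, which is why a stolen seed renders the 2FA codes worthless as a second factor.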

Similarly, stored recovery codes (most services only let you keep a few valid ones at a time, typically five or ten, but one may well be enough) are also almost certainly going to get an attacker past your 2FA defences.

Of course, we can’t be sure that any of this data was included in those missing “MFA settings” that the crooks stole, but we do wish that GoTo had been more forthcoming about what was involved in that part of the breach.

How much salting and stretching?

Another detail that we recommend you include if ever you’re caught out in a data breach of this sort is exactly how any salted-and-hashed passwords were actually created.

This will help your customers judge how quickly they need to get through all the now-unavoidable password changes, because the strength of the hash-and-salt process (more precisely, we hope, the salt-hash-and-stretch process) determines how quickly the attackers might be able to work out your passwords from the stolen data.

  • Technically, hashed passwords aren’t generally cracked by any sort of cryptographic trickery that “reverses” the hash. A decently-chosen hashing algorithm can’t be run backwards to reveal anything about its input.
  • In practice, attackers simply try out a hugely long list of possible passwords, aiming to try very likely ones up front (e.g. pa55word), to pick moderately likely ones next (e.g. strAT0spher1C), and to leave the least likely as long as possible (e.g. 44y3VL7C5%TJCF-KGJP3qLL5).
  • When choosing a password hashing system, don’t invent your own. Look at well-known algorithms such as PBKDF2, bcrypt, scrypt and Argon2.
  • Follow the algorithm’s own guidelines for salting and stretching parameters that provide good resilience against password-list attacks.
  • Consult the Serious Security article above for expert advice.
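By way of illustration only (this is not a description of GoTo’s setup), here is a minimal Python sketch of the salt-hash-and-stretch idea using PBKDF2-HMAC-SHA256 from the standard library; the iteration count is an arbitrary example, so follow current guidance for whichever algorithm you actually choose.

import os, hashlib, hmac

ITERATIONS = 600_000     # illustrative stretch factor only; follow current guidance
SALT_BYTES = 16          # a fresh random salt for every user

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(SALT_BYTES)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both, along with the algorithm and iteration count

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)   # constant-time comparison

salt, digest = hash_password("pa55word")            # a deliberately poor password!
print(verify_password("pa55word", salt, digest))    # True

The per-user salt stops one rainbow table from covering everybody, and the stretching multiplies the attacker’s cost for every guess.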

What to do?

GoTo has admitted that the crooks have had at least some users’ account names, password hashes and an unknown set of “MFA settings” since at least the end of November 2022, close to two months ago.

There’s also the possibility, despite our assumption above that this was an entirely new breach, that this attack might turn out to have a common antecedent going back to the original LastPass intrusion in August 2022, so that the attackers might have been in the network for even longer than two months before this recent breach notification was published.

So, we suggest:

  • Change all passwords in your company that relate to the services listed above. If you were taking password risks before, such as choosing short and guessable words, or sharing passwords between accounts, stop doing that.
  • Reset any app-based 2FA code sequences that you are using on your accounts. Doing this means that if any of your 2FA seeds were stolen, they become useless to the crooks.
  • Re-generate new backup codes, if you have any. Previously-issued codes should automatically be invalidated at the same time.
  • Consider switching to app-based 2FA codes if you can, assuming you are currently using text message (SMS) authentication. It’s easier to re-seed a code-based 2FA sequence, if needed, than it is to get a new phone number.

Apple patches are out – old iPhones get an old zero-day fix at last!

Last year, on the last day of August 2022, we wrote with mild astonishment, and perhaps even a tiny touch of excitement, about an unexpected but rather important update for iPhones stuck back on iOS 12.

As we remarked at the time, we’d already decided that iOS 12 had slipped (or perhaps been quietly pushed) off Apple’s radar, and would never be updated again, given that the previous update had been a year before that, back in September 2021.

But we had to scrap that decision when iOS 12.5.6 appeared unexpectedly, fixing a mysterious zero-day bug that had been patched several weeks earlier in Apple’s other products.

Given that the iOS 12 bug fixed back then was in WebKit, Apple’s web rendering engine that’s used in all web browsers on iDevices, not just in Safari; given that real-world attackers were already known to be exploiting the hole; given that browser bugs almost always mean that merely looking at an apparently innocent and unimportant-looking web page could be enough to implant spyware on your phone in the background…

…we decided that iOS 12.5.6 was an important update to get:

Updates you thought you’d never see are important to check up on, especially if you own an older “backup” iPhone that you don’t use every day any more, or that you’ve passed on to a less tech-savvy member of your family.

Well, here’s some déjà vu all over again: Apple’s latest updates just dropped, and as far as we can tell, there’s only one zero-day fix amongst the updates, and once again it’s for iOS 12.

Just as importantly, this patch also fixes a hole in WebKit that sounds as though it’s already being abused by attackers for implanting malware.

As it happens, this is the only bug fixed in the iOS 12.5.7 update, and it’s got the official bug number CVE-2022-42856.

That rings a bell

If the bug number CVE-2022-42856 rings a bell, that’s probably because Apple fixed it in two rounds of updates to all its other products in December 2022.

Firstly, there was a mysterious round of updates that turned out to be not so much a round as a solo effort, patching iOS 16.1 up to iOS 16.2.

No other devices in the Apple stable got updated, not even iOS 15, the previous version of iOS that some users stuck to by choice, and others because their older phones couldn’t be upgraded to iOS 16.

Secondly, a few weeks later, came the updates that somehow felt as though they’d been delayed from the first “round”.

At this point, Apple rather curiously (or perhaps we mean confusingly?) admitted that the update already published for iOS 16 was, in fact, a patch against CVE-2022-42856, which had been a zero-day bug all along…

…but a zero-day that applied only to iOS 15.1 and earlier.

In other words, the early availability of the iOS 16.1.2 update, though it did no harm, turned out to have been a “fix” for the one version of iOS that didn’t need it.

That early iOS 16 update would much more usefully have made its first appearance as an iOS 15 patch instead.

Now iOS 12 joins the club

As you already know, because we mentioned the bug number above, there’s now a belated zero-day patch, for that very same bug, that applies to Apple’s oldest extant iOS flavour, namely iOS 12.

Get this update now, because the crooks have known about this one for close to two months at least.

(We’re guessing that the attackers developed a keen interest in fine-tuning their CVE-2022-42856 exploit for iOS 12 as soon as the more widely-used iOS 15 got its updates at the end of 2022.)

Go to Settings > General > Software Update to check if you have the patch already, or to force an update if you don’t:

Lots of other updates, too

For all that the critical iOS 12 zero-day patch fixes one and only one listed bug, Apple’s other products get a wide range of patches, though we didn’t find any that are listed as “already actively exploited”.

In other words, none of the many bugs fixed in any products other than iOS 12 count as zero-days, and therefore by patching right away you are getting ahead of the crooks, not merely catching up with them.

The updated version numbers you’re looking for after you’ve installed the patches are as follows, with their security bulletin pages for easy reference, and the hardware products they apply to:

  • Bulletin HT213597: iOS 12.5.7. For iPhone 5s, iPhone 6, iPhone 6 Plus, iPad Air, iPad mini 2, iPad mini 3, and iPod touch (6th generation).
  • Bulletin HT213603: macOS Big Sur 11.7.3. Typically used on older Macs that don’t support the latest versions, such as the original 12″ MacBook from 2015.
  • Bulletin HT213604: macOS Monterey 12.6.3.
  • Bulletin HT213605: macOS Ventura 13.2.
  • Bulletin HT213598: iOS 15.7.3 and iPadOS 15.7.3. iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Air 2, iPad mini (4th generation), and iPod touch (7th generation).
  • Bulletin HT213606: iOS 16.3 and iPadOS 16.3. iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later.
  • Bulletin HT213599: watchOS 9.3. Apple Watch Series 4 and later.

As usually happens with Mac updates, there’s a new version of the WebKit rendering engine and the Safari browser, dubbed Safari 16.3, presumably to match the biggest product version number on the list above, namely iOS 16.3 and iPadOS 16.3.

If you have the latest version of macOS, namely macOS Ventura 13, this new Safari version arrives along with the macOS update, so that’s all you need to download and install.

But if you’re still on macOS 11 Big Sur or macOS 12 Monterey, the Safari patches come as a separate download, so there will be two updates waiting for you, not one. (That second update isn’t one you forgot from last time!)

What to do?

On macOS, use: Apple menu > About this Mac > Software Update…

As mentioned above, on iPhones and iPads, use: Settings > General > Software Update.

Don’t delay, especially if you’re still running an iOS 12 device…

…please do it today!


Serious Security: How dEliBeRaTe tYpOs might imProVe DNS security

Over the years, we’ve written and spoken on Naked Security many times about the thorny problem of DNS hijacking.

DNS, as you probably know, is short for domain name system, and you’ll often hear it described as the internet’s “telephone directory” or “gazetteer”.

If you’re not familiar with the word gazetteer, it refers to the index at the back of an atlas where you look up, say, Monrovia, Liberia in a convenient alphabetic list, and it says something like 184 - C4. This tells you to turn straight to page 184, and to follow the grid lines down from the letter C at the top of the map, and across from the number 4 on the left. Where the lines meet, you’ll find Monrovia.

For most users, most DNS lookups go out containing a server name, asking for a reply to come back that includes what’s known as its A-record or its AAAA-record.

(A-records are used for 32-bit IPv4 internet numbers, such as 203.0.113.42; AAAA-records are the equivalent answers for 128-bit IPv6 addresses, such as 2001:db8:15a:d0c::42 – in this article, we’ll just use A-records and IPv4 numbers, but the same security issues apply to the lookup process in both cases.)

Here’s an example, where we’re looking up the imaginary domain name naksec.test via a DNS server that was specially created to track and teach you about DNS traffic.

We’ve used the old-school Linux tool dig, short for domain internet groper, to generate a simple DNS request (dig defaults to looking up A-records) for the server we want:

$ dig +noedns @127.42.42.254 naksec.test

;; QUESTION SECTION:
;naksec.test.                  IN      A

;; ANSWER SECTION:
NAKSEC.TEST.            5      IN      A       203.0.113.42

;; Query time: 1 msec
;; SERVER: 127.42.42.254#53(127.42.42.254) (UDP)
;; WHEN: Mon Jan 23 14:38:42 GMT 2023
;; MSG SIZE rcvd: 56

Here’s how our DNS server dealt with the request, showing a hex dump of the incoming request, and the successful reply that went back:

---> Request from 127.0.0.1:57708 to 127.42.42.254:53
---> 00000000  62 4e 01 20 00 01 00 00  00 00 00 00 06 6e 61 6b  |bN. .........nak|
     00000010  73 65 63 04 74 65 73 74  00 00 01 00 01           |sec.test.....   |

DNS lookup: A-record for naksec.test ==> A=203.0.113.42

<--- Reply from 127.42.42.254:53 to 127.0.0.1:57708
<--- 00000000  62 4e 84 b0 00 01 00 01  00 00 00 00 06 6e 61 6b  |bN...........nak|
     00000010  73 65 63 04 74 65 73 74  00 00 01 00 01 06 4e 41  |sec.test......NA|
     00000020  4b 53 45 43 04 54 45 53  54 00 00 01 00 01 00 00  |KSEC.TEST.......|
     00000030  00 05 00 04 cb 00 71 2a                           |......q*        |

Note that, for performance reasons, most DNS requests use UDP, the user datagram protocol, which works on a send-and-hope basis: you fire off a UDP packet at the server you want to talk to, and then wait to see if a reply comes back.

This makes UDP much simpler and faster than its big cousin TCP, the transmission control protocol, which, as its name suggests, automatically takes care of lots of details that UDP doesn’t.

Notably, TCP deals with detecting when data gets lost and asking for it again; ensuring that any chunks of data arrive in the right order; and providing a single network connection that, once set up, can be used for sending and receiving at the same time.

UDP doesn’t have the concept of a “connection”, so that requests and replies essentially travel independently:

  • A DNS request arrives at the DNS server in a UDP packet of its own.
  • The DNS server keeps a record of which computer sent that particular packet.
  • The server sets about finding an answer to send back, or deciding that there isn’t one.
  • The server sends a reply to the original sender, using a second UDP packet.

From the level of the operating system or the network, those two UDP packets above are independent, standalone transmissions – they aren’t tied together as part of the same digital connection.

It’s up to the server to remember which client to send each reply to; and it’s up to the client to figure out which replies relate to which requests it originally sent out.

How can you be sure?

At this point, especially looking at the diminutive size of the DNS request and reply above, you’re probably wondering, “How can the client be sure that it’s matched the right reply, and not one that’s been garbled in transit, or directed incorrectly by mistake, either by accident or design?”

Unfortunately, many, if not most, DNS requests (especially those from server to server, when the first server you ask doesn’t know the answer and needs to find one that does in order to formulate its reply) aren’t encrypted, or otherwise labelled with any sort of cryptographic authentication code.

In fact, by default, DNS requests include a single “identification tag”, which is referred to in the DNS data-format documentation simply as ID.

Amazingly, despite having received numerous updates and suggested improvements over the years, the official internet RFC (request for comments) document that acts as the DNS specification is still RFC 1035 (we’re currently into RFCs in the mid-9000s), dating all the way back to November 1987, just over 35 years ago!

Back then, both bandwidth and processing power were in short supply: typical CPU speeds were about 10MHz; desktop computers had about 1MByte of RAM; internet access speeds, for organisations who could get online at all, were often 56kbits/sec or 64 kbits/sec, shared between everyone; and 1200bits/sec was the affordable choice for personal connectivity via the dialup modems of the day.

That’s why DNS request and reply headers were – and still are – squashed into a measly 12 bytes, of which the ID tag takes up the first two, as RFC 1035’s cute ASCII art makes clear:

                                  1  1  1  1  1  1
    0  1  2  3  4  5  6  7  8  9  0  1  2  3  4  5
  +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
  |                      ID                       |
  +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
  |QR|   Opcode  |AA|TC|RD|RA|   Z    |   RCODE   |
  +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
  |                    QDCOUNT                    |
  +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
  |                    ANCOUNT                    |
  +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
  |                    NSCOUNT                    |
  +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
  |                    ARCOUNT                    |
  +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+

You can see the ID in action in the hex dumps shown above, where both the request and the reply packets start with the same two characters bN, which correspond to the 16-bit identifier 62 4e in hex.

Very loosely speaking, those 16 bits are as much as the official DNS protocol provides by way of “authentication” or “error detection”.
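To make that concrete, here is a tiny Python sketch (our illustration, not code from the DNS specification) that packs the 12-byte header with a randomly chosen ID and reads it back out; in the hex dumps earlier, the bytes 62 4e at the start of both the request and the reply are exactly this field.

import secrets, struct

# Pack the 12-byte DNS header: ID, flags (RD=1), QDCOUNT=1, and three zero counts
txid = secrets.randbelow(1 << 16)      # the 16-bit "identification tag"
header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)

# The ID occupies the first two bytes of every request and of its matching reply
echoed = struct.unpack(">H", header[:2])[0]
print(f"ID chosen: 0x{txid:04x}   ID read back from the header: 0x{echoed:04x}")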

Meddling by guesswork

As you can imagine, given the end-to-end simplicity of regular DNS traffic, anyone with a so-called middlebox or scanning proxy who can intercept, examine and modify your network traffic can trivially meddle with your DNS traffic.

This includes sending back replies that deliberately give you inaccurate information, such as your IT team redirecting you away from servers that it knows to be littered with malware.

It may also include your ISP complying with legislation in your country that requires some servers to be reported as non-existent, even if they are alive and running fine, because they’re on a blocklist of illegal content such as child abuse material.

But, at first glance, this ultra-weak sort of DNS ID tagging also seems to make it trivial even for attackers who have no visibility of your network traffic at all to fire fake DNS replies at your users or your servers…

…with a dangerously high chance of success.

After all, if attackers know that someone on your network regularly likes to visit naksec.test, that server might seem like a juicy place to implant fake news, dodgy updates, or rogue JavaScript code.

And if the attackers aren’t able to hack into the naksec.test server itself, what if they were to regularly and frequently fire UDP packets at your DNS server, using a made-up ID tag, that claimed to answer the question, “What is the A-record for naksec.test“?

That way, they might be able to hijack the DNS request, provide a fake reply, and therefore misdirect your next visit to the website – essentially hijacking the site itself without ever needing to attack the naksec.test server at all.

Some luck required

They’d need to get a bit lucky, of course, though they could try over and over again to boost their overall chances, given that they only need to succeed once, whereas you need to get a truthful DNS reply every time.

To succeed, they’d need to send their rogue DNS reply:

  • During a period that your own server didn’t already know the answer to the question. DNS replies include a 32-bit number called TTL, short for time-to-live, which says how long the other end can keep re-using the answer. If you or anyone else on your network asked for naksec.test recently, your DNS server might have the answer in its cache. No further lookup would be needed, and there would be no outgoing request for the attackers to hijack.
  • Between the time that you sent your request and the official reply came back from outside. Even in the olden days, DNS lookup times rarely ran into more than a few seconds. Today, they’re best measured in milliseconds.
  • With the right number in its first 16 bits. You can fit 65536 (2^16) different values into 16 bits, so the attackers would have to be somewhat lucky. But at today’s network bandwidths, sending 65536 different fake replies at once, thus covering all possible ID numbers, takes a tiny fraction of a second.

Fortunately, decent DNS servers today take an extra step to make hijacking difficult by default.

At least, that’s what they’ve been doing since about 2008, when the late Dan Kaminsky pointed out that lots of DNS servers back then were not only configured to listen for incoming requests on a fixed UDP port (almost always port 53, officially assigned to DNS)…

…but also to receive inbound replies on a fixed port, too, often also port 53, if only to create a pleasing symmetry in traffic.

The reason for using a fixed port in both directions was probably originally for programming simplicity. By always listening for replies on the same UDP port number, you don’t need to keep track of which ports should be opened up for which replies. This means that the request handler and the reply generator components of your DNS software can operate independently. The request listener doesn’t need to tell the reply sender, “This particular reply needs to go back on a special port, not the usual one.”

Use port numbers as extra ID

These days, almost all UDP-based DNS servers listen on port 53, as always, but they keep track of the so-called “source port” used by the DNS requester, which they expect to have been chosen randomly.

UDP source ports, which are a bit like an “extension number” in an old-school office telephone exchange, are intended to help you, and the network, differentiate requests from one another.

Internet protocol ports (TCP uses them, too) can run from 1 to 65535, though most outbound connections only use source ports 1024-65535, because port numbers 1023 and below are typically reserved for processes with system privileges.

The idea is that the sender of any DNS lookup should not only insert a truly random 16-bit ID at the start of each request, but also choose a truly random UDP source port number at which it will listen for the associated reply.

This adds an extra level of guesswork that the crooks have to add to their “hijack luck” list above, namely that they have to send a fake reply that ticks all of these boxes:

  • Must be a query that was recently asked, typically within the past few seconds.
  • Must be a lookup that wasn’t in the local server’s cache, typically meaning that no one else asked about it within the past few minutes.
  • Must have the right 16-bit ID number at the start of the data packet.
  • Must be sent to the right destination port at the relevant server’s IP number.
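To illustrate the client side of that checklist, here is a simplified Python sketch (an assumption of how a resolver might do it, not production code) that sends an A-record query for naksec.test to the teaching server used earlier from a randomly chosen source port, and only accepts a reply that comes from the right place and echoes the right ID.

import secrets, socket, struct

DNS_SERVER = ("127.42.42.254", 53)    # the teaching server from the dig examples

# Build a minimal A-record query for naksec.test with a random 16-bit ID
txid = secrets.randbelow(1 << 16)
packet = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
packet += b"\x06naksec\x04test\x00" + struct.pack(">HH", 1, 1)   # QTYPE=A, QCLASS=IN

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Bind to a randomly chosen high source port: the second number an off-path
# attacker has to guess (a real resolver would retry if the port were in use)
sock.bind(("0.0.0.0", 1024 + secrets.randbelow(65536 - 1024)))
sock.settimeout(2)
sock.sendto(packet, DNS_SERVER)

reply, addr = sock.recvfrom(512)
# Accept the reply only if it came from the server we asked and echoes our ID
assert addr == DNS_SERVER and struct.unpack(">H", reply[:2])[0] == txid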

And another thing

In fact, there’s yet another trick that DNS requesters can do, without changing the underlying DNS protocol, and thus (for the most part) without breaking anything.

This trick, astonishingly, was first proposed back in 2008, in a paper gloriously entitled Increased DNS Forgery Resistance Through 0x20-Bit Encoding: SecURItY viA LeET QueRies.

The idea is weirdly simple, and relies on two details in the DNS protocol:

  • All DNS replies must include the original query section at the start. Queries, obviously, have an empty answer section, but replies are required to reflect the original question, which helps ensure that requests and replies don’t accidentally get mixed up.
  • All DNS questions are case-insensitive. Whether you ask for naksec.test, or NAKSEC.TEST, or nAksEc.tESt, you should get the same answer.

Now, there’s nothing in the protocol that says you MUST use the same sPeLLing in the part of the reply where you repeat the original query, because DNS doesn’t care about case.

But although RFC 1035 requires you to do case-insensitive comparisons, it strongly suggests that you don’t actually change the case of any text names that you receive in requests or retrieve from your own databases for use in replies.

In other words, if you receive a request for nAKsEC.tEST, and your database has it stored as NAKSEC.TEST, then those two names are nevertheless considered identical and will match.

But when you formulate your answer, RFC 1035 suggests that you don’t change the character case of the data you put into your reply, even though you might think it would look neater, and even though it would still match at the other end, thanks to the case-insensitive comparison demanded by DNS.

So, if you randomly mix up the case of the letters in a DNS request before you send it, most DNS servers will faithfully reflect that weird mashup of letters, even if their own database stores the name of the server differently, as you see here:

$ dig +noedns @127.42.42.254 nAkSEc.tEsT

;; QUESTION SECTION:
;nAkSEc.tEsT.                  IN      A

;; ANSWER SECTION:
NAKSEC.TEST.            5      IN      A       203.0.113.42

;; Query time: 1 msec
;; SERVER: 127.42.42.254#53(127.42.42.254) (UDP)
;; WHEN: Mon Jan 23 14:40:34 GMT 2023
;; MSG SIZE rcvd: 56

Our DNS server stores the name naksec.test all in upper case, and so the answer section of the reply includes the name in the form NAKSEC.TEST, along with its IPv4 number (the A-record) of 203.0.113.42.

But in the “here’s the query data returned to you for cross-checking” part of the reply that our DNS server sends back, the character-case mashup originally used in the DNS lookup is preserved:

---> Request from 127.0.0.1:55772 to 127.42.42.254:53
---> 00000000  c0 55 01 20 00 01 00 00  00 00 00 00 06 6e 41 6b  |.U. .........nAk|
     00000010  53 45 63 04 74 45 73 54  00 00 01 00 01           |SEc.tEsT.....   |

DNS lookup: A-record for nAkSEc.tEsT ==> A=203.0.113.42

<--- Reply from 127.42.42.254:53 to 127.0.0.1:55772
<--- 00000000  c0 55 84 b0 00 01 00 01  00 00 00 00 06 6e 41 6b  |.U...........nAk|
     00000010  53 45 63 04 74 45 73 54  00 00 01 00 01 06 4e 41  |SEc.tEsT......NA|
     00000020  4b 53 45 43 04 54 45 53  54 00 00 01 00 01 00 00  |KSEC.TEST.......|
     00000030  00 05 00 04 cb 00 71 2a                           |......q*        |

Extra security tagging, free of charge

Bingo!

There’s some more “identification tagging” that a DNS lookup can add!

Along with 15-or-so bits’ worth of randomly-chosen source port, and 16 bits of randomly-chosen ID number data, the requester gets to choose upper-versus-lower case for each alphabetic character in the domain name.

And naksec.test contains 10 letters, each of which could be written in upper or lower case, for a further 10 bits’ worth of random “tagging”.

With this extra detail to guess, the attackers would need to get lucky with their timing, the UDP port number, the ID tag value, and the caPiTaLiSATion of the domain name, in order to inject a fake “hijack reply” that the requesting server would accept.
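As a rough sketch of how a requester might apply and then check that extra tagging (our illustration, not Google’s actual implementation), consider:

import secrets

def randomize_case(name: str) -> str:
    # Toggle each letter's case at random: one extra bit of "tagging" per letter,
    # so naksec.test (10 letters) adds roughly 10 bits for an attacker to guess
    return "".join(
        (c.upper() if secrets.randbits(1) else c.lower()) if c.isalpha() else c
        for c in name
    )

def reply_acceptable(sent_name: str, echoed_name: str) -> bool:
    # The reply must echo our exact mis-spelling byte-for-byte, or we drop it
    return sent_name == echoed_name

query_name = randomize_case("naksec.test")
print(query_name)                                    # e.g. nAkSEc.tEsT
print(reply_acceptable(query_name, query_name))      # True
print(reply_acceptable(query_name, "naksec.test"))   # almost certainly False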

By the way, the name 0x20-encoding above is a bit of a joke: 0x20 in hexadecimal is 00100000 in binary, and the solitary bit in that byte is what differentiates upper-case and lower-case letters in the ASCII encoding system.

The letters A to I, for example, come out as 0x41 to 0x49, while a to i come out as 0x61 to 0x69.

 ASCII encoding chart as ASCII text
+------+------+------+------+------+------+------+------+
|00 ^@ |10 ^P |20 |30 0 |40 @ |50 P |60 ` |70 p |
|01 ^A |11 ^Q |21 ! |31 1 |41 A |51 Q |61 a |71 q |
|02 ^B |12 ^R |22 " |32 2 |42 B |52 R |62 b |72 r |
|03 ^C |13 ^S |23 # |33 3 |43 C |53 S |63 c |73 s |
|04 ^D |14 ^T |24 $ |34 4 |44 D |54 T |64 d |74 t |
|05 ^E |15 ^U |25 % |35 5 |45 E |55 U |65 e |75 u |
|06 ^F |16 ^V |26 & |36 6 |46 F |56 V |66 f |76 v |
|07 ^G |17 ^W |27 ' |37 7 |47 G |57 W |67 g |77 w |
|08 ^H |18 ^X |28 ( |38 8 |48 H |58 X |68 h |78 x |
|09 ^I |19 ^Y |29 ) |39 9 |49 I |59 Y |69 i |79 y |
|0A ^J |1A ^Z |2A * |3A : |4A J |5A Z |6A j |7A z |
|0B ^K |1B ^[ |2B + |3B ; |4B K |5B [ |6B k |7B { |
|0C ^L |1C ^\ |2C , |3C < |4C L |5C \ |6C l |7C | |
|0D ^M |1D ^] |2D - |3D = |4D M |5D ] |6D m |7D } |
|0E ^N |1E ^^ |2E . |3E > |4E N |5E ^ |6E n |7E ~ |
|0F ^O |1F ^_ |2F / |3F ? |4F O |5F _ |6F o |7F |
+------+------+------+------+------+------+------+------+

In other words, if you add 0x41+0x20 to get 0x61, you turn A into a; if you subtract 0x69-0x20 to get 0x49, you turn i into I.
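The same arithmetic, checked in a couple of lines of Python:

# Adding, subtracting or XORing 0x20 toggles the case of an ASCII letter
print(chr(0x41 + 0x20), chr(0x69 - 0x20))            # a I
print(chr(ord("A") ^ 0x20), chr(ord("a") ^ 0x20))    # a A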

Why now?

You might, by now, be wondering, “Why now, if the idea appeared 15 years ago, and would it actually do any good anyway?”

Our sudden interest, as it happens, comes from a recent public email from Google techies, admitting that their 2022 experimentations with this old-school sECuriTY tRick have been deployed in real life:

As we previously announced, Google Public DNS is in the process of enabling case randomization of DNS query names sent to authoritative nameservers. We have successfully deployed it in some regions in North America, Europe and Asia protecting the majority (90%) of DNS queries in those regions not covered by DNS over TLS.

Intriguingly, Google suggests that the main problem it has had with this simple and apparently uncontroversial tweak is that while most DNS servers are either consistently case-insensitive (so this trick can be used with them) or consistently not (so they are blocklisted), as you might expect…

…a few mainstream servers occasionally drop into “case-sensitive” mode for short periods, which sounds like the sort of inconsistency you’d hope that major service providers would avoid.

Does it really help?

The answer to the question, “Is it worth it?” isn’t yet clear.

If you’ve got a nice long service name, like nakedsecurity.sophos.com (22 alphabetic characters), then there’s plenty of extra signalling power, because 2^22 different capitalisations means 4 million combinations for the crooks to try, multiplied by the 65536 different ID numbers, multiplied by the approximately 32000 to 64000 different source ports to guess…

…but if you’ve paid a small fortune for a supershort domain name, such as Twitter’s t.co, your attackers only face a job that’s 2x2x2 = 8 times harder than before.

Nevertheless, I think we can say, “Chapeau” to Google for trying this out.

As cybersecurity observers like to say, attacks only ever get faster, so anything that can take an existing protocol and add extra cracking time to it, almost “for free”, is a useful way of fighting back.

