Category Archives: News

Google Titan security keys hacked by French researchers

In July 2018, after many years of using Yubico security key products for two-factor authentication (2FA), Google announced that it was entering the market as a competitor with a product of its own, called Google Titan.

Security keys of this sort are often known as FIDO keys after the Fast IDentity Online Alliance, which curates the technical specifications of a range of authentication technologies that “[p]romote the development of, use of, and compliance with standards for authentication and device attestation”.

Boldly put, FIDO aims to “help reduce the world’s over-reliance on passwords.”

The Google Titan device, like similar products from Swedish company Yubico and Chinese company Feitian (which actually makes the hardware used in the Titan), looks like a miniature key fob that contains specialised and supposedly tamper-proof hardware for performing secure cryptographic calculations.

Titan product images from the Google Store.

Much like the chip on your credit card (in fact, Titan keys use the same secure processor as some smart cards), or the SIM card in your phone, Titans are designed to do encryption in a rather special way.

Titans can generate encryption keys internally, can encrypt (or digitally sign) data that you send to them, and can export the encrypted (or signed) data.

But they cannot export the secret part of the key itself, which is locked up inside the chip.

As you can probably imagine, this makes it possible to implement a secure login process where:

  • You don’t need to remember a complex password, because the necessary cryptographic secret is stored on the Titan key.
  • The data submitted for authentication is different at every login, thanks to the active cryptographic calculation in the process, unlike a conventional password that is the same every time (a simplified sketch of this challenge-response exchange follows the list).
  • You can’t accidentally reveal the secret to anyone else, because it was generated inside the key and can’t be extracted.
  • You can’t log in without the key, making it an ideal second factor of authentication – “something you have”, in the jargon, to go along with “something you know”, such as your username and regular password.
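
To make the second point above more concrete, here is a minimal Python sketch of the kind of challenge-response exchange involved, using ECDSA via the third-party cryptography library. (Real FIDO messages also carry extra metadata, such as the website’s identity and a signature counter, but the underlying idea is the same.)

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature
import os

# "Inside the key": generate a keypair; only the public half ever leaves.
device_private_key = ec.generate_private_key(ec.SECP256R1())
device_public_key = device_private_key.public_key()   # handed to the server at enrolment

# "On the server": issue a fresh random challenge for this login attempt.
challenge = os.urandom(32)

# "Inside the key": sign the challenge; the private key never leaves the device,
# and the signature is different every time because the challenge is different.
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# "On the server": check the signature against the stored public key.
try:
    device_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("login accepted")
except InvalidSignature:
    print("login rejected")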

Simply put, the fact that the key itself not only generates but also securely stores its own cryptographic secrets means that it can’t, in theory at least, be cloned or copied.

This anti-copying feature provides strong protection against attacks such as phishing, where you get tricked into typing in your password on a fake site, and keylogging, where you get infected by malware that monitors your keystrokes and steals your password as you type it in.

Titan keys use a choice of USB (you plug them briefly into a USB port), NFC (you wave them near an NFC-enabled device such as a phone) or Bluetooth (same idea as NFC). Because they can’t be tricked into spitting out your secret cryptographic keys, they can’t be skimmed or plundered for their data even if you connect them up to a computer or a phone that is itself infected with malware.

Of course, all this anti-cloning protection relies on two vital assumptions, namely that the Titan key really is clone-proof, and that its private internal data can’t be extracted by an attacker.

Those assumptions have just been disproved.

French researchers Victor Lomne and Thomas Roche from a company called NinjaLab just published a fascinating paper entitled A Side Journey to Titan: Side-Channel Attack on the Google Titan Security Key.

In this admittedly very technical paper (strong mathematics required), they explain how they bypassed Titan’s anti-clone protection and figured out a way to extract secret data from the device.

In particular, Titan keys provide support for a public-key encryption algorithm called ECDSA (Elliptic Curve Digital Signature Algorithm), where the device itself generates a public-private keypair, exports the public key only, and keeps the private key inside the device where you aren’t supposed to be able to get at it.

Amazingly, the researchers came up with a technique, admittedly not an easy or a quick one, by which they could use electromagnetic emanations – tiny, stray radio waves emitted by the device as a side-effect of the electrons whizzing around inside it as it operates – to make guesses about the internal state of the Titan processor chip while it was performing cryptographic calculations.

In particular, they figured out how to monitor the chip while it was performing authentication operations, something that the device is designed to do whenever requested.

From the combined electromagnetic emissions of several thousand cryptographic calculations, they were able to infer the private key that was used in the process.

Interestingly, their electromagnetic snooping didn’t reveal the bits of the private key directly.

Instead, they were able to guess at the value of some of the bits of a random number, known as a nonce (number used once), generated and used internally in a multiplication operation every time a digital signature was calculated.

Multiplication algorithms are notoriously difficult to program in such a way that they behave consistently no matter what numbers are being multiplied.

Knowing the random nonce, which is deliberately thrown away after each digital signature is completed, is enough to extract the private key, given that you already know the input data, the public key and the output data.
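
Here’s a minimal worked sketch, in pure Python with the NIST P-256 parameters used for FIDO U2F signatures, of why a leaked nonce is fatal. (This shows the simplest possible case, where one nonce is known completely; the real attack recovers only a few bits of each of thousands of nonces and then needs heavyweight statistical processing to finish the job.)

import hashlib, secrets

# NIST P-256 curve parameters
p  = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
a  = p - 3
n  = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551
Gx = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
Gy = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5

def add(P, Q):
    # Point addition in affine coordinates (None is the point at infinity).
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0: return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    # Scalar multiplication k*P by double-and-add.
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (Gx, Gy)
d = secrets.randbelow(n - 1) + 1      # private key (stays inside the token)
z = int.from_bytes(hashlib.sha256(b"login challenge").digest(), "big") % n

# An ordinary ECDSA signature: r = (k*G).x mod n, s = k^-1 * (z + r*d) mod n
k = secrets.randbelow(n - 1) + 1      # the secret per-signature nonce
r = mul(k, G)[0] % n
s = pow(k, -1, n) * (z + r * d) % n

# If the nonce k leaks (say, via a side channel), the private key follows
# directly from the signature equation: d = (s*k - z) * r^-1 mod n
d_recovered = (s * k - z) * pow(r, -1, n) % n
assert d_recovered == d
print("private key recovered from one signature plus its nonce")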

You can see the “complexity of consistent multiplication” problem in action if you ask someone to multiply without a calculator. They’ll come up with the answer to 312×100 right away; 312×101 will take a bit longer but they should be able to do it in their head; but ask them for 312×456 and they will almost certainly reach for pen and paper and take many times longer to get the answer. Ensuring that your multiplication algorithm does exactly the same amount of work, in an indistinguishable way, regardless of whether the calculation is “easy” or “difficult”, is surprisingly hard.
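
The same problem shows up in the scalar multiplications at the heart of ECDSA. In a naive double-and-add implementation, the amount of work depends directly on the bits of the secret nonce, which is exactly the sort of data-dependent behaviour that side-channel measurements can pick up. A toy sketch that just counts the operations makes the point:

def work_profile(k):
    # Count the doublings and additions a naive double-and-add would perform.
    doubles = adds = 0
    while k:
        if k & 1:
            adds += 1       # extra point addition only when this bit is 1
        doubles += 1
        k >>= 1
    return doubles, adds

for k in (0b10000000, 0b10101010, 0b11111111):
    doubles, adds = work_profile(k)
    print(f"nonce bits {k:08b} -> {doubles} doublings, {adds} additions")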

How bad is the attack?

After sampling thousands of digital signature operations, involving thousands of nonces, they had enough information about the internal state of each ECDSA computation to work backwards to the private key.

That’s the bad news: it proves that if attackers can get their hands on your Titan key for a while, and connect it to a monitoring device of their own for long enough, they can extract the current ECDSA private key and use it to make a software clone of your Titan key.

The crooks could then snoop on you after returning the original key to you, because if you don’t realise you’ve been hacked, you’ll probably keep logging in as usual.

The chip inside the Titans turned out not to be tamper proof, and not to be sufficiently protected against electromagnetic snooping.

That combination led to what’s known as a side channel attack, so-called because it relies on measuring various side effects of the calculations performed, rather than tracking the calculations directly.

Technically, therefore, the researchers have successfully hacked Google Titan keys.

Here’s the good news: this attack isn’t very practical.

Firstly, you need about $10,000 of specialist equipment, carefully set up to perform pinpoint radio measurements:

The researchers’ electromagnetic snooping rig.
See full paper for original image.

If we zoom into the picture of the snooping rig, you can see both the electromagnetic probe (the metal spike emerging from the red-and-gold box) and the precision positioning device (the black slab labelled “Thorlabs”):

Thorlabs positioner and Langer radio probe.
See full paper for original image.

The Langer ICR HH 500-6 electromagnetic probe has a detection coil that is just half a millimetre in diameter and can pinpoint radio emissions between 2GHz and 6GHz.

The Thorlabs PT3/M 3-axis (X-Y-Z) manual micro-manipulator can be positioned to within one-hundredth of a millimetre (10 micrometres, or just 4/10,000ths of an inch).

Secondly, you need to open up the Titan key, which the researchers found easy to do, but not in a way that would escape notice when the device was reassembled and returned to the original owner in the hope they would not notice it had been “borrowed”.

The researchers couldn’t find a way to open up the device with a scalpel or other fine cutter without destroying it, unless they softened up the plastic first with a heat gun, leaving rather obvious signs of tampering:

The Titan key body after separation.
See full paper for original image.

Thirdly, you need a corrosive chemical that will dissolve the plastic coating on the secure chip inside the device, without overdoing it and destroying the chip (or your lungs) completely.

The researchers used fuming nitric acid, a dangerously corrosive substance once used as an oxidiser in rocket propellants.

Fourthly, you need to perform about 6000 digital signature calculations inside the chip in order to collect enough data for later, which takes about six hours given the processing speed of the device.

(The rest of the work, going backwards from the snooped data to the hidden private key, requires a panoply of statistical calculations and deep learning computations, but can be done “offline”, after you’ve returned the reassembled Titan key to its rightful owner in the hope they won’t have missed it and won’t notice that it now looks as though it was left on a car dashboard in direct sunlight.)

What to do?

Note that anyone who already has your username, password and Titan key can log in as you anyway, because they have both factors of your 2FA, namely “what you know” (the password) and “what you have” (the Titan key).

Current Titan keys have no biometric protection to stop them being used by someone else, so stealing someone’s key is already a way into their account.

So this attack only makes sense for someone who wants to make a clone of your Titan key so that they can:

  • Keep the copied key handy for later if they haven’t yet acquired your password, but are explicitly targeting you and hoping to get your password in the future.
  • Get ongoing access to your account without locking you out of it or otherwise drawing attention to the attack.

Fortunately, where a presumably well-funded adversary wants long-term access to your account to keep you under surveillance for some time, the FIDO authentication standard includes a way of detecting that your key has been cloned.

That’s because every authentication response that’s created by a FIDO key includes a count of how many responses the key has computed so far, together with a digital signature of that count.

Whenever the attackers log in using the cloned key, they have to guess the current value of the counter in your key, add one, and use that to get in.

If they guess incorrectly, an online service that tracks the counter number of each user’s key would probably catch the crooks out before they got in at all.

If they guess correctly, however, then the counter in your key will be behind by one (or more) next time you login, and that also ought to raise the alarm.
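
In other words, the server just has to remember the highest counter value each key has ever reported, and refuse (or at least flag) any login whose signed counter hasn’t moved forwards. A minimal sketch of that bookkeeping, with made-up names for the store and the function, might look like this:

class CounterMismatch(Exception):
    # Raised when a key reports a counter that hasn't moved forwards.
    pass

last_seen = {}   # key_id -> highest signature counter seen so far

def check_counter(key_id, reported):
    # Accept a login only if the key's signed counter has increased.
    previous = last_seen.get(key_id, -1)
    if reported <= previous:
        # The physical key has produced more signatures than this one admits
        # to: a clone may be in use, so lock the account and investigate.
        raise CounterMismatch(f"counter went from {previous} to {reported}")
    last_seen[key_id] = reported

check_counter("alice-titan", 41)       # normal login
check_counter("alice-titan", 42)       # normal login
try:
    check_counter("alice-titan", 40)   # a clone that guessed too low
except CounterMismatch as warning:
    print("ALERT:", warning)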

So, the most obvious precautions against this hard-to-use attack are:

  • Keep your eye on your security key, unless it has some sort of strong biometric or other lockout to prevent other people using it. If someone can steal your key for long enough to clone it using this attack, they can probably access your account anyway without cloning the key. Try to keep the key on your person when you are out and about, or locked away when you aren’t using it.
  • If you think your key has been tampered with, assume that it has! Take precautions immediately, such as replacing it with one that hasn’t. As the researchers noted, they found it hard to get into the device to expose its chip without leaving very obvious signs of entry.
  • Ask your account providers if they track FIDO key counters. If they don’t, they might want to consider doing so now that a working – if not exactly practical – key cloning attack is known. Counter tracking might still let the crooks in once before a warning gets triggered, but it’s a worthwhile precaution to take anyway.
  • Don’t stop using your Titan keys. As the researchers themselves say, “it is still clearly far safer to use your Google Titan Security Key (or other impacted products) […] to sign in to applications like your Google account rather than not using one.”

Affected models

The paper includes a list of devices that the researchers either found to be vulnerable, or assume to be at risk because they use the same vulnerable chip (NXP A700X).

The full list is here, and includes: all Google Titan keys, Yubico’s YubiKey Neo, and various Feitian devices including the MultiPass FIDO and ePass FIDO keys.


S3 Ep14: Money scams, HTTPS by default, and hardcoded passwords [Podcast]

We advise you how to react when a friend suddenly asks for money, explain why Chromium is finally aiming for HTTPS by default, and warn you why you should never, ever hardcode passwords into your software.

With Kimberly Truong, Doug Aamoth and Paul Ducklin.

Intro and outro music: Edith Mudge.


WHERE TO FIND THE PODCAST ONLINE

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher, Overcast and anywhere that good podcasts are found.

Or just drop the URL of our RSS feed into your favourite podcatcher software.

If you have any questions that you’d like us to answer on the podcast, you can contact us at tips@sophos.com, or simply leave us a comment below.

Zyxel hardcoded admin password found – patch now!

Towards the end of 2020, a researcher at Dutch cybersecurity company EYE was taking a look at the firmware of a Zyxel network router.

He examined the password database that shipped in the firmware and noticed an unusual username of zyfwp.

That name didn’t show up in the official list of usernames shown in the router’s user interface…

…yet it did have a password hash in the database itself, which was interesting all on its own.

To explain.

Zyxel products are Linux-based, and Linux usernames and passwords are typically split between two files for security reasons.

The file /etc/passwd is usually world-readable and contains a list of known users, e.g.

root:x:0:0::/root:/bin/bash
bin:x:1:1::/bin:/bin/false
nobody:x:999:999::/nowhere:/bin/false
duck:x:1000:1000::/home/duck:/bin/fish

For reference: the first field on each line is the username; the third is the user’s numeric ID or UID (the root account is always UID zero); the sixth field is the user’s home directory; and the last one denotes the program to run when the user logs in, typically a command shell for regular accounts and /bin/false, a program that exits immediately with an error code, for other accounts.
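
If you want to poke at the format yourself, it is easy to pull apart: each line is just seven colon-separated fields. Here’s a tiny Python sketch that splits the example lines above accordingly (the field names are conventional labels, not anything stored in the file itself):

FIELDS = ("username", "password", "uid", "gid", "comment", "home", "shell")

sample = """\
root:x:0:0::/root:/bin/bash
nobody:x:999:999::/nowhere:/bin/false
duck:x:1000:1000::/home/duck:/bin/fish
"""

for line in sample.splitlines():
    record = dict(zip(FIELDS, line.split(":")))
    print(record["username"], record["uid"], record["shell"])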

The second field, intriguingly, is the user’s password.

Or, to be more precise, it used to be where Unix passwords were stored, thus keeping usernames and passwords together in one file for consistency and convenience.

But storing passwords, hashed or not, in a world-readable file was quickly found to be a terrible idea.

Even in the 1970s, hackers would routinely collect /etc/passwd files so they could crack them offline.

The early passwords of several Unix pioneers were cracked for fun in 2019 based on ancient password files embedded in the BSD-3 source code. Ken Thompson’s password, for example, turned out to be the chess move p/q2-q4!. Thompson himself rather nicely chimed in with congratulations to the finder.

So the “passwords” in /etc/passwd are now set to the letter x, acting merely as a placeholder, and the hashed passwords themselves are stored elsewhere, typically in a locked-down file called /etc/shadow, which might look like this:

root:$1$trymenow$loO18cesIqNfnT1c66lRV/:::::::
bin:*:::::::
nobody:*:::::::
duck:$1$trymetoo$8a7wRlziGi4YMvlmVy23V/:::::::

Accounts with “passwords” starting with a * character don’t have a password, so you can’t log in interactively to those accounts (after all, there is no valid reply you could give at the password prompt that would hash to an asterisk character).

In the example above, the root and duck accounts do have passwords set, using hashing method $1$ (the no-longer-fit-for-purpose md5crypt algorithm – never use this in real life!), with salts trymenow and trymetoo respectively.

If, for instance, you can find the input that hashes via md5crypt to loO18cesIqNfnT1c66lRV/ with salt trymenow (and why not try it now?), you just cracked the root password.
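
Password-cracking tools do exactly that, just billions of times over: re-hash each guess with the salt from the stolen entry and compare the result. A minimal sketch using Python’s Unix-only crypt module (deprecated in recent Python versions; the third-party passlib package offers md5crypt elsewhere) looks like this – the guesses are examples and won’t actually match:

import crypt
import hmac

stored = "$1$trymenow$loO18cesIqNfnT1c66lRV/"   # the root entry from the example above

def check_guess(guess):
    # crypt() re-hashes the guess using the salt embedded in the stored value;
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(crypt.crypt(guess, stored), stored)

for guess in ("letmein", "password1", "hunter2"):
    print(guess, "->", "match!" if check_guess(guess) else "no match")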

However, even if you can’t crack the password, the presence of a password hash in /etc/shadow nevertheless gives you a hint that the account concerned is intended for remote logins.

In this case, the researcher didn’t have to crack the password hash in the firmware, a process that might have taken years or even longer, assuming that a recent Linux password hashing scheme was used. (Methods $5$ or $6$ use 5000 iterations by default of SHA-256 or SHA-512 respectively.)

By looking through the firmware for program files, known as binaries in Unix jargon, and searching them for strings of printable characters, he soon came across what looked like the likely text of the password for the zyfwp account.
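
The Unix strings utility is the standard tool for that sort of rummaging. Here’s a hedged Python equivalent of what such a search does (the six-character minimum run length is just a common convention):

import re
import sys

PRINTABLE_RUN = re.compile(rb"[\x20-\x7e]{6,}")   # 6 or more consecutive printable bytes

def strings(path):
    # Return every run of printable ASCII found in the binary file at 'path'.
    with open(path, "rb") as f:
        data = f.read()
    return [match.group().decode("ascii") for match in PRINTABLE_RUN.finditer(data)]

if __name__ == "__main__":
    for text in strings(sys.argv[1]):      # e.g. python3 strings.py firmware.bin
        print(text)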

So, with SSH listening on the device, he simply connected up, tried to log in…

…and right away got access to the Zyxel command prompt with admin privilege.

The good news is that he reported the problem to Zyxel, who went to work right away and quickly came up with patches plus an official advisory.

What went wrong?

According to Zyxel, the zyfwp account “was designed to deliver automatic firmware updates to connected access points through FTP.”

We’re guessing that the plan was for wireless access points on the network to call home on a regular basis to their local router and check for updates.

That sounds harmless enough, assuming that anything downloaded via FTP included a digital signature of its own, given that FTP connections themselves are unencrypted and therefore easily tampered with.
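
For what it’s worth, verifying a detached signature on a downloaded image before installing it is straightforward. Here’s a hedged sketch using Ed25519 via the third-party cryptography library – the key, file names and algorithm are purely illustrative, because we don’t know what scheme, if any, Zyxel actually uses:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

# A 32-byte Ed25519 public key baked into the device at manufacture (example value).
VENDOR_PUBLIC_KEY = bytes.fromhex(
    "3d4017c3e843895a92b70aa74d1b7ebc9c982ccf2ec4968cc0cd55f12af4660c"
)

def firmware_is_genuine(image, signature):
    try:
        Ed25519PublicKey.from_public_bytes(VENDOR_PUBLIC_KEY).verify(signature, image)
        return True
    except InvalidSignature:
        return False

# image = open("firmware.bin", "rb").read()          # fetched over plain FTP
# signature = open("firmware.bin.sig", "rb").read()  # detached signature
# if not firmware_is_genuine(image, signature):
#     raise SystemExit("refusing to install unsigned or tampered firmware")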

Somehow, however – let’s assume that the code was still in development – the account intended for updating access points (zyfwp might stand for “Zyxel firmware patch” or something similar) got shipped in an update build that was inadvertently still set up for development rather than for release.

After all, an account used merely for fetching firmware updates needs neither login rights nor admin access, though giving it those powers temporarily may have been very convenient during development and testing.

(Actually, we’re not sure why fetching updates via FTP requires a special account at all, for the same reasons that you don’t need a WordPress account of any sort just to be able to read Naked Security articles via HTTPS, but that’s an issue for another time.)

And so an active, easily abusable admin-level account that we assume – or, at least, we hope – was only supposed to be there during development work “in the lab” ended up shipped into the field.

What to do?

If you’re a Zyxel user, check the company’s advisory for a list of affected devices, and then make sure you’re patched.

Affected firewall models apparently include those designated ATP, USG, USG FLEX and VPN.

Note that patches will also be coming out for two models of Zyxel Access Point controllers (NXC2500 and NXC5500) on Friday 2021-01-08, so if you are reading this article before that date, be sure to check back with Zyxel at the end of the week.

According to reports, cybercriminals have now recovered the hardcoded password themselves (the report from EYE deliberately didn’t reveal it), so you should assume that the offending username/password combination is now being used routinely by the various automated attack scanning tools used by crooks.

Active attack scanning tools not only probe for open ports and insecure devices, but also follow up their probes by automatically attempting to break in using tricks that are likely to work, including trying out well-known username/password combinations for specific vendors, devices and models.

If you’re a programmer, our advice is:

  • Never use hardcoded passwords. If an account is unimportant enough that it doesn’t need a properly-chosen password, don’t give it a password at all, and make your intentions clear. If it needs a password, use one that is properly chosen and unique for every customer or device (see the sketch after this list). Hardcoded passwords are always the wrong thing to do – they are equivalent to implanting a global backdoor and hoping no one will find it.
  • Never have accounts with unchangeable passwords. In this case, hardwiring the password meant that it couldn’t be changed, which would have been an easy and instant workaround for this bug, even before a patch was available.
  • Limit your use of admin (root) accounts. Installing an update will almost certainly require the use of a program running as root on the device that’s processing the update. But delivering or downloading the files needed for an update over an internet connection can always be done without root powers. Divide and conquer your code to give each part of the update process the minimum access rights possible.
  • React fast when bug-hunters file reports. EYE’s writeup of this vulnerability (CVE-2020-29583) gives a timeline of Zyxel’s response and notes that the company formally acknowledged the report the day after it was sent in – a creditable reaction. Prompt response alone helps security researchers a lot, because it lets them know that someone is paying attention.
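
As promised in the first point above, here’s a minimal sketch of the “unique for every customer or device” alternative to hardcoding: generate a strong random secret for each unit at provisioning time and keep only a salted hash of it. The function name and record layout are made up for illustration:

import hashlib
import os
import secrets

def provision_device(serial_number):
    # A 192-bit random password, unique to this unit and never reused.
    password = secrets.token_urlsafe(24)

    # Store only a salted, slow hash of the password on the device and back end.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

    print(f"Password for unit {serial_number} (print it on the label): {password}")
    return {"serial": serial_number, "salt": salt.hex(), "hash": digest.hex()}

record = provision_device("S210101234567")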

Chrome browser has a New Year’s resolution: HTTPS by default

HTTPS, as you probably know, stands for secure HTTP, and it’s a cryptographic process – a cybersecurity dance, if you like – that your browser performs with a web server when it connects, improving privacy and security by agreeing to encrypt the data that goes back and forth.

Encrypting HTTP traffic from end-to-end between your browser and the server means that:

  • The content of your web request and the reply that comes back can’t easily be monitored by other people on the network. This makes it much harder (nearly, if not absolutely, impossible) for attackers to sniff secrets such as passwords, credit card numbers, documents, private photos and other personal information out of your network traffic.
  • The content of the traffic can’t easily be modified on the way out or back. HTTPS traffic isn’t just encrypted, it’s also subjected to an integrity test. This stops attackers sneakily altering or corrupting data in transit, such as bank account numbers, payment amounts or contract details.

Without HTTPS, there are many places along the way between your browser and the other end where not-so-innocent third parties could easily eavesdrop on (and falsify) your web browsing.

Those eavesdroppers could be nosy neighbours who have figured out your Wi-Fi password, other users in the coffee shop you’re visiting, curious colleagues on your work LAN, your ISP, cybercriminals, or even your government.

This raises the question: if snooping and falsifying web traffic is so easy when plain old HTTP is used, why do we still have HTTP at all?

LISTEN NOW: UNDERSTANDING HTTPS/SSL/TLS

You can also listen directly on Soundcloud.

Remember Firesheep?

It’s now more than 10 years since a Firefox plugin called Firesheep hit the news – if you were interested in cybersecurity back in 2010, you will almost certainly remember that name.

Back then, many websites where security and privacy were important – examples include social networks, car rental firms, online support forums and even banks – paid only lip service to HTTPS.

They would use encrypted connections when obviously personal data was transmitted, such as for the login page where you entered your actual password, or for the payment form where you put in your credit card details.

But a lot of sites would drop back to HTTP for everything else because it was a bit faster and easier – you didn’t need to spend extra time and CPU power at each end encrypting and decrypting every data packet that you sent and received.

Ignore the encryption and focus on the rest

What Firesheep did was to turn the Firefox browser into an easy-to-use network sniffer – that’s the jargon term for a network surveillance tool – that just about anyone could use, regardless of their technical skill.

Firesheep would automatically sniff out other people’s social networking connections, wait until after the secure login part that couldn’t be eavesdropped on because of HTTPS encryption, and then target the insecure traffic that followed via HTTP.

Firesheep would read in the unencrypted headers from those unencrypted HTTP web requests, extract the session cookies or authentication tokens that denoted the user’s identity, use the stolen authentication data to impersonate the unfortunate user, and hijack their account.

All of this was done automatically, right inside a browser from which the attacker could exploit any hijacked accounts with a simple point-and-click.
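
To see just how little “hacking” was involved, here’s what an unencrypted request looks like on the wire, with a made-up hostname and cookie values. The session cookie travels as readable text in every request, and pulling it out is a couple of lines of code:

# A captured plain-HTTP request (entirely fictional values).
captured = (
    "GET /newsfeed HTTP/1.1\r\n"
    "Host: social.example\r\n"
    "Cookie: sessionid=8f2a6c0d9e; csrftoken=91bd4a\r\n"
    "User-Agent: ExampleBrowser/1.0\r\n"
    "\r\n"
)

# Split the header lines and pick out the Cookie header.
headers = dict(
    line.split(": ", 1)
    for line in captured.split("\r\n")[1:]
    if ": " in line
)
print("Session cookie visible to any eavesdropper:", headers["Cookie"])

# Replaying that Cookie header is enough to impersonate the victim - which is
# exactly what end-to-end HTTPS (and the cookie Secure attribute) prevents.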

In theory, anyone in the coffee shop around you could have been running Firesheep, digging around in your Facebook account or posting on your Twitter feed, and you wouldn’t have realised until it was too late.

HTTP considered harmful

Firesheep certainly put the cryptographic cat amongst the pigeons, if you will pardon the mixed faunal metaphor.

Indeed, the Firesheep story was an important catalyst in persuading most of the major players in search, social media and online services to bite the bullet and use HTTPS all the time.

Facebook, for example, made “HTTPS always” an option in 2011, turned it on for everyone in North America in 2012, and by 2013 had pretty much abandoned HTTP altogether.

Apple’s App Store moved over to HTTPS in 2013; Microsoft pledged to encrypt almost everything back in 2013; and Google made Gmail HTTPS-only in 2014.

By 2015, Google’s search ranking algorithms were downvoting sites that didn’t offer HTTPS versions; in 2017, the search giant started publicly shaming login and credit card pages that still used HTTP by labelling them “not secure“; and by 2018, Google was applying that label to any website that hadn’t upgraded to HTTPS.

HTTP, in other words, has been inexorably waning for the past decade.

Why aren’t we there yet?

Well, it’s 2021, and the vast majority of websites now support HTTPS.

Think of how long it’s been since you found a mainstream website – indeed, any website – that didn’t offer HTTPS if you insisted…

…and you will arrive back at the question we posed above: why do we still have HTTP at all?

More importantly, why is HTTP still the default choice your browser makes if you type a URL into the address bar and don’t explicitly put https:// at the start?

Why don’t browsers now at least do us the favour of assuming we mean HTTPS unless we go out of our way to type in http:// instead?

The simple answer is that there are still just about enough non-HTTPS websites left out there that switching to HTTPS by default would almost certainly cause enough transitory technical hassles to be disruptive.

Sadly, inadvertent disruptions caused by well-meaning efforts to improve cybersecurity often have the undesirable and paradoxical consequence of luring less well-informed users into deliberately reducing the security they already have, as a “workaround” to bypass the “problem”.

Nevertheless, it really does look as though HTTP finally is on the way out, thanks to a code change added to the Chromium browser project on New Year’s Eve.

Chromium, of course, is the Google open source project that forms the core of many modern browsers, including Chrome, Edge, Vivaldi, Brave, Opera and many others (the only mainstream browsers not based on Chromium are Firefox and Safari).

The change is documented as follows:

Default typed omnibox navigations to HTTPS: Initial implementation Presently, when a user types a domain name in the omnibox such as "example.com", Chrome navigations to the HTTP version of the site (http://example.com). However, the web is increasingly moving towards HTTPS, and we now want to optimize omnibox navigations and first-load performance for HTTPS, rather than HTTP. This CL [change list] implements an initial version of defaulting typed omnibox navigations to HTTPS. [...]

(What Google calls the omnibox is just a fancy name for what most of us still call the address bar – “omni” because you can use it for searching as well as navigating.)

Unfortunately, we’re still a long way off this being a default, because the above change notification also points out:

This is a minimal implementation and is not ready for general usage. Future CLs are going to observe upgraded HTTPS navigations for several seconds instead and cancel the load when necessary, instead of indefinitely waiting for HTTPS loads to succeed. This CL also lacks many quality of life improvements such as remembering which URLs fell back to HTTP. These will also be added in future CLs.

In plain English, this means that the final goal is not going to be quite as dramatic as banning HTTP altogether.

The Chromium developers currently seem to be aiming for a system where HTTPS will be preferred by default, but where the browser will not only fall back automatically and quickly to HTTP if needed, but also remember which sites “prefer” HTTP, thus helping to keep HTTP alive that little bit longer.
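
Conceptually, the behaviour being aimed at is something like the following much-simplified Python sketch. (A real browser handles redirects, certificate errors, timeouts and user prompts far more carefully than this; the point is just the “try HTTPS first, fall back and remember” shape of the logic.)

import urllib.request
import urllib.error

http_only_hosts = set()   # hosts remembered as needing plain HTTP

def fetch(hostname, timeout=3.0):
    # Try HTTPS first unless this host previously needed the HTTP fallback.
    schemes = ["http"] if hostname in http_only_hosts else ["https", "http"]
    last_error = None
    for scheme in schemes:
        url = f"{scheme}://{hostname}/"
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except (urllib.error.URLError, OSError) as err:
            last_error = err
            if scheme == "https":
                http_only_hosts.add(hostname)   # remember the fallback for next time
    raise last_error

page = fetch("example.com")
print(len(page), "bytes fetched")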

What do you think?

Intriguingly, changes of this sort often end up becoming curiously controversial, as you can see from the range of Naked Security comments on the articles we’ve linked to above.

A small but vocal minority seems convinced that Google, Microsoft, Firefox, Apple and others are promoting HTTPS simply to inconvenience small businesses and community websites, even though HTTPS certificates can now be acquired for free and kept up-to-date easily.

But we think that railing against HTTPS is a bit like refusing to wear a seatbelt when you are in a car, and telling your friends that you won’t give them lifts any more if they insist on wearing theirs…

…with the likely outcome that your friends will quietly stop going anywhere with you at all.

So, if you still haven’t upgraded your website to support HTTPS, we suggest that you make it your own New Year’s Resolution for 2021!

HOW TO ENABLE HTTPS-ONLY IF YOU’RE A FIREFOX USER

Interestingly, Mozilla started out on the road to banishing HTTP in a slightly different way.

If you’re a Firefox user, you already have access to “HTTPS-Only” mode on the Settings > Preferences > Privacy & Security page:

HTTPS provides a secure, encrypted connection between Firefox and the web sites you visit. Most websites support HTTPS, and if HTTPS-Only Mode is enabled, then Firefox will upgrade all connections to HTTPS.

Note that this option is much stricter than what Chromium is proposing, which is why it’s not on by default: if you enable HTTPS-only in Firefox, you won’t be able to use HTTP even if you want to.


S3 Ep13: A chat with hacker Keren Elazari [Podcast]

How did the movie “Hackers” inspire a girl to grow up to become a hacker herself? Find out from security analyst and friendly hacker Keren Elazari.

Hear about Keren’s incredible journey, why hackers should be welcomed with open arms, and the inspiration that guided her career.

Keren Elazari

Interviewer: Kimberly Truong.

Special guest: Keren Elazari (@k3r3n3 on Twitter).

TED talk mentioned by Keren: Hackers – the internet’s immune system

Intro and outro music: Edith Mudge.


WHERE TO FIND THE PODCAST ONLINE

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher, Overcast and anywhere that good podcasts are found.

Or just drop the URL of our RSS feed into your favourite podcatcher software.

If you have any questions that you’d like us to answer on the podcast, you can contact us at tips@sophos.com, or simply leave us a comment below.
