Open source web programming language PHP narrowly avoided a potentially dangerous supply chain attack over the weekend.
Technically, in fact, you could say that the “attack” was successful, given that imposters were apparently able to make the same source code change on two separate occasions.
Fortunately, however, the changes were noticed and reverted within hours, so they didn’t make it into any official PHP release.
In theory, anyone who downloaded the very latest “still in development” version of PHP on Sunday 2021-03-28, compiled it, and installed it on a real-life, internet facing web server could have been at risk…
…but we think the total number of people who did that is probably zero, with the possible exception of the crooks themselves proving a point.
What it does
The malicious modifications introduce a nasty remote code execution backdoor into any server that uses PHP’s Zlib compression for the content it sends out.
(These days, many, if not most, web pages are compressed before they’re transmitted, unless they are files such as images or download archives that are already compressed and so won’t compress much more, if at all.)
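(Many PHP servers have this feature turned on via a single php.ini directive, along the lines of zlib.output_compression = On, so the booby-trapped code path would be a commonly used one.)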
The backdoor is triggered when PHP output compression starts, and it:
Looks for a header in the incoming request called User-Agentt. Web requests usually include a User-Agent header that denotes which browser you are using. The lookalike name, with an extra letter ‘t’ on the end, is nearly, but not quite, the same, and serves as the backdoor’s command carrier.
Checks that the header starts with the word ‘zerodium’. Zerodium is a reference to a company that buys zero-day exploits in third-party products for its own use, in contrast to software vendors who offer bug bounties for responsible disclosure of bugs so that they can be patched.
Treats the rest of the header as a command and runs it. This causes remote code execution (RCE), typically giving the attacker the same rights and privileges as the web server itself.
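Pieced together in pseudocode (this is our own illustrative reconstruction, using made-up helper names, not the actual injected C source), the backdoor logic amounts to something like this:

   ua = GetRequestHeader("User-Agentt")

   if StartsWith(ua, "zerodium") then
      -- Everything after the 8-letter trigger word gets handed
      -- to the PHP engine and run as if it were trusted code
      ExecuteAsServerSidePHP(TextAfter(ua, 8))
   end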
This backdoor turns PHP itself into what’s known as a webshell – an implanted malicious file on the server that can not only be triggered by an external attacker, but also instructed to run any system command the attacker wants at any time.
In other words, a remote shell of this sort doesn’t just let cybercriminals run some commands, it lets them run any commands, and therefore to adapt and alter their attack as they go along.
What happened?
The unauthorised code changes were tagged with the names of Rasmus Lerdorf (creator of PHP) and Nikita Popov (a major PHP contributor).
PHP development is managed using the well-known Git source code control system, on a server operated by the PHP team itself.
We don’t yet know how exactly this happened, but everything points towards a compromise of the git.php.net server (rather than a compromise of an individual git account).
Until now, the team has used Microsoft’s cloud-based GitHub service as a mirror (secondary copy) of its codebase, but says that “the repositories on GitHub […] will become canonical,” which is the jargon term for the primary copy, and says “we have decided that maintaining our own git infrastructure is an unnecessary security risk, and that we will discontinue the git.php.net server.”
Popov also said:
We’re reviewing the repositories for any corruption beyond the two referenced commits. Please contact security@php.net if you notice anything.
What to do?
The good news, as we mentioned above, is that this backdoor didn’t make it into any official PHP releases, so it’s highly unlikely that this Trojan Horse code made it into any real-world servers.
In particular, if you didn’t download PHP and rebuild it from source code over the past weekend, you’re unlikely to have come anywhere near this.
If you’re worried, check the file ext/zlib/zlib.c in your PHP source code tree for signs of the added lines shown above.
In particular, the text string zend_eval should not appear anywhere in the ext/zlib/* files, so if you search for it from the top of your PHP tree, you shouldn’t see any matches. One way to run that search, assuming a Unix-like system with the grep utility to hand, is with a command like this:
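   grep zend_eval ext/zlib/*

If the command produces no output at all, there were no matches, which is what you want to see.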
In the unlikely event that your code includes the backdoor, you need to refresh your PHP source from the new repository, as well as look for any other unexplained modifications in your code, or unexpected commands in your logs.
We’re sure you’ve heard of OpenSSL, and even if you aren’t a coder yourself, you’ve almost certainly used it.
OpenSSL is one of the most popular open-source cryptography libraries out there, and lots of well-known products rely on it, especially on Linux, which doesn’t have a standard, built-in encryption toolkit of its own.
Even on Windows and macOS, which do have encryption toolkits built into their distributions, you may have software installed that includes and uses OpenSSL instead of the operating system’s standard cryptographic libraries.
As its name suggests, OpenSSL is very commonly used for supporting network-based encryption using TLS, which is the contemporary name for what used to be called SSL.
TLS, or transport layer security, is what puts the padlock into your browser, and it’s probably what encrypts your email in transit these days, along with protecting many other online communications initiated by your computer.
So, whenever OpenSSL security fixes come out, it’s worth paying attention, and upgrading as soon as you can.
The latest patches, which came out in OpenSSL 1.1.1k on 2021-03-25, fix two high-severity bugs that you should definitely know about:
CVE-2021-3449: Crash can be provoked when connecting to a vulnerable server.
CVE-2021-3450: Vulnerable client can be tricked into accepting a bogus TLS certificate.
Vulnerabilities compared
Even though we think the second bug is the more interesting of the two, we’ve seen several reports that have focused their attention on the first one, perhaps because it threatens immediate and disruptive drama.
The bug can be triggered by a TLS feature called renegotiation, where two computers that are already connected over TLS agree to set up a new secure connection, typically with different (supposedly more secure) settings.
To exploit the bug, a TLS client asks for renegotiation but deliberately leaves out one of the settings it used when it first connected.
The OpenSSL server code fails to notice that the needed data was not supplied this time, and incorrectly tries to use the non-existent data anyway, given that it was used last time…
…thus reading from a non-existent memory location, causing the server software to crash.
This means that a malicious client could, in theory, deliberately crash a vulnerable web server or email server at will, leading to a dangerous Denial of Service (DoS) situation that could be repeated ad nauseam every time the server came back up.
Session renegotiation, which is complex and considered error-prone (an opinion that is only strengthened by the appearance of this bug), was removed from TLS 1.3, the latest version of the protocol. However, very few web servers we know of have switched entirely to TLS 1.3 yet, and will still happily accept TLS 1.2 connections for reasons of backwards compatibility. You can turn off renegotiation for TLS 1.2 if you want, but it’s enabled by default in OpenSSL. Many servers that rely on OpenSSL may therefore be vulnerable to this flaw.
The second bug, CVE-2021-3450, is slightly more complex to exploit, but could end up being more damaging than a DoS attack, because it allows security checks to be circumvented.
After all, in many ways, a server that stops working altogether, as disruptive as that sounds, is better than a server that keeps on running but that behaves insecurely.
When STRICT means less secure
The CVE-2021-3450 vulnerability involves a special setting that an OpenSSL client program can turn on called X509_V_FLAG_X509_STRICT. (We’ll shorten this from now on to just X509_STRICT.)
This setting, which is not enabled by default, tells the OpenSSL code to perform additional checks when it is establishing a TLS connection.
Ironically, however, turning it on activates a dangerous bug.
As you probably know, the server side of a TLS connection usually submits a so-called digital certificate right at the start of proceedings.
This certificate asserts that the holder of the certificate has the right to operate the domain name that you just connected to, e.g. www.sophos.com, and includes a digital signature from a third party, known as a CA, that vouches for that assertion.
CA is short for certificate authority, a company that is supposed to check up on newly-created certificates to verify that the certificate creator does indeed have the authority over the domain name that they claim, after which the CA signs and issues the certificate.
Without CA verification, literally anyone could issue certificates for literally any domain name, including those for well-known brands and services, and you would have no way of telling that they were an imposter.
So, your browser, or whatever program is setting up the TLS connection, typically checks the certificates it receives to ensure that they are correctly signed by a CA, and then looks up that CA in a list of “trusted authorities” that either the browser or your operating system considers competent to sign certificates.
If the signature checks out and the CA checks out, then the TLS connection is considered verified; if not, you will see one of those “certificate warning” pages that fraudulent or misconfigured sites provoke.
Certificate checking in OpenSSL
Very greatly simplified, OpenSSL has code that looks like this to verify the CA of a certificate before it validates a connection:
   if IsVerifiedByCA(cert) then
      result = GOOD
   else
      result = BAD
   end

   [...do some stuff...]
   [...do more stuff...]

   return result
However, as mentioned above, there’s a non-default X509_STRICT option to do some extra certificate checks, including a special check that was introduced recently (in OpenSSL 1.1.1h, just three versions ago) to detect the use of non-standard cryptographic settings.
We won’t go into detail here, but you need to know that one sort of TLS certificate uses what is called Elliptic Curve Cryptography (ECC), which is an algorithm based on mathematical computations using equations that define what are known as elliptic curves.
If you did high school mathematics, you may remember x² + y² = 1 as the equation for a conventional circle, which is just an ellipse that is perfectly round, and (x/A)² + (y/B)² = 1 as the equation for ellipses that look more like rugby balls.
In this formula, A and B are parameters that determine the width and the height of the resulting shape.
The elliptical formulas and calculations used in ECC are somewhat more complex and include a greater number of curve parameters, which aren’t meant to be secret, but that must nevertheless be chosen wisely.
For an analogy of why parameters matter in elliptical formulas, consider the “oval” ellipses you studied at school. In the formula we gave above, for example, you mustn’t let A or B be zero or the formula won’t work at all. And if you make A very tiny and B very large then you will end up with a super-stretched ellipse that will look like a stick if you draw a graph, and will be much harder to work with than if you simply chose, say, A=3 and B=2.
Unfortunately, choosing ECC parameters carelessly could result in weakened encryption.
Even worse, attackers could deliberately choose bad parameters to weaken the encryption on purpose, in order to boost their chances of hacking into your network traffic later on.
As a result, various standards bodies have come up with lists of supposedly “known good” ECC parameters that you are expected to choose from in order to avoid this problem.
And, from OpenSSL 1.1.1h and later, turning on OpenSSL’s X509_STRICT mode causes the code to ensure that any TLS connections that rely on ECC use only standard elliptic curve settings.
The updated code goes something like this:
   if IsVerifiedByCA(cert) then
      result = GOOD
   else
      result = BAD
   end

   [...do some stuff...]

   if X509StrictModeIsOn then
      if UsesStandardECCParameters(cert) then
         result = GOOD   -- BUG! This overrides any previous 'result = BAD' settings!
      else
         result = BAD
      end
   end

   [...do more stuff...]

   return result
If you read the code above carefully, you will see that if an attacker wants to present a fake certificate that is not correctly verified by a CA, and knows you have strict checks enabled…
…then if they configure their server to use a bog-standard elliptic curve certificate with standard parameters, the certificate test above will always succeed at the end, even if the CA verification step failed earlier on.
Almost all web browsers these days will accept either RSA or Elliptic Curve Cryptography certificates. ECC certificates are increasingly popular because they’re typically a lot smaller than RSA certificates with a comparable security strength. That’s a simple side-effect of the size of the numbers used in the mathematical calculations that go on behind the scenes in ECC and RSA cryptography.
In the code, you can see that if the CA check fails then the variable result is set to BAD in order to remember that there was an error.
But if the certificate is using ECC with standard parameters, and strict checking is turned on, then the variable result later gets “upgraded” to GOOD when the ECC check is done, and the previous error simply gets overwritten.
So the code correctly detects that the certificate is fake, but then “forgets” that fact and reports that the certificate is valid instead.
What to do?
Upgrade to OpenSSL 1.1.1k. If you are still using earlier versions that are no longer supported, you will need to examine the code yourself to see if these vulnerabilities apply to your software and, if so, make your own patches.
Turn off TLS 1.2 renegotiation. A client can only exploit CVE-2021-3449 if TLS renegotiation is allowed. It’s enabled by default, but if your server doesn’t require it, turning it off (OpenSSL-based programs can do this with the SSL_OP_NO_RENEGOTIATION option) will sidestep the Denial of Service bug described above.
Don’t use X509_STRICT mode. The CVE-2021-3450 bug gets sidestepped if strict certificate checking is turned off. If you can manage without the additional certificate checks (they are, after all, not on by default) then this may be the lesser of two evils until you can upgrade to version 1.1.1k.
Also, if you are a programmer, try not to write error-checking code the way that it was done in OpenSSL’s certificate verification routines.
There are several other approaches you can take (we’ve sketched the last one in code after this list):
Bail out at the first error you detect. If you aren’t interested in accumulating and reporting a complete list of errors, but merely in ensuring that there aren’t any, you reduce the chance of mistakes by returning BAD as soon as you know something is wrong.
Only allow one type of assignment to your result value. If you start by assuming no errors, set your result variable to GOOD at the start and change its value to BAD every time you find an error. It’s easier to review your error-checking function if you don’t have anywhere in the code path where the value can get reset to GOOD.
Count the number of errors encountered, starting from zero. If you want to report all the errors as you find them, increment a counter every time instead of using a simple GOOD/BAD (boolean) variable. That way, you can’t accidentally lose track of errors you previously encountered. At the end, simply test that there were zero errors in total before declaring the overall outcome as GOOD.
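Here’s a minimal sketch of that last, error-counting approach, written in the same illustrative pseudocode as the snippets above (so IsVerifiedByCA() and friends are our made-up names, not real OpenSSL calls). Because the counter can only ever go up, a later check can never “forgive” an earlier failure:

   errors = 0

   if not IsVerifiedByCA(cert) then
      errors = errors + 1                -- record any CA failure...
   end

   if X509StrictModeIsOn then
      if not UsesStandardECCParameters(cert) then
         errors = errors + 1             -- ...and any strict-mode failure
      end
   end

   -- Verification passes only if no check at all went wrong
   if errors == 0 then
      result = GOOD
   else
      result = BAD
   end

   return result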
Apple has just pushed out an emergency “one-bug” security update for its mobile devices, including iPhones, iPads and Apple Watches.
Even users of older iPhones and iPads who are still on the officially-supported iOS 12 version need to patch, so the versions you should be updating to are as follows:
iOS 14 (recent iPhones): update to 14.4.2
iOS 12 (older iPhones and iPads): update to 12.5.2
iPadOS 14: update to 14.4.2
watchOS: update to 7.3.3
To check whether you have the latest version, and to install it right away if you don’t, go to Settings > General > Software Update.
If you are wondering why there is no iPadOS update numbered 12.5.2, that’s because there was no separately named product called “iPadOS” until version 13 came out.
Up to and including version 12, both iPads and iPhones used the version called “iOS”.
All that Apple is saying about the vulnerability so far is that:
Processing maliciously crafted web content may lead to universal cross site scripting. Apple is aware of a report that this issue may have been actively exploited.
The TL;DR version is: “Crooks have found a way to trick your browser into giving them access to private data they aren’t supposed to see, and as far as we know they are already abusing this bug to do bad things.”
WebKit vulnerable
Just like the last emergency Apple patch, this vulnerability affects WebKit, Apple’s core web browser code.
Although WebKit itself isn’t a fully-fledged browser, it is nevertheless the heart of every browser you’ve ever used on your iPhone, not just Apple’s own built-in Safari browser.
That’s because Apple won’t allow apps onto your device if they don’t come from the App Store, and won’t allow browsers into its App Store if they don’t use WebKit.
(OK, there are official ways of installing non-Apple corporate apps onto managed devices, but for most users, and on most iPhones, all apps come via Apple.)
As a result, even browsers such as Firefox (which usually uses Mozilla’s browser engine), as well as Google Chrome and Microsoft Edge (which usually use the Chromium browser engine), are forced to rely internally on WebKit when they run on Apple devices.
Also, WebKit is the software that runs whenever any app pops up even the most basic web content in a window, for example to show you its About screen or to give you instructions on how to use the app.
In other words, a security flaw in WebKit affects any browser you have installed, including Apple’s built-in Safari app, and could affect many other apps if they have any program options that pop up a web window to show you information.
Universal XSS
Last time Apple did an emergency update, back in January 2021, the company fixed two bugs that allowed crooks to perform what are known as RCE and EoP attacks, short for remote code execution and elevation of privilege.
Loosely speaking, RCE lets you break in as a regular user, and EoP lets you promote yourself to an all-powerful system user after you’re in – a sort of double-play attack that is obviously very serious and could lead to complete compromise.
This time, the update patches what’s known as a UXSS vulnerability, short for universal cross site scripting.
Although UXSS doesn’t sound as serious as RCE (which implies that a crook could directly implant malware at will), UXSS bugs can nevertheless be devastating to your privacy, your security, and your wallet.
Simply put, a UXSS flaw means that WebKit itself can be tricked into violating one of the most important principles of browser security, known as the Same Origin Policy (SOP).
SOP explained
The Same Origin Policy dictates that only web content served up by website X is allowed to access stored data, such as web cookies, that relate to site X.
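In rough pseudocode (ParseURL() here is a made-up helper for illustration, not a real browser function), the same-origin check boils down to comparing the scheme, the hostname and the port number of the two URLs concerned:

   function SameOrigin(url1, url2)
      local scheme1, host1, port1 = ParseURL(url1)
      local scheme2, host2, port2 = ParseURL(url2)

      -- Content from one URL may only touch data stored
      -- by the other if all three components match
      return scheme1 == scheme2 and
             host1   == host2   and
             port1   == port2
   end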
As you probably know, web cookies and local web storage exist so that websites can keep track of you between visits.
Cookies, for example, can be used to store the preferences you choose; to remember whether you already accepted a licence agreement or not; and to determine whether you’ve already logged in, and if so as which user.
As intrusive as web tracking can sometimes be, especially when it is used for aggressive marketing purposes, it’s nevertheless a vital part of the modern web.
If websites couldn’t set cookies to store some sort of authentication token (typically a long, random string of characters unique to your current session) to indicate that you recently entered your username and the correct password, then there would be no concept of being “logged in” to a website at all.
You would need to enter your username and password every time you looked at any page on the site; you wouldn’t be able to tell the website “please show me the Spanish language version instead of the English one next time I visit”; and there wouldn’t be any way of keeping track of things like shopping carts.
Clearly, it’s vital that cookies set for one website can’t be snooped on by another.
As you can imagine, if website X could send out JavaScript code to access the cookies and local web data of website Y, that would be a security disaster.
Without the SOP, an innocent-looking site of cat videos could, if it wanted, read in the authentication cookies for your social media accounts and rifle through them in the background, pretending to be you, even after you’d finished watching the distracting videos.
Without the SOP, you could end up spending money you didn’t mean to, or signing up for services you didn’t want, or giving cybercriminals access to your most personal data from your online profiles.
XSS and breaking the SOP
XSS bugs, where XSS means cross-site scripting, are the most common way that cybercrooks violate the Same Origin Policy in order to get unlawful access to private data in your online accounts.
Usually, XSS attacks exist because of bugs on a specific website, meaning that crooks can attack users of that website only.
For example, if I can trick your website into returning a search results page that includes not only the text I just searched for but also a chunk of executable JavaScript, then I have a way of pulling off an XSS attack against your site.
That’s because, when your site returns my sneakily-supplied JavaScript inside one of its own web pages, my JavaScript suddenly gets access to all your cookies and local web data, which I’m not supposed to have.
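As a sketch of how a hole like that comes about (this is pseudocode of our own, not any real web framework), imagine a search handler that pastes the user’s search term straight back into the HTML it generates:

   -- A vulnerable server-side search handler: whatever the user typed
   -- is glued, unmodified, into the web page that gets sent back
   function SearchResultsPage(term)
      return '<p>You searched for: ' .. term .. '</p>' .. ResultsFor(term)
   end

If term arrives as <script>...</script> instead of honest text, the returned page now includes script that runs in the site’s own security context, cookies and all, which is why web developers are taught to encode or strip HTML metacharacters in anything they echo back.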
That’s bad enough, but server-side XSS tricks typically only affect one website at a time, and the operator of that site can fix the security hole for everyone by patching the server.
A Universal XSS bug, which is what we have here, is much more serious, and it gets the name “universal” because it’s not limited to a specific website.
Simply put, a UXSS bug typically means that attackers can pull off XSS tricks right inside your browser, so that:
All websites you visit are affected by the bug, at least in theory, including sites with no security holes of their own.
You need to patch the vulnerability for yourself, because the bug is in your browser, not in any individual web server.
You can’t sidestep the bug simply by avoiding specific web servers until they get patched.
What to do?
We already said it: update now!
As stated at the top of the article, go to Settings > General > Software Update to make sure you have the update – doing this will either tell you that you are OK, or offer to install the update if you aren’t.
However, at the time of writing [2021-03-27T13:00Z], Apple’s security bulletin pages tell you nothing more than this: there is a UXSS vulnerability in WebKit; attackers may already be exploiting the bug; it was reported by researchers from Google; and it is officially known as CVE-2021-1879.
Regular Naked Security readers will know we’re huge fans of Alan Turing OBE FRS.
He was chosen in 2019 to be the scientist featured on the next issue of the Bank of England’s biggest publicly available banknote, the bullseye, more properly Fifty Pounds Sterling.
(It’s called a bullseye because that’s the tiny, innermost circle on a dartboard, also known as double-25, that’s worth 2×25 = 50 points if you hit it.)
Turing beat out an impressive list of competitors, including STEM visionaries and pioneers such as Mary Anning (first to unravel the paleontological mysteries of what is now known as Dorset’s Jurassic Coast), Rosalind Franklin (who unlocked the structure of DNA before dying young and largely unrecognised), and the nineteenth-century computer hacking duo of Ada Lovelace and Charles Babbage.
The Universal Computing Machine
Turing was the groundbreaking computer scientist who first codified the concept of a “universal computing machine”, way back in 1936.
At that time, and indeed for many years afterwards, all computing devices then in existence could typically solve only one specific variant of one specific problem.
They would need rebuilding, not merely “reinstructing” or “reprogramming”, to take on other problems.
Turing showed, if you will pardon our sweeping simplification, that if you could build a computing device (what we now call a Turing machine) that could perform a certain specific but simple set of fundamental operations, then you could, in theory, program that device to do any sort of computation you wanted.
The device would remain the same; only the input to the device, which Turing called the “tape”, which started off with what we’d now call a “program” encoded onto it, would need to be changed.
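To give a flavour of just how simple the fundamental operations are, here’s a sketch in Lua (the representation and names are entirely our own, chosen for readability): each step reads the symbol under the tape head, looks up the current state’s rule for that symbol, writes a symbol back, moves the head, and switches state. The machinery never changes; only the rules table, Turing’s “program”, does:

   -- One step of a minimal Turing machine: the entire device is just
   -- this lookup-and-act cycle repeated over and over again
   function step(m)
      local symbol  = m.tape[m.pos] or '_'        -- read the tape
      local rule    = m.rules[m.state][symbol]    -- look up what to do
      m.tape[m.pos] = rule.write                  -- write a symbol back
      m.pos         = m.pos + (rule.move == 'R' and 1 or -1)  -- move the head
      m.state       = rule.next                   -- switch state
      return m.state ~= 'HALT'                    -- keep going until we halt
   end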
So you could program the same device to be an adding machine, a subtracting machine, or a multiplying machine.
You could compute numerical sequences such as mathematical tables to any desired precision or length.
You could even, given enough time, enough space, enough tape and a suitably agreed system of encoding, produce all possible alphabetic sequences of any length…
…and therefore ultimately, like the proverbially infinite number of monkeys working at an infinite number of typewriters, reproduce the complete works of William Shakespeare.
As Turing himself put it in that paper: “It is possible to invent a single machine which can be used to compute any computable sequence.”
The date of this, don’t forget, was 1936.
All modern electronic digital computers are nearly-but-not-quite Turing machines – our real-world computers have enormous, but not infinite, storage capacity, so there are some interesting problems they can still only compute in theory, not in practice.
Also, programming languages that are expressive enough to simulate a Turing machine, and therefore could be used to program a theoretical solution to any computational problem, are known as Turing complete.
The halting problem
Intriguingly, Turing showed in the same paper that even with a universal computing device, it’s not possible to write a program that can unerringly examine another program and predict its final behaviour.
Specifically – and this is where the famous “halting problem” comes in, which Turing used to show that the Entscheidungsproblem, or “decision problem”, has no general solution – you can’t tell in advance whether a program written for a Turing machine will ever actually run to completion and therefore come up with the final answer you wanted.
You can write the code needed to give you an answer, but you can’t always be certain in advance that the answer will be computable – the algorithm might run for ever.
Clearly, you can prove by examination that some programs will terminate correctly, such as a loop that is coded to iterate exactly 10 times.
And you can show that some programs won’t terminate, for example if you were to write a loop to find three positive integers X, Y and Z for which X³ + Y³ = Z³. (We have known analytically since Euler’s day that no such solution exists for cubes, and the 1995 proof of Fermat’s Last Theorem settled every higher power as well.)
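Here, for instance, is a sketch of a search of that sort (our own Lua code, in the same style as the perfect-number snippet below). It works through every triple of positive integers in a fair order, but because no solution exists, it runs forever:

   -- Look for X³ + Y³ = Z³ by stepping through every possible
   -- total s = x+y+z, so that no triple is ever skipped
   function findcubes()
      local s = 3
      while true do
         for x = 1, s-2 do
            for y = 1, s-x-1 do
               local z = s - x - y
               if x*x*x + y*y*y == z*z*z then
                  print('found:', x, y, z)
                  return
               end
            end
         end
         s = s + 1
      end
   end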
Indeed, if the halting problem were not a problem, and you could write a program to tell you if another program would terminate or not, you could use that “will-it-halt” program to solve a whole raft of mathematical conundrums.
Here’s an example, based on the fact that we strongly suspect (but nobody has ever proved) that there are no odd perfect numbers.
A perfect number is equal to the sum of the numbers that divide exactly into it. Thus 6 is exactly divisible by 1, 2 and 3, and 6 = 1+2+3, so 6 is perfect. 12 is divisible by 1, 2, 3, 4 and 6, but 1+2+3+4+6 = 16, so 12 is not perfect. The numbers 1, 2, 4, 7 and 14 divide 28, and 28 = 1+2+4+7+14, the second perfect number. Then come 496 and 8128, from which you might hope that the fifth perfect number would have five digits, then six, and so on. But they thin out really quickly, with the tenth perfect number already being 54 digits long. The 50th perfect number (that we know of, anyway) runs to nearly 50 million digits. All perfect numbers found so far are even, i.e. can be divided by 2.
It’s trivial to write a program to test all the odd numbers, one by one, until you find an odd perfect number, then to print it out and terminate, which would prove that not all perfects are even:
   function findoddone()
      local n = 3
      while true do
         if isperfect(n) then
            print('found one:',n)
            os.exit()
         end
         n = n + 2
      end
   end
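The isperfect() function isn’t shown above, but a naive version is easy to sketch: add up every proper divisor of n by trial division and compare the total with n itself. (It’s hopelessly slow for numbers of any interesting size, but it makes the example self-contained:)

   -- Naive perfection test: sum every proper divisor of n
   -- and check whether the total comes back to n itself
   function isperfect(n)
      local sum = 0
      for d = 1, n-1 do
         if n % d == 0 then
            sum = sum + d
         end
      end
      return sum == n
   end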
But as long as the program keeps running you will never be sure whether all perfects are even, or whether you just haven’t waited long enough yet to prove there’s an odd one out there.
However, if there existed a program that could analyse your perfect number calculator and reliably predict if it would terminate or not, then you could prove whether any odd perfect numbers existed simply by running your will-it-halt detector:
   if willithalt(findoddone) then
      print('proved - at least one odd perfect exists')
   else
      print('disproved - all perfects are even')
   end
You wouldn’t find out the actual value of any odd perfect numbers, if indeed they exist, because you wouldn’t actually be running your perfect number testing function.
You’d simply be running your will-it-halt program to determine the outcome of the search, and that on its own would complete your proof: you would know whether all perfects were even or not.
But you can’t rely on completing the proof that way, because of the halting problem, and Turing proved this before computers as we know them even existed.
Implications for cybersecurity
You can extend the halting problem result in important ways for cybersecurity, as we wrote on what would have been Turing’s 100th birthday in 2012:
[The halting problem means, for example,] that you can’t write an anti-virus program which never needs updating. All those criticisms about the imperfection of anti-virus are true!
But the halting problem applies to everyone. Not just to anti-virus, but to code analysers, behaviour blockers, [machine learning systems, intrusion monitors], network flow correlators, [exploit detectors] and so forth. No security solution can be perfect, because no solution can decide all the answers. That’s why defence in depth is really important, and why you should run a mile from any security vendor who still makes claims like “never needs updating.”
By the way, Turing’s result can be turned around to make it a bit more optimistic: you can’t write [malware] that will be undetectable by all possible [anti-malware] programs. So the good guys always win in the end.
Multifactor science superhero
As you may already know, Alan Turing distinguished himself in many other ways beyond his pioneering work on Turing machines:
He was a massively important part of the British codebreaking team at Bletchley Park in England during World War II.
He was a major figure in the design and construction of one of Britain’s first digital electronic computers, the Pilot ACE.
He came up with what we now call the Turing Test in an early investigation into how to measure artificial intelligence, in particular how we might answer the question “Can machines think?”
He conducted groundbreaking mathematical work on how patterns form in nature, such as a leopard’s spots or a zebra’s stripes.
A fascinating insight into Turing’s interest in the field of morphogenesis – how living structures develop – can be gleaned from an archived letter he wrote to one of the Pilot ACE team shortly before the first computer was delivered:
Dear Woodger,
[. . .] Our new machine is to start arriving on Monday. I am hoping as one of the first jobs to do something about ‘chemical embryology’. In particular I think one can account for the appearance of Fibonacci numbers in connection with fir-cones.
Turing was gay in an era when that was proscribed by law in Britain.
This ultimately led to his prosecution and conviction in court, where he was sentenced to undergo the administration of a carcinogenic hormone, apparently as an alternative to prison.
Turing was also formally ostracised by the Establishment – who had, of course, conveniently ignored the law when his wartime contribution was so desperately needed – and, in the ultimate tragedy, killed himself in 1954.
The banknote unveiled
It’s now official: the Bank of England has just unveiled the Alan Turing £50 originally announced in 2019.
The “Turing bullseye” banknote will enter circulation in three months’ time.
As we said in 2019:
[T]he £50 is the biggest English banknote in circulation, in both size and value, so perhaps it is a fitting tribute for Turing after all – one that will remind us of the huge value of mathematicians and scientists who can blend theory and practice in ways that advance the world as a whole.
As the Bank of England’s website proclaims, “Think science and celebrate Alan Turing.”
Details we like:
RED: Picture of Pilot ACE computer in the background.
ORANGE: Table from the On Computable Numbers paper.
YELLOW: Wheel design from the British Bombe, a mechanical codebreaking computer of Turing’s design from wartime days.
GREEN: His prescient words about how dramatically digital computers would change the world.
BLUE: His birthday (19120623) encoded in binary.
VIOLET: Sunflower anti-counterfeiting icon with initials AT, symbolising Turing’s work on morphogenesis.