
Supply chain blunder puts 3CX telephone app users at risk

NB. Detection names you can check for if you use Sophos products and services
are available from the Sophos X-Ops team on our sister site Sophos News.

Internet telephony company 3CX is warning its customers of malware that was apparently weaseled into the company’s own 3CX Desktop App by cybercriminals who seem to have acquired access to one or more of 3CX’s source code repositories.

As you can imagine, given that the company is scrambling not only to figure out what happened, but also to repair and document what went wrong, 3CX doesn’t have much detail to share about the incident yet, but it does state, right at the very top of its official security alert:

The issue appears to be one of the bundled libraries that we compiled into the Windows Electron App via Git.

We’re still researching the matter to be able to provide a more in depth response later today [2023-03-30].

Electron is the name of a large and super-complex-but-ultra-powerful programming toolkit that gives you an entire browser-style front end for your software, ready to go.

For example, instead of maintaining your own user interface code in C or C++ and working directly with, say, MFC on Windows, Cocoa on macOS, and Qt on Linux…

…you bundle in the Electron toolkit and program the bulk of your app in JavaScript, HTML and CSS, as if you were building a website that would work in any browser.
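
To give you a feel for what that looks like, here’s a minimal sketch of an Electron “main process”, loosely following the toolkit’s public quickstart (the file names are our own, and real apps add much more):

    // main.ts - the entire "native" side of a bare-bones Electron app.
    // The actual user interface is just a web page, rendered by the
    // Chromium engine that Electron bundles in.
    import { app, BrowserWindow } from "electron";

    function createWindow(): void {
      const win = new BrowserWindow({ width: 800, height: 600 });
      win.loadFile("index.html");   // your UI, written like a website
    }

    app.whenReady().then(createWindow);

    app.on("window-all-closed", () => {
      // By convention, Electron apps quit on Windows and Linux, but stay
      // running on macOS until the user quits explicitly.
      if (process.platform !== "darwin") app.quit();
    });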

With power comes responsibility

If you’ve ever wondered why popular app downloads such as Visual Studio Code, Zoom, Teams and Slack are as big as they are, it’s because they all include a build of Electron as the core “programming engine” for the app itself.

The good side of tools like Electron is that they generally make it easier (and quicker) to build apps that look good, that work in a way that users are already familiar with, and that don’t behave completely differently on each different operating system.

The bad side is that there’s a lot more underlying foundation code that you need to pull down from your own (or perhaps from someone else’s) source code repository every time you rebuild your own app, and even modest apps typically end up several hundreds of megabytes in size when they’re downloaded, and even bigger after they’re installed.

That’s bad, in theory at least.

Loosely speaking, the bigger your app, the more ways there are for it to go wrong.

And while you’re probably familiar with the code that makes up the unique parts of your own app, and you’re no doubt well-placed to review all the changes from one release to the next, it’s much less likely that you have the same sort of familiarity with the underlying Electron code on which your app relies.

It’s therefore unlikely that you will have the time to pay attention to all the changes that may have been introduced into the “boilerplate” Electron parts of your build by the team of open-source volunteers who make up the Electron project itself.

Attack the big bit that’s less well-known

In other words, if you’re keeping your own copy of the Electron repository, and attackers find a way into your source code control system (in 3CX’s case, the company apparently uses the very popular Git software for that)…

…then those attackers might well decide to booby-trap the next version of your app by injecting their malicious bits-and-pieces into the Electron part of your source tree, instead of trying to mess with your own proprietary code.

After all, you probably take the Electron code for granted as long as it looks “mostly the same as before”, and you are almost certainly better placed to spot unwanted or unexpected additions in your own team’s code than in a giant dependency tree of source code that was written by someone else.

When you’re reviewing your own company’s code, [A] you have probably seen it before, and [B] you may very well have attended the meetings in which the changes now showing up in your diffs were discussed and agreed. You’re more likely to be tuned into, and more proprietorial – sensitive, if you wish – about changes in your own code that don’t look right. It’s a bit like the difference between noticing that something’s out-of-kilter when you drive your own car and when you set off in a rental vehicle at the airport. Not that you don’t care about the rented car because it isn’t yours (we hope!), but simply that you don’t have the same history and, for want of a better word, the same intimacy with it.
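
There are ways to make unexpected upstream changes show up loudly rather than silently. Here’s a minimal sketch in TypeScript for Node (our own illustration, not a process 3CX is known to use; the directory name and digest placeholder are hypothetical) that hashes a vendored dependency tree and refuses to build if the result doesn’t match a digest you recorded at your last deliberate, reviewed update:

    // check-deps.ts - fail the build if vendored dependencies change.
    import { createHash, type Hash } from "node:crypto";
    import { readFileSync, readdirSync, statSync } from "node:fs";
    import { join } from "node:path";

    function addTree(dir: string, h: Hash): void {
      for (const name of readdirSync(dir).sort()) {   // sorted for determinism
        const path = join(dir, name);
        if (statSync(path).isDirectory()) {
          addTree(path, h);
        } else {
          h.update(path);                             // mix in the file name...
          h.update(readFileSync(path));               // ...and its contents
        }
      }
    }

    const KNOWN_GOOD = "<sha-256 hex digest from your last reviewed update>";

    const hash = createHash("sha256");
    addTree("third_party/electron", hash);            // hypothetical vendor dir
    const digest = hash.digest("hex");

    if (digest !== KNOWN_GOOD) {
      throw new Error("Dependency tree changed unexpectedly: " + digest);
    }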

What to do?

Simply put, if you’re a 3CX user and you’ve got the company’s Desktop App on Windows or macOS, you should:

  • Uninstall it right away. The malicious add-ons in the booby-trapped version could have arrived either in a recent, fresh installation of the app from 3CX, or as the side-effect of an official update. The malware-laced versions were apparently built and distributed by 3CX itself, so they have the digital signatures you’d expect from the company, and they almost certainly came from an official 3CX download server. In other words, you aren’t immune just because you steered clear of alternative or unofficial download sites. Known-bad product version numbers can be found in 3CX’s security alert.
  • Check your computer and your logs for tell-tale signs of the malware. Just removing the 3CX app is not enough to clean up, because this malware (like most contemporary malware) can itself download and install additional malware. You can read more about how the malware actually works on our sister site, Sophos News, where Sophos X-Ops has published analysis and advice to help you in your threat hunting. That article also lists the detection names that Sophos products will use if they find and block any elements of this attack in your network. You can also find a useful list of so-called IoCs, or indicators of compromise, on the SophosLabs GitHub pages. IoCs tell you how to find evidence you were attacked, in the form of URLs that might show up in your logs, known-bad files to seek out on your computers, and more.

  • Switch to using 3CX’s web-based telephony app for now. The company says: “We strongly suggest that you use our Progressive Web App (PWA) instead. The PWA app is completely web-based and does 95% of what the Electron app does. The advantage is that it does not require any installation or updating and Chrome web security is applied automatically.”
  • Wait for further advice from 3CX as the company finds out more about what happened. 3CX has apparently already reported the known-bad URLs that the malware uses for further downloads, and claims that “the majority [of these domains] were taken down overnight.” The company also says it has temporarily discontinued availability of its Windows app, and will soon rebuild a new version that’s signed with a new digital signature. This means any old versions can be identified and purged by explicitly blocklisting the old signing certificate, which won’t be used again.
  • If you’re not sure what to do, or don’t have the time to do it yourself, don’t be afraid to call for help. You can get hold of Sophos Managed Detection and Response (MDR) or Sophos Rapid Response (RR) via our main website.

Cops use fake DDoS services to take aim at wannabe cybercriminals

The UK’s National Crime Agency (NCA) has recently announced work that it’s been doing as an ongoing part of a multinational project dubbed Operation PowerOFF.

The idea seems to be to use fake cybercrime-as-a-service sites to attract the attention of impressionable youngsters who are hanging around on the fringes of cybercrime and looking for an underground community to join and start learning the ropes…

…after which those who attempt to register are “contacted by the National Crime Agency or police and warned about engaging in cybercrime”.

The fake crimeware-as-a-service offerings that the NCA pretends to operate are so-called booters, also known as stressers, also known as DDoSsers, where DDoS is short for distributed denial of service.

DoS versus DDoS

A plain denial of service, or DoS, typically involves sending specially-crafted network traffic to one particular site or service in order to crash it.

Usually, that means finding some sort of vulnerability or configuration problem such that a booby-trapped network packet will trip up the server and cause it to fail.

Attacks of that sort, however, can often be sidestepped once you know how they work.

For example, you could patch against the bug that the crooks are poking their sharpened knitting needles into; you could tighten up the server configuration; or you could use an inbound firewall to detect and block the booby-trapped packets they’re using to trigger the crash.

In contrast, DDoS attacks are usually much less sophisticated, making them easier for technically inexperienced crooks to take part in, but much more natural-looking, making them harder even for technically experienced defenders to stop.

Most DDoS attacks rely on using apparently unexceptionable traffic, such as plain old web GET requests asking for the main page of your site, from an unassuming variety of internet addresses, such as apparently innocent consumer ISP connections…

…but at a volume that’s hundreds, thousands or perhaps even millions of times higher than your best day of genuine web traffic ever.

Flooded with normal

For example, a booter service run by crooks who already control malware that they’ve implanted on 100,000 home users’ laptops or routers could command them all to start accessing your website at the same time.

This sort of setup is known in the jargon as a botnet or zombie network, because it’s a collection of computers that can be secretly and remotely kicked into life by their so-called bot-herders to do bad things.

Imagine that you’re used to a million site hits a month, and you’ve made emergency provision in the hope of a gloriously high-traffic period where you might pull in a million hits in a single day.

Now imagine that you suddenly have 100,000 “users” all knocking on your door in a single 10-second period, and then coming back over and over, asking you to send back real web pages that they have no intention of viewing at all.
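
To put rough numbers on that mismatch (the figures below are our own, purely for scale):

    // ddos-scale.ts - back-of-envelope load comparison.
    const monthlyHits = 1_000_000;
    const normalRate = monthlyHits / (30 * 24 * 3600); // ~0.4 requests/second
    const botnetRate = 100_000 / 10;                   // 10,000 requests/second
    console.log(Math.round(botnetRate / normalRate));  // ~26,000x your usual load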

You can’t patch against this sort of traffic overload, because attracting traffic to your website is almost certainly your goal, not something you want to prevent.

You can’t easily write a firewall rule to block the waste-of-time web requests coming from the DDoSsers, because their packets are probably indistinguishable from the network traffic that a regular browser would create.

(The attackers can simply visit your website with a popular browser, record the data generated by the request, and replay it exactly for verisimilitude.)

And you can’t easily build up a blocklist of known bad senders, because the individual devices co-opted into the botnet that’s been turned against you are often indistinguishable from the devices or routers of legitimate users trying to access your website for genuine purposes.
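
To see why such requests blend in, consider how little it takes to replay one. This sketch (TypeScript on Node 18 or later; the URL is a placeholder and the headers are abridged stand-ins for values recorded from a real browser) produces traffic that server-side filters have no reliable way to tell apart from a genuine visit:

    // replay.ts - a single "browser-identical" request (illustration only).
    const res = await fetch("https://example.com/", {
      headers: {
        // Header values copied from a recorded browser session (abridged
        // here), so nothing marks the request out as automated.
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
        "Accept": "text/html,application/xhtml+xml,...",
        "Accept-Language": "en-GB,en;q=0.9",
      },
    });
    console.log(res.status);   // 200, just like any legitimate page view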

No experience necessary

Unfortunately, getting into the DDoS or booter scene doesn’t require technical skills, or the knowledge needed to write and disseminate malware, or the ability to operate a botnet of your own.

You can start off simply by hanging out with more experienced cybercriminals and begging, borrowing or buying (more precisely, perhaps, renting) time and bandwidth from their existing booter service.

Perhaps it doesn’t feel like much of a crime?

If all you’re doing is asking your school’s servers to process thousands of otherwise well-formed requests in order to disrupt a test you haven’t revised for, or to get back at a teacher you don’t like, or simply for bragging rights with your mates, where’s the criminality in that?

You might manage to convince yourself you aren’t doing anything wrong as long as you aren’t flinging malware at the network, aren’t aiming to break in, and aren’t intending to steal any data.

Heck, “enjoying” more traffic is something most sites would love to brag about, surely?

Not an innocent pastime

But DDoSsing is nowhere near as innocent as you might hope to claim in your defence if ever you find yourself hauled in front of a criminal court.

According to the NCA:

Distributed Denial of Service (DDoS) attacks, which are designed to overwhelm websites and force them offline, are illegal in the UK under the Computer Misuse Act 1990.

As the cops continue:

DDoS-for-hire or booter services allow users to set up accounts and order DDoS attacks in a matter of minutes. Such attacks have the potential to cause significant harm to businesses and critical national infrastructure, and often prevent people from accessing essential public services.

[. . .]

The perceived anonymity and ease of use afforded by these services means that DDoS has become an attractive entry-level crime, allowing individuals with little technical ability to commit cyberoffences with ease.

Traditional site takedowns and arrests are key components of law enforcement’s response to this threat. However, we have extended our operational capability with this activity, at the same time as undermining trust in the criminal market.

The NCA’s position is clear from this notice, as posted on a former decoy server now converted into a warning page:

Here be Dragons! Message shown after an NCA decoy site has served its purpose.

What to do?

Don’t do it!

If you’re looking to get into programming, network security, website design, or even just to hang out with other computer-savvy people in the hope of learning from them and having fun at the same time…

…hook up with one of the many thousands of open source projects out there that aim to produce something useful for everyone.

DDoSsing may feel like just a bit of countercultural amusement, but neither the owner of the site you attack, nor the police, nor the magistrates, will see the funny side.


Apple patches everything, including a zero-day fix for iOS 15 users

Apple’s latest update blast is out, including an extensive range of security patches for all devices that Apple officially supports.

There are fixes for iOS, iPadOS, tvOS and watchOS, along with patches for all three supported flavours of macOS, and even a special update to the firmware in Apple’s super-cool external Studio Display monitor.

Apparently, if you’re running macOS Ventura and you’ve hooked your Mac up to a Studio Display, just updating the Ventura operating system itself isn’t enough to secure you against potential system-level attacks.

According to Apple’s bulletin, a bug in the display screen’s own firmware could be abused by an app running on your Mac “to execute arbitrary code with kernel privileges.”

Travellers beware

We’re guessing that if you’re on the road right now, travelling with your Mac, you might not be able to plug in to your Studio Display for a while yet, by which time some enterprising criminal might have worked backwards from the patches, or a proof-of-concept exploit might have been released.

We don’t know how to (or even if you can) download the Studio Display patch for offline installation later when you get home.

So: if you can only patch your display in a few days’ or weeks’ time; because you have to plug your patched Mac into your vulnerable display to update it; and assuming that you need to go online to complete the update…

…you may want to learn how to start up your Mac in so-called Safe Mode, and to update from there.

In Safe Mode, a minimum set of system software and third-party apps is loaded, thus slimming down what’s known as your attack surface area until you’ve completed the patch.

Ironically, albeit unavoidably, most third-party security add-ons don’t start up in Safe Mode, so an alternative approach is simply to boot up with as many non-security-related apps as possible turned off, so they don’t start automatically when you log in.

You can temporarily turn off auto-starting background apps in the Settings > General > Login Items menu.

One zero-day, but plenty of other bugs

The good news, as far as we can see, is that there is only one zero-day bug in this batch of updates: the bug CVE-2023-23529 in WebKit.

This vulnerability, which allows attackers to implant malware on your iOS 15 or iPadOS 15 device without you noticing, is listed with the dread words, “Apple is aware of a report that this issue may have been actively exploited.”

Fortunately, this bug is only listed as a zero-day in the iOS 15.7.4 and iPadOS 15.7.4 security bulletin, meaning that more recent iDevices, Macs, TVs and Apple Watches appear to be safe from this one.

The bad news, as usual, is that there is nevertheless a wide range of we-hope-we-found-them-before-the-crooks-did bugs fixed for all Apple’s other operating systems, including vulnerabilities that could theoretically be exploited for:

  • Kernel-level remote code execution, where attackers could take over your entire device, and potentially access all data from any apps they liked, instead of being limited to intruding on an individual app and its data.
  • Data stealing triggered by a booby-trapped calendar invitation.
  • Access to Bluetooth data after your device receives a booby-trapped Bluetooth packet.
  • File downloads that bypass Apple’s usual Gatekeeper quarantine checks, rather like the recent SmartScreen bypass on Windows caused by a bug in Microsoft’s similar Mark of the Web system.
  • Unauthorised access to your Hidden Photos Album, caused by a flaw in the Photos app.
  • Sneakily and incorrectly tracking you online after you’ve browsed to a booby-trapped website.

What to do?

The updates you need, the bulletins that describe what you’re getting, and the version numbers to look for to ensure you’ve updated correctly, are as follows:

  • HT213670: macOS Ventura goes to 13.3.
  • HT213677: macOS Monterey goes to 12.6.4.
  • HT213675: macOS Big Sur goes to 11.7.5.
  • HT213671: Safari goes to 16.4 (this update is included with the Ventura patches, but you need to install it separately if you are using Monterey or Big Sur).
  • HT213676: iOS 16 and iPadOS 16 go to 16.4.
  • HT213673: iOS 15 and iPadOS 15 go to 15.7.4.
  • HT213674: tvOS goes to 16.4.
  • HT213678: watchOS goes to 9.4.
  • HT213672: the Studio Display Firmware goes to 16.4.

On iDevices, go to Settings > General > Software Update to check if you’re up-to-date, and to trigger an update if you aren’t.

On Macs, it’s almost the same, except that you open the Apple menu and choose System Settings… to get started, followed by General > Software Update.

Get ’em while they’re fresh!

Microsoft assigns CVE to Snipping Tool bug, pushes patch to Store

Last week was aCropalypse week, where a bug in the Google Pixel image cropping app made headlines, and not just because it had a funky name.

(We formed the opinion that the name was a little bit OTT, but we admit that if we’d thought of it ourselves, we’d have wanted to use it for its word-play value alone, even though it turns out to be harder to say out loud than you might think.)

The bug was the kind of programming blunder that any coder could have made, but that many testers might have missed:

Image cropping tools are very handy when you’re on the road and you want to share an impulse photo, perhaps involving a cat, or an amusing screenshot, perhaps including a wacky posting on social media or a bizarre ad that popped up on a website.

But quickly-snapped pics or hastily-grabbed screenshots often end up including bits that you don’t want other people to see.

Sometimes, you want to crop an image because it simply looks better when you chop off any extraneous content, such as the graffiti-smeared bus stop on the left hand side.

Sometimes, however, you want to edit it out of decency, such as cutting out details that could hurt your own (or someone else’s) privacy by revealing your location or situation unnecessarily.

The same is true for screenshots, where the extraneous content might include the content of your next-door browser tab, or the private email directly below the amusing one, which you need to cut out in order to stay on the right side of privacy regulations.

Be aware before you share

Simply put, one of the primary reasons for cropping photos and screenshots before you send them out is to get rid of content that you don’t want to share.

So, like us, you probably assumed that if you chopped bits out of a photo or screenshot and hit [Save], then even if the app kept a record of your edits so you could revert them later and recover the exact original…

…those chopped-off bits would not be included in any copies of the edited file that you chose to post online, email to your chums, or send to a friend.

The Google Pixel Markup app, however, didn’t quite do that, leading to a bug denoted CVE-2023-20136.

When you saved a modified image over the old one, and then opened it back up to check your changes, the new image would appear in its cropped form, because the cropped data would be correctly written over the start of the previous version.

Anyone testing the app itself, or opening the image to verify that it “looked right now”, would see its new content, and nothing more.

But the data written at the start of the old file would be followed by a special internal marker to say, “You can stop now; ignore any data hereafter”, followed entirely incorrectly by all the data that used to appear thereafter in the old version of the file.

As long as the new file was smaller than the old one (and when you chop the edges off an image, you expect the new version to be smaller), at least some chunks of the old image would escape at the end of the new file.

Traditional, well-behaved image viewers, including the very tool you just used to crop the file, would ignore the extra data, but deliberately-coded data recovery or snooping apps might not.
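
The underlying mistake is easy to reproduce in any language. Here’s a minimal sketch in TypeScript for Node (it mimics the class of bug described above; it is not the actual code from Google’s or Microsoft’s apps) that saves shorter content over a longer file without truncating, then shows the fix:

    // cropbug.ts - why "overwrite without truncate" leaks old data.
    import { openSync, writeSync, closeSync, ftruncateSync,
             writeFileSync, readFileSync } from "node:fs";

    // Pretend this is the original, uncropped image file.
    writeFileSync("demo.bin", "HEADER" + "SECRET!".repeat(8));

    // Buggy save: "r+" opens the file for in-place writing, so the new,
    // shorter content overwrites only the start of the old file...
    let fd = openSync("demo.bin", "r+");
    const cropped = "HEADER-CROPPED";
    writeSync(fd, cropped);
    closeSync(fd);

    // ...and everything after it survives, ready to be recovered.
    console.log(readFileSync("demo.bin", "latin1"));
    // -> HEADER-CROPPED followed by leftover SECRET! bytes

    // Correct save: truncate to the new length after writing
    // (or simply recreate the file from scratch with mode "w").
    fd = openSync("demo.bin", "r+");
    writeSync(fd, cropped);
    ftruncateSync(fd, cropped.length);
    closeSync(fd);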

Pixel problems repeated elsewhere

Google’s buggy Pixel phones were apparently patched in the March 2023 Android update, and although some Pixel devices received this month’s updates two weeks later than usual, all Pixels should now be up-to-date, or can be force-updated if you perform a manual update check.

But this class of bug, namely leaving data behind in an old file that you overwrite by mistake, instead of truncating its old content first, could in theory appear in almost any app with a [Save] feature, notably including other image-cropping and screenshot-trimming apps.

And it wasn’t long before both the Windows 11 Snipping Tool and the Windows 10 Snip & Sketch app were found to have the same flaw:

You could crop a file quickly and easily, but if you did a [Save] over the old file and not a [Save As] to a new file, where there would be no previous content to leave behind, a similar fate would await you.

The low-level causes of the bugs are different, not least because Google’s software is a Java-style app and uses Java libraries, while Microsoft’s apps are written in C++ and use Windows libraries, but the leaky side-effects are identical.

As our friend and colleague Chester Wisniewski quipped in last week’s podcast, “I suspect there may be a lot of talks in August in Las Vegas discussing this in other applications.” (August is the season of the Black Hat and DEF CON events.)

What to do?

The good news for Windows users is that Microsoft has now assigned the identifier CVE-2023-28303 to its own flavour of the aCropalypse bug, and has uploaded patched versions of the affected apps to the Microsoft Store.

In our own Windows 11 Enterprise Edition install, Windows Update showed nothing new or patched that we needed since last week, but manually updating the Snipping Tool app via the Microsoft Store updated us from 11.2302.4.0 to 11.2302.20.0.

We’re not sure what version number you’ll see if you open the buggy Windows 10 Snip & Sketch app, but after updating from the Microsoft Store, you should be looking for 10.2008.3001.0 or later.

Microsoft considers this a low-severity bug, on the grounds that “successful exploitation requires uncommon user interaction and several factors outside of an attacker’s control.”

We’re not sure we quite agree with that assessment, because the problem is not that an attacker might trick you into cropping an image in order to steal parts of it. (Surely they’d just talk you into sending them the whole file without the hassle of cropping it first?)

The problem is that you might follow exactly the workflow that Microsoft considers “uncommon” as a security precaution before sharing a photo or screenshot, only to find that you unintentionally leaked into a public space the very data you thought you had chopped out.

After all, the Microsoft Store’s own pitch for the Snipping Tool describes it as a quick way to “save, paste or share with other apps.”

In other words: Don’t delay, patch it today.

It only takes a moment.


In Memoriam – Gordon Moore, who put the more in “Moore’s Law”

Gordon Moore, co-founder of Intel, has died at 94.

Academically, Moore was both a chemist and physicist, earning a Bachelor’s degree in chemistry from the University of California at Berkeley in 1950, and a Doctorate in physical chemistry and physics from the California Institute of Technology in 1954.

After a brief interlude as a researcher at Johns Hopkins University in Maryland, Moore returned to his native San Francisco in 1956 to work for the co-inventor of the transistor, William Shockley, at the startup Shockley Semiconductor Laboratory in Mountain View.

Although Shockley has been described by Jacques Beaudoin of Stanford University as “the man who brought silicon to Silicon Valley”, he was a controversial figure even in his own heyday (to be blunt, he was an unreconstructed racist), and was by many accounts an abrasive, divisive, perhaps even paranoid manager.

By 1957, Moore and seven other Shockley Semiconductor staffers had had enough of Shockley, and decided to break away to form their own startup instead, with what’s known these days as venture capital injected by a cash-rich East Coast camera company, Fairchild Camera and Instrument.

Startup breakaways may be routine in the technology industry these days, but they weren’t common at all in the 1950s, and Moore and his fellow entrepreneurs went down in history under the dramatic nickname of “The Traitorous Eight”.

The company that the Traitorous Eight founded, Fairchild Semiconductor, was quickly successful, and is officially recognised by the State of California as the producer of the “first commercially practicable integrated circuit.”

Based on patents granted and overturned over the years, credit for actually inventing the integrated circuit see-sawed between Jack Kilby of Texas Instruments, and Robert Noyce of Fairchild, with both of them ultimately acknowledged as joint inventors. Sadly, by the time Jack Kilby was recognised with a Nobel Prize in Physics in 2000, Noyce had been dead for a decade, and Nobel prizes can’t be given posthumously, so Kilby received the award on his own.

What took you so long?

By 1968, Moore was ready for another breakaway, and he and Robert Noyce left Fairchild to form a new startup of their own, along with deal-maker Arthur Rock.

Rock, originally from New York, helped the Traitorous Eight get their seed money from Fairchild Camera and Instrument in the 1950s; he had moved to San Francisco in the early 1960s to go into hi-tech adventure capitalism (apparently, venture capital had a more exciting name in those days).

According to Walter Isaacson, writing in his book The Innovators, when Noyce called Arthur Rock in 1968 to ask for help attracting backers for the company that he and Moore wanted to create, Rock replied with a single question: “What took you so long?”

Apparently, Moore and Noyce toyed with the precise but unadventurous company name Moore Noyce, but soon realised that when said aloud, it was easily confused with “more noise”, an undesirable attribute in electronic circuits.

They incorporated, it seems, as NM Electronics, but quickly switched to Integrated Electronics.

Integrated Electronics was in turn a short-lived name, with the company soon known by the shortened form it has retained to this day: Intel.

Moore’s Law revisited

Ironically, perhaps, Moore is probably most widely known today not for the entrepreneurial enthusiasm, engineering excellence and business acumen that he brought to Intel during his long and storied career…

…but for a brief article that was published in Electronics magazine in April 1965, three years before he started Intel with Robert Noyce.

The article was enthusiastically entitled Cramming More Components onto Integrated Circuits, and its third sentence is preternaturally prescient (remember, this was written almost 60 years ago):

Integrated circuits will lead to such wonders as home computers – or at least terminals connected to a central computer, automatic controls for automobiles, and personal portable communications equipment.

Intriguingly, we now live in a cloud-centric computer ecosystem in which, for many of us, our most expensive single piece of personal computing equipment is neither a laptop for offline work, nor a terminal for hooking up to a powerful central “mainframe” computer service, nor a two-way radio for keeping in touch from afar…

…but a device that we still anachronistically refer to as a “mobile phone” that does all of these things, and much, much more. (No pun intended.)

Two famous graphs

Moore presented two simple graphs in his article.

The first, and perhaps the more important of the two, suggested that the only way to keep improving the performance of an integrated circuit would be to keep making the individual components in the circuit smaller.

You couldn’t merely keep making the chip itself bigger to give you more room for components.

Moore suggested, perhaps counterintuitively to many readers at the time, that given the same manufacturing process with the same component size, reliability falls (and thus cost starts increasing) as you try to integrate more components into a finished chip:

The lowest point of each U-shaped curve denotes the component count “sweet spot” for each manufacturing process. (The 1970 curve is a prediction, given that the graph was published in 1965.)

In other words, it’s not enough to add more components to a chip just by using more space, because you soon reach a natural limit imposed by the manufacturing process itself.

As the title of the article suggests, you need to change the process as well, so you can quite literally cram in the extra components you need, rather than simply letting them spread out around the edges.

The second graph in the article is the one for which Moore is probably best remembered, even though it has just four true data points on it.

Moore suggested that this price-performance sweet spot, based on the ongoing miniaturisation of component sizes, had increased exponentially from 1962 to 1965.

In other words, if you plotted a graph with a linear scale on the X-axis (time) and a logarithmic scale on the Y-axis (number of components per chip, which we today loosely refer to as transistor count), you’d get a straight line.

The 2⁰ = 1 value for 1959, which happens to line up fairly nicely, denotes that although the company had invented a process for making integrated circuits at that point, the products it had to sell were all still individual, standalone transistors, each with a component count of one:

Looking ahead 10 years, Moore therefore conjectured that by 1975, we might reasonably expect chips with 2¹⁶ components (about 65,000) baked into them – an astonishing acceleration in potential computer power.

Not quite, but nearly so!

In real life, things didn’t quite turn out that way.

Intel’s own 8086 microprocessor, for example, released in 1978, had a transistor count of just under 30,000, close to 2¹⁵, but Moore’s original prediction was for chips to accommodate 2¹⁹ components by then, or more than half a million.

Indeed, by 1975, Moore had adjusted his estimate to a doubling of component counts every two years, rather than every year, along with the necessary reduction in size of each component in the integrated circuit.

That prediction of exponential growth became known as Moore’s Law, and although it isn’t in any literal sense a law, and although we haven’t quite kept up with it in the way he predicted…

…we’ve come surprisingly close.

The mark of a Mage

Although it’s not really comparing like with like, let’s line up a 1978-era Intel 8086 microprocessor against a 2022-era Apple M2 system-on-chip.

The M2 arrived 44 years after the 8086, which is time for 22 two-year doublings, as Moore’s Revised Law of 1975 would predict.

That would take the M2’s theoretical component count from 2¹⁵ to 2¹⁵⁺²² = 2³⁷, or just under 140 billion.

The M2 takes up 150mm² – that’s what’s known as its die size, the actual dimensions of the silicon chip inside the package that’s soldered to your new Mac’s motherboard.

Amazingly, that’s less than five times the size of the 8086, which was a more modest 33mm², but the M2 die has a component count of about 20 billion, or just over 2³⁴.

That might not be exactly what the Revised Law of 1975 predicted, but it’s hard to quibble with such a modest difference over such a long time.
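
If you want to check that arithmetic for yourself (using the round numbers above: roughly 2¹⁵ transistors in 1978, 22 two-year doublings, and Apple’s quoted figure of about 20 billion for the M2):

    // moore.ts - sanity-checking the doubling arithmetic in the text.
    const doublings = (2022 - 1978) / 2;        // 22 two-year doublings
    const predicted = 2 ** (15 + doublings);    // 2^37
    console.log(predicted.toLocaleString());    // 137,438,953,472 (~140 billion)
    console.log(Math.log2(20e9).toFixed(1));    // 34.2, i.e. just over 2^34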

Usually, when a technology commentator tells you that something “is growing exponentially” – whether that’s the hacking abilities of cybercriminals, the value of a new cryptocoin, or whatever they’re interested in talking up at the time – you know to treat their remarks as mere marketing metaphor.

True exponential growth is usually short-lived simply because you quickly run out of resources to keep up the regular doubling, so any growth that’s described as “exponential” is almost always either a flash in the pan, or plain old hype.

It is therefore a mark of Gordon Moore’s insight, importance, innovation, intellect and influence that when he predicted transistor counts would grow as he did, almost 60 years ago, in what was published as a brief piece in a popular magazine…

…his words were hailed as a Law, though in truth it was as much a case of The Moore Effect – a challenge as much as a calculation; a proposal as much as a prediction; an exhortation as much as an estimate.

Gordon Earle Moore, RIP.


Picture of Gordon Moore in featured image from a memorial collection provided courtesy of Intel Corporation.

