Even though it’s already Day 4 of Year 2023, some of the important IT/sysadmin/X-Ops security stories of the holiday season are only popping up in mainstream news now.
So we thought we’d take a quick look back at some of the major issues we covered over the last couple of weeks, and (just so you can’t accuse us of sneaking out a New Year’s listicle!) reiterate the serious security lessons we can learn from them.
IS THIS THE LAST STRAW AT LASTPASS?
Lessons to learn:
Be objective. If you are ever stuck with doing a data breach notification, don’t try to rewrite history to your marketing advantage. If there are parts of the attack that you headed off at the pass, by all means say so, but take care not to sound self-congratulatory at any point.
Be complete. That doesn’t mean being long-winded. In fact, you may not have enough information to say very much at all. “Completeness” can include brief statements such as, “We don’t yet know.” Try to anticipate the questions that customers are likely to ask, and confront them proactively, rather than giving the impression you’re trying to avoid them.
Hope for the best, but prepare for the worst. If you receive a data breach notification, and there are obvious things you can do that will improve both your theoretical security and your practical peace of mind (such as changing all your passwords), try to find the time to do them. Just in case.
CRYPTOGRAPHY IS ESSENTIAL – AND THAT’S THE LAW
Lessons to learn:
Cryptography is essential for national security and for the functioning of the economy. It’s official – that text appears in the Act that Congress just passed into US law. Remember those words the next time you hear anyone, from any walk of life, arguing that we need “backdoors”, “loopholes” and other security bypasses built into encryption systems on purpose. Backdoors are a terrible idea.
Software must be built and used with cryptographic agility. We need to be able to introduce stronger encryption with ease. But we also need to be able to retire and replace insecure cryptography quickly. This may mean proactive replacement, so we aren’t encrypting secrets today that might become easily crackable in the future while they’re still supposed to be secret.
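To make the idea concrete, here’s a minimal sketch of what cryptographic agility can look like in practice (our own illustration in Python, with hypothetical names and algorithm choices): every stored digest carries a version tag, so a weakened algorithm can be retired without breaking verification of existing records.

```python
import hashlib

# Hypothetical agility sketch: every digest records which algorithm
# produced it, so old records still verify while new ones use the
# current algorithm - and the legacy entry can eventually be dropped.
ALGORITHMS = {
    "v1": "sha256",    # legacy; scheduled for retirement
    "v2": "sha3_256",  # current default
}
CURRENT_VERSION = "v2"

def make_digest(data: bytes, version: str = CURRENT_VERSION) -> str:
    h = hashlib.new(ALGORITHMS[version], data)
    return f"{version}${h.hexdigest()}"

def verify_digest(data: bytes, stored: str) -> bool:
    version, _, digest = stored.partition("$")
    h = hashlib.new(ALGORITHMS[version], data)
    return h.hexdigest() == digest
```

The same version-tagging pattern applies just as well to encryption: the point is that the algorithm is data-driven configuration, not something hard-wired all over the codebase.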
WE STOLE YOUR PRIVATE KEYS – BUT WE DIDN’T MEAN IT, HONEST!
Lessons to learn:
You have to own your entire software supply chain. PyTorch was attacked via a community repository that was poisoned with malware that silently overrode the uninfected code built into PyTorch itself. (The PyTorch team quickly worked with the community to override this override, despite the holiday season.)
Cybercriminals can steal data in unexpected ways. Make sure your threat monitoring tools keep an eye even on unlikely routes out of your organisation. These crooks used DNS lookups with “server names” that were actually exfiltrated data.
Don’t bother making cybercrime excuses. Apparently, the attackers in this case are now claiming that they stole personal data, including private keys, for “research reasons” and say they’ve deleted the stolen data now. Firstly, there’s no reason to believe them. Secondly, they sent out the data so that anyone on your network path who saw or saved a copy could unscramble it anyway.
WHEN SPEED TRUMPS SECURITY
Lessons to learn:
Threat prevention isn’t just about finding malware. XDR (extended detection and response) is also about knowing what you’ve got, and where it’s in use, so you can assess the risk of security vulnerabilities quickly and accurately. As the old truism says, “If you can’t measure it, you can’t manage it.”
Performance and cybersecurity are often in conflict. This bug only applies to Linux users whose determination to speed up Windows networking lured them to implement it right inside the kernel, unavoidably adding additional risk. When you tweak for speed, make sure you really need the improvement before changing anything, and make sure you really are enjoying a genuine benefit afterwards. If in doubt, leave it out.
CYBERCRIME PREVENTION AND INCIDENT RESPONSE
For a fantastic overview both of cybercrime prevention and incident response, listen to our latest holiday season podcasts, where our experts liberally share both their knowledge and their advice:
Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.
It looks like the sort of meeting room you might find in startups all over the world: diffuse lighting from windows down one wall, alongside a giant poster cityscape of New York’s Brooklyn Bridge, with the Manhattan skyline towering behind it.
The difference in this case is that the computer workstations around the room are there for a different sort of “entrepreneurial” venture, and the room is empty not because no one showed up for work, but because the “employees” were in the process of being arrested.
This picture comes from the Ukraine Cyber Police, who raided a fraudulent call centre just before New Year, where they say the three founders of the scam, plus 37 “staff”, were busted for allegedly operating a large-scale banking fraud.
Playbook + gift of gab = scam
You’re probably familiar with the scamming script they’re said to have used, and you probably know friends or family who have been pestered by scammers of this sort.
Some of you may even have acquaintances who were ripped off this way, because these scammers are well versed in gaining the trust of their victims.
Typically, the scammers try to convince you that your bank account is under attack from fraudsters (technically, that part is true – the caller is the attacker), and patiently offer to help you “secure” your account and “recover” lost or at-risk funds.
The scammers aim to turn people’s general awareness of banking scams into an excuse, a reason, a playbook, if you like, for carrying out a scam of their own.
Simply put, they call up pretending to be an official from your own bank, using a variety of tricks to make you accept their fictitious credentials as bank staff, and then “advise” you to take a series of disastrous steps.
IMPORTANT. Remember that the number that pops up on your phone when someone calls you cannot be relied on. Scammers can inject fake numbers into the calling process to make it look as though they’re calling from almost anywhere: from your bank’s HQ; from an official helpline number; from the tax office; even from your local police station. Remember also that if you call someone back based on a number they gave you, even if the number is a tollfree number inside your country, you could end up invisibly redirected almost anywhere in the world. Scammers can even buy ready-to-go “spoofed” telephone services from other cybercriminals, so they don’t need any internet telephony knowhow themselves.
The scammers’ first job is to convince you that a hacker has already gained access to your account.
The crooks typically use a mix of threatening, scary and urgent language, combined with the sort of attentiveness that you probably wish more call centre staff would show.
Even if you decide to call them back (don’t do it – you’re only reconnecting to the person who just called you, which proves nothing!), you’ll almost certainly find the scammers more prompt and more helpful than you’ve experienced in a long time when calling a real support line…
…so we’re not surprised that this sort of caller makes some people feel comfortable enough to keep on listening, even if they didn’t believe a word at first.
If in doubt, don’t give it out
As you can imagine, once the crooks know you’re starting to believe their cover story, they’ll start to milk you for personal information, often by pretending that they can see it for themselves on the “banking screen” in front of them, yet somehow always coaxing you to say it out loud first.
At that point, of course, they do know the information you just let slip, and they’ll pretend to “confirm” it or to “double-check” it to keep up the pretence.
There are then many ways that the crooks can defraud you or drain your account.
Sometimes, they may simply convince you to log in to a fake “security” site as they coach you through the process, including getting you to go through any 2FA (two-factor authentication) process.
The Ukrainian call centre that just got busted seems to have worked that way, with victims being “helpfully” guided through the process of “cancelling” transactions that, in fact, never happened in the first place [automated translation]:
[These scammers] called people in Kazakhstan, pretending to be employees of the security service of banks. These people were notified of suspicious transactions and told that alleged outsiders had gained access to their accounts. Under the guise of “cancelling” transactions, victims were persuaded to provide financial data.
After receiving such information, the perpetrators transferred the victims’ money to account under their own control. They also issued quick loans and appropriated the loan amount.
For the conspiracy, the participants used bank accounts located in offshore zones, and cryptocurrency wallets.
In this way, the criminals defrauded [about 18,000 people].
High and dry
In other scams – this approach, unfortunately, is widely reported in the UK – the crooks present you with a brand-new account number, based at the same bank, which they announce is your “replacement account”.
The idea is that you’re being provided with new account details in the same way that if you were to ask for a new credit card due to fraud, it too would have a brand new number, expiry date and so on.
The crooks then convince you to transfer the funds from your “old, hacked” account to this new one, leading you to believe that the account was created by the bank minutes ago, especially for the purpose of “protecting” you from an active attack.
Of course, this “new account” is just a regular account that was opened recently by accomplices of the crooks, perhaps using fraudulent documentation to pass the bank’s know-your-customer (KYC) process.
So the account is already directly under the control of the scammers, and the money will typically be whisked out of that “new” account even before you finish the call.
In cases like this, victims sometimes tragically find themselves left high and dry by their bank, which may claim that because they apparently willingly transferred the funds of their own accord, and properly identified themselves to the online banking system (for example by using 2FA), the funds have technically not been “stolen”, and the bank therefore has no liability.
What to do?
Never believe anyone who contacts you out of the blue and claims to be “helping” you with a fraud investigation. That person isn’t stopping a fraud, they are starting one.
Never use contact details given to you by the other person when cybersecurity is at stake. This cannot possibly prove anything, given that the details probably came from a scammer in the first place. All you get is a false sense of “security”.
Never rely on the Caller ID number that shows up on your phone. The number that appears can easily be faked. If the caller tells you to “check the number if you don’t believe them”, you can be sure they’re a scammer.
Never let yourself be talked into handing over personal information, especially not to “prove” your identity. After all, it’s the other person who should be proving themselves to you. Visit your bank in person if you possibly can; if you need to call or interact online, look for contact details printed on something you know you received directly from the bank, such as the back of your payment card or a recent statement.
Never transfer funds to another account on someone else’s say so. Your bank will never call you to ask you to do this, so any call of this sort must be a scam. Worse still, you could find yourself liable for the transfer if you approve it yourself, even if you were tricked into doing so.
Look out for friends and family who may be vulnerable. These scammers don’t give up easily, and they can be consummate actors when playing the role of a helpful official. Make sure your friends and family know to hang up right away, and to contact you personally for advice, so they never give the scammers a chance to “vouch” for themselves.
PyTorch is one of the most popular and widely-used machine learning toolkits out there.
(We’re not going to be drawn on where it sits on the artificial intelligence leaderboard – as with many widely-used open source tools in a competitive field, the answer seems to depend on whom you ask, and which toolkit they happen to use themselves.)
Originally developed and released as an open-source project by Facebook, now Meta, the software was handed over to the Linux Foundation in late 2022, which now runs it under the aegis of the PyTorch Foundation.
Unfortunately, the project was compromised by means of a supply-chain attack during the holiday season at the end of 2022, between Christmas Day [2022-12-25] and the day before New Year’s Eve [2022-12-30].
The attackers malevolently created a Python package called torchtriton on PyPI, the popular Python Package Index repository.
The name torchtriton was chosen so it would match the name of a package in the PyTorch system itself, leading to a dangerous situation explained by the PyTorch team (our emphasis) as follows:
[A] malicious dependency package (torchtriton) […] was uploaded to the Python Package Index (PyPI) code repository with the same package name as the one we ship on the PyTorch nightly package index. Since the PyPI index takes precedence, this malicious package was being installed instead of the version from our official repository. This design enables somebody to register a package by the same name as one that exists in a third party index, and pip will install their version by default.
The program pip, by the way, used to be known as pyinstall; its current name is apparently a recursive joke that’s short for pip installs packages. Despite its original name, it’s not for installing Python itself – it’s the standard way for Python users to manage software libraries and applications that are written in Python, such as PyTorch and many other popular tools.
Pwned by a supply-chain trick
Anyone unfortunate enough to install the pwned version of PyTorch during the danger period almost certainly ended up with data-stealing malware implanted on their computer.
According to PyTorch’s own short but useful analysis of the malware, the attackers stole some, most or all of the following significant data from infected systems:
System information, including hostname, username, known users on the system, and the content of all system environment variables. Environment variables are a way of providing memory-only input data that programs can access when they start up, often including data that’s not supposed to be saved to disk, such as cryptographic keys and authentication tokens giving access to cloud-based services. The list of known users is extracted from /etc/passwd, which, fortunately, doesn’t actually contain any passwords or password hashes.
Your local Git configuration. This is stolen from $HOME/.gitconfig, and typically contains useful information about the personal setup of anyone using the popular Git source code management system.
Your SSH keys. These are stolen from the directory $HOME/.ssh. SSH keys typically include the private keys used for connecting securely via SSH (secure shell) or using SCP (secure copy) to other servers on your own networks or in the cloud. Lots of developers keep at least some of their private keys unencrypted, so that scripts and software tools they use can automatically connect to remote systems without pausing to ask for a password or a hardware security key every time.
The first 1000 other files in your home directory smaller than 100 kilobytes in size. The PyTorch malware description doesn’t say how the “first 1000 file list” is computed. The content and ordering of file listings depends on whether the list is sorted alphabetically; whether subdirectories are visited before, during or after processing the files in any directory; whether hidden files are included; and whether any randomness is used in the code that walks its way through the directories. You should probably assume that any files below the size threshold could be the ones that end up stolen.
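If you want to gauge what a sweep like that could scoop up from your own computer, here’s a rough sketch (our own code for illustration, not the malware’s, with the function name and limits chosen to match the description above) that walks a directory tree and lists the first 1000 files under the size threshold:

```python
import os

# Defensive sketch: enumerate what a "first 1000 files under 100KB"
# sweep might have grabbed. The real malware's ordering rules are
# undocumented, so treat any file matching the size limit as at risk.
def files_at_risk(root: str, max_files: int = 1000, max_size: int = 100_000):
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) < max_size:
                    found.append(path)
            except OSError:
                continue  # unreadable or vanished file; skip it
            if len(found) >= max_files:
                return found
    return found
```

Run it over your home directory and you may be surprised how many small-but-sensitive files (configs, tokens, keys) fall under a 100-kilobyte limit.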
At this point, we’ll mention the good news: only those who fetched the so-called “nightly”, or experimental, version of the software were at risk. (The name “nightly” comes from the fact that it’s the very latest build, typically created automatically at the end of each working day.)
Most PyTorch users will probably stick to the so-called “stable” version, which was not affected by this attack.
Also, from PyTorch’s report, it seems that the Triton malware executable file specifically targeted 64-bit Linux environments.
We’re therefore assuming that this malicious program would only run on Windows computers if the Windows Subsystem for Linux (WSL) were installed.
Don’t forget, though, that the people most likely to install regular “nightlies” include developers of PyTorch itself or of applications that use it – perhaps including your own in-house developers, who might have private-key-based access to corporate build, test and production servers.
DNS data stealing
Intriguingly, the Triton malware doesn’t exfiltrate its data (the militaristic jargon term that the cybersecurity industry likes to use instead of steal or copy illegally) using HTTP, HTTPS, SSH, or any other high-level protocol.
Instead, it compresses, scrambles and text-encodes the data it wants to steal into a sequence of what look like “server names” that belong to a domain name controlled by the criminals.
By making a sequence of DNS lookups containing carefully constructed data that could be series of legal server names but isn’t, the crooks can sneak out stolen data without relying on traditional protocols usually used for uploading files and other data.
This is the same sort of trick that was used by Log4Shell hackers at the end of 2021, who leaked encryption keys by doing DNS lookups for “servers” with “names” that just happened to be the value of your secret AWS access key, plundered from an in-memory environment variable.
So what looked like an innocent, if pointless, DNS lookup for a “server” such as S3CR3TPA55W0RD.DODGY.EXAMPLE would quietly leak your access key under the guise of a simple lookup that directed to the official DNS server listed for the DODGY.EXAMPLE domain.
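Here’s a simplified illustration of the trick (our own sketch, not the Triton or Log4Shell code, using the placeholder domain DODGY.EXAMPLE from the example above): arbitrary bytes are encoded into hostname-safe text, then chopped into label-sized chunks so each chunk passes as an ordinary “server name”.

```python
import base64

# Illustration only: smuggling bytes out of a network as DNS lookups.
# DNS labels are limited to 63 characters, so the data is chunked.
EXFIL_DOMAIN = "dodgy.example"  # placeholder domain, as in the article

def to_dns_names(secret: bytes, label_size: int = 63):
    # Base32 output uses only letters and digits - all legal in hostnames
    encoded = base64.b32encode(secret).decode().rstrip("=").lower()
    chunks = [encoded[i:i + label_size]
              for i in range(0, len(encoded), label_size)]
    return [f"{chunk}.{EXFIL_DOMAIN}" for chunk in chunks]

# Each resulting "server name" looks like an ordinary lookup, but the
# labels carry the data; the attacker's DNS server simply logs them.
```

The attacker doesn’t even need the lookups to succeed: the query itself, arriving at the DNS server for DODGY.EXAMPLE, is the data theft.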
LIVE LOG4SHELL DEMO EXPLAINING DATA EXFILTRATION VIA DNS
If you can’t read the text clearly here, try using Full Screen mode, or watch directly on YouTube. Click on the cog in the video player to speed up playback or to turn on subtitles.
If the crooks own the domain DODGY.EXAMPLE, they get to tell the world which DNS server to connect to when doing those lookups.
More importantly, even networks that strictly filter TCP-based network connections using HTTP, SSH and other high-level data sharing protocols…
…sometimes don’t filter UDP-based network connections used for DNS lookups at all.
The only downside for the crooks is that DNS requests have a rather limited size.
Individual labels in a DNS name are limited to 63 alphanumeric characters each, and many networks limit individual DNS packets, including all enclosed requests, headers and metadata, to just 512 bytes each.
We’re guessing that’s why the malware in this case started out by going after your private keys, then restricted itself to at most 1000 files, each smaller than 100,000 bytes.
That way, the crooks get to thieve plenty of private data, notably including server access keys, without generating an unmanageably large number of DNS lookups.
An unusually large number of DNS lookups might get noticed for routine operational reasons, even in the absence of any scrutiny applied specifically for cybersecurity purposes.
We wrote above that the malware’s stolen data is scrambled rather than encrypted. A glance at the triton machine code shows why: although the malware compresses the data it wants to send using the well-known deflate() algorithm, as used in gzip and ZIP, and then encrypts it using AES-256-GCM, the code uses a hard-wired password and initialisation vector, so the same plaintext data comes out as the same ciphertext every time.

The malware then converts this scrambled data into pure text characters using Base62 encoding. Base62 is like Base64 or URL-safe Base64 encoding, but uses only A-Z, a-z and 0-9, with no punctuation characters appearing in the encoded output. This sidesteps the problem that only one punctuation symbol, the dash or hyphen, is allowed in DNS names.

This compressed-obfuscated-and-textified data is sent out as a sequence of DNS lookups, with the hard-coded DNS suffix .h4ck.cfd added to the encoded data that’s “looked up”, where h4ck.cfd is a domain owned by the attackers. (Inside the malware, this domain name is obfuscated by XORing each byte with 0x4E, so it shows up as the disguised string &z-%`-(* in the compiled executable.)

This means that DNS lookups sent out for that domain are received by the criminals at a DNS server that they get to choose, thus allowing them to recover and unscramble the stolen data.
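A minimal Base62 codec of the sort described above might look like this (our own sketch for illustration, not the malware’s actual implementation, which would also need to track data lengths and any leading zero bytes):

```python
# Minimal Base62 sketch: like Base64, but restricted to A-Z, a-z and
# 0-9, so the output contains no punctuation at all and fits happily
# inside DNS labels.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def base62_encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

def base62_decode(text: str, length: int) -> bytes:
    # the caller must supply the original length, because this simple
    # big-integer scheme can't distinguish leading zero bytes
    n = 0
    for ch in text:
        n = n * 62 + ALPHABET.index(ch)
    return n.to_bytes(length, "big")
```

Note how the decoder needs to be told the original byte length: that’s exactly the sort of bookkeeping a real exfiltration pipeline has to smuggle along with the data itself.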
What to do?
PyTorch has already taken action to shut down this attack, so if you haven’t been hit yet, you almost certainly won’t get hit now, because the malicious torchtriton package on PyPI has been replaced with a deliberately “dud”, empty package of the same name.
This means that any person, or any software, that tried to install torchtriton from PyPI after 2022-12-30T08:38:06Z, whether by accident or by design, would not receive the malware.
PyTorch has published a handy list of IoCs, or indicators of compromise, that you can search for across your network.
Remember, as we mentioned above, that even if almost all of your users stick to the “stable” version, which was not affected by this attack, you may have developers or enthusiasts who experiment with “nightlies”, even if they use the stable release as well.
According to PyTorch:
The malware is installed with the filename triton. By default, you would expect to find it in the subdirectory triton/runtime in your Python site packages directory. Given that filenames alone are weak malware indicators, however, treat the presence of this file as evidence of danger; don’t treat its absence as an all-clear.
The malware in this particular attack has the SHA256 sum 2385b29489cd9e35f92c072780f903ae2e517ed422eae67246ae50a5cc738a0e. Once again, the malware could easily be recompiled to produce a different checksum, so the absence of this file is not a sign of definite health, but you can treat its presence as a sign of infection.
DNS lookups used for stealing data ended with the domain name H4CK.CFD. If you have network logs that record DNS lookups by name, you can search for this text string as evidence that secret data leaked out.
The malicious DNS lookups apparently went to, and replies, if any, came from, a DNS server called WHEEZY.IO. At the moment, we can’t find any IP numbers associated with that service, and PyTorch hasn’t provided any IP data that would tie DNS traffic to this malware, so we’re not sure how much use this information is for threat hunting at the moment [2023-01-01T21:05:00Z].
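If you want to automate the first two IoC checks above, a rough sketch (ours, not PyTorch’s; the log format is an assumption, so adapt the parsing to your own resolver’s logs) could look like this:

```python
import hashlib

# Two simple IoC sweeps: hash a suspect file against the published
# SHA256, and scan DNS query logs (assumed one queried name per line)
# for the attackers' domain.
IOC_SHA256 = "2385b29489cd9e35f92c072780f903ae2e517ed422eae67246ae50a5cc738a0e"
BAD_SUFFIX = ".h4ck.cfd"

def matches_ioc(path: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest() == IOC_SHA256

def suspicious_lookups(lines):
    hits = []
    for line in lines:
        name = line.strip().lower().rstrip(".")
        if name.endswith(BAD_SUFFIX):
            hits.append(name)
    return hits
```

Remember the caveat above: a match is evidence of trouble, but no match is not an all-clear, because recompiled malware would hash differently.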
Fortunately, we’re guessing that the majority of PyTorch users won’t have been affected by this, either because they don’t use nightly builds, or weren’t working over the vacation period, or both.
But if you are a PyTorch enthusiast who does tinker with nightly builds, and if you’ve been working over the holidays, then even if you can’t find any clear evidence that you were compromised…
…you might nevertheless want to consider generating new SSH keypairs as a precaution, and updating the public keys that you’ve uploaded to the various servers that you access via SSH.
If you suspect you were compromised, of course, then don’t put off those SSH key updates – if you haven’t done them already, do them right now!
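As a starting point for that cleanup, here’s a small inventory sketch (our own, for illustration) that lists the private key files in a .ssh directory, so you know exactly which keypairs need regenerating and which public keys need replacing on your remote servers:

```python
import os

# List every file in a .ssh directory that looks like a private key.
# Both legacy PEM keys ("BEGIN RSA PRIVATE KEY") and modern OpenSSH
# keys ("BEGIN OPENSSH PRIVATE KEY") announce themselves on line one.
def private_key_files(ssh_dir: str):
    found = []
    for name in sorted(os.listdir(ssh_dir)):
        path = os.path.join(ssh_dir, name)
        if not os.path.isfile(path):
            continue
        try:
            with open(path, "r", errors="ignore") as f:
                header = f.readline()
        except OSError:
            continue
        if "PRIVATE KEY" in header:
            found.append(path)
    return found
```

Point it at your own `~/.ssh` directory; every file it reports is a key the attackers could have copied, and therefore a key worth rotating.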
It’s the last regular working weekday of 2022 (in the UK and the US, at least), in the unsurprisingly relaxed and vacationistic gap between Christmas and New Year…
…so you were probably expecting us to come up either with a Coolest Stories Of The Year In Review listicle, or with a What You Simply Must Know About Next Year (Based On The Coolest Stories Of The Year) thinly-disguised-as-not-a-listicle listicle.
After all, even technical writers like to glide into holiday mode at this time of year (or so we have been told), and nothing is quite as relaxed and vacationistic as putting old wine into new skins, mixing a few metaphors, and gilding a couple of lilies.
So we decided to do something almost, but not quite, entirely unlike that.
Those who cannot remember history…
We are, indeed, going to look forward by gazing back, but – as you might have guessed from the headline – we’re going to go further back than New Year’s Day 2022.
In truth, that mention of 33 1/3 is neither strictly accurate nor specifically a tribute to the late Lieutenant Frank Drebin, because that headline number should, by rights, have been somewhere between 34.16 and 34.19, depending on how you fractionalise years.
We’d better explain.
Our historical reference here goes back to 1988-11-02, which, as anyone who has studied the early history of computer viruses and other malware will know, was the day that the dramatic Internet Worm kicked off.
This infamous computer virus was written by one Robert Morris, then a student at Cornell, whose father, who also just happened to be called Robert Morris, was a cryptographer at the US National Security Agency (NSA).
You can only imagine the watercooler gossip at the NSA on the day after the worm broke out.
In case you’re wondering what the legal system thought of malware back then, and whether releasing computer viruses into the wild has ever been considered helpful, ethical, useful, thoughtful or lawful… Morris Jr. ended up on probation for three years, doing 400 hours of community service, and paying a fine of just over $10,000 – apparently the first person in the US convicted under the Computer Fraud and Abuse Act.
The Morris Worm is therefore within a year of 33 1/3 years old…
…and so, because 34.1836 common years is close enough to 33 1/3, and because we rather like the number 33 1/3, apparently a marketing-friendly choice of rotational speed for long-playing gramophone records nearly a century ago, that is the number we chose to sneak into the headline.
Not 33, not 34, and not the acutely factorisable and computer-friendly 32, but 33 1/3 = 100/3.
That’s a delightfully simple and precise rational fraction that, annoyingly, has no exact representation either in decimal or in binary. (1/3 = 0.333…₁₀ = 0.010101…₂)
Predicting the future
But we’re not really here to learn about the frustrations of floating point arithmetic, or that there are unexceptionable, human-friendly numbers that your computer’s CPUs can’t directly represent.
We said we’d make some cybersecurity predictions, so here goes.
We’re going to predict that in 2023 we will, collectively, continue to suffer from the same sort of cybersecurity trouble that was shouted from the rooftops more than 100001.010101…₂ years ago by that alarming, fast-spreading Morris Worm.
Morris’s worm had three primary self-replication mechanisms that relied on three common coding and system administration blunders.
You might not be surprised to find out that they can be briefly summarised as follows:
Memory mismanagement. Morris exploited a buffer overflow vulnerability in a popular-at-the-time system network service, and achieved RCE (remote code execution).
Poor password choice. Morris used a so-called dictionary attack to guess likely login passwords. He didn’t need to guess everyone’s password – just cracking someone’s would do.
Unpatched systems. Morris probed for email servers that had been set up insecurely, but never subsequently updated to remove the dangerous remote code execution hole he abused.
Sound familiar?
What we can infer from this is that we don’t need a slew of new cybersecurity predictions for 2023 in order to have a really good idea of where to start.
In other words: we mustn’t lose sight of the basics in a scramble to sort out only specific and shiny new security issues.
Sadly, those shiny new issues are important, too, but we’re also still stuck with the cybersecurity sins of the past, and we probably will be for at least another 16 2/3 years, or even longer.
What to do?
The good news is that we’re getting better and better at dealing with many of those old-school problems.
For example, we’re learning to use safer programming practices and safer programming languages, as well as to cocoon our running code in better behaviour-blocking sandboxes to make buffer overflows harder to exploit.
We’re learning to use password managers (though they have brought intriguing issues of their own) and alternative identity verification technologies, as well as or instead of relying on simple words that we hope no one will predict or guess.
And we’re not just getting patches faster from vendors (responsible ones, at least – the joke that the S in IoT stands for Security still seems to have plenty of life in it yet), but also showing ourselves willing to apply patches and updates more quickly.
We’re also embracing TLAs such as XDR and MDR (extended and managed detection and response respectively) more vigorously, meaning that we’re accepting that dealing with cyberattacks isn’t just about finding malware and removing it as needed.
These days, we’re much more inclined than we were a few years ago to invest time not only in looking out for known bad stuff that needs fixing, but also in ensuring that the good stuff that’s supposed to be there actually is, and that it’s still doing something useful.
We’re also taking more time to seek out potentially bad stuff proactively, instead of waiting until the proverbial alerts pop automatically into our cybersecurity dashboards.
For a fantastic overview both of cybercrime prevention and incident response, why not listen to our latest holiday season podcasts, where our experts liberally share both their knowledge and their advice:
Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.
Thanks for your support of the Naked Security community in 2022, and please accept our best wishes for a malware-free 2023!
These days, almost every decent app, along with some that are half-decent (as well as a few that aren’t very good at all) will offer you tabbed whateveritis.
Even command windows, which used to be just what they said (windows in which one – and only one – command shell was running), went “tabbed” somewhere in the 1990s, and have been ever since.
If you want two command windows these days, you can either have two on-screen windows, as the name suggests…
… or two tabs inside a single window, both of which are often still referred to as command windows, even though they’re not.
They’re command tabs.
Check with your browser
You don’t call each tab in your browser “a browser window” – not least because that’s not what the browser itself calls them.
Edge, for example, has unashamedly distinct menu items entitled New tab Ctrl+T and New window Ctrl+N, which respectively open (forgive us for stating the obvious here) new tabs in the current window, and new windows in (don’t take this the wrong way) new windows.
Some Unix window managers take the tabbing metaphor even further, allowing you to take any two windows, even if they belong to completely different apps, and turn them into a pair of tabs inside a single window. (Or metawindow, if you prefer.)
But there are some old-school programs that have resolutely resisted this trend, notably including the venerable, built-in, no-frills-please, party-like-it’s-1979 Windows text editor NOTEPAD.
Strictly speaking, it’s notepad.exe these days, and it’s been quietly announcing itself as Notepad in the title bar for years now, but it still feels wrong to write about it without putting the whole word in CAPITAL LETTERS, just as you used to do for COMMAND.COM and CONFIG.SYS.
You can open two NOTEPADs at the same time, and the program (we still can’t bring ourselves to call it an application, let alone an app, even though it has a cog icon these days, and will itself tell you About this app) even has a menu item for opening a second window.
There’s New Ctrl+N, which literally just opens a new file in the current window, and New window Ctrl+Shift+N.
Opening a new window does what it says, but – by default at least – carefully places the new window smack on top of the old one, so you can pretend you still have only one window if that makes you feel less anxious.
Let’s be clear, change is all very well, and we applaud it in most cases – it’s hard to argue that 640KB wasn’t better than 64KB, that 16 registers weren’t better than 8, and that being able to fit 64 bits into each register wasn’t better than scraping along with 32, or 16, or even 8.
Uncomplicated, unadorned, and unmodern
But NOTEPAD, surely, simply isn’t meant to change?
It’s supposed to be uncomplicated, unadorned, unmodern, and – let’s be honest – not actually terribly good.
Because falling back on NOTEPAD is a sort of badge of honour, a sign of wisdom and experience, a thumb-of-the-nose to planet-sized, memory-gobbling editors such as Emacs and… well, anything at all that’s based on Electron.
When you drive an old car, an actually-old car, you expect three forward gears, no more and no less; you expect the self-starter (if there is one) to be a foot-switch that connects the battery directly to the starter motor via a terrifying DC switch, with no relays or solenoids; you expect to have to operate the windscreen wipers (if there are any) by hand; and you expect to prime the carburettor (Google it – it’s a surprisingly powerful sort of analog computing device for mixing fuel and air) by hand every morning.
You can therefore imagine the holiday season consternation the other day when Windows Central, amongst other websites and social media users, spotted and dutifully reported on a Microsoft tweet with a screenshot like this:
Tabbed editing?
In NOTEPAD?!
The horror!
Have a happy New Year
The good news?
The alarming image apparently vanished pretty quickly, and hasn’t resurfaced since.
Let’s hope that wiser counsel has prevailed, and that the code changes introducing tabbed editing have been safely backed out in time for 2023.
And, to finish on a serious note, is there anything we can learn about cybersecurity here?
Yes!
This incident certainly reminds us that even top-and-centre RED SECURITY WARNINGS with HAZARD TRIANGLES and EXCLAMATION POINTS – like all those alerts advising us NOT TO ENABLE MACROS, or to AVOID ATTACHMENTS FROM THIS SENDER, or that THIS WEBSITE CAN’T BE TRUSTED – are often honoured in the breach, not in the observance.
Remember, in what’s left of the holiday season, and in the New Year that’s round the corner:
Stop. Think. And only then Connect.
Or, if you find rhymes easier to recall:
If in doubt/Don’t give it out!
Especially if there’s a RED SECURITY WARNING from your boss right there, telling you DON’T TAKE SCREENSHOTS!