MOVEit zero-day exploit used by data breach gangs: The how, the why, and what to do…

Last week, Progress Software Corporation, which sells software and services for user interface development, devops, file management and more, alerted customers of its MOVEit Transfer and related MOVEit Cloud products about a critical vulnerability dubbed CVE-2023-34362.

As the name suggests, MOVEit Transfer is a system that makes it easy to store and share files throughout a team, a department, a company, or even a supply chain.

In its own words, “MOVEit provides secure collaboration and automated file transfers of sensitive data and advanced workflow automation capabilities without the need for scripting.”

Unfortunately, MOVEit’s web-based front end, which makes it easy to share and manage files using just a web browser (a process generally considered less prone to misdirected or “lost” files than sharing them via email), turned out to have a SQL injection vulnerability.

SQL injections explained

Web-based SQL injection bugs arise when an HTTP request that’s submitted to a web server is converted insecurely into a query command that’s then issued by the server itself to do a database lookup in order to work out what HTTP reply to construct.

For example, a database search that’s triggered from a web page might end up as a URL requested by your browser that looks like this:

https://search.example.com/?type=file&name=duck

The query text duck could then be extracted from the name parameter in the URL, converted into database query syntax, and stitched into a command to submit to the database server.

If the backend data is stored in a SQL database, the web server might convert that URL into a SQL command like the one shown below.

The % characters added to the text duck mean that the search term can appear anywhere in the retrieved filename, and the single quote characters at each end are added as markers to denote a SQL text string:

SELECT filename FROM filesdb WHERE name LIKE '%duck%'

The data that comes back from the query could then be formatted nicely, converted to HTML, and sent back as an HTTP reply to your browser, perhaps giving you a clickable list of matching files for you to download.
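To see how the hole creeps in, imagine (this is our own sketch, not MOVEit’s actual code) a server-side helper that builds the query by pasting the user’s text straight into the command string:

#include <stdio.h>

/* DANGEROUS sketch: the search term is spliced directly into the SQL text,
   so quote characters in 'term' can change the meaning of the command itself. */
void build_query(char *out, size_t outsize, const char *term)
{
   snprintf(out,outsize,
      "SELECT filename FROM filesdb WHERE name LIKE '%%%s%%'",term);
}

Feed in the text duck and you get the innocent query above; feed in something sneakier and, as you’re about to see, you get rather more than you bargained for.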

Of course, the web server needs to be really careful with the filenames that are submitted as a search term, in case a malicious user were to create and request a URL like this:

https://search.example.com/?type=file&name=duck';DROP table filesdb;--

If that search term were blindly converted into a query string, you might be able to trick the web server into sending the SQL server a command like this:

SELECT filename FROM filesdb WHERE name LIKE '%duck';DROP TABLE filesdb;--%'

Because a semicolon (;) acts as a statement separator in SQL, this single-line command is actually the same as sending three consecutive commands:

SELECT filename FROM filesdb WHERE name LIKE '%duck' -- matches names ending duck
DROP TABLE filesdb -- deletes whole database
--%' -- comment, does nothing

Sneakily, because everything after -- is discarded by SQL as a programmer’s comment, these three lines are the same as:

SELECT filename FROM filesdb WHERE name LIKE '%duck'
DROP TABLE filesdb

You’ll get back a list of all filenames in the database that end with the string duck (the special SQL character % at the start of a search term means “match anything up to this point”)…

…but you’ll be the last person to get anything useful out of the filesdb database, because your rogue search term will follow up the search with the SQL command to delete the whole database.

Little Bobby Tables

If you’ve ever heard sysadmins or coders making jokes about Little Bobby Tables, that’s because this sort of SQL injection was immortalised in an XKCD cartoon back in 2007:

As the cartoon concludes in the last frame, you really need to sanitise your database inputs, meaning that you need to take great care not to allow the person submitting the search term to control how the search command gets interpreted by the backend servers involved.

You can see why this sort of trick is known as an injection attack: in the examples above, the malicious search terms cause an additional SQL command to be injected into the handling of the request.

In fact, both these examples involve two injected commands, following the sneakily-inserted “close quote” character to finish off the search string early. The first extra command is the destructive DROP TABLE instruction. The second is a “comment command” that causes the rest of the line to be ignored, thus cunningly eating up the trailing %' characters generated by the server’s command generator, which would otherwise have caused a syntax error and prevented the injected DROP TABLE command from working.

Good news and bad news

The good news in this case is that Progress patched all its supported MOVEit versions, along with its cloud-based service, once it became aware of the vulnerability.

So, if you use the cloud version, you’re now automatically up-to-date, and if you are running MOVEit on your own network, we hope you’ve patched by now.

The bad news is that this vulnerability was a zero-day, meaning that Progress found out about it because the Bad Guys had already been exploiting it, rather than before they figured out how to do so.

In other words, by the time you patched your own servers (or Progress patched its cloud service), crooks might already have injected rogue commands into your MOVEit SQL backend databases, with a range of possible outcomes:

  • Deletion of existing data. As shown above, the classic example of a SQL injection attack is large-scale data destruction.
  • Exfiltration of existing data. Instead of dropping SQL tables, attackers could inject queries of their own, thus learning not only the structure of your internal databases, but also extracting and stealing their juiciest parts.
  • Modification of existing data. More subtle attackers might decide to corrupt or disrupt your data instead of (or as well as) stealing it.
  • Implantation of new files, including malware. Attackers could inject SQL commands that in turn launch external system commands, thus achieving arbitrary remote code execution inside your network.

One group of attackers, alleged by Microsoft to be (or to be connected with) the infamous Clop ransomware gang, have apparently been using this vulnerability to implant what are known as webshells on affected servers.

If you’re not familiar with webshells, read our plain-English explainer that we published at the time of the troublesome HAFNIUM attacks back in March 2021:

Webshell danger

Simply put, webshells provide a way for attackers who can add new files to your web server to come back later, break in at their leisure, and parlay that write-only access into complete remote control.

Webshells work because many web servers treat certain files (usually determined by the directory they’re in, or by the extension that they have) as executable scripts used to generate the page to send back, rather than as the actual content to use in the reply.

For example, Microsoft’s IIS (Internet Information Services) is usually configured so that if a web browser requests a file called, say, hello.html, then the raw, unmodified content of that file will be read in and sent back to the browser.

So, if there is any malware in that hello.html file, then it will affect the person browsing to the server, not the server itself.

But if the file is called, say, hello.aspx (where ASP is short for the self-descriptive phrase Active Server Pages), then that file is treated as a script program for the server to execute.

Running that file as a program, instead of simply reading it in as data, will generate the output to be sent in reply.

In other words, if there is any malware in that hello.aspx file, then it will directly affect the server itself, not the person browsing to it.

In short, dropping a webshell file as the side-effect of a command injection attack means that the attackers can come back later, and by visiting the URL corresponding to that webshell’s filename…

…they can run their malware right inside your network, using nothing more suspicious than an unassuming HTTP request made by an everyday web browser.

Indeed, some webshells consist of just one line of malicious script, for example, a single command that says “get text from a specific HTTP header in the request and run it as a system command”.

This gives general-purpose command-and-control access to any attacker who knows the right URL to visit, and the right HTTP header to use for delivering the rogue command.

What to do?

  • If you’re a MOVEit user, make sure all instances of the software on your network are patched.
  • If you can’t patch right now, turn off the web-based (HTTP and HTTPS) interfaces to your MOVEit servers until you can. Apparently this vulnerability is exposed only via MOVEit’s web interface, not via other access paths such as SFTP.
  • Search your logs for newly-added web server files, newly created user accounts, and unexpectedly large data downloads. Progress has a list of places to search, along with filenames to search for.
  • If you’re a programmer, sanitise thine inputs.
  • If you’re a SQL programmer, use parameterised queries, rather than generating query commands containing characters controlled by the person sending the request. (There’s a short sketch of what this looks like just below.)
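For illustration, here’s a minimal sketch of a parameterised query using SQLite’s C API. It reuses the filesdb table and name/filename columns from the made-up example above; the function name and the choice of SQLite are our own assumptions for the example, nothing to do with MOVEit’s actual code. The point is that the user’s text is bound to a ? placeholder, so it is always treated as data, never as SQL syntax:

#include <stdio.h>
#include <sqlite3.h>

/* Hypothetical search helper: the caller's text is bound to the ?
   placeholder, so a term such as  duck';DROP TABLE filesdb;--  stays
   inert data instead of becoming part of the SQL command itself. */
int search_files(sqlite3 *db, const char *term)
{
   sqlite3_stmt *stmt = NULL;
   const char *sql =
      "SELECT filename FROM filesdb WHERE name LIKE '%' || ? || '%'";
   if (sqlite3_prepare_v2(db,sql,-1,&stmt,NULL) != SQLITE_OK) { return -1; }
   sqlite3_bind_text(stmt,1,term,-1,SQLITE_TRANSIENT);
   while (sqlite3_step(stmt) == SQLITE_ROW) {
      printf("%s\n",(const char *)sqlite3_column_text(stmt,0));
   }
   sqlite3_finalize(stmt);
   return 0;
}

However the search term is mangled, the only thing it can ever change is what gets matched, not what the query does.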

Progress suggests that in many, if not most, of the webshell-based attacks investigated so far, you’ll probably find a rogue webshell file named human2.aspx, perhaps along with newly-created malicious files with a .cmdline extension.

(Sophos products will detect and block known webshell files as Troj/WebShel-GO, whether they are called human2.aspx or not.)

Remember, however, that if other attackers knew about this zero-day before the patch came out, they may have injected different, and perhaps more subtle, commands that can’t now be detected by scanning for malware that was left behind, or searching for known filenames that might show up in logs.

Don’t forget to review your access logs in general, and if you don’t have time to do it yourself, don’t be afraid to ask for help!



Researchers claim Windows “backdoor” affects hundreds of Gigabyte motherboards

Researchers at firmware and supply-chain security company Eclypsium claim to have found what they have rather dramatically dubbed a “backdoor” in hundreds of motherboard models from well-known hardware maker Gigabyte.

In fact, Eclypsium’s headline refers to it not merely as a backdoor, but all in upper case as a BACKDOOR.

The good news is that this seems to be a legitimate feature that has been badly implemented, so it’s not a backdoor in the usual, treacherous sense of a security hole that’s been deliberately inserted into a computer system to provide unauthorised access in future.

So, it’s not like a daytime visitor knowingly unlatching a little-known window round the back of the building so they can come back under cover of darkness and burgle the joint.

The bad news is that this seems to be a legitimate feature that has been badly implemented, leaving affected computers potentially vulnerable to abuse by cybercriminals.

So, it’s a bit like a little-known window round the back of the building that’s forgetfully been left unlatched by mistake.

The problem, according to Eclypsium, is part of a Gigabyte service known as APP Center, which “allows you to easily launch all GIGABYTE apps installed on your system, check related updates online, and download the latest apps, drivers, and BIOS.”

Automatic updates with weaknesses

The buggy component in this APP Center ecosystem, say the researchers, is a Gigabyte program called GigabyteUpdateService.exe, a .NET application that is installed in the %SystemRoot%\System32 directory (your system root is usually C:\Windows), and runs automatically on startup as a Windows service.

Services are the Windows equivalent of background processes or daemons on Unix-style systems: they generally run under a user account of their own, often the SYSTEM account, and they keep running all the time, even if you sign out and your computer is waiting unassumingly at the logon screen.

This GigabyteUpdateService program, it seems, does exactly what its name suggests: it acts as an automated downloader-and-installer for other Gigabyte components, listed above as apps, drivers and even the BIOS firmware itself.

Unfortunately, according to Eclypsium, it fetches and runs software from one of three hard-wired URLs, and was coded in such a way that:

  • One URL uses plain old HTTP, thus providing no cryptographic integrity protection during the download. A manipulator-in-the-middle (MitM) through whose servers your network traffic passes can not only intercept any files that the program downloads, but also undetectably modify them along the way, for example by infecting them with malware, or by replacing them with different files altogether.
  • Two URLs use HTTPS, but the update utility doesn’t verify the HTTPS certificate that the server at the other end sends back. This means that a MitM can present a web certificate issued in the name of the server that the downloader expects, without needing to get that certificate validated and signed by a recognised certificate authority (CA) such as Let’s Encrypt, DigiCert or GlobalSign. Imposters could simply create a fake certificate and “vouch” for it themselves.
  • The programs that the downloader fetches and runs aren’t validated cryptographically to check that they really came from Gigabyte. Windows won’t let the downloaded files run if they aren’t digitally signed, but any organisation’s digital signature will do. Cybercriminals routinely acquire their own code-signing keys by using bogus front companies, or by buying in keys from the dark web that were stolen in data breaches, ransomware attacks, and so on.

That’s bad enough on its own, but there’s a bit more to it than that.

Injecting files into Windows

You can’t just go out and grab a new version of the GigabyteUpdateService utility, because that particular program may have arrived on your computer in an unusual way.

You can reinstall Windows at any time, and a standard Windows image doesn’t know whether you’re going to be using a Gigabyte motherboard or not, so it doesn’t come with GigabyteUpdateService.exe preinstalled.

Gigabyte therefore uses a Windows feature known as WPBT, or Windows Platform Binary Table (it’s pitched as a feature by Microsoft, though you might not agree when you learn how it works).

This “feature” allows Gigabyte to inject the GigabyteUpdateService program into the System32 directory, directly out of your BIOS, even if your C: drive is encrypted with BitLocker.

WPBT provides a mechanism for firmware makers to store a Windows executable file in their BIOS images, load it into memory during the firmware pre-boot process, and then tell Windows, “Once you’ve unlocked the C: drive and started booting up, read in this block of memory that I’ve left lying around for you, write it out to disk, and run it early in the startup process.”

Yes, you read that correctly.

According to Microsoft’s own documentation, only one program can be injected into the Windows startup sequence in this way:

The on-disk file location is \Windows\System32\Wpbbin.exe on the operating system volume.

Additionally, there are some strict coding limitations placed on that Wpbbin.exe program, notably that:

WPBT supports only native, user-mode applications that are executed by the Windows Session Manager during operating system initialization. A native application refers to an application that does not have a dependency on the Windows API (Win32). Ntdll.dll is the only DLL dependency of a native application. A native application has a PE subsystem type of 1 (IMAGE_SUBSYSTEM_NATIVE).

From native-mode code to .NET app

At this point, you’re probably wondering how a low-level native app that starts life as Wpbbin.exe ends up as a full-blown .NET-based update application called GigabyteUpdateService.exe that runs as a regular system service.

Well, in the same way that the Gigabyte firmware (which can’t itself run under Windows) contains an embedded IMAGE_SUBSYSTEM_NATIVE WPBT program that it “drops” into Windows…

…so, too, the WPBT native-mode code (which can’t itself run as a regular Windows app) contains an embedded .NET application that it “drops” into the System32 directory to be launched later on in the Windows bootup process.

Simply put, your firmware has a specific version of GigabyteUpdateService.exe baked into it, and unless and until you update your firmware, you’ll carry on getting that hard-wired version of the APP Center updater service “introduced” into Windows for you at boot time.

There’s an obvious chicken-and-egg problem here, notably (and ironically) that if you let the APP Center ecosystem update your firmware for you automatically, you may very well end up with your update getting managed by the very same hard-wired, baked-into-the-firmware, vulnerable update service that you want to replace.

In Microsoft’s words (our emphasis):

The primary purpose of WPBT is to allow critical software to persist even when the operating system has changed or been reinstalled in a “clean” configuration. One use case for WPBT is to enable anti-theft software which is required to persist in case a device has been stolen, formatted, and reinstalled. […] This functionality is powerful and provides the capability for independent software vendors (ISVs) and original equipment manufacturers (OEMs) to have their solutions stick to the device indefinitely.

Because this feature provides the ability to persistently execute system software in the context of Windows, it becomes critical that WPBT-based solutions are as secure as possible and do not expose Windows users to exploitable conditions. In particular, WPBT solutions must not include malware (i.e., malicious software or unwanted software installed without adequate user consent).

Quite.

What to do?

Is this really a “backdoor”?

We don’t think so, because we’d prefer to reserve that particular word for more nefarious cybersecurity behaviours, such as purposely weakening encryption algorithms, deliberately building in hidden passwords, opening up undocumented command-and-control pathways, and so on.

Anyway, the good news is that this WPBT-based program injection is a Gigabyte motherboard option that you can turn off.

The Eclypsium researchers themselves said, “Although this setting appears to be disabled by default, it was enabled on the system we examined,” but a Naked Security reader (see comment below) writes, “I just built a system with a Gigabyte ITX board a few weeks ago and the Gigabyte App Center was [turned on in the BIOS] out of the box.”

So, if you have a Gigabyte motherboard and you’re worried about this so-called backdoor, you can sidestep it entirely: Go into your BIOS setup and make sure that the APP Center Download & Install option is turned off.

You could even use your endpoint security software or your corporate network firewall to block access to the three URL slugs that are wired into the insecure update service, which Eclypsium lists as:

http://mb.download.gigabyte.com/FileList/Swhttp/LiveUpdate4
https://mb.download.gigabyte.com/FileList/Swhttp/LiveUpdate4
https://software-nas/Swhttp/LiveUpdate4

Just to be clear, we haven’t tried blocking these URLs, so we don’t know whether you’d block any other necessary or important Gigabyte updates from working, though we suspect that blocking downloads via that HTTP URL is a good idea anyway.

We’re guessing, from the text LiveUpdate4 in the path part of the URL, that you’ll still be able to download and manage updates manually and deploy them in your own way and on your own time…

…but that is only a guess.

Also, keep your eyes open for updates from Gigabyte.

That GigabyteUpdateService program could definitely do with improvement, and when it’s patched, you may need to update your motherboard firmware, not merely your Windows system, to ensure that you don’t still have the old version buried in your firmware, waiting to come back to life in the future.

And if you’re a programmer who is writing code to handle web-based downloads on Windows, always use HTTPS, and always perform at least a basic set of certificate verification checks on any TLS server you connect to.

Because you can.
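We haven’t seen Gigabyte’s downloader code, so here’s a generic sketch of what “verify before you trust” looks like, using libcurl (our choice purely for illustration; the function name and URL handling are our own assumptions). The key point is that certificate checking is left switched on, so a manipulator-in-the-middle with a home-made certificate gets refused:

#include <stdio.h>
#include <curl/curl.h>

/* Sketch only: download a file over HTTPS, refusing to proceed if the
   server's TLS certificate doesn't validate or doesn't match the name. */
int fetch_to_file(const char *url, FILE *out)
{
   CURL *curl = curl_easy_init();
   if (!curl) { return -1; }
   curl_easy_setopt(curl,CURLOPT_URL,url);
   curl_easy_setopt(curl,CURLOPT_SSL_VERIFYPEER,1L); /* validate the certificate chain   */
   curl_easy_setopt(curl,CURLOPT_SSL_VERIFYHOST,2L); /* certificate must match the host  */
   curl_easy_setopt(curl,CURLOPT_WRITEDATA,out);     /* default callback fwrite()s to out */
   CURLcode res = curl_easy_perform(curl);
   curl_easy_cleanup(curl);
   return (res == CURLE_OK) ? 0 : -1;
}

Those two verification options are actually libcurl’s defaults; the trouble usually starts when someone sets them to zero “just to make the certificate errors go away”. And even with TLS done properly, you still want to check a digital signature on the downloaded file before running it.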


S3 Ep137: 16th century crypto skullduggery

IT’S HARDER THAN YOU THINK

No audio player below? Listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Password manager cracks, login bugs, and Queen Elizabeth I versus Mary Queen of Scots… of course!

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, how do you do?


DUCK.  Wow!

16th century information technology skullduggery meets the Naked Security podcast, Douglas.

I can’t wait!


DOUG.  Obviously, yes… we’ll get to that shortly.

But first, as always, This Week in Tech History, on 28 May 1987, online service provider CompuServe released a little something called the Graphics Interchange Format, or GIF [HARD G].

It was developed by the late Steve Wilhite, an engineer at CompuServe (who, by the way, swore up and down it was pronounced “jif”) as a means to support colour images on the limited bandwidth and storage capacities of early computer networks.

The initial version, GIF 87a, supported a maximum of 256 colours; it quickly gained popularity due to its ability to display simple animations and its widespread support across different computer systems.

Thank you, Mr. Wilhite.


DUCK.  And what has it left us, Douglas?

Web animations, and controversy over whether the word is pronounced “graphics” [HARD G] or “giraffics” [SOFT G].


DOUG.  Exactly. [LAUGHS]


DUCK.  I just can’t not call it “giff” [HARD G].


DOUG.  Same!

Let’s stamp that, and move on to our exciting story…

…about Queen Elizabeth I, Mary Queen of Scots, and a man playing both sides between ransomware crooks and his employer, Paul.

Ransomware tales: The MitM attack that really had a Man in the Middle


DUCK.  [LAUGHS] Let’s start at the end of the story.

Basically, it was a ransomware attack against a technology company in Oxfordshire, in England.

(Not this one… it was a company in Oxford, 15km upriver from Abingdon-on-Thames, where Sophos is based.)

After being hit by ransomware, they were, as you can imagine, hit up with a demand to pay Bitcoin to get their data back.

And, like that story we had a couple of weeks ago, one of their own defensive team, who was supposed to be helping to deal with this, figured out, “I’m going to run an MiTM”, a Man-in-the-Middle attack.

I know that, to avoid gendered language and to reflect the fact that it’s not always a person (it’s often a computer in the middle) these days…

…on Naked Security, I now write “Manipulator-in-the-Middle.”

But this was literally a man in the middle.

Simply put, Doug, he managed to start emailing his employer from home, using a sort of typosquat email account that was like the crook’s email address.

He hijacked the thread, and changed the Bitcoin address in the historical email traces, because he had access to senior executives’ email accounts…

…and he basically started negotiating as a man-in-the-middle.

So, you imagine he’s negotiating individually now with the crook, and then he’s passing that negotiation on to his employer.

We don’t know whether he was hoping to run off with all of the bounty and then just tell his employer, “Hey, guess what, the crooks cheated us”, or whether he wanted to negotiate the crooks down on his end, and his employer up on the other end.

Because he knew all the right/wrong things to say to increase the fear and the terror inside the company.

So, his goal was basically to hijack the ransomware payment.

Well, Doug, it all went a little bit pear-shaped because, unfortunately for him and fortunately for his employer and for law enforcement, the company decided not to pay up.


DOUG.  [LAUGHS] Hmmmm!


DUCK.  So there was no Bitcoin for him to steal and then cut-and-run.

Also, it seems that he did not hide his traces very well, and his unlawful access to the email logs then came out in the wash.

He obviously knew that the cops were closing in on him, because he tried to wipe the rogue data off his own computers and phones at home.

But they were seized, and the data was recovered.

Somehow the case dragged on for five years, and finally, just as he was about to go to trial, he obviously decided that he didn’t really have a leg to stand on and he pleaded guilty.

So, there you have it, Doug.

A literal man-in-the-middle attack!


DOUG.  OK, so that’s all well and good in 2023…

…but take us back to the 1580s, Paul.

What about Mary, Queen of Scots and Queen Elizabeth I?


DUCK.  Well, to be honest, I just thought that was a great way of explaining a man-in-the-middle attack by going back all those years.

Because, famously, Queen Elizabeth and her cousin Mary, Queen of Scots were religious and political enemies.

Elizabeth was the Queen of England; Mary was pretender to the throne.

So, Mary was effectively detained under house arrest.

Mary was living in some luxury, but confined to a castle, and was actually plotting against her cousin, but they couldn’t prove it.

And Mary was sending and receiving messages stuffed into the bungs of beer barrels delivered to the castle.

Apparently, in this case, the man-in-the-middle was a compliant beer supplier who would remove the messages before Mary got them, so they could be copied.

And he would insert replacement messages, encrypted with Mary’s cipher, with subtle changes that, loosely speaking, eventually persuaded Mary to put in writing more than she probably should have.

So she not only gave away the names of other conspirators, she also indicated that she approved of the plot to assassinate Queen Elizabeth.

They were tougher times then… and England certainly had the death penalty in those days, and Mary was tried and executed.

The top 10 cracked ciphertexts from history


DOUG.  OK, so for anyone listening, the elevator pitch for this podcast is, “Cybersecurity news and advice, and a little sprinkle of history”.

Back to our man-in-the-middle in the current day.

We talked about another insider threat just like this not too long ago.

So it’d be interesting to see if this is a pattern, or if this is just a coincidence.

But we talked about some things you can do to protect yourself against these types of attacks, so let’s go over those quickly again.

Starting with: Divide and conquer, which basically means, “Don’t give one person in the company unfettered access to everything,” Paul.


DUCK.  Yes.


DOUG.  And then we’ve got: Keep Immutable logs, which looked like it happened in this case, right?


DUCK.  Yes.

It seems that a key element of evidence in this case was the fact that he’d been digging into senior executives’ emails and changing them, and he was unable to hide that.

So you imagine, even without the other evidence, the fact that he was messing with emails that specifically related to ransomware negotiations and Bitcoin addresses would be extra-super suspicious.


DOUG.  OK, finally: Always measure, never assume.


DUCK.  Indeed!


DOUG.  The good guys won eventually… it took five years, but we did it.

Let’s move on to our next story.

Web security company finds a login bug in an app-building toolkit.

The bug is fixed quickly and transparently, so that’s nice… but there’s a bit more to the story, of course, Paul.

Serious Security: Verification is vital – examining an OAUTH login bug


DUCK.  Yes.

This is a web coding security analysis company (I hope I’ve picked the right terminology there) called SALT, and they found an authentication vulnerability in an app-building toolkit called Expo.

And, bless their hearts, Expo support a thing called OAUTH, the Open Authorization system.

That is the sort of system that is used when you go to a website that has decided, “You know what, we don’t want the hassle of trying to learn how to do password security for ourselves. What we’re going to do is we’re going to say, ‘Login with Google, login with Facebook’,” something like that.

And the idea is that, loosely speaking, you contact Facebook, or Google, or whatever the mainstream service is and you say, “Hey, I want to give example.com permission to do X.”

So, Facebook, or Google, or whatever, authenticates you and then says, “OK, here’s a magic code that you can give to the other end that says, ‘We have checked you out; you’ve authenticated with us, and this is your authentication token.’”

Then, the other end independently can check with Facebook, or Google, or whatever to make sure that that token was issued on behalf of you.

So what that means is that you never need to hand over any password to the site… you are, if you like, co-opting Facebook or Google to do the actual authentication part for you.

It’s a great idea if you’re a boutique website and you think, “I’m not going to knit my own cryptography.”

So, this is not a bug in OAUTH.

It’s just an oversight; something that was forgotten in Expo’s implementation of the OAUTH process.

And, loosely speaking, Doug, it goes like this.

The Expo code creates a giant URL that includes all the parameters that are needed for authenticating with Facebook, and then deciding where that final magic access token should be sent.

Therefore, in theory, if you constructed your own URL or you were able to modify the URL, you could change the place where this magic authentication token finally got sent.

But you wouldn’t be able to deceive the user, because a dialog appears that says, “The app at URL-here is asking you to sign into your Facebook account. Do you fully trust this and want to let it do so? Yes or No?”

However, when it came to the point of receiving the authorisation code from Facebook, or Google, or whatever, and passing it onto this “return URL”, the Expo code would not check that you had actually clicked Yes on the approval dialog.

If you actively saw the dialog and clicked No, then you would prevent the attack from happening.

But, essentially, this “failed open”.

If you never saw the dialog, so you wouldn’t even know that there was something to click, and you just did nothing, and then the attackers simply triggered the next URL visit by themselves with more JavaScript…

…then the system would work.

And the reason it worked is that the magic “return URL”, the place where the super-secret authorisation code was to be sent, was set in a web cookie for Expo to use later *before you clicked Yes on the dialog*.

Later on, the existence of that “return URL” cookie was essentially taken, if you like, as proof that you must have seen the dialog, and you must have decided to go ahead.

Whereas, in fact, that was not the case.

So it was a huge slip ‘twixt cup and lip, Douglas.


DOUG.  OK, we have some tips, starting with: When it came to reporting and disclosing this bug, this was a textbook case.

This is almost exactly how you should do it, Paul.

Everything just worked as it should, so this is a great example of how to do this in the best way possible.


DUCK.  And that’s one of the main reasons why I wanted to write it up on Naked Security.

SALT, the people who found the bug…

…they found it; they disclosed it responsibly; they worked with Expo, who fixed it, literally within hours.

So, even though it was a bug, even though it was a coding mistake, it led to SALT saying, “You know what, the Expo people were an absolute pleasure to work with.”

Then, SALT went about getting a CVE, and instead of going, “Hey, the bug’s fixed now, so two days later we can make a big PR splash about it,” they nevertheless set a date three months ahead when they would actually write up their findings and write up their very educational report.

Instead of rushing it out for immediate PR purposes, in case they got scooped at the last minute, they not only reported this responsibly so it could be fixed before crooks found it (and there’s no evidence anyone had abused this vulnerability), they also then gave a bit of leeway for Expo to go out there and communicate with their customers.


DOUG.  And then of course, we talked a bit about this: Ensure that your authentication checks fail closed.

Ensure that it doesn’t just keep working if someone ignores or cancels it.

But the bigger issue here is: Never assume that your own client side code will be in control of the verification process.


DUCK.  If you followed the exact process of the JavaScript code provided by Expo to take you through this OAUTH process, you would have been fine.

But if you avoided their code and actually just triggered the links with JavaScript of your own, including bypassing or cancelling the popup, then you won.

Bypassing your client code is the first thing that an attacker is going to think about.


DOUG.  Alright, last but not least: Log out of web accounts when you aren’t actively using them.

That’s good advice all around.


DUCK.  We say it all the time on the Naked Security podcast, and we have for many years.

3 simple steps to online safety

It’s unpopular advice, because it is rather inconvenient, in the same way as telling people, “Hey, why not set your browser to clear all cookies on exit?”

If you think about it, in this particular case… let’s say the login was happening via your Facebook account; OAUTH via Facebook.

If you were logged out of Facebook, then no matter what JavaScript treachery an attacker tried (killing off the Expo popup, and all of that stuff), the authentication process with Facebook wouldn’t succeed because Facebook would go, “Hey, this person’s asking me to authenticate them. They’re not currently logged in.”

So you would always and unavoidably see the Facebook login pop up at that point: “You need to log in now.”

And that would give the subterfuge away immediately.


DOUG.  OK, very good.

And our last story of the day: Don’t panic, but there’s apparently a way to crack the master password for open-source password manager KeePass.

But, again, don’t panic, because it’s a lot more complicated than it seems, Paul.

You’ve really got to have control of someone’s machine.

Serious Security: That KeePass “master password crack”, and what we can learn from it


DUCK.  You do.

If you want to track this down, it’s CVE-2023-32784.

It’s a fascinating bug, and I wrote a sort of magnum opus style article on Naked Security about it, entitled: That KeePass ‘master password crack’ and what we can learn from it.

So I won’t spoil that article, which goes into C-type memory allocation, scripting language-type memory allocation, and finally C# or .NET managed strings… managed memory allocation by the system.

I’ll just describe what the researcher in this case discovered.

What they did is… they went looking in the KeePass code, and in KeePass memory dumps, for evidence of how easy it might be to find the master password in memory, albeit temporarily.

What if it’s there minutes, hours or days later?

What if the master password is still lying around, maybe in your swap file on disk, even after you’ve rebooted your computer?

So I set up KeePass, and I gave myself a 16-character, all-uppercase password so it would be easy to recognise if I found it in memory.

And, lo and behold, at no point did I ever find my master password lying around in memory: not as an ASCII string; not as a Windows widechar (UTF-16) string.

Great!

But what this researcher noticed is that when you type your password into KeePass, it puts up… I’ll call it “the Unicode blob character”, just to show you that, yes, you did press a key, and therefore to show you how many characters you’ve typed in.

So, as you type in your password, you see the string blob [●], blob-blob [●●], blob-blob-blob [●●●], and in my case, everything up to 16 blobs.

Well, those blob strings don’t seem like they’d be a security risk, so maybe they were just being left to the .NET runtime to manage as “managed strings”, where they might lie around in memory afterwards…

…and not get cleaned up because, “Hey, they’re just blobs.”

It turns out that if you do a memory dump of KeePass, which gives you a whopping 250MB of stuff, and you go looking for strings like blob-blob, blob-blob-blob, and so on (any number of blobs), there’s a chunk of memory dump where you’ll see two blobs, then three blobs, then four blobs, then five blobs… and in my case, all the way up to 16 blobs.

And then you’ll just get this random collection of “blob characters that happen by mistake”, if you like.

In other words, just looking for those blob strings, even though they don’t give away your actual password, will leak the length of your password.

However, it gets even more interesting, because what this researcher wondered is, “What if the data near to those blob strings in memory may be somehow tied to the individual characters that you type in the password?”

So, what if you go through the memory dump file, and instead of just searching for two blobs, three blobs, four blobs, and so on…

…you search for a string of blobs followed by a character that you think is in the password?

So, in my case, I was just searching for the characters A to Z, because I knew that was what was in the password.

I’m searching for any string of blobs, followed by one ASCII character.

Guess what happened, Doug?

I get two blobs followed by the third character of my password; three blobs followed by the fourth character of my password; all the way up to 15 blobs immediately followed by the 16th character in my password.


DOUG.  Yes, it’s a wild visual in this article!

I was following along… it was getting a little technical, and all of a sudden I just see, “Whoa! That looks like a password!”


DUCK.  It’s basically as though the individual characters of your password are scattered liberally through memory, but the ones that represent the ASCII characters that were actually part of your password as you typed it in…

…it’s like they’ve got luminescent dye attached to them.

So, these strings of blobs inadvertently act as a tagging mechanism to flag the characters in your password.

And, really, the moral of the story is that things can leak out in memory in ways that you simply never expected, and that even a well-informed code reviewer might not notice.

So it’s a fascinating read, and it’s a great reminder that writing secure code can be a lot harder than you think.

And even more importantly, reviewing, and quality-assuring, and testing secure code can be harder still…

…because you have to have eyes in the front, the back, and the sides of your head, and you really have to think like an attacker and try looking for leaky secrets absolutely everywhere you can.


DOUG.  Alright, check it out, it’s on nakedsecurity.sophos.com.

And, as the sun begins to set on our show, it’s time to hear from one of our readers.

On the previous podcast (this is one of my favorite comments yet, Paul), Naked Security listener Chang comments:

There. I’ve done it. After almost two years of binge listening, I finished listening to all of the Naked Security podcast episodes. I’m all caught up.

I enjoyed it from the beginning, starting with the long running Chet Chat; then to the UK crew; “Oh no! It’s Kim” was next; then I finally reached the present day’s “This Week in Tech History.”

What a ride!

Thank you, Chang!

I can’t believe you binged all the episodes, but we do all (I hope I’m not speaking out of turn) very much appreciate it.


DUCK.  Very much indeed, Doug!

It’s nice to know not only that people are listening, but also that they’re finding the podcasts useful, and that it’s helping them learn more about cybersecurity, and to lift their game, even if it’s only a little bit.

Because I think, as I’ve said many times before, if we all lift our cybersecurity game a tiny little bit, then we do much more to keep the crooks at bay than if one or two companies, one or two organisations, one or two individuals put in a huge amount of effort, but the rest of us lag behind.


DOUG.  Exactly!

Well, thank you very much again, Chang, for sending that in.

We really appreciate it.

And if you have an interesting story, comment or question you’d like to submit, we love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH.  Stay secure!

[MUSICAL MODEM]


Serious Security: That KeePass “master password crack”, and what we can learn from it

Over the last two weeks, we’ve seen a series of articles talking up what’s been described as a “master password crack” in the popular open-source password manager KeePass.

The bug was considered important enough to get an official US government identifier (it’s known as CVE-2023-32784, if you want to hunt it down), and given that the master password to your password manager is pretty much the key to your whole digital castle, you can understand why the story provoked lots of excitement.

The good news is that an attacker who wanted to exploit this bug would almost certainly need to have infected your computer with malware already, and would therefore be able to spy on your keystrokes and running programs anyway.

In other words, the bug can be considered an easily-managed risk until the creator of KeePass comes out with an update, which should appear soon (at the beginning of June 2023, apparently).

As the discloser of the bug takes care to point out:

If you use full disk encryption with a strong password and your system is [free from malware], you should be fine. No one can steal your passwords remotely over the internet with this finding alone.

The risks explained

Heavily summarised, the bug boils down to the difficulty of ensuring that all traces of confidential data are purged from memory once you’ve finished with them.

We’ll ignore here the problems of how to avoid having secret data in memory at all, even briefly.

In this article, we just want to remind programmers everywhere that code approved by a security-conscious reviewer with a comment such as “appears to clean up correctly after itself”…

…might in fact not clean up fully at all, and the potential data leakage might not be obvious from a direct study of the code itself.

Simply put, the CVE-2023-32784 vulnerability means that a KeePass master password might be recoverable from system data even after the KeePass program has exited, because sufficient information about your password (albeit not actually the raw password itself, which we’ll focus on in a moment) might get left behind in system swap or sleep files, where allocated system memory may end up saved for later.

On a Windows computer where BitLocker isn’t used to encrypt the hard disk when the system is turned off, this would give a crook who stole your laptop a fighting chance of booting up from a USB or CD drive, and recovering your master password even though the KeePass program itself takes care never to save it permanently to disk.

A long-term password leak in memory also means that the password could, in theory, be recovered from a memory dump of the KeePass program, even if that dump was grabbed long after you’d typed the password in, and long after KeePass itself had no more need to keep it around.

Clearly, you should assume that malware already on your system could recover almost any typed-in password via a variety of real-time snooping techniques, as long as they were active at the time you did the typing. But you might reasonably expect that your time exposed to danger would be limited to the brief period of typing, not extended to many minutes, hours or days afterwards, or perhaps longer, including after you shut your computer down.

What gets left behind?

We therefore thought we’d take a high-level look at how secret data can get left behind in memory in ways that aren’t directly obvious from the code.

Don’t worry if you aren’t a programmer – we’ll keep it simple, and explain as we go.

We’ll start by looking at memory use and cleanup in a simple C program that simulates entering and temporarily storing a password by doing the following:

  • Allocating a dedicated chunk of memory specially to store the password.
  • Inserting a known text string so we can easily find it in memory if needed.
  • Appending 16 pseudo-random 8-bit ASCII characters from the range A-P.
  • Printing out the simulated password buffer.
  • Freeing up the memory in the hope of expunging the password buffer.
  • Exiting the program.

Greatly simplified, the C code might look something like this, with no error checking, using poor-quality pseudo-random numbers from the C runtime function rand(), and ignoring any buffer overflow checks (never do any of this in real code!):

// Ask for memory
char* buff = malloc(128);
// Copy in fixed string we can recognise in RAM
strcpy(buff,"unlikelytext");
// Append 16 pseudo-random ASCII characters
for (int i = 1; i <= 16; i++) {
   // Choose a letter from A (65+0) to P (65+15)
   char ch = 65 + (rand() & 15);
   // Modify the buff string directly in memory
   strncat(buff,&ch,1);
}
// Print it out, so we're done with buff
printf("Full string was: %s\n",buff);
// Return the unwanted buffer and hope that expunges it
free(buff);

In fact, the code we finally used in our tests includes some additional bits and pieces shown below, so that we could dump the full contents of our temporary password buffer as we used it, to look for unwanted or left-over content.

Note that we deliberately dump the buffer after calling free(), which is technically a use-after-free bug, but we are doing it here as a sneaky way of seeing whether anything critical gets left behind after handing our buffer back, which could lead to a dangerous data leakage hole in real life.

We’ve also inserted two Waiting for [Enter] prompts into the code to give ourselves a chance to create memory dumps at key points in the program, giving us raw data to search later, in order to see what was left behind as the program ran.

To do memory dumps, we’ll be using the Microsoft Sysinternals tool procdump with the -ma option (dump all memory), which avoids the need to write our own code to use the Windows DbgHelp system and its rather complex MiniDumpXxxx() functions.

To compile the C code, we used our own small-and-simple build of Fabrice Bellard’s free and open-source Tiny C Compiler, available for 64-bit Windows in source and binary form directly from our GitHub page.

Copy-and-pastable text of all the source code pictured in the article appears at the bottom of the page.

This is what happened when we compiled and ran the test program:

C:\Users\duck\KEYPASS> petcc64 -stdinc -stdlib unl1.c
Tiny C Compiler - Copyright (C) 2001-2023 Fabrice Bellard
Stripped down by Paul Ducklin for use as a learning tool
Version petcc64-0.9.27 [0006] - Generates 64-bit PEs only
-> unl1.c
-> c:/users/duck/tcc/petccinc/stdio.h
[. . . .]
-> c:/users/duck/tcc/petcclib/libpetcc1_64.a
-> C:/Windows/system32/msvcrt.dll
-> C:/Windows/system32/kernel32.dll
-------------------------------
  virt  file  size  section
  1000   200   438  .text
  2000   800   2ac  .data
  3000   c00    24  .pdata
-------------------------------
<- unl1.exe (3584 bytes)

C:\Users\duck\KEYPASS> unl1.exe
Dumping 'new' buffer at start
00F51390: 90 57 F5 00 00 00 00 00 50 01 F5 00 00 00 00 00 .W......P.......
00F513A0: 73 74 65 6D 33 32 5C 63 6D 64 2E 65 78 65 00 44 stem32\cmd.exe.D
00F513B0: 72 69 76 65 72 44 61 74 61 3D 43 3A 5C 57 69 6E riverData=C:\Win
00F513C0: 64 6F 77 73 5C 53 79 73 74 65 6D 33 32 5C 44 72 dows\System32\Dr
00F513D0: 69 76 65 72 73 5C 44 72 69 76 65 72 44 61 74 61 ivers\DriverData
00F513E0: 00 45 46 43 5F 34 33 37 32 3D 31 00 46 50 53 5F .EFC_4372=1.FPS_
00F513F0: 42 52 4F 57 53 45 52 5F 41 50 50 5F 50 52 4F 46 BROWSER_APP_PROF
00F51400: 49 4C 45 5F 53 54 52 49 4E 47 3D 49 6E 74 65 72 ILE_STRING=Inter
00F51410: 6E 65 74 20 45 78 70 6C 7A 56 F4 3C AC 4B 00 00 net ExplzV.<.K..

Full string was: unlikelytextJHKNEJJCPOMDJHAN
00F51390: 75 6E 6C 69 6B 65 6C 79 74 65 78 74 4A 48 4B 4E unlikelytextJHKN
00F513A0: 45 4A 4A 43 50 4F 4D 44 4A 48 41 4E 00 65 00 44 EJJCPOMDJHAN.e.D
00F513B0: 72 69 76 65 72 44 61 74 61 3D 43 3A 5C 57 69 6E riverData=C:\Win
00F513C0: 64 6F 77 73 5C 53 79 73 74 65 6D 33 32 5C 44 72 dows\System32\Dr
00F513D0: 69 76 65 72 73 5C 44 72 69 76 65 72 44 61 74 61 ivers\DriverData
00F513E0: 00 45 46 43 5F 34 33 37 32 3D 31 00 46 50 53 5F .EFC_4372=1.FPS_
00F513F0: 42 52 4F 57 53 45 52 5F 41 50 50 5F 50 52 4F 46 BROWSER_APP_PROF
00F51400: 49 4C 45 5F 53 54 52 49 4E 47 3D 49 6E 74 65 72 ILE_STRING=Inter
00F51410: 6E 65 74 20 45 78 70 6C 7A 56 F4 3C AC 4B 00 00 net ExplzV.<.K..

Waiting for [ENTER] to free buffer...

Dumping buffer after free()
00F51390: A0 67 F5 00 00 00 00 00 50 01 F5 00 00 00 00 00 .g......P.......
00F513A0: 45 4A 4A 43 50 4F 4D 44 4A 48 41 4E 00 65 00 44 EJJCPOMDJHAN.e.D
00F513B0: 72 69 76 65 72 44 61 74 61 3D 43 3A 5C 57 69 6E riverData=C:\Win
00F513C0: 64 6F 77 73 5C 53 79 73 74 65 6D 33 32 5C 44 72 dows\System32\Dr
00F513D0: 69 76 65 72 73 5C 44 72 69 76 65 72 44 61 74 61 ivers\DriverData
00F513E0: 00 45 46 43 5F 34 33 37 32 3D 31 00 46 50 53 5F .EFC_4372=1.FPS_
00F513F0: 42 52 4F 57 53 45 52 5F 41 50 50 5F 50 52 4F 46 BROWSER_APP_PROF
00F51400: 49 4C 45 5F 53 54 52 49 4E 47 3D 49 6E 74 65 72 ILE_STRING=Inter
00F51410: 6E 65 74 20 45 78 70 6C 4D 00 00 4D AC 4B 00 00 net ExplM..M.K..

Waiting for [ENTER] to exit main()...

C:\Users\duck\KEYPASS>

In this run, we didn’t bother grabbing any process memory dumps, because we could see right away from the output that this code leaks data.

Right after calling the Windows C runtime library function malloc(), we can see that the buffer we get back includes what looks like environment variable data left over from the program’s startup code, with the first 16 bytes apparently altered to look like some sort of left-over memory allocation header.

(Note how those 16 bytes look like two 8-byte memory addresses, 0xF55790 and 0xF50150, that are just after and just before our own memory buffer respectively.)

When the password is supposed to be in memory, we can see the entire string clearly in the buffer, as we would expect.

But after calling free(), note how the first 16 bytes of our buffer have been rewritten with what look like nearby memory addresses once again, presumably so the memory allocator can keep track of blocks in memory that it can re-use…

… but the rest of our “expunged” password text (the last 12 random characters EJJCPOMDJHAN) has been left behind.

Not only do we need to manage our own memory allocations and de-allocations in C, we also need to ensure that we choose the right system functions for data buffers if we want to control them precisely.

For example, by switching to this code instead, we get a bit more control over what’s in memory:
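We haven’t reproduced the exact listing here, but a sketch of the reworked fragment looks something like this (same logic as before, error checks still omitted):

// Ask Windows for memory directly (the 128 bytes get rounded up to a whole page)
char* buff = VirtualAlloc(NULL,128,MEM_COMMIT|MEM_RESERVE,PAGE_READWRITE);
// Copy in fixed string we can recognise in RAM
strcpy(buff,"unlikelytext");
// Append 16 pseudo-random ASCII characters, as before
for (int i = 1; i <= 16; i++) {
   char ch = 65 + (rand() & 15);
   strncat(buff,&ch,1);
}
// Print it out, so we're done with buff
printf("Full string was: %s\n",buff);
// Hand the whole region back to Windows (size must be 0 when using MEM_RELEASE)
VirtualFree(buff,0,MEM_RELEASE);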

By switching from malloc() and free() to use the lower-level Windows allocation functions VirtualAlloc() and VirtualFree() directly, we get better control.

However, we pay a price in speed, because each call to VirtualAlloc() does more work than a call to malloc(), which works by continually dividing and subdividing a block of pre-allocated low-level memory.

Using VirtualAlloc() repeatedly for small blocks also uses up more memory overall, because each block dished out by VirtualAlloc() typically consumes a multiple of 4KB of memory (or 2MB, if you are using so-called large memory pages), so that our 128-byte buffer above is rounded up to 4096 bytes, wasting the 3968 bytes at the end of the 4KB memory block.

But, as you can see, the memory we get back is automatically blanked out (set to zero), so we can’t see what was there before, and this time the program crashes when we try to do our use-after-free trick, because Windows detects that we’re trying to peek at memory we no longer own:

C:\Users\duck\KEYPASS> unl2
Dumping 'new' buffer at start
0000000000EA0000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0000000000EA0010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0000000000EA0020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0000000000EA0030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0000000000EA0040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0000000000EA0050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0000000000EA0060: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0000000000EA0070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0000000000EA0080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................

Full string was: unlikelytextIBIPJPPHEOPOIDLL
0000000000EA0000: 75 6E 6C 69 6B 65 6C 79 74 65 78 74 49 42 49 50 unlikelytextIBIP
0000000000EA0010: 4A 50 50 48 45 4F 50 4F 49 44 4C 4C 00 00 00 00 JPPHEOPOIDLL....
0000000000EA0020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0000000000EA0030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0000000000EA0040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0000000000EA0050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0000000000EA0060: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0000000000EA0070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0000000000EA0080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................

Waiting for [ENTER] to free buffer...

Dumping buffer after free()
0000000000EA0000: [Program terminated here because Windows caught our use-after-free]

Because the memory we freed up will need re-allocating with VirtualAlloc() before it can be used again, we can assume that it will be zeroed out before it’s recycled.

However, if we wanted to make sure it was blanked out, we could call the special Windows function RtlSecureZeroMemory() just before freeing it, to guarantee that Windows will write zeros into our buffer first.

The related function RtlZeroMemory(), if you were wondering, does a similar thing, but without the guarantee of actually working, because compilers are allowed to remove it as theoretically redundant if they notice that the buffer is not used again afterwards.
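In code, that’s a one-line addition just before the hand-back (a sketch that continues the VirtualAlloc() fragment above):

// Overwrite the secret first; unlike memset() or RtlZeroMemory(), the
// call to RtlSecureZeroMemory() is guaranteed not to be optimised away
RtlSecureZeroMemory(buff,128);
// Now release the already-blanked pages back to Windows
VirtualFree(buff,0,MEM_RELEASE);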

As you can see, we need to take considerable care to use the right Windows functions if we want to minimise the time that secrets stored in memory may lie around for later.

In this article, we aren’t going to look at how you prevent secrets getting saved out accidentally to your swap file by locking them into physical RAM. (Hint: VirtualLock() isn’t actually enough on its own.) If you would like to know more about low-level Windows memory security, let us know in the comments and we will look at it in a future article.

Using automatic memory management

One neat way to avoid having to allocate, manage and deallocate memory by ourselves is to use a programming language that takes care of malloc() and free(), or VirtualAlloc() and VirtualFree(), automatically.

Scripting languages such as Perl, Python, Lua, JavaScript and others get rid of the most common memory safety bugs that plague C and C++ code, by tracking memory usage for you in the background.

As we mentioned earlier, our badly-written sample C code above works fine now, but only because it’s still a super-simple program, with fixed-size data structures, where we can verify by inspection that we won’t overwrite our 128-byte buffer, and that there is only one execution path that starts with malloc() and ends with a corresponding free().

But if we updated it to allow variable-length password generation, or added additional features into the generation process, then we (or whoever maintains the code next) could easily end up with buffer overflows, use-after-free bugs, or memory that never gets freed up and therefore leaves secret data hanging around long after it is no longer needed.

In a language like Lua, we can let the Lua run-time environment, which does what’s known in the jargon as automatic garbage collection, deal with acquiring memory from the system, and returning it when it detects we’ve stopped using it.

The C program we listed above becomes very much simpler when memory allocation and de-allocation are taken care of for us (see the listing S1.LUA at the end of this article):

We allocate memory to hold the string s simply by assigning the string 'unlikelytext' to it.

We can later either hint to Lua explicitly that we are no longer interested in s by assigning it the value nil (all nils are essentially the same Lua object), or stop using s and wait for Lua to detect that it’s no longer needed.

Either way, the memory used by s will eventually be recovered automatically.

And to prevent buffer overflows or size mismanagement when appending to text strings (the Lua operator .., pronounced concat, essentially adds two strings together, like + in Python), every time we extend or shorten a string, Lua magically allocates space for a brand new string, rather than modifying or replacing the original one in its existing memory location.

This approach is slower, and leads to memory usage peaks that are higher than you’d get in C due to the intermediate strings allocated during text manipulation, but it’s much safer in respect of buffer overflows.

But this sort of automatic string management (known in the jargon as immutability, because strings never get mutated, or modified in place, once they've been created) does bring new cybersecurity headaches of its own.

We ran the Lua program above on Windows, up to the second pause, just before the program exited:

C:\Users\duck\KEYPASS> lua s1.lua
Full string is: unlikelytextHLKONBOJILAGLNLN

Waiting for [ENTER] before freeing string...
Waiting for [ENTER] before exiting...

This time, we took a process memory dump, like this:

C:\Users\duck\KEYPASS> procdump -ma lua lua-s1.dmp

ProcDump v11.0 - Sysinternals process dump utility
Copyright (C) 2009-2022 Mark Russinovich and Andrew Richards
Sysinternals - www.sysinternals.com

[00:00:00] Dump 1 initiated: C:\Users\duck\KEYPASS\lua-s1.dmp
[00:00:00] Dump 1 writing: Estimated dump file size is 10 MB.
[00:00:00] Dump 1 complete: 10 MB written in 0.1 seconds
[00:00:01] Dump count reached.

Then we ran a simple script (FINDIT.LUA in the code listings at the end of this article), which reads the dump file back in, finds everywhere in memory that the known string unlikelytext appeared, and prints each match, together with its location in the dump file and the ASCII characters that immediately followed it.

Even if you’ve used scripting languages before, or worked in any programming ecosystem that features so-called managed strings, where the system keeps track of memory allocations and deallocations for you, and handles them as it sees fit…

…you might be surprised to see the output that this memory scan produces:

C:\Users\duck\KEYPASS> lua findit.lua lua-s1.dmp
006D8AFC: unlikelytextALJBNGOAPLLBDEB
006D8B3C: unlikelytextALJBNGOA
006D8B7C: unlikelytextALJBNGO
006D8BFC: unlikelytextALJBNGOAPLLBDEBJ
006D8CBC: unlikelytextALJBN
006D8D7C: unlikelytextALJBNGOAP
006D903C: unlikelytextALJBNGOAPL
006D90BC: unlikelytextALJBNGOAPLL
006D90FC: unlikelytextALJBNG
006D913C: unlikelytextALJBNGOAPLLB
006D91BC: unlikelytextALJB
006D91FC: unlikelytextALJBNGOAPLLBD
006D923C: unlikelytextALJBNGOAPLLBDE
006DB70C: unlikelytextALJ
006DBB8C: unlikelytextAL
006DBD0C: unlikelytextA

Lo and behold, at the time we grabbed our memory dump, even though we’d finished with the string s (and told Lua that we didn’t need it any more by saying s = nil), all the strings that the code had created along the way were still present in RAM, not yet recovered or deleted.

Indeed, if we sort the above output by the strings themselves, rather than following the order in which they appeared in RAM, you’ll be able to picture what happened during the loop where we concatenated one character at a time to our password string:

C:\Users\duck\KEYPASS> lua findit.lua lua-s1.dmp | sort /+10
006DBD0C: unlikelytextA
006DBB8C: unlikelytextAL
006DB70C: unlikelytextALJ
006D91BC: unlikelytextALJB
006D8CBC: unlikelytextALJBN
006D90FC: unlikelytextALJBNG
006D8B7C: unlikelytextALJBNGO
006D8B3C: unlikelytextALJBNGOA
006D8D7C: unlikelytextALJBNGOAP
006D903C: unlikelytextALJBNGOAPL
006D90BC: unlikelytextALJBNGOAPLL
006D913C: unlikelytextALJBNGOAPLLB
006D91FC: unlikelytextALJBNGOAPLLBD
006D923C: unlikelytextALJBNGOAPLLBDE
006D8AFC: unlikelytextALJBNGOAPLLBDEB
006D8BFC: unlikelytextALJBNGOAPLLBDEBJ

All those temporary, intermediate strings are still there, so even if we had successfully wiped out the final value of s, we’d still be leaking everything except its last character.

In fact, in this case, even when we deliberately forced our program to dispose of all unneeded data by calling the special Lua function collectgarbage() (most scripting languages have something similar), most of the data in those pesky temporary strings stuck around in RAM anyway, because we’d compiled Lua to do its automatic memory management using good old malloc() and free().

In other words, even after Lua itself reclaimed its temporary memory blocks to use them again, we couldn’t control how or when those memory blocks would get re-used, and thus how long they would lie around inside the process with their left-over data waiting to be sniffed out, dumped, or otherwise leaked.

Enter .NET

But what about KeePass, which is where this article started?

KeePass is written in C#, and uses the .NET runtime, so it avoids the problems of memory mismanagement that C programs bring with them…

…but C# manages its own text strings, rather like Lua does, which raises the question:

Even if the programmer avoided storing the entire master password in one place after he’d finished with it, could attackers with access to a memory dump nevertheless find enough left-over temporary data to guess at or recover the master password anyway, even if those attackers got access to your computer minutes, hours, or days after you’d typed the password in?

Simply put, are there detectable, ghostly remnants of your master password that survive in RAM, even after you’d expect them to have been expunged?

Annoyingly, as Github user Vdohney discovered, the answer (for KeePass versions earlier than 2.54, at least) is, “Yes.”

To be clear, we don’t think that your actual master password can be recovered as a single text string from a KeePass memory dump, because the author created a special function for master password entry that goes out of its way to avoid storing the full password where it could easily be spotted and sniffed out.

We satisfied ourselves of this by setting our master password to SIXTEENPASSCHARS, typing it in, and then taking memory dumps immediately, shortly, and long afterwards.

We searched the dumps with a simple Lua script (SEARCHKNOWN.LUA in the code listings at the end of this article) that looked everywhere for that password text, both in 8-bit ASCII format and in 16-bit UTF-16 (Windows widechar) format.

The results were encouraging:

C:\Users\duck\KEYPASS> lua searchknown.lua kp2-post.dmp
Reading in dump file... DONE.
Searching for SIXTEENPASSCHARS as 8-bit ASCII... not found.
Searching for SIXTEENPASSCHARS as UTF-16... not found.

But Vdohney, the discoverer of CVE-2023-32784, noticed that as you type in your master password, KeePass gives you visual feedback by constructing and displaying a placeholder string consisting of Unicode “blob” characters, up to and including the length of your password.

In widechar text strings on Windows (which consist of two bytes per character, not just one byte each as in ASCII), the “blob” character is encoded in RAM as the hex byte 0xCF followed by 0x25 (which just happens to be a percent sign in ASCII).

So, even if KeePass is taking great care with the raw characters you type in when you enter the password itself, you might end up with left-over strings of “blob” characters, easily detectable in memory as repeated runs such as CF25CF25 or CF25CF25CF25

…and, if so, the longest run of blob characters you found would probably give away the length of your password, which would be a modest form of password information leakage, if nothing else.
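To convince yourself of that byte layout, here is a tiny stand-alone C program of our own (it isn’t part of KeePass or of the article’s listings) that prints the raw bytes of three “blob” characters on Windows, where wchar_t strings are stored as 16-bit little-endian values:

#include <stdio.h>
#include <wchar.h>

int main(void) {
   // Three Unicode "blob" characters (U+25CF, BLACK CIRCLE)
   wchar_t blobs[] = L"\x25CF\x25CF\x25CF";
   // Walk the string byte-by-byte to show how it sits in RAM
   unsigned char* raw = (unsigned char*)blobs;
   for (size_t i = 0; i < sizeof(blobs); i++) {
      printf("%02X ",raw[i]);   // expect: CF 25 CF 25 CF 25 00 00
   }
   printf("\n");
   return 0;
}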

We used another Lua script (FINDBLOBS.LUA in the code listings at the end of this article) to look for signs of left-over password placeholder strings.

The output was surprising (we have deleted successive lines with the same number of blobs, or with fewer blobs than the previous line, to save space):

C:\Users\duck\KEYPASS> lua findblobs.lua kp2-post.dmp
000EFF3C: *
[. . .]
00BE621B: **
00BE64C7: ***
[. . .]
00BE6E8F: ****
[. . .]
00BE795F: *****
[. . .]
00BE84F7: ******
[. . .]
00BE8F37: *******
[ continues similarly for 8 blobs, 9 blobs, etc. ]
[ until two final lines of exactly 16 blobs each ]
00C0503B: ****************
00C05077: ****************
00C09337: *
00C09738: *
[ all remaining matches are one blob long]
0123B058: *

At close-together but ever-increasing memory addresses, we found a systematic list of 3 blobs, then 4 blobs, and so on up to 16 blobs (the length of our password), followed by many randomly scattered instances of single-blob strings.

So, those placeholder “blob” strings do indeed seem to be leaking into memory and staying behind to leak the password length, long after the KeePass software has finished with your master password.

The next step

We decided to dig further, just like Vdohney did.

We changed our pattern matching code (see SEARCHKP.LUA in the code listings at the end of this article) to detect chains of blob characters followed by any single ASCII character in 16-bit format. (ASCII characters are represented in UTF-16 as their usual 8-bit ASCII code, followed by a zero byte.)

This time, to save space, we have suppressed the output for any match that was identical to the previous one.

Surprise, surprise:

C:\Users\duck\KEYPASS> lua searchkp.lua kp2-post.dmp
00BE581B: *I
00BE621B: **X
00BE6BD3: ***T
00BE769B: ****E
00BE822B: *****E
00BE8C6B: ******N
00BE974B: *******P
00BEA25B: ********A
00BEAD33: *********S
00BEB81B: **********S
00BEC383: ***********C
00BECEEB: ************H
00BEDA5B: *************A
00BEE623: **************R
00BEF1A3: ***************S
03E97CF2: *N
0AA6F0AF: *W
0D8AF7C8: *X
0F27BAF8: *S

Look what we get out of .NET’s managed string memory region!

A closely-bunched set of temporary “blob strings” that reveal the successive characters in our password, starting with the second character.

Those leaky strings are followed by widely-distributed single-character matches that we assume arose by chance. (A KeePass dump file is about 250MB in size, so there is plenty of room for “blob” characters to appear as if by luck.)

Even if we take those extra four matches into account, rather than discarding them as likely mismatches, we can guess that the master password is one of:

?IXTEENPASSCHARS
?NXTEENPASSCHARS
?WXTEENPASSCHARS
?SXTEENPASSCHARS

Obviously, this simple technique doesn’t find the first character in the password, because the first “blob string” is only constructed after that first character has been typed in.

Note that this list is nice and short because we filtered out matches that didn’t end in ASCII characters.

If you were looking for characters in a different range, such as Chinese or Korean characters, you might end up with more accidental hits, because there are a lot more possible characters to match on…

…but we suspect you’ll get pretty close to your master password anyway, and the “blob strings” that relate to the password seem to be grouped together in RAM, presumably because they were allocated at about the same time by the same part of the .NET runtime.

And there, in an admittedly long and discursive nutshell, is the fascinating story of CVE-2023-32784.

What to do?

  • If you’re a KeePass user, don’t panic. Although this is a bug, and is technically an exploitable vulnerability, remote attackers who wanted to crack your password using this bug would need to implant malware on your computer first. That would give them many other ways to steal your passwords directly, even if this bug didn’t exist, for example by logging your keystrokes as you type. At this point, you can simply watch out for the forthcoming update, and grab it when it’s ready.
  • If you aren’t using full-disk encryption, consider enabling it. To extract left-over passwords from your swap file or hibernation file (operating system disk files used to save memory contents temporarily during heavy load or when your computer is “sleeping”), attackers would need direct access to your hard disk. If you have BitLocker or its equivalent for other operating systems activated, they won’t be able to access your swap file, your hibernation file, or any other personal data such as documents, spreadsheets, saved emails, and so on.
  • If you’re a programmer, keep yourself informed about memory management issues. Don’t assume that just because every free() matches its corresponding malloc(), your data is safe and well-managed. Sometimes, you may need to take extra precautions to avoid leaving secret data lying around, and those precautions vary from operating system to operating system.
  • If you’re a QA tester or a code reviewer, always think “behind the scenes”. Even if memory management code looks tidy and well-balanced, be aware of what’s happening behind the scenes (because the original programmer might not have known to do so), and get ready to do some pentesting-style work such as runtime monitoring and memory dumping to verify that secure code really is behaving as it’s supposed to.

CODE FROM THE ARTICLE: UNL1.C

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

void hexdump(unsigned char* buff, int len) {
   // Print buffer in 16-byte chunks
   for (int i = 0; i < len+16; i = i+16) {
      printf("%016X: ",buff+i);
      // Show 16 bytes as hex values
      for (int j = 0; j < 16; j = j+1) {
         printf("%02X ",buff[i+j]);
      }
      // Repeat those 16 bytes as characters
      for (int j = 0; j < 16; j = j+1) {
         unsigned ch = buff[i+j];
         printf("%c",(ch>=32 && ch<=127)?ch:'.');
      }
      printf("\n");
   }
   printf("\n");
}

int main(void) {
   // Acquire memory to store password, and show what
   // is in the buffer when it's officially "new"...
   char* buff = malloc(128);
   printf("Dumping 'new' buffer at start\n");
   hexdump(buff,128);
   // Use pseudorandom buffer address as random seed
   srand((unsigned)buff);
   // Start the password with some fixed, searchable text
   strcpy(buff,"unlikelytext");
   // Append 16 pseudorandom letters, one at a time
   for (int i = 1; i <= 16; i++) {
      // Choose a letter from A (65+0) to P (65+15)
      char ch = 65 + (rand() & 15);
      // Then modify the buff string in place
      strncat(buff,&ch,1);
   }
   // The full password is now in memory, so print
   // it as a string, and show the whole buffer...
   printf("Full string was: %s\n",buff);
   hexdump(buff,128);
   // Pause to dump process RAM now (try: 'procdump -ma')
   puts("Waiting for [ENTER] to free buffer...");
   getchar();
   // Formally free() the memory and show the buffer
   // again to see if anything was left behind...
   free(buff);
   printf("Dumping buffer after free()\n");
   hexdump(buff,128);
   // Pause to dump RAM again to inspect differences
   puts("Waiting for [ENTER] to exit main()...");
   getchar();
   return 0;
}

CODE FROM THE ARTICLE: UNL2.C

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <windows.h>

void hexdump(unsigned char* buff, int len) {
   // Print buffer in 16-byte chunks
   for (int i = 0; i < len+16; i = i+16) {
      printf("%016X: ",buff+i);
      // Show 16 bytes as hex values
      for (int j = 0; j < 16; j = j+1) {
         printf("%02X ",buff[i+j]);
      }
      // Repeat those 16 bytes as characters
      for (int j = 0; j < 16; j = j+1) {
         unsigned ch = buff[i+j];
         printf("%c",(ch>=32 && ch<=127)?ch:'.');
      }
      printf("\n");
   }
   printf("\n");
}

int main(void) {
   // Acquire memory to store password, and show what
   // is in the buffer when it's officially "new"...
   char* buff = VirtualAlloc(0,128,MEM_COMMIT,PAGE_READWRITE);
   printf("Dumping 'new' buffer at start\n");
   hexdump(buff,128);
   // Use pseudorandom buffer address as random seed
   srand((unsigned)buff);
   // Start the password with some fixed, searchable text
   strcpy(buff,"unlikelytext");
   // Append 16 pseudorandom letters, one at a time
   for (int i = 1; i <= 16; i++) {
      // Choose a letter from A (65+0) to P (65+15)
      char ch = 65 + (rand() & 15);
      // Then modify the buff string in place
      strncat(buff,&ch,1);
   }
   // The full password is now in memory, so print
   // it as a string, and show the whole buffer...
   printf("Full string was: %s\n",buff);
   hexdump(buff,128);
   // Pause to dump process RAM now (try: 'procdump -ma')
   puts("Waiting for [ENTER] to free buffer...");
   getchar();
   // Formally free() the memory and show the buffer
   // again to see if anything was left behind...
   VirtualFree(buff,0,MEM_RELEASE);
   printf("Dumping buffer after free()\n");
   hexdump(buff,128);
   // Pause to dump RAM again to inspect differences
   puts("Waiting for [ENTER] to exit main()...");
   getchar();
   return 0;
}

CODE FROM THE ARTICLE: S1.LUA

-- Start with some fixed, searchable text

s = 'unlikelytext'

-- Append 16 random chars from 'A' to 'P'

for i = 1,16 do
   s = s .. string.char(65+math.random(0,15))
end

print('Full string is:',s,'\n')

-- Pause to dump process RAM

print('Waiting for [ENTER] before freeing string...')
io.read()

-- Wipe string and mark variable unused

s = nil

-- Dump RAM again to look for diffs

print('Waiting for [ENTER] before exiting...')
io.read()

CODE FROM THE ARTICLE: FINDIT.LUA

-- read in dump file

local f = io.open(arg[1],'rb'):read('*a')

-- look for marker text followed by one
-- or more random ASCII characters

local b,e,m = 0,0,nil

while true do
   -- look for next match and remember offset
   b,e,m = f:find('(unlikelytext[A-Z]+)',e+1)
   -- exit when no more matches
   if not b then break end
   -- report position and string found
   print(string.format('%08X: %s',b,m))
end

CODE FROM THE ARTICLE: SEARCHKNOWN.LUA

io.write('Reading in dump file... ')
local f = io.open(arg[1],'rb'):read('*a')
io.write('DONE.\n')

io.write('Searching for SIXTEENPASSCHARS as 8-bit ASCII... ')
local p08 = f:find('SIXTEENPASSCHARS')
io.write(p08 and 'FOUND' or 'not found','.\n')

io.write('Searching for SIXTEENPASSCHARS as UTF-16... ')
local p16 = f:find('S\x00I\x00X\x00T\x00E\x00E\x00N\x00P\x00'..
                   'A\x00S\x00S\x00C\x00H\x00A\x00R\x00S\x00')
io.write(p16 and 'FOUND' or 'not found','.\n')

CODE FROM THE ARTICLE: FINDBLOBS.LUA

-- read in dump file specified on command line

local f = io.open(arg[1],'rb'):read('*a')

-- Look for one or more password blobs, followed by any non-blob
-- Note that blob chars (●) encode into Windows widechars
-- as little-endian UTF-16 codes, coming out as CF 25 in hex.

local b,e,m = 0,0,nil

while true do
   -- We want one or more blobs, followed by any non-blob.
   -- We simplify the code by looking for an explicit CF25
   -- followed by any string that only has CF or 25 in it,
   -- so we will find CF25CFCF or CF2525CF as well as CF25CF25.
   -- We'll filter out "false positives" later if there are any.
   -- We need to write '%%' instead of \x25 because the \x25
   -- character (percent sign) is a special search char in Lua!
   b,e,m = f:find('(\xCF%%[\xCF%%]*)',e+1)
   -- exit when no more matches
   if not b then break end
   -- CMD.EXE can't print blobs, so we convert them to stars.
   print(string.format('%08X: %s',b,m:gsub('\xCF%%','*')))
end

CODE FROM THE ARTICLE: SEARCHKP.LUA

-- read in dump file specified on command line

local f = io.open(arg[1],'rb'):read('*a')

local b,e,m,p = 0,0,nil,nil

while true do
   -- Now, we want one or more blobs (CF25) followed by the code
   -- for A..Z followed by a 0 byte to convert ASCII to UTF-16
   b,e,m = f:find('(\xCF%%[\xCF%%]*[A-Z])\x00',e+1)
   -- exit when no more matches
   if not b then break end
   -- CMD.EXE can't print blobs, so we convert them to stars.
   -- To save space we suppress successive matches
   if m ~= p then
      print(string.format('%08X: %s',b,m:gsub('\xCF%%','*')))
      p = m
   end
end

Serious Security: Verification is vital – examining an OAUTH login bug

Researchers at web coding security company SALT just published a fascinating description of how they found an authentication bug dubbed CVE-2023-28131 in a popular online app-building toolkit known as Expo.

The good news is that Expo responded really quickly to SALT’s bug report, coming up with a fix within just a few hours of SALT’s responsible disclosure.

Fortunately, the fix didn’t rely on customers downloading anything, because the patch was implemented inside Expo’s cloud service, and didn’t require patches to any pre-installed apps or client-side code.

Expo’s advisory not only explained what happened and how the company fixed it, but also offered programming advice to its customers on how to avoid this sort of possible vulnerability with other online services.

SALT then waited three months before publishing its report, rather than rushing it out for publicity purposes as soon as it could, thus giving Expo users a chance to digest and act upon Expo’s response.

Keeping it simple

The buggy authentication process is explained in detail in SALT’s report, but we’ll present a greatly simplified description here of what went wrong in Expo’s OAUTH service.

OAUTH, short for Open Authorization Framework, is a process that allows you to access private data in an online service (such as editing your online profile, adding a new blog article, or approving a web service to make social media posts for you), without ever setting up a password with, or logging directly into, that service itself.

When you see web services that offer you a Login with Google or Facebook option, for example, they’re almost always using OAUTH in the background, so that you don’t need to create a new username and a new password with yet another website, or give your phone number out to yet another online service.

Strictly speaking, you authenticate indirectly: your Google or Facebook credentials only ever go to Google or Facebook themselves, never to the site you’re actually trying to use.

Some users don’t like this, because they don’t want to authenticate to Google or Facebook just to prove their identity to other, unrelated sites. Others like it because they assume that sites such as Facebook and Google have more experience in handling the login process, storing password hashes securely, and doing 2FA, than a boutique website that has tried to knit its own cryptographic security processes.

Outsourced authentication

Greatly simplified, an OAUTH-style login, via your Facebook account to a site called example.com, goes something like this:

  • The site example.com says to your app or browser, “Hello, X, go and get a magic access token for this site from Facebook.”
  • You visit a special Facebook URL, logging in if you haven’t already, and say, “Give me a magic access token for example.com.”
  • If Facebook is satisfied that you are who you claim, it replies, “Hello, X, here is your magic access token.”
  • You hand the access token to example.com, which can then contact Facebook itself to validate the token.

Note that only Facebook sees your Facebook password and 2FA code, if needed, so the Facebook service acts as an authentication broker between you and example.com.

Behind the scenes, there’s a final validation, like this:

  • The site example.com says to Facebook, “Did you issue this token, and does it validate user X?”
  • If Facebook agrees, it tells example.com, “Yes, we consider this user to be authenticated.”

Subvertible sequence

The bug that the SALT researchers found in the Expo code can be triggered by maliciously subverting Expo’s handling of what you might call the “authentication brokerage” process.

The key points are as follows:

  • Expo itself adds a wrapper around the verification process, so that it handles the authentication and the validation for you, ultimately passing a magic access token for the desired website (example.com in the exchange above) back to the app or website you’re connecting from.
  • The parameters used in handling the verification are packed into a big URL that’s submitted to the Expo service.
  • One of these parameters is stored temporarily in a web cookie that specifies the URL to which the final magic security token will be sent to enable access.
  • Before the security token is delivered, a popup asks you to verify the URL that’s about to be authorised, so you can catch out anyone trying to substitute a bogus URL into the login process.
  • If you approve the popup, Expo redirects you to the Facebook verification process.
  • If Facebook approves the verification, it returns a magic access token to the Expo service, and Expo passes it on to the URL you just approved in the popup, dubbed the returnURL.
  • The app or website listening at the specified returnURL receives Expo’s callback, acquires the access token, and is therefore authenticated as you.

Unfortunately, the SALT researchers found that they could subvert the login process by using JavaScript code to trigger access to the initial Expo login URL, but then killing off the verification popup before you had time to read it or approve it yourself.

At this point, however, Expo’s service had already set a cookie named ru (short for returnURL) to tell it where to call back with your magic access token at the end.

This meant that a cybercriminal could trick Expo’s code into “remembering” a returnURL such as https://roguesite.example, without you ever seeing the dialog to warn you that an attack was under way, let alone approving it by mistake.

Then the researchers used a second chunk of JavaScript code to simulate Expo’s redirect to Facebook’s verification process, which would automatically succeed if (like many people) you were already logged into Facebook itself.

Facebook’s verification, in turn, would redirect the Expo login process back into Expo’s own JavaScript code…

…which would trustingly but erroneously grab the never-actually-verified returnURL for its callback from that magic ru cookie that it set at the start, without your approval or knowledge.

Fail open or fail closed?

As you can see from the description above, the vulnerability was caused by Expo’s code failing in an unsafe way when the approval popup was skipped.

Authentication code should generally fail closed, in the jargon, meaning that the process should not succeed unless some sort of active approval has been signalled.
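In code terms, failing closed means treating anything other than an explicit “yes” as a “no”. Here is a hypothetical sketch of our own (the names are made up, and this is not Expo’s actual code) to illustrate the idea:

#include <stdio.h>
#include <stdbool.h>

// Possible outcomes of an approval popup
typedef enum { ANSWER_YES, ANSWER_NO, ANSWER_DISMISSED, ANSWER_NEVER_SHOWN } answer_t;

// Fail closed: only an explicit "yes" counts as approval, so a popup
// that was cancelled, skipped or never displayed leads to a denial.
bool is_approved(answer_t answer) {
   return answer == ANSWER_YES;
}

int main(void) {
   printf("dismissed   -> %s\n",is_approved(ANSWER_DISMISSED)   ? "allow" : "block");
   printf("never shown -> %s\n",is_approved(ANSWER_NEVER_SHOWN) ? "allow" : "block");
   printf("yes         -> %s\n",is_approved(ANSWER_YES)         ? "allow" : "block");
   return 0;
}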

We’re guessing that Expo didn’t intend the system to fail open, given that SALT’s report shows that its popup approval dialog looked like this:

 The app at https://roguesite.example is asking you to sign into your Facebook account. Do you fully trust https://roguesite.example and agree to let it: [No] [Yes]

The default answer, as you would expect, was set to [No], but this would only cause the system to fail closed if you religiously used Expo’s own client-side code to control the verification process.

By supplying their own JavaScript to run the sequence of verification requests, the researchers were able to treat the approval dialog as if it had said:

 If you don't explicitly tell us to block https://roguesite.example from logging in via your Facebook account, we'll let it do so: [Allow] [Block]

The solution, among other changes, was for Expo’s initial login code to set that magic ru cookie only after you’d explicitly approved the so-called returnURL, so that Expo’s later JavaScript login code would fail closed if the verification popup was skipped, instead of blindly trusting a URL that you had never seen or approved.

In many ways, this bug is similar to the Belkin Wemo Smart Plug bug that we wrote about two weeks ago, even though the root cause in Belkin’s case was a buffer overflow, not a rogue web callback.

Belkin’s code allocated a 68-byte memory buffer in its server-side code, but relied on checking in its client-side code that you didn’t try to send more than 68 bytes, thus leaving the server at the mercy of attackers who decided to talk to the server using their own client-side code that bypassed the verification process.
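By way of illustration, here is a hypothetical server-side check of our own (it is not Belkin’s actual code, and the 68-byte buffer size is simply borrowed from the story above) showing the sort of validation that needs to happen on the server itself, regardless of what the official client promises to send:

#include <stdio.h>
#include <string.h>

#define NAME_BUFFER_SIZE 68

// The server enforces its own bounds check before copying, instead of
// trusting that the client already limited the input to 68 bytes.
int set_device_name(const char* input) {
   char name[NAME_BUFFER_SIZE];
   size_t len = strlen(input);
   if (len >= NAME_BUFFER_SIZE) {
      return -1;                    // fail closed: reject oversized input
   }
   memcpy(name,input,len+1);        // safe: length already validated
   printf("Name accepted: %s\n",name);
   return 0;
}

int main(void) {
   if (set_device_name("Kitchen lamp") != 0) { puts("Rejected."); }
   // 80 'A' characters: too long, so the call fails instead of overflowing
   if (set_device_name("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
                       "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA") != 0) {
      puts("Rejected.");
   }
   return 0;
}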

What to do?

  • When reporting and writing up bugs, consider following SALT’s example. Disclose responsibly, giving the vendor a reasonable time to fix the vulnerability, plus a reasonable time to advise their own users, before publishing details that would allow anyone else to create an exploit of their own.
  • When receiving bug reports, consider following Expo’s example. Reply quickly, keep in contact with the reporter of the bug, patch the vulnerability as soon as you can, provide a helpful investigative report for your users, and keep it objective. (Resist your marketing team’s suggestions to praise yourself for “taking security seriously” or to dismiss the issue as unimportant. That’s for your users to decide, based on the promptness and the pertinence of your response, and their own assessment of the risk.)
  • Ensure that your authentication code fails closed. Make sure you don’t have verification or approval steps that can be neutralised simply by ignoring or cancelling them.
  • Never assume that your own client-side code will be in control of the verification process. Presume that attackers will reverse-engineer your protocol and create client code of their own to circumvent as many checks as they can.
  • Log out of web accounts when you aren’t actively using them. Many people log in to accounts such as Google, Amazon, Facebook, Apple and others, and then stay logged in indefinitely, because it’s convenient. Logging out prevents many actions (including authentications, posts, likes, shares and much more) from happening when you don’t expect them – you’ll see a login prompt instead.

Don’t forget that by logging out of web services whenever you can, and by clearing all your browser cookies and stored web data frequently, you also reduce the amount of tracking information that sites can collect about you as you browse.

After all, if you aren’t logged in, and you don’t have any tracking cookies left over from before, sites no longer know exactly who you are, or what you did last time you visited.

