It’s only a week since Elon Musk’s take-private of Twitter on 28 October 2022…
…but if you take into account the number of news stories about it (and, perhaps ironically under the circumstances, the volume of Twitter threadspace devoted to it), it probably feels a lot longer.
There’s been plenty to set the fur flying, starting with Musk’s curious choice of metaphor in arriving at Twitter HQ on takeover day with a kitchen sink, as though the company’s products and services were already so close to complete that they needed nothing more than the aforementioned dishwashing receptacle to finish things off.
Then there was the peremptory, if not-at-all unexpected, dismissal of the top tier of management; a pair of pranksters carrying cardboard boxes who tricked journalists into reporting they’d just been sacked and escorted offsite; staff who had been sacked apparently finding out when their access codes abruptly stopped working; and Twitter’s apparent rush to switch its well-known Blue Badge into a subscription service, not simply a verification system.
At the time of writing [2022-11-04T17:00Z], however, Twitter’s own documentation still stressed that so-called Verified Accounts are so labelled in order to denote that “an account of public interest is authentic, […] notable, and active.”
In fact, once you’re Verified, at least under today’s rules, you can’t voluntarily cast off your blue badge yourself, though you can have it pulled by Twitter “at any time without notice.”
Where FUD goes…
As you can therefore imagine, or as you’ve probably seen for yourself, Twitter’s current intention to make the blue badge into a pay-to-play service has stirred up plenty of fear, uncertainty and doubt, and where FUD goes…
…cybercriminals love to follow, whether it’s calling you up out of the blue (no pun intended) and telling you “Microsoft” has detected “dangerous viruses” on your computer, or texting you to ask you to reschedule your latest home “delivery”, or emailing you to warn you about an Instagram copyright “infringement” on your account.
Indeed, the Twitter Verified scamming started quickly, with Zack Whittaker at TechCrunch publishing screenshots of blue-badge-themed phishing attacks last weekend:
Twitter’s ongoing verification chaos is now a cybersecurity problem. It looks like some people (including in our newsroom) are getting crude phishing emails trying to trick people into turning over their Twitter credentials. pic.twitter.com/Nig4nhoXWF
The emails reported to Whittaker had been sent to journalists, and guessed that Twitter would be charging $20 a month for a blue-badge privilege. (The crooks actually went for $19.99, presumably because round numbers are surprisingly uncommon as prices in the English-speaking world, with that one-cent reduction apparently making a $1000 ripoff look like a bargain when it turns up for just $999.99.)
The crooks in this scam suggested that you could simply “reverify” in order to retain your existing blue badge and thus avoid future charges, and helpfully provided a login button so you could do just that.
Of course, clicking through took you to a fake site that tried to harvest your phone number and Twitter login details, but you can imagine many other approaches that scammers could take, including:
Inviting you to “sign up early” to avoid disappointment, and then phishing for your payment card details.
Offering to help you stake a claim on an existing account name, and then phishing for significant personal information.
Urging you to “pre-apply” to save time later, then requesting similar information.
Elon Musk himself, apparently, has subsequently said, “Power to the people! Blue for $8/month,” which certainly invalidates the first round of scam emails that insisted the price was going to be $19.99…
…but does nothing to prevent the next round of scammers from simply coming up with new verbiage that’s updated for the new terms and conditions.
What to do?
Our usual cybersecurity advice applies, and it will help you avoid phishing scams whether their hook is the Twitter takeover, Black Friday “superdeals”, home delivery “failures”, bank account “problems”, or any other sort of message that tries to lure you in with fear (including fear of missing out), uncertainty and doubt:
Use a password manager. This helps stop you putting a real password into a fake site, because your password manager won’t recognise the imposter web pages.
Turn on 2FA if you can. Two-factor authentication means you need a one-time code as well as your password, making stolen passwords alone less useful to the crooks.
Avoid login links and action buttons in emails. If there’s action you need to take on the website of a service you genuinely use, find your own way to the real site using a URL you already know or can look up securely.
Never ask the sender of an uncertain message if they’re legitimate. If they’re genuine, they’ll say so, but if they’re scammers, they’ll say exactly the same thing, so you’ve learned nothing!
Remember: If in doubt, don’t give it out.
If it sounds like a scam, simply assume that it is, and bail out up front.
Researchers at threat intelligence company Group-IB just wrote an intriguing real-life story about an annoyingly simple but surprisingly effective phishing trick known as BitB, short for browser-in-the-browser.
You’ve probably heard of several types of X-in-the-Y attack before, notably MitM and MitB, short for manipulator-in-the-middle and manipulator-in-the-browser.
In a MitM attack, the attackers who want to trick you position themselves somewhere “in the middle” of the network, between your computer and the server you’re trying to reach.
(They might not literally be in the middle, either geographically or hop-wise, but MitM attackers are somewhere along the route, not right at either end.)
The idea is that instead of having to break into your computer, or into the server at the other end, they lure you into connecting to them instead (or deliberately manipulate your network path, which you can’t easily control once your packets exit from your own router), and then they pretend to be the other end – a malevolent proxy, if you like.
They pass your packets on to the official destination, snooping on them and perhaps fiddling with them on the way, then receive the official replies, which they can snoop on and tweak for a second time, and pass them back to you as though you’d connected end-to-end just as you expected.
If you’re not using end-to-end encryption such as HTTPS in order to protect both the confidentiality (no snooping!) and integrity (no tampering!) of the traffic, you are unlikely to notice, or even to be able to detect, that someone else has been steaming open your digital letters in transit, and then sealing them up again afterwards.
Attacking at one end
A MitB attack aims to work in a similar way, but to sidestep the problem caused by HTTPS, which makes a MitM attack much harder.
MitM attackers can’t readily interfere with traffic that’s encrypted with HTTPS: they can’t snoop on your data, because they don’t have the cryptographic keys used by each end to protect it; they can’t change the encrypted data, because the cryptographic verification at each end would then raise the alarm; and they can’t pretend to be the server you’re connecting to because they don’t have the cryptographic secret that the server uses to prove its identity.
A MitB attack therefore typically relies on sneaking malware onto your computer first.
That’s generally more difficult than simply tapping into the network at some point, but it gives the attackers a huge advantage if they can manage it.
That’s because, if they can insert themselves right inside your browser, they get to see and to modify your network traffic before your browser encrypts it for sending, which cancels out any outbound HTTPS protection, and after your browser decrypts it on the way back, thus nullifying the encryption applied by the server to protect its replies.
What about a BitB?
But what about a BitB attack?
Browser-in-the-browser is quite a mouthful, and the trickery involved doesn’t give cybercriminals anywhere near as much power as a MitM or a MitB hack, but the concept is forehead-slappingly simple, and if you’re in too much of a hurry, it’s surprisingly easy to fall for it.
The idea of a BitB attack is to create what looks like a popup browser window that was generated securely by the browser itself, but that is actually nothing more than a web page that was rendered in an existing browser window.
You might think that this sort of trickery would be doomed to fail, simply because any content in site X that pretends to be from site Y will show up in the browser itself as coming from a URL on site X.
One glance at the address bar will make it obvious that you’re being lied to, and that whatever you’re looking at is probably a phishing site.
For example, here’s a screenshot of the example.com website, taken in Firefox on a Mac:
If attackers lured you to a fake site, you might fall for the visuals if they copied the content closely, but the address bar would give away that you weren’t on the site you were looking for.
In a Browser-in-the-Browser scam, therefore, the attacker’s aim is to create a regular web page that looks like the web site and content you’re expecting, complete with the window decorations and the address bar, simulated as realistically as possible.
In a way, a BitB attack is more about art than it is about science, and it’s more about web design and managing expectations than it is about network hacking.
For example, if we create two screen-scraped image files that look like this…
…then a simple web page that stacks one image above the other will create what looks like a browser window inside an existing browser window, like this:
In this very basic example, the three macOS buttons (close, minimise, maximise) at the top left won’t do anything, because they aren’t operating system buttons, they’re just pictures of buttons, and the address bar in what looks like a Firefox window can’t be clicked in or edited, because it too is just a screenshot.
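The HTML file itself isn’t reproduced in this write-up, but a minimal sketch of that sort of page, assuming the two screenshots were saved under the hypothetical names topbar.png and botbar.png, might look something like this:

   <html>
    <body>
     <!-- Fake "window": two screenshots stacked, with ordinary page content in between. -->
     <!-- (The image file names are placeholders, not taken from the original article.)   -->
     <div style='width:640px; margin:2em;'>
      <img src='topbar.png' width='640'>   <!-- titlebar, buttons and address bar -->
      <div style='border:1px solid #ccc; padding:1em;'>
       Simulated page content goes here...
      </div>
      <img src='botbar.png' width='640'>   <!-- bottom edge of the fake window -->
     </div>
    </body>
   </html>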
But if we now add an IFRAME into the HTML we showed above, to suck in bogus content from a site that has nothing to do with example.com, like this…
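Again as a rough sketch rather than the author’s exact markup (the real iframe URL isn’t shown, so the path here is hypothetical), the IFRAME simply replaces the simulated content between the two screenshots:

   <!-- Pull the "page content" in from a completely different site. -->
   <iframe src='https://dodgy.test/phish.html'
           style='width:640px; height:300px; border:none;'>
   </iframe>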
…you’d have to admit that the resulting visual content looks exactly like a standalone browser window, even though it’s actually a web page inside another browser window.
The text content and the clickable link you see below were downloaded from the dodgy.test HTTPS link in the HTML file above, which contained this HTML code:
<html>
 <body style='font-family:sans-serif'>
  <div style='width:530px;margin:2em;padding:0em 1em 1em 1em;'>
   <h1>Example Domain</h1>
   <p>This window is a simulacrum of the real website,
      but it did not come from the URL shown above.
      It looks as though it might have, though, doesn't it?
   <p><a href='https://dodgy.test/phish.click'>Bogus information...</a>
  </div>
 </body>
</html>
The graphical content topping and tailing the HTML text makes it look as though the HTML really did come from example.com, thanks to the screenshot of the address bar at the top:
The artifice is obvious if you view the bogus window on a different operating system, such as Linux, because you get a Linux-like Firefox window with a Mac-like “window” inside it.
The fake “window dressing” components really do stand out as the images they really are:
Would you fall for it?
If you’ve ever taken screenshots of apps, and then opened the screenshots later in your photo viewer, we’re willing to bet that at some point you’ve tricked yourself into treating the app’s picture as if it were a running copy of the app itself.
We’ll wager that you’ve clicked on or tapped in an app-in-an-app image at least once in your life, and found yourself wondering why the app wasn’t working. (OK, maybe you haven’t, but we certainly have, to the point of genuine confusion.)
Of course, if you click on an app screenshot inside a photo browser, you’re at very little risk, because the clicks or taps simply won’t do what you expect – indeed, you may end up editing or scribbling lines on the image instead.
But when it comes to a browser-in-the-browser “artwork attack” instead, misdirected clicks or taps in a simulated window can be dangerous, because you’re still in an active browser window, where JavaScript is in play, and where links still work…
…you’re just not in the browser window you thought, and you’re not on the website you thought, either.
Worse still, any JavaScript running in the active browser window (which came from the original imposter site you visited) can simulate some of the expected behaviour of a genuine browser popup window in order to add realism, such as dragging it, resizing it, and more.
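We don’t know what JavaScript these particular attackers used, but making an on-page “window” draggable takes only a few lines of generic code inside the imposter page; here’s a minimal sketch (the element names and styling are ours, purely for illustration):

   <div id='fakewin' style='position:absolute; top:80px; left:80px; width:400px;
        border:1px solid #888; background:#fff;'>
    <div id='titlebar' style='background:#ddd; padding:4px; cursor:move;'>Sign in</div>
    <div style='padding:1em;'>Simulated login form goes here...</div>
   </div>
   <script>
    // Rudimentary drag logic: remember where in the titlebar the pointer grabbed
    // the fake window, then move the whole element so it follows the pointer.
    var win = document.getElementById('fakewin');
    var bar = document.getElementById('titlebar');
    var dx = 0, dy = 0, dragging = false;
    bar.addEventListener('mousedown', function(e) {
      dragging = true;
      dx = e.clientX - win.offsetLeft;
      dy = e.clientY - win.offsetTop;
    });
    document.addEventListener('mousemove', function(e) {
      if (!dragging) return;
      win.style.left = (e.clientX - dx) + 'px';
      win.style.top  = (e.clientY - dy) + 'px';
    });
    document.addEventListener('mouseup', function() { dragging = false; });
   </script>

Even with code like this, of course, the fake window can only ever move around inside the real browser window that contains it, which is one of the giveaway tests suggested in the tips below.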
As we said at the start, if you’re waiting for a real popup window, and you see something that looks like a popup window, complete with realistic browser buttons plus an address bar that matches what you were expecting, and you’re in a bit of a hurry…
…we can fully understand how you might misrecognise the fake window as a real one.
Steam Games targeted
In the Group-IB research we mentioned above, the real-world BitB attack that the researchers came across used Steam Games as a lure.
A legitimate-looking site, albeit one you’d never heard of before, would offer you a chance to win places at an upcoming gaming tournament, for example…
…and when the site said it was popping up a separate browser window containing a Steam login page, it really presented a browser-in-the-browser bogus window instead.
The researchers noted that the attackers didn’t just use BitB trickery to go for usernames and passwords, but also tried to simulate Steam Guard popups asking for two-factor authentication codes, too.
Fortunately, the screenshots presented by Group-IB showed that the criminals they happened upon in this case weren’t terribly careful about the art-and-design aspects of their scammery, so most users probably spotted the fakery.
But even a well-informed user in a hurry, or someone using a browser or operating system they weren’t familiar with, such as at a friend’s house, might not have noticed the inaccuracies.
Also, more fastidious criminals would almost certainly come up with more realistic fake content, in the same way that not all email scammers make spelling mistakes in their messages, thus potentially leading more people into giving away their access credentials.
What to do?
Here are three tips:
Browser-in-the-Browser windows aren’t real browser windows. Although they may seem like operating system level windows, with buttons and icons that look just like the real deal, they don’t behave like operating system windows. They behave like web pages, because that’s what they are. If you’re suspicious, try dragging the suspect window outside the main browser window that contains it. A real browser window will behave independently, so you can move it outside and beyond the original browser window. A fake browser window will be “imprisoned” inside the real window it’s shown in, even if the attacker has used JavaScript to try to simulate as much genuine-looking behaviour as possible. This will quickly give away that it’s part of a web page, not a true window in its own right.
Examine suspect windows carefully. Realistically mocking up the look and feel of an operating system window inside a web page is easy to do badly, but difficult to do well. Take those extra few seconds to look for telltale signs of fakery and inconsistency.
If in doubt, don’t give it out. Be suspicious of sites you’ve never heard of, and that you have no reason to trust, that suddenly want you to log in via a third-party site.
Never be in a hurry, because taking your time will make you much less likely to see what you think is there, rather than what actually is there.
In three words: Stop. Think. Connect.
Featured image of photo of app window containing image of photo of Magritte’s “La Trahison des Images” created via Wikipedia.
Have you ever come really close to clicking a phishing link simply through coincidence?
We’ve had a few surprises, such as when we bought a mobile phone from a click-and-collect store a couple of years back.
Having lived outside the UK for many years before that, this was our first-ever purchase from this particular business for well over a decade…
…yet the very next morning we received an SMS message claiming to be from this very store, advising us we’d overpaid and that a refund was waiting.
Not only was this our first interaction with Brand X for ages, it was also the first-ever SMS (genuine or otherwise) we’d ever received that mentioned Brand X.
What’s the chance of THAT happening?
(Since then, we’ve made a few more purchases from X, ironically including another mobile phone following the discovery that phones don’t always do well in bicycle prangs, and we’ve had several more SMS scam messages targeting X, but they’ve never lined up quite so believably.)
Let’s do the arithmetic
Annoyingly, the chances of scam-meets-real-life coincidences are surprisingly good, if you do the arithmetic.
After all, the chance of guessing the winning numbers in the UK lottery (6 numbered balls out of 59) is an almost infinitesimally tiny 1-in-45-million, computed via the formula known as 59C6 or 59 choose 6, which is 59!/6!(59-6)!, which comes out as 59x58x57x56x55x54/6x5x4x3x2x1 = 45,057,474.
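Written out in conventional notation, that’s simply the same calculation restated:

   \binom{59}{6} = \frac{59!}{6!\,(59-6)!} = \frac{59 \times 58 \times 57 \times 56 \times 55 \times 54}{6 \times 5 \times 4 \times 3 \times 2 \times 1} = 45{,}057{,}474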
That’s why you’ve never won the jackpot…
…even though quite a few people have, over the many years it’s been going.
In the same way, phishing crooks don’t need to target or trick you, but merely to trick someone, and one day, maybe, just maybe, that someone might be you.
We had a weird reminder of this just last night, when we were sitting on the sofa, idly reading an article in tech publication The Register about 2FA scamming.
The first surprise was that at the very moment we thought, “Hey, we wrote up something like this about two weeks ago,” we reached the paragraph in the El Reg story that not only said just that, but linked directly to our own article!
What’s the chance of THAT happening?
Of course, any writer who says they’re not bothered whether other people notice their work or not is almost certainly not to be trusted, and we’re ready to admit (ahem) that we took a screenshot of the relevant paragraph and emailed it to ourselves (“purely for PR documentation purposes” was the explanation we decided on).
Now it gets weirder
Here’s where the coincidence of coincidences gets weirder.
After sending the email from our phone to our laptop, we moved less than two metres to our left, and sat down in front of said laptop to save the attached image, only to find that during the couple of seconds we were standing up…
…the VERY SAME CROOKS AS BEFORE had emailed us yet another Facebook Pages 2FA scam, containing almost identical text to the previous one:
What’s the chance of THAT happening, combined with the chance of the previous coincidence that just happened while we were reading the article?
Sadly, given the ease with which cybercriminals can register new domain names, set up new servers, and blast out millions of emails around the globe…
…the chance is high enough that it would be more surprising if this sort of coincidence NEVER happened.
Small changes to the scam
Interestingly, these crooks had made modest changes to their scam.
Like last time, they created an HTML email with a clickable link that itself looked like a URL, even though the actual URL it linked to was not the one that appeared in the text.
This time, however, the link you saw if you hovered over the blue text in the email (the actual URL target rather than the apparent one) really was a link to a URL hosted on the facebook.com domain.
Instead of linking directly from their email to their scam site, with its fake password and 2FA prompts, the criminals linked to a Facebook Page of their own, thus giving them a facebook.com link to use in the email itself:
This one-extra-click-away trick gives the criminals three small advantages:
The final dodgy link isn’t directly visible to email filtering software, and doesn’t pop up if you hover over the link in your email client.
The scam link draws apparent legitimacy from appearing on Facebook itself.
Clicking the scam link somehow feels less dangerous because you’re visiting it from your browser rather than going there directly from an email, which we’ve all been taught to be cautious about.
We didn’t miss the irony, as we hope you won’t either, of a totally bogus Facebook Page being set up specifically to denounce us for the allegedly poor quality of our own Facebook Page!
From this point on, the scam follows exactly the same workflow as the one we wrote up last time:
Firstly, you’re asked for your name and other reasonable-sounding amounts of personal information.
Secondly, you need to confirm your appeal by entering your Facebook password.
Finally, as you might expect when using your password, you’re asked to put in the one-time 2FA code that your mobile phone app just generated, or that arrived via SMS.
Of course, as soon as you provide each data item in the process, the crooks are using the phished information to log in, in real time, as if they were you, so they end up with access to your account instead of you.
Last time, just 28 minutes elapsed between the crooks creating the fake domain they used in the scam (the link they put in the email itself) and the scam email arriving, which we thought was pretty quick.
This time, it was just 21 minutes, though, as we’ve mentioned, the fake domain wasn’t used directly in the bogus email we received, but was placed instead on an online web page hosted, ironically enough, as a Page on facebook.com itself.
We reported the bogus Page to Facebook as soon as we found it; the good news is that it’s now been knocked offline, thus breaking the connection between the scam email and the fake Facebook domain:
What to do?
Don’t fall for scams like this.
Don’t use links in emails to reach official “appeal” pages on social media sites. Learn where to go yourself, and keep a local record (on paper or in your bookmarks), so that you never need to use email web links, whether they’re genuine or not.
Check email URLs carefully. A link with text that itself looks like a URL isn’t necessarily the URL that the link directs you to. To find the true destination link, hover over the link with your mouse (or touch-and-hold the link on your mobile phone).
Don’t assume that all internet addresses with a well-known domain are somehow safe. Domains such as facebook.com, outlook.com or play.google.com are legitimate services, but not everyone who uses those services can be trusted. Individual email accounts on a webmail server, pages on a social media platform, or apps in an online software store all end up hosted by platforms with trusted domain names. But the content provided by individual users is neither created by nor particularly strongly vetted by that platform (no matter how much automated verification the platform claims to do).
Check website domain names carefully. Every character matters, and the business part of any server name is at the end (the right-hand side in European languages that go from left-to-right), not at the beginning. If I own the domain dodgy.example then I can put any brand name I like at the start, such as visa.dodgy.example or whitehouse.gov.dodgy.example. Those are simply subdomains of my fraudulent domain, and just as untrustworthy as any other part of dodgy.example.
If the domain name isn’t clearly visible on your mobile phone, consider waiting until you can use a regular desktop browser, which typically has a lot more screen space to reveal the true location of a URL.
Consider a password manager. Password managers associate usernames and login passwords with specific services and URLs. If you end up on an imposter site, no matter how convincing it looks, your password manager won’t be fooled because it recognises the site by its URL, not by its appearance.
Don’t be in a hurry to put in your 2FA code. Use the disruption in your workflow (e.g. the fact that you need to unlock your phone to access the code generator app) as a reason to check that URL a second time, just to be sure, to be sure.
Consider reporting scam pages to Facebook. Annoyingly, you need to have a Facebook account of your own to do so (non-Facebook users are unable to submit reports to help the greater community, which is a pity), or to have a friend who will send in the report for you. But our experience in this case was that reporting it did work, because Facebook soon blocked access to the offending Page.
Remember, when it comes to personal data, especially passwords and 2FA codes…
…if in doubt, don’t give it out.
Well, the Melissa virus just called, and it’s finding life tough in 2022.
It’s demanding a return to the freewheeling days of the last millennium, when Office macro viruses didn’t face the trials and tribulations that they do today.
In the 1990s, you could insert VBA (Visual Basic for Applications) macro code into documents at will, email them to people, or ask them to download them from a website somewhere…
…and then you could just totally take over their computer!
In fact, it was even better/worse than that.
If you created a macro subroutine with a name that mirrored one of the common menu items, such as FileSave or FilePrint, then your code would magically and invisibly be invoked whenever the user activated that option.
Worse still, if you gave your macro a name like AutoOpen, then it would run every time the document was opened, even if the user only wanted to look at it.
And if you installed your macros into a central repository known as the global template, your macros would automatically apply all the time.
Worst of all, perhaps, an infected document could implant macros into the global template, thus infecting the computer, and the same macros (when they detected they were running from the global template but the document you just opened was uninfected) could copy themselves back out again.
That led to regular “perfect storms” of fast-spreading and long-running macro virus outbreaks.
Macro viruses spread like crazy
Simply put, once you’d opened one infected document on your computer, every document you opened or created thereafter would (or could, at least) get infected as well, until you had nothing but infected Office files everywhere.
As you can imagine, at that point in the game, any file you sent to or shared with a colleague, customer, prospect, investor, supplier, friend, enemy, journalist, random member of the public…
…would contain a fully-functional copy of the virus, ready to do its best to infect them when they opened it, assuming they weren’t infected already.
And if that wasn’t enough on its own, Office macro malware could deliberately distribute itself, instead of waiting for you to send a copy to someone else, by reading your email address book and sending itself to some, many or all of the names in there.
If you had an address book entry that was an email group, such as Everyone, or All Friends, or All Global Groups, then every time the virus emailed the group, hundreds or thousands of infectious messages would go flying across the internet to all your colleagues. Many of them would soon mail you back as the virus got hold of their computer, too, and a veritable email storm would result.
The first macro malware, which spread by means of infected Word files, appeared in late 1995 and was dubbed Concept, because at that time it was little more than a proof-of-concept.
But it was quickly obvious that malicious macros were going to be more than just a passing headache.
Microsoft was slow to come to the cybersecurity party, carefully avoiding terms such as virus, worm, Trojan Horse and malware, resolutely referring to the Concept virus as nothing more than a “prank macro”.
A gradual lockdown
Over the years, however, Microsoft gradually implemented a series of functional changes in Office, by variously:
Making it easier and quicker to detect whether a file was a pure document, thus swiftly differentiating pure document files from template files with macro code inside. In the early days of macro viruses, back when computers were much slower than today, significant and time-consuming malware-like scanning was needed on every document file just to figure out if it needed scanning for malware.
Making it harder for template macros to copy themselves out into uninfected files. Unfortunately, although this helped to kill off self-spreading macro viruses, it didn’t prevent macro malware in general. Criminals could still create their own booby-trapped files up front and send them individually to each potential victim, just as they do today, without relying on self-replication to spread further.
Popping up a ‘dangerous content’ warning so that macros couldn’t easily run by mistake. As useful as this feature is, because macros don’t run until you choose to allow them, crooks have learned how to defeat it. They typically add content to the document that helpfully “explains” which button to press, often providing a handy graphical arrow pointing at it, and giving a believable reason that disguises the security risk involved.
Adding Group Policy settings for stricter macro controls on company networks. For example, administrators can block macros altogether in Office files that came from outside the network, so that users can’t click to allow macros to run in files received via email or downloaded from the web, even if they want to.
At last, in February 2022, Microsoft announced, to sighs of collective relief from the cybersecurity community, that it was planning to turn on the “inhibit macros in documents that arrived from the internet” option by default, for everyone, all the time.
The security option that used to require Group Policy intervention was finally adopted as a default setting.
In other words, as a business you were still free to use the power of VBA to automate your internal handling of official documents, but you wouldn’t (unless you went out of your way to permit it) be exposed to potentially unknown, untrusted and unwanted macros that weren’t from an approved, internal source.
As we reported at the time, Microsoft described the change thus:
VBA macros obtained from the internet will now be blocked by default.
For macros in files obtained from the internet, users will no longer be able to enable content with a click of a button. A message bar will appear for users notifying them with a button to learn more. The default is more secure and is expected to keep more users safe including home users and information workers in managed organizations.
We were enthusiastic, though we thought that the change was somewhat half-hearted, noting that:
We’re delighted to see this change coming, but it’s nevertheless only a small security step for Office users, because: VBA will still be fully supported, and you will still be able to save documents from email or your browser and then open them locally; the changes won’t reach older versions of Office for months, or perhaps years, [given that] change dates for Office 2021 and earlier haven’t even been announced yet; mobile and Mac users won’t be getting this change; and not all Office components are included. Apparently, only Access, Excel, PowerPoint, Visio, and Word will be getting this new setting.
Well, it turns out not only that our enthusiasm was muted, but also that it was short-lived.
Microsoft has now rolled the change back, explaining:
Following user feedback, we have rolled back this change temporarily while we make some additional changes to enhance usability. This is a temporary change, and we are fully committed to making the default change for all users.
We will provide additional details on timeline in the upcoming weeks.
What to do?
In short, it seems that sufficiently many companies not only rely on receiving and using macros from potentially risky sources, but also aren’t yet willing to change that situation by adapting their corporate workflow.
If you were happy with this change, and want to carry on blocking macros from outside, use Group Policy to enable the setting regardless of the product defaults.
If you weren’t happy with it, why not use this respite to think about how you can change your business workflow to reduce the need to keep transferring unsigned macros to your users?
It’s an irony that a cybersecurity change that a cynic might have described as “too little, too late” turns out, in real life, to have been “too much, too soon.”
Let’s make sure that we’re collectively ready for modest cybersecurity changes of this sort in future…
We’ll tell this story primarily through the medium of images, because a picture is worth 1024 words.
This cybercrime is a visual reminder of three things:
It’s easy to fall for a phishing scam if you’re in a hurry.
Cybercriminals don’t waste any time getting new scams going.
2FA isn’t a cybersecurity panacea, so you still need your wits about you.
It was 19 minutes past…
At 19 minutes after 3 o’clock UK time today [2022-07-01T14:19:00.00Z], the criminals behind this scam registered a generic and unexceptionable domain name of the form control-XXXXX.com, where XXXXX was a random-looking string of digits, looking like a sequence number or a server ID:
28 minutes later, at 15:47 UK time, we received an email, linking to a server called facebook.control-XXXX.com, telling us that there might be a problem with one of the Facebook Pages we look after:
As you can see, the link in the email, highlighted in blue by our Outlook email client, appears to go directly and correctly to the facebook.com domain.
But that email isn’t a plaintext email, and that link isn’t a plaintext string that directly represents a URL.
Instead, it’s an HTML email containing an HTML link where the text of the link looks like a URL, but where the actual link (known as an href, short for hypertext reference) goes off to the crook’s imposter page:
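Schematically, the link was built something like this (a reconstruction rather than the scammers’ exact markup, with the scam domain redacted as above and the elided parts left elided):

   <!-- The blue text you see is whatever sits between the tags; -->
   <!-- the place you actually go is whatever is in the href.    -->
   <a href='https://facebook.control-XXXXX.com/...'>https://www.facebook.com/...</a>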
As a result, clicking on a link that looked like a Facebook URL took us to the scammer’s bogus site instead:
Apart from the incorrect URL, which is disguised by the fact that it starts with the text facebook.control, so it might pass muster if you’re in a hurry, there aren’t any obvious spelling or grammatical errors here.
Facebook’s experience and attention to detail means that the company probably wouldn’t have left out the space before the words “If you think”, and wouldn’t have used the unusual text ex to abbreviate the word “example”.
But we’re willing to bet that some of you might not have noticed those glitches anyway, if we hadn’t mentioned them here.
If you were to scroll down (or had more space than we did for the screenshots), you might have spotted a typo further along, in the content that the crooks added to try to make the page look helpful.
Or you might not – we highlighted the spelling mistake to help you find it:
Next, the crooks asked for our password, which wouldn’t usually be part of this sort of website workflow, but asking us to authenticate isn’t totally unreasonable:
We’ve highlighted the error message “Password incorrect”, which comes up whatever you type in, followed by a repeat of the password page, which then accepts whatever you type in.
This is a common trick used these days, and we assume it’s because there’s a tired old piece of cybersecurity advice still knocking around that says, “Deliberately put in the wrong password first time, which will instantly expose scam sites because they don’t know your real password and therefore they’ll be forced to accept the fake one.”
To be clear, this has NEVER been good advice, not least when you’re in a hurry, because it’s easy to type in a “wrong” password that is needlessly similar to your real one, such as replacing pa55word! with a string such as pa55pass! instead of thinking up some unrelated stuff such as 2dqRRpe9b.
Also, as this simple trick makes clear, if your “precaution” involves watching out for apparent failure followed by apparent success, the crooks have just trivially lulled you into a false sense of security.
We also highlighted the slightly annoying consent checkbox that the crooks deliberately added, just to give the experience a veneer of official formality.
Now you’ve handed the crooks your account name and password…
…they immediately ask for the 2FA code displayed by your authenticator app, which theoretically gives the criminals anywhere between 30 seconds and a few minutes to use the one-time code in a fraudulent Facebook login attempt of their own:
Even if you don’t use an authenticator app, but prefer to receive 2FA codes via text messages, the crooks can provoke an SMS to your phone simply by starting to log in with your password and then clicking the button to send you a code.
Finally, in another common trick these days, the criminals soften the dismount, as it were, by casually redirecting you to a legitimate Facebook page at the end.
This gives the impression that the process finished without any problems to worry about:
What to do?
Don’t fall for scams like this.
Don’t use links in emails to reach official “appeal” pages on social media sites. Learn where to go yourself, and keep a local record (on paper or in your bookmarks), so that you never need to use email web links, whether they’re genuine or not.
Check email URLs carefully. A link with text that itself looks like a URL isn’t necessarily the URL that the link directs you to. To find the true destination link, hover over the link with your mouse (or touch-and-hold the link on your mobile phone).
Check website domain names carefully. Every character matters, and the business part of any server name is at the end (the right-hand side in European languages that go from left-to-right), not at the beginning. If I own the domain dodgy.example then I can put any brand name I like at the start, such as visa.dodgy.example or whitehouse.gov.dodgy.example. Those are simply subdomains of my fraudulent domain, and just as untrustworthy as any other part of dodgy.example.
If the domain name isn’t clearly visible on your mobile phone, consider waiting until you can use a regular desktop browser, which typically has a lot more screen space to reveal the true location of a URL.
Consider a password manager. Password managers associate usernames and login passwords with specific services and URLs. If you end up on an imposter site, no matter how convincing it looks, your password manager won’t be fooled because it recognises the site by its URL, not by its appearance.
Don’t be in a hurry to put in your 2FA code. Use the disruption in your workflow (e.g. the fact that you need to unlock your phone to access the code generator app) as a reason to check that URL a second time, just to be sure, to be sure.
Remember that phishing crooks move really fast these days in order to milk new domain names as quickly as they can.
Fight back against their haste by taking your time.
Remember those two handy sayings: Stop. Think. Connect.
And after you’ve stopped and thought: If in doubt, don’t give it out.