Category Archives: News

‘Dirty little secret’ extortion email threatens to give your family coronavirus

Cybercriminals really do know no limits.

Remember sextortion, where they say they’ll spam your friends and family with x-rated photos of you that they got via malware?

At least, they will unless you pay them $2000.

Well, the Sophos Security team just sent us a phish they received showing that the stakes just got a lot higher – and the threats way more offensive.

Now, the price is $4000, and if you don’t pay…

…then they’re threatening to infect your family with coronavirus.

As crazy as that sounds, the crooks are making that threat because they want you to believe that they really do have deep, dark insights into everything you do, because they’re deep inside your computer and your digital life, and because they can track you and your family everywhere.

The weird look of the text below is because the crooks have used lookalike Greek characters in place of English letters such as A, N, O, T and V to disguise the words from simple text matching:

Subject: [YOUR NAME] : [YOUR PASSWORD]

I know every dιrτy liττle secreτ abοuτ your lιfe. To ρrove my poιnτ, tell me, does [REDACTED] ring αny bell το yοu? It was οηe οf yοur pαsswοrds.

Whαt dο Ι κnow αbοuτ you?

Tο sταrt with, I κηοw all of yοur passwords. I αm awαre of your whereαbοuτs, what yοu eaτ, wιth whοm you tαlk, every liττle τhing yοu do in α day.

What αm Ι cαpable οf dοιηg?

Ιf I wαηt, I cοuld eνen infect yοur whοle fαmily with τhe CοronαVirus, reνeαl all of yοur secrets. There αre cοunτless τhiηgs I cαn dο.

Whατ should yοu do?

Yοu need tο ραy me $4000. You’ll mαke τhe ρayment viα Βiτcoiη τo the belοw-mentιοηed αddress. Ιf you dοn’t knοw how tο do τhis, seαrch ‘how tο buy bιτcoin’ in Goοgle.
Βitcoin Address:
[REDACTED]
(Ιt is cAsE sensiτiνe, sο cοpy αηd ραste it)

You hαve 24 hours τo maκe the ραyment. Ι hαve a unique pιxel withιn τhis email messαge, and rιght now, I κηοw thατ yοu hαve reαd thιs email.

If I dο ηoτ geτ the paymenτ:

Ι wιll iηfect eνery member οf your family with τhe CοronαVιrus. No matter how smart yοu αre, belieνe me, ιf Ι waητ to αffect, Ι caη. Ι will also gο αheαd aηd reνeαl yοur secreτs. Ι will comρletely ruiη yοur lιfe.

Nonetheless, ιf I do geτ ραιd, Ι wιll erαse every lιτtle informατιοη I have αbοut yοu immediατely. You will never hear from me αgαιn. It ιs a nοn-ηegotιαble οffer, sο dοn’t wαsτe my τιme αnd yours by reρlyiηg to thιs emαil.

Nikita

As we’ve seen so often in sextortion emails, the “proof” that they really can see deep into your online life is a password that very likely is one you used to have…

…but they’ve extracted it from publicly available data leaked in an old data breach, so even though it might have been a secret once, it hasn’t been for years.

What to do?

  • Don’t send any money. It’s all a pack of lies.
  • Don’t be scared. In scams like these, the crooks don’t have any data on you, let alone details about all your family members and where they live.
  • Don’t think of replying. It’s tempting to contact the crooks, just in case, but they have nothing to sell; you have nothing to buy; and by contacting them you are just giving them another chance to scare you into making a mistake.
  • Let people know about this scam. Make sure others don’t fall for this horrible scam either. Let’s face it, we already have enough to worry about at the moment.


NIST shared dataset of tattoos that’s been used to identify prisoners

In 2017, the Electronic Frontier Foundation (EFF) filed a Freedom of Information Act (FOIA) lawsuit looking to force the FBI and the National Institute of Standards and Technology (NIST) to cough up info about Tatt-C (also known as the Tattoo Recognition Challenge): a tattoo recognition program that involves creating an “open tattoo database” to use in training software to automatically recognize tattoos.

For years, the EFF has been saying that developing algorithms that the FBI and law enforcement can use to identify similar tattoos from images – much as automated facial recognition systems do – raises significant First Amendment questions. The thinking goes like this: you can strip names and other personally identifiable information (PII) out of the tattoo images, but the images themselves often contain PII, such as when they depict loved ones’ faces, names, birthdates or anniversary dates.

As part of the Tatt-C challenge, participating institutions received a CD-ROM full of images with which to test their tattoo recognition software. That dataset contains 15,000 images, most of them collected from prisoners, who had no say in whether their biometrics were collected and who were unaware of what the images would be used for.

Since 2017, when the EFF used a FOIA lawsuit to get at the names of the participating institutions, it’s been trying to find out whether those entities realize that there was no ethical review of the image collection procedure – a review that is generally required when conducting research with human subjects.

On Tuesday, the EFF presented a scorecard with those institutions’ responses.

The results: nearly all of the entities that responded confirmed that they’d deleted the data. However, 15 institutions either didn’t bother to respond to a letter the EFF sent in January, or said “You can count us as a non-response to this inquiry”.

In that letter, the EFF requested that the entities destroy the dataset; conduct an internal review of all research generated using the Tatt-C dataset; and review their policies for training biometric recognition algorithms using images or other biometric data collected from individuals who neither consented to being photographed, nor to the images being used to train algorithms.

While nearly all of the respondents confirmed deletion, at least one university was still conducting research with the dataset five years later: the University of Campinas (UNICAMP) School of Electrical and Computer Engineering in Brazil. The university sent a letter saying that its researchers are only required to seek ethics review for human data collected within Brazil. Thus, its researcher would keep working on the tattoo images through the end of the year and then delete them.

UNICAMP also refused to acknowledge that the images contained personal information, the EFF says. The group’s take on the matter:

Tattoos are also incredibly personal and often contain specific information and identifiers that could be used to track down a person even if their face and identity have been obscured. For example, even though the names of the inmates were removed from the Tatt-C metadata, the tattoos themselves sometimes contained personal information, such as life-like depictions of loved ones, names, and birth dates that all remain viewable to researchers.

UNICAMP also said that its researcher – Prof. Léo Pini Magalhães – is adding to the dataset by grabbing images of tattoos from the web: a practice that, the EFF noted, has increasingly come under fire from Congress in light of the Clearview AI face recognition scandal.

Clearview has been sued for scraping 3 billion faceprints so it can sell its facial recognition technology to law enforcement and other clients; been told to knock it off by Facebook, Google and YouTube; and has lost its entire database of (mostly law enforcement agency) clients to hackers.

It’s not that the FBI and NIST didn’t at least try to strip PII from the images’ metadata. It’s that they failed to identify PII in the images themselves. In one example, the EFF says that by using details visible in an image – photo-realistic depictions of an inmate’s relatives, along with names and dates of birth and death – it was able to identify the individual within minutes via a Google search.

After the EFF raised concerns about the PII in the images, NIST retroactively stripped images containing PII from its dataset. It was too late to strip the PII from the dataset copies it had distributed to third parties, however.

NIST’s and the FBI’s evaluation of the dataset also failed to consider that the individuals associated with the tattoos could be reidentified when their inked biometrics were combined with other datasets, such as those compiled from Flickr or other social media sites.

The EFF has found a number of cases where the recipients of the dataset have, in fact, identified individuals via their tattoos:

Documents produced in response to our FOIA suit include a presentation showing that researchers at the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation had the ability to match tattoos from websites to a national criminal database. Researchers at Nanyang Technological University used the Flickr API to download thousands of images, which it then used in research that also involved the NIST dataset.

The EFF maintains that tattoos are unique: unlike other biometrics, such as faceprints or fingerprints, they’re an expression of identity. The choice to get a tattoo is a form of speech, it says, whether that means promoting a favorite sports team, celebrating the birth of a child, or honoring one’s heritage with a traditional design.

That makes this a free-speech issue, the group says:

It’s rare for a tattoo not to be an expression of the wearer’s culture and beliefs. In recognizing the First Amendment right to get a tattoo, and limitations on the government from preventing citizens from expressing this right, the Ninth Circuit Court of Appeals has said, ‘We have little difficulty recognizing that a tattoo is a form of pure expression entitled to full constitutional protection.’

In fact, NIST itself has justified the usefulness of tattoo recognition in identifying individuals, saying that the images “suggest affiliation to gangs, subcultures, religious or ritualistic beliefs, or political ideology.”



Cryptojacking is almost conquered – crushed along with Coinhive

Cryptojacking may not be entirely dead following the shutdown of a notorious cryptomining service, but it isn’t very healthy, according to a paper released this week.

Cryptomining websites embed JavaScript code that forces the user’s browser to begin mining for cryptocurrency. The digital asset of choice is normally Monero, which is often used in cybercrime because of its enhanced anonymity features.

Some cryptomining sites sought the visitor’s permission to co-opt their browser, often in exchange for blocking ads. Others did it surreptitiously (which is what we call cryptojacking). Either way, one name kept cropping up in these cases: Coinhive.

Coinhive provided Monero cryptomining scripts for use on websites, retaining 30% of the funds for itself. It showed up on large numbers of cryptomining and cryptojacking sites. Researchers tracked them with a tool called CMTracker.
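
For context, the classic embed – reconstructed here as a rough sketch from the now-defunct service’s public documentation, with a placeholder site key – was only a few lines of script, which is why the same URL kept turning up on so many sites:

    <script src="https://coinhive.com/lib/coinhive.min.js"></script>
    <script>
      // 'SITE_KEY' is a placeholder; mined Monero was credited to the key's owner,
      // with Coinhive keeping its 30% cut. The service and this script URL are long dead.
      var miner = new CoinHive.Anonymous('SITE_KEY');
      miner.start();
    </script>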

Monero underwent a hard fork and its price plummeted. This contributed to Coinhive shuttering its service in March 2019, claiming that falling prices made it economically unviable.

Given Coinhive’s popularity, how prevalent is cryptojacking now? That’s what researchers at the University of Cincinnati and Lakehead University in Ontario, Canada explored in their paper, called Is Cryptojacking Dead after Coinhive Shutdown?

The researchers checked 2,770 websites that CMTracker had previously identified as cryptomining sites to see if they were still running the scripts. They found that 99% of sites had ceased activities, but that around 1% (24 sites) were still operating with working scripts that mined cryptocurrency. Manual checks on a subset of the sites found that a significant proportion (11.6%) were still running Coinhive scripts that were trying to connect to the operation’s dead servers.

So, where do these new scripts come from? The researchers found them linking back to eight distinct domains with names like hashing.win and webminepool.com. Searching on the eight domains surfaced 632 websites using their scripts. By far the most popular was minero.cc.
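
To make that kind of survey concrete, here’s a simplified sketch – not the researchers’ CMTracker tooling, and with an example URL and Node.js 18+ (for its built-in fetch()) assumed – of how a previously flagged page might be re-checked for references to known miner-hosting domains such as those named above:

    // Simplified sketch: fetch a previously flagged page and look for references
    // to miner-hosting domains mentioned in the article.
    const minerDomains = ['coinhive.com', 'hashing.win', 'webminepool.com', 'minero.cc'];

    async function checkSite(url) {
      try {
        const html = await (await fetch(url, { redirect: 'follow' })).text();
        return minerDomains.filter((domain) => html.includes(domain));
      } catch (err) {
        return []; // unreachable sites count as no longer mining
      }
    }

    checkSite('https://example.com').then((hits) =>
      console.log(hits.length ? 'miner references: ' + hits.join(', ') : 'no miner references found')
    );

A real survey would also need to fetch each page’s external scripts and render its JavaScript, since mining code is often loaded indirectly rather than referenced in the raw HTML.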

Browser-based cryptominers often seek out certain online properties like movie streaming sites to help ensure that victims stay connected, the paper said. However, they can use tricks like hidden pop-under windows to maintain a connection even after the user closes a browser tab, and technologies like WebSockets, WebWorkers and WebAssembly to make connections more robust and take direct advantage of client hardware.
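
To illustrate just the Web Worker part of that pattern, the sketch below – browser JavaScript, with a deliberately meaningless busy-loop standing in for real hash computation – spawns a background worker that keeps the CPU busy and reports back to the page; an actual cryptojacker would run an optimised miner, often compiled to WebAssembly, and submit results to a pool, typically over a WebSocket:

    // Illustration only: the "work" is a placeholder loop, not a real miner.
    const workerCode = `
      let counter = 0;
      setInterval(() => {
        for (let i = 0; i < 1e7; i++) { counter = (counter + i) % 1000000007; }
        postMessage(counter); // a real miner would submit shares to a pool here
      }, 1000);
    `;
    const blobUrl = URL.createObjectURL(new Blob([workerCode], { type: 'text/javascript' }));
    const worker = new Worker(blobUrl);
    worker.onmessage = (event) => console.log('background result:', event.data);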

The researchers said:

Cryptojacking did not end after Coinhive shut down. It is still alive but not as appealing as it was before. It became less attractive not only because Coinhive discontinued their service, but also because it became a less lucrative source of income for website owners. For most of the sites, ads are still more profitable than mining.

Will browser-based cryptojacking stay suppressed? A lot depends on its profitability. Should Monero or some other cryptojacking-friendly currency grow sufficiently in value, there will doubtless be another rush to capitalise on it.

This study didn’t look at server-side cryptojacking, which has been a scourge for companies like Tesla: cryptojacking hackers compromised its cloud-based servers in early 2018, and something similar happened to the LA Times. The advantage for attackers in those cases is that the servers keep mining around the clock, whereas a home user may shut down their laptop or desktop at the end of the day.



Delayed Adobe patches fix long list of critical flaws

Notice anything missing from last week’s Microsoft Patch Tuesday?

Obscured by a long list of Microsoft patches and some fuss about a missing SMB fix, the answer is Adobe, which normally times its update cycle to coincide with the OS giant’s monthly schedule.

It’s mostly a practical convenience – admins and end-users get all the important client patches at once, which includes Adobe’s ubiquitous Acrobat and Reader software.

And yet March’s roster was Adobe-less. This week the company made amends, issuing fixes for an unusually high 41 CVE-listed vulnerabilities, 21 of which are rated critical.

It’s not clear what caused the delay, although it might simply be the sheer number of flaws and the need to finalise patches before making them public.

The two patching hotspots are the 22 CVEs in Photoshop and 13 in Acrobat and Reader.

Of these, 16 of the flaws uncovered in Photoshop CC for Windows and macOS are rated critical, compared with a more modest nine in Acrobat and Reader.

That said, Reader is ubiquitous on Windows and Macs, which is why admins will probably zero in on those patches as the top priority.

The Acrobat/Reader criticals include five use-after-free CVEs, a buffer overflow, memory corruption, a stack-based buffer overflow, and an out-of-bounds write.

Interestingly, these cluster heavily around only two categories in the MITRE Corporation’s recently overhauled Common Weakness Enumeration (CWE) Top 25 most dangerous software flaws, specifically CWE-119 and CWE-416.

The first of those generic programming weaknesses, CWE-119 (Improper Restriction of Operations within the Bounds of a Memory Buffer), is by some distance the most common class of software weakness as measured by the number of CVEs associated with it and their severity.

A similar concentration of CWE-119 weaknesses applies to many of the critical flaws in Photoshop. The fix for Acrobat/Reader DC is to update to version 2020.006.20042 (APSB20-13); for Photoshop, it’s version 20.0.9 for Photoshop CC 2019 and version 21.1.1 for Photoshop 2020.

Most of the Acrobat/Reader flaws allow arbitrary code execution, which would typically be exploited by persuading users to open a malicious PDF, so these should be patched as soon as possible.

At least there is some good news – as far as anyone knows, none of the vulnerabilities are being exploited in the wild.



Facebook accidentally blocks genuine COVID-19 news

Fake news, bogus miracle cures: Facebook has been dealing with a lot, and COVID-19 isn’t making it any easier.

Like many other companies, Facebook is trying to keep its employees safe by allowing them to opt for working remotely, so as to avoid infection.

But take humans out of the content moderation loop, and automated systems are left running the show. Facebook is denying that a recent content moderation glitch had anything to do with workforce issues, but it is also saying that automated systems were to blame for being overzealous in stamping out misinformation.

On Tuesday, Guy Rosen, Facebook’s VP of Integrity, confirmed user complaints that valid posts about the pandemic (among other things) had been blocked by mistake by automated systems.

On Wednesday, a Facebook spokesperson confirmed that all affected posts have now been restored. While users may still see notifications about content having been removed when they log in, they should also see that posts that adhere to community standards are back on the platform, the spokesperson said.

Facebook says it routinely uses automated systems to help enforce its policies against spam. The spokesperson didn’t say what, exactly, caused the automated systems to go haywire, nor how Facebook fixed the problem.

They did deny that the issue was related to any changes in Facebook’s content moderator workforce, however.

Regardless of whether the blame should lie with humans or scripts, The Register reports that it took just one day for COVID-19 content moderation to flub it. On Monday, Facebook had put out an industry statement saying that it was joining Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube to scrub misinformation contained in posts about COVID-19. (Speaking of which, just for the record, health authorities say that neither drinking bleach nor gargling with salt water will cure COVID-19).

We are working closely together on COVID-19 response efforts. We’re helping millions of people stay connected while also jointly combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world. We invite other companies to join us as we work to keep our communities healthy and safe.

Within one day, its automated systems were, in fact, squashing authoritative updates. From what the Register can discern, the systems-run-amok situation was first spotted by Mike Godwin, a US-based lawyer and activist who coined Godwin’s Law: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.”

On Tuesday, Godwin said that he’d tried to post a non-junky, highly cited story about a Seattle whiz-kid who built a site to keep the world updated on the pandemic as it spreads – updated minute by minute, in fact.

When he tried to share the story on Facebook, however, he got face-palmed: the post was blocked.

Other users reported similar problems. One of them carries quite a bit of Facebook cred: Alex Stamos, formerly Facebook’s chief security officer and now an infowar researcher at Stanford University, also weighed in.

In a post about keeping its workers and its platform safe, Facebook said that it has asked any of its workers who can work from home to do so. However, that’s not an option for all of the company’s tasks, it specified:

For both our full-time employees and contract workforce there is some work that cannot be done from home due to safety, privacy and legal reasons.

According to Stamos, content moderation is one of the tasks that can’t be done at home due to Facebook’s privacy commitments. So which is it: were content moderators sent home as Stamos suggested, leaving the machines in charge? How does that jibe with Facebook’s statement that staffing had nothing to do with the glitch?

Either way, this crisis is pointing to some kinks in the human/script content moderation process that need to be worked out. Facebook workers have a lot on their plate when it comes to keeping users connected with the family, friends and colleagues they can no longer see face to face, and when it comes to keeping us all properly informed – as opposed to drinking bleach or wasting our time on other snake-oil posts.

The last thing we need is to be kept from reading about things that whiz-kids are cooking up. Let’s hope that Facebook gets this figured out.

