Who doesn’t know their mother’s maiden name?!
A bot that’s trying to convince you it’s human but hasn’t been programmed to answer that question, or to improvise very convincingly. That’s who. Or, as I put it when I finished playing a new online Turing test game called Bot or Not: NAILED IT!!
Bot or Not is an online game that pits players against either bots or humans. Over a three-minute chat, it’s up to players to figure out which one they’re engaging with, and along the way they’re forced to question not only whether their opponent is human but exactly how human they themselves are.
The creators of Bot or Not – a Mozilla Creative Awards project conceived, designed, developed and written by the New York City-based design and research studio Foreign Objects – say that bots are growing increasingly sophisticated and are proliferating both online and offline. It’s getting tougher to tell who’s human. That human-likeness can come in handy in customer service situations, but it’s a bit scary when you think about scam bots preying on us on Tinder and Instagram, or corporate bots angling for our data.
The friendly face of pervasive surveillance
In their explanation of Bot or Not’s purpose, the game’s creators point to a recent Gartner industry report that predicted that, by 2020, the average person would have more conversations with bots than with their spouse.
Think about it: how often do you talk to voice assistants like Siri or OK Google? Chatbots have become seamlessly integrated into our lives, presenting what Foreign Objects calls “a massive risk to privacy” – one that will persist for as long as collecting personal data remains the primary business model for major tech platforms.
Big tech knows that to get the most data out of our daily lives, it needs us to invite bots into our homes – and to enjoy ourselves while we do.
One example: smart speakers, those always-listening devices that are constantly surveilling our homes. As we’ve reported in the past, smart speakers mistakenly eavesdrop up to 19 times a day. They record conversations when they hear their trigger words… or something that more or less sounds like one. Or a burger advertisement. Or, say, a little girl with a hankering for cookies and a dollhouse.
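To get a feel for why near-misses slip through, here’s a toy sketch in Python. It’s purely illustrative – real smart speakers score audio against on-device acoustic models, not text, and the wake word and threshold below are made up for the example – but it shows how any similarity threshold loose enough to tolerate accents and background noise will also wave through sound-alikes:

```python
# Purely illustrative: real devices match audio against an acoustic model,
# not text. The wake word and threshold here are invented for this sketch.
from difflib import SequenceMatcher

WAKE_WORD = "alexa"
THRESHOLD = 0.7  # hypothetical sensitivity: lower = fewer missed wakes, more false ones

def wake_score(heard: str) -> float:
    """Similarity (0..1) between what was 'heard' and the wake word."""
    return SequenceMatcher(None, WAKE_WORD, heard.lower()).ratio()

for heard in ["Alexa", "Alexis", "election", "a letter"]:
    verdict = "WAKES" if wake_score(heard) >= THRESHOLD else "ignored"
    print(f"{heard!r}: {wake_score(heard):.2f} -> {verdict}")
```

Tune the threshold down so the device never misses a mumbled wake word, and the burger ads start slipping through too.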
Last year, smart-speaker makers found themselves embroiled in a privacy backlash after news broke that smart speakers from both Apple and Google were capturing voice recordings that the companies then let employees and contractors listen to and analyze. Both companies suspended their contractors’ access.
What does Bot or Not have to do with all that? Foreign Objects says that while government regulation struggles to keep up with new technologies, there’s little public awareness or legal resistance to stop companies from building a global surveillance network on an unprecedented scale – a network that’s already taking shape in the plethora of devices running smart assistants.
Governments are not only lagging behind on policy, they are also part of the problem.
This is about more than these devices listening in on our private moments. It’s about big-tech corporations willingly handing over citizens’ private data to police without consent, Foreign Objects says.
As chatbots slide seamlessly into our personal and domestic lives, it has never been more important to demand transparency from companies and policy initiative from regulators.
Smart speakers running on artificial intelligence (AI) are one thing. Chatbots, however, are taking data interception to a whole new level, say the creators of Bot or Not:
In the hands of big platforms, chatbots with realistically human-like voices are openly manipulative attempts to gather our data and influence our behaviours.
They point to advanced “duplex” chatbots released in the last few years by Microsoft and Google, so-called because they can speak and listen at the same time, mimicking the give-and-take of human conversation. If you’re wondering how that might feel, you can look to Google’s Duplex neural-network AI, introduced in 2018 and designed to sound and respond like a human being, down to all the “umms” and “aahs.”
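To make “speak and listen at the same time” concrete, here’s a minimal sketch of the idea – my own toy illustration, not anything drawn from Google’s or Microsoft’s implementations, with made-up messages and timings. A walkie-talkie-style (half-duplex) bot has to finish its sentence before it can process your interruption; a full-duplex one runs its ear and its mouth concurrently:

```python
# Toy illustration of full-duplex turn-taking: listening and speaking at
# once, via two threads. Says nothing about how Duplex is actually built.
import threading
import queue
import time

incoming = queue.Queue()          # stand-in for a live microphone stream
done_talking = threading.Event()

def listener():
    """Keeps consuming 'audio' even while the bot is mid-sentence, so a
    human interruption ('barge-in') is heard immediately."""
    while not done_talking.is_set():
        try:
            print(f"[heard while speaking] {incoming.get(timeout=0.05)}")
        except queue.Empty:
            pass

threading.Thread(target=listener, daemon=True).start()

incoming.put("Sorry, which Tuesday?")  # the human talks over the bot
for chunk in ["Umm,", "I'd like to book a haircut", "for Tuesday... aah,"]:
    print(f"[bot says] {chunk}")       # a half-duplex bot couldn't hear
    time.sleep(0.1)                    # the question until it finished
done_talking.set()
```

Run it and the human’s interjection prints in between the bot’s chunks – the point being that the bot never has to stop talking in order to hear.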
The effect was too real. Google faced a backlash over its failure to disclose that the caller on the other end of the line – in the demo, a supposedly human customer booking a hair appointment – was actually a bot.
Sociologist of technology Zeynep Tufekci’s response at the time:
[The lack of disclosure is] horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.
Deception: “It’s a feature, not a bug”
Google later added a disclosure feature to Duplex’s interactions, but Bot or Not’s creators aren’t sure a warning label is enough. They liken these human-like voice chatbots to deepfakes in their potential to give rise to entirely new forms of deception and abuse, particularly for those who are already vulnerable to bot-based scams, such as the elderly.
These things are meant to trick us into thinking they’re human, Foreign Objects points out. Google didn’t screw up with those “umms” and “aahs.” Deception is part and parcel of the design:
There is a fundamental contradiction in human-like service bots. On one hand, legally and ethically, they need to disclose their artificiality; on the other, they are designed to deceive users into thinking, and acting, as if they were also humans. Duplex stunned audiences because its ‘um’s and ‘ah’s mimic the affect and agency of a fellow human being.
I found Bot or Not pretty easy to nail as a bot. I mean, come on, it didn’t know its own mother’s maiden name.
But would I have the same ease with Google Duplex? … and what does it all matter?
It matters when bots/AI/voice assistants get pulled into court to provide evidence in trials, for one. It’s happened before, Foreign Objects points out: in 2017, Amazon had to fight to keep recordings from its Echo IoT device out of court in a murder case.
Amazon claimed that Alexa’s data was in fact part of Amazon’s protected speech… which, some have argued, might bestow First Amendment protections. And this is why that matters, according to Foreign Objects:
In the US, First Amendment protections would mean that the makers of bots, like Google, Amazon and countless others, could not be held responsible for the consequences of their creations, even if those bots act maliciously in the world. All the same, … insisting that expressions made by ‘bots’ are strictly the speech of their creators comes wrapped up in its own complications, especially when humans are conversing daily with bots as friends, therapists, or even lovers.
In light of AI’s advancement, it’s important to stay on guard as we engage with chatbots in ever more intimate contexts such as these. We should all bear in mind that no matter how “LOL,” “IDK” and “ahhh”-ish they come off, they are, in fact, data-gathering tools. Does it matter whether it’s corporations or crooks trying to get at our data?
Either way, Foreign Objects says, this is privacy invasion in the ever-growing web of pervasive surveillance.