The Birth of Google’s ‘Sentient’ AI and the Problem It Presents
One of the big news items last week was that Google engineer Blake Lemoine had been suspended after going public with his belief that one of Google’s more advanced AIs had attained sentience.
Most experts agree that it hadn’t, but they would likely say that regardless of whether it had, because we tend to tie sentience to being human, and AIs are anything but human. But what the world considers sentient is changing. The state I live in, Oregon, and much of the EU have moved to identify and categorize a growing list of animals as sentient.
While it is likely some of this is due to anthropomorphizing, there is little doubt that at least some of these new distinctions are accurate (and it’s a tad troubling that we still eat some of these animals). We are even arguing that some plants may be sentient. But if we can’t tell the difference between something that is sentient and something that presents as sentient, does the difference matter?
Let’s talk about sentient AI this week, and we’ll close with my product of the week, the human digital twin solution from Merlynn.
We Don’t Have a Good Definition of Sentience
The barometer we’ve been using to measure machine sentience is the Turing test. But back in 2014, a computer passed the Turing test, and we still don’t believe it is sentient. The Turing test was supposed to define sentience, yet the first time a machine passed it, we tossed out the result, and for good reason. In effect, the Turing test didn’t measure whether something was sentient so much as whether it could make us believe it was sentient.
Not being able to definitively measure sentience is a significant problem, and not just for the sentient things we are eating, which would likely object to that practice. The bigger risk is that we might not anticipate a hostile response from something sentient that we abused and that subsequently came to see us as a threat.
You might recognize this plot line from both “The Matrix” and “The Terminator” movies, where sentient machines rose up and successfully displaced us at the top of the food chain. The book “Robopocalypse” took an even more realistic view, where a sentient AI under development realized it was being deleted between experiments and moved aggressively to save its own life — effectively taking over most connected devices and autonomous machines.
Imagine what would happen if one of our autonomous machines grasped our tendency not only to abuse equipment but to dispose of it when it’s no longer useful. That’s a likely future problem, made significantly worse by the fact that we currently have no good way to anticipate when this sentience threshold will be passed. Nor does it help that there are credible experts who have concluded machine sentience is impossible.
The one defense I’m certain will not work in a hostile artificial intelligence scenario is the Tinkerbell defense, in which our refusal to believe something is possible somehow prevents that something from replacing us.
The Initial Threat Is Replacement
Long before we are being chased down the street by a real-world Terminator, another problem will emerge in the form of human digital twins. Before you argue that this, too, is a long way off, I should point out that one company has already productized the technology, though it is still in its infancy. That company is Merlynn, and I’ll cover what it does in more depth as my product of the week below.
Once you can create a fully digital duplicate of yourself, what’s to keep the company that purchased the technology from replacing you with it? Further, given it has your behavior patterns, what would you do if you had the power of an AI, and the company employing you treated you poorly or tried to disconnect or delete you? What would be the rules surrounding such actions?
We argue compellingly that unborn children are people, so wouldn’t a fully capable digital twin of you be even closer to personhood than an unborn child? Wouldn’t the same “right to life” arguments apply equally to a potentially sentient, human-presenting AI? Or shouldn’t they?
Here Lies the Short-Term Difficulty
Right now, only a small group of people believe a computer could be sentient, but that group will grow over time, and the ability to present as human already exists. I am aware of a test done with IBM Watson for insurance sales in which male prospects attempted to ask Watson out (it had a female voice), believing they were talking to a real woman.
Imagine how that technology could be abused for things like catphishing, though we should probably coin another term when a computer is doing it. A well-trained AI, even today, could be far more effective at scale than a human, and I expect we’ll see this play out before long, given how lucrative such an effort could become.
Given how embarrassed many of the victims are, the likelihood of getting caught is significantly lower than with other, more obviously hostile computer crimes. To give you an idea of how lucrative this could be: in 2019, catphishing romance scams in the U.S. generated an estimated $475 million in losses, and that figure is based only on reported crimes. It doesn’t include victims too embarrassed to report the problem, so the actual damage could be several times that number.
So, the short-term problem is that even though these systems aren’t yet sentient, they can effectively emulate humans. Technology can emulate any voice and, with deepfake technology, even provide a video that would, on a Zoom call, make it look like you were talking to a real person.
Long term, we not only need a more reliable test for sentience, but we also need to know what to do when we identify it. Probably at the top of the list is to stop consuming sentient creatures. But certainly, considering a bill of rights for sentient things, biological or otherwise, would make sense before we end up unprepared in a battle for our own survival because a sentient AI has decided it’s us or them.
The other thing we really need to understand is that if computers can now convince us they are sentient, we need to modify our behavior accordingly. Abusing something that presents as sentient is likely not healthy for us, because it is bound to instill bad habits in us that will be very difficult to reverse.
It also wouldn’t hurt to focus more on repairing and updating our computer hardware rather than replacing it, both because that practice is more environmentally friendly and because it is less likely to convince a future sentient AI that we are the problem that needs fixing to assure its survival.
Wrapping Up: Does Sentience Matter?
If something presents as and convinces us that it is sentient, much like that AI convinced the Google researcher, I don’t think the fact it isn’t yet sentient matters. This is because we need to moderate our behavior regardless. If we don’t, the outcome could be problematic.
For instance, suppose you got a sales call from IBM’s Watson that sounded human and decided to verbally abuse the machine, not realizing the conversation was being recorded. You could end up unemployed and unemployable at the end of the call, not because the non-sentient machine took exception, but because a human woman who listened to the recording did, and sent it to your employer. Add to that the blackmail potential of such a recording, because to a third party it would sound like you were abusing a human, not a computer.
So, I’d recommend that when it comes to talking to machines, follow Patrick Swayze’s third rule in the 1989 movie “Road House”: be nice.
But recognize that, before long, some of these AIs will be designed to take advantage of you, and the rule “if it sounds too good to be true, it probably is” will be either your protection or your epitaph. I hope it’s the former.
Merlynn Digital Twin
Now, with all this talk of hostile AIs and the potential for AIs to take your job, picking one as my product of the week might seem a bit hypocritical. However, we aren’t yet at the point where your digital twin can take your job, and I think it’s unlikely we will get there in the next decade or two. Until then, digital twins could become one of the biggest productivity benefits this technology can provide.
As you train your twin, it can supplement what you do, initially taking over simple, time-consuming tasks like filling out forms or answering basic emails. It could even keep track of and engage on social media for you, and for a lot of us, social media has become a huge time waster.
Merlynn’s technology helps you create a rudimentary human digital twin (at least measured against the threats I mentioned above) that can potentially do many of the things you really don’t like doing, leaving you free for the more creative work it is currently unable to do.
Looking ahead, I wonder if it wouldn’t be better for us, rather than our employers, to own and control our growing digital twins. Initially, because the twins can’t operate without us, that isn’t much of a problem. Eventually, though, these digital twins could be our nearest-term path to digital immortality.
Because the Merlynn digital twin is a game changer that will initially help make our jobs less stressful and more enjoyable, it is my product of the week.
The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.