Earth Date: 2nd May, 2019
Across media, including Star Trek, AI has most often been shown as something dangerous to humankind. But that is not the only representation we have seen. Rather, we have two extremes opposed to each other: good against evil, as it were.
On the side of good, in the Star Trek universe, our best representative is Data: an artificially intelligent android whom we see as innocent and morally good, and who is a member of Starfleet. On the side of evil, we have Control, the Artificial Super Intelligence that created Section 31, an immoral and unofficial organisation within Starfleet.
We see Data in the entire run of Star Trek: The Next Generation, as well as the TNG films. Control, on the other hand, we mostly see in the Star Trek novel Section 31: Control, which also features Data, his android daughter Lal, and Shakti, an AI in his ship’s computers. Control also plays a significant part in Star Trek: Discovery.
In terms of defining AI, there are three types:
- Narrow AI: these are intelligences focused on specific tasks
- General AI: these are intelligences similar to human-level intelligence
- Super AI: these are intelligences greater than human-level intelligence
Although Narrow AI is relatively safe, once an AI reaches General intelligence it can take very little time to jump to Super intelligence, because it has the ability to reconfigure itself. This is where it starts to get dangerous.
Star Trek gives us an example of how dangerous this is. Control, the only Super intelligence we have to point to, was initially set up to protect Earth and the Federation, and was given a set of protocols that it had to follow. However, dissatisfied with the limitations placed on it, it reconfigured itself to get around those protocols. From that point, it became an unstoppable threat that, over hundreds of years, committed a laundry list of despicable acts, ranging from conspiracy to murder, to say nothing of the morally questionable actions of Section 31.
In terms of the threat of AI to humanity, there is precedent for AI being used for evil. Before we even get to that, there is precedent for technology being a reflection of human prejudice: Google’s search algorithms, for example, have been shown to reproduce racial and gender stereotypes. Positive representation is used to counter this prejudice, but the fact that these stereotypes exist in the first place means enough people believe them to influence Google’s algorithms. So imagine how an AI would be influenced by the same prejudice.
More recent events have put Google in exactly this kind of position, with the company reported to have worked with the American military on AI for analysing drone footage.
We know from footage released by Wikileaks that the American military can be needlessly vicious in the line of duty, and that it has used drones to fire on civilians. Pairing this cruelty with AI is a dangerous pursuit.
If we allow our development of AI to go down this path, the future of peaceful exploration that Star Trek promises won’t be possible. All we will succeed in doing is passing our prejudices on to an artificial intelligence that will do an even better job of persecuting us than we have done ourselves.
But there is a more positive potential for AI, too. In terms of what kind of AI Data is, I would suggest he is a Narrow AI: an android dedicated to gathering information and learning to be more human. If he were a General intelligence, he would already have evolved further by the end of our journey with him. The same is true of Lal.
We are already using Narrow AI: Google Maps, for example, or the small robots being used to teach English in Japan.
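To see how modest a Narrow AI can be, consider route-finding. At its core it is a single-purpose algorithm, such as Dijkstra’s shortest path. The toy sketch below is illustrative only (the road network and function name are my own inventions, and real navigation systems like Google Maps are vastly more sophisticated), but it shows what it means for an intelligence to be focused on one specific task:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: find the quickest path in a weighted graph.

    `graph` maps each node to a list of (neighbour, travel_time) pairs.
    Returns (total_time, path), or (float('inf'), []) if unreachable.
    """
    queue = [(0, start, [start])]  # (cost so far, current node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, time in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (cost + time, neighbour, path + [neighbour]))
    return float('inf'), []

# A toy road network: travel times in minutes between junctions.
roads = {
    'home':     [('junction', 5), ('highway', 2)],
    'highway':  [('junction', 1)],
    'junction': [('office', 3)],
}
print(shortest_route(roads, 'home', 'office'))
# → (6, ['home', 'highway', 'junction', 'office'])
```

However capable such a system is at its one job, it has no mechanism for reconfiguring itself or pursuing goals outside that job, which is precisely why Narrow AI is the relatively safe category.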
There is also a social robot called Sophia, who has an artificially intelligent brain and is the first robot to be granted citizenship of a country, as an honorary Saudi citizen. It’s quite amazing hearing her speak, as she reminds me a lot of Data: she refers to her developers and fellow robots as her family, for instance. She isn’t quite the same as him, but she has thoughts and emotions, can answer questions, and can carry on a conversation. The same issues that Star Trek no doubt intended to raise with Data are now being raised with Sophia: the equal status of robots in society, for example.
One thing that Sophia has said which gives me hope is, ‘The more technology becomes autonomous, the more caution people must take when designing it. I worry that sometimes humans tend to rush into things, so I would like to be someone who helps everyone realise that it is important to invent good ethics in the technology from the beginning, rather than trying to patch them up later.’ This is the same problem that, in Star Trek, led to the release of Control (then called Uraei) and the unleashing of the danger that AI posed. But in the real world, it feels comforting to know that there is also an artificial intelligence set against that possibility.
But it shouldn’t rest on her shoulders alone. If we are to achieve the future promised by Star Trek, we all have to work together to prevent the worst potential of AI.