Siri, Apple’s virtual voice assistant, debuted more than a decade ago. I remember watching Apple’s commercial featuring Samuel L. Jackson, who used it to schedule tasks and help him cook. Soon enough, my friends and I were crowding around someone’s tiny iPhone 4S asking Siri inane questions.
There was no global debate. No congressional testimonies. No statements from the president. Just a bunch of amused people asking their iPhones how much wood a woodchuck would chuck if a woodchuck could chuck wood. (The answer is 42, by the way.)
Fast forward to today, and the discourse surrounding artificial intelligence, or AI, feels very different: less humorous and more dystopian. The conversation has shifted from celebrity endorsements to publicized congressional testimony from tech executives.
What changed? Well, a lot.
Obviously, the technology has improved tremendously. Ask Siri a question and she’ll pull up the relevant Wikipedia page. ChatGPT, on the other hand, will distill the information and answer your question directly. It can simulate conversations, answer hypotheticals and, most notoriously, do your homework, albeit imperfectly.
There is no doubt that natural language processing has drastically improved. And I’ll point out that neither these improvements nor the turnaround in attitudes was as sudden a lurch as it feels. Our skepticism rose in tandem with the technology.
Siri was benign. The launch of Alexa, Amazon’s voice assistant, was when people started asking questions. And ChatGPT’s release was when OpenAI CEO Sam Altman testified before the Senate Judiciary Committee and when President Joe Biden took the stance that AI could “overtake human thinking.”
This rise in skepticism is a good thing. Sure, it may feel uncomfortable, but it is our institutions at work. For all the talk of dysfunctional government, polarized politics and the imminent collapse of American society and capitalism, the congressional testimony went surprisingly well. Senators from both parties agreed the nation should proceed with caution, and so did a major tech entrepreneur.
Often, we hear that those who fail to learn from history are doomed to repeat it. But who’s to say we haven’t learned? We know that machines displaced workers in the 19th century and that the Gutenberg printing press helped spark the Protestant Reformation. We understand the underlying principle: there cannot be radical changes in technology without radical changes in society.
The tech side is already policing itself. Many prompts are off limits on ChatGPT, such as requests to incite violence. Other AI chatbots, such as Snapchat’s “My AI,” also give somewhat lame answers to controversial questions. This indicates that the private sector is interested in maintaining ethical boundaries without consumers demanding that the government twist tech’s arm, something we see too often.
I would also contend that the uncanny valley plays a disproportionate role in all this. In aesthetics, the uncanny valley describes the relationship between an object’s degree of human resemblance and our emotional response to it: affinity rises with resemblance until the likeness gets almost, but not quite, human, at which point it plummets. In simple terms, the uncanny valley explains why we think Pixar animations are cute, but lifelike humanoid robots can be “uncanny.”
As someone who studies data science and works in an industry currently implementing AI, I see other technologies improving at a lightning pace. Computer vision, another subset of AI, is one of them. Broadly speaking, computer vision is a computer’s ability to process and analyze images. It’s like ChatGPT, but for pictures instead of words.
You have undoubtedly interacted with this technology if you have ever set foot in an airport or a self-driving car. Computer vision just doesn’t draw much attention because a computer recognizing things in pictures feels like an advanced sensor, whereas a computer responding to written prompts feels like HAL 9000, the sentient computer in “2001: A Space Odyssey.” One is uncanny; the other is not. The same goes for sound recognition software: I doubt many people are scared of the Shazam app.
Perhaps a true expert could argue that one type of AI is more potent than the other. However, for us average STEM nerds (and non-STEM nerds), it seems we are concerned only with how computers speak, not with how they see and listen. This leads me to believe that our worries may be correlated more with our imaginations than with the potency of the technology itself.
Regardless, technological innovation tends to move in one direction. There’s a reason USB drives rendered file cabinets nearly obsolete: once a technology has been introduced, it is extremely hard to remove. As more than one person pointed out at the congressional hearing, we would be fools not to advance this technology out of fear. It will be developed one way or another. Better that it be developed transparently, within a liberal democracy, than to leave our adversaries to take up the mantle.
The truth is this technology will be developed by the elites, for the simple reason that building AI models is not a mom-and-pop operation. It will be financed by big investors, scrutinized by politicians and developed by tech overlords. It’s easy to be skeptical of these classes; I am as well. But a broken clock is right twice a day, and this time they got it right.