Millions upon millions of us routinely, sometimes compulsively, interact with three disembodied “people.” We know them as Alexa, Siri and Google. Most frequently, we ask them about quotidian things, such as the weather forecast or how to make a casserole. We also order goods and services — sometimes indispensable ones — through them. (Or we can program them to order what we need automatically.)
We beseech our devices to provide turn-by-turn directions to our favorite restaurant, even if we know the way. They can even control the lights, thermostats and security in our homes. It’s a perfect setup for sociopaths, who can get what they want without having to talk to an actual person.
These nearly ubiquitous gadgets, which rule the so-called “internet of things,” are — at once — a dynamic triumvirate and the most obsequious servants. They do our bidding, but many of us are more dependent on them than we realize (or perhaps want to admit).
For its part, Google is by far the most popular search engine on the planet. (That’s a weird description given that it’s not actually “on” the planet, but you get the point.) Google accounts for 85%-90% of internet searches worldwide. Bing is a distant second, handling roughly 8% of all searches. I’ve always joked that Bing merely pretends to be a search engine. (If you’ve used it, you know why.) That could change dramatically in the not-too-distant future.
Microsoft, which created Bing, recently made a multibillion-dollar investment in OpenAI, the company behind ChatGPT. ChatGPT, short for Chat Generative Pre-trained Transformer, is an artificial intelligence (AI) platform. It is such a powerful tool for generating and synthesizing information that it has made Google’s parent company, Alphabet, sit up and take notice. That was nearly unthinkable until very recently.
Of course, the rise of AI has uses — and implications — that go far beyond internet searches. For example, AI is the technology behind self-driving vehicles, including 18-wheelers. (Look up the phrase “trolley problem” to consider just one potential challenge with such vehicles.)
As someone who has been an adjunct professor at a university, I am concerned about a challenge that may not seem as urgent as Tesla’s self-driving failures but is nonetheless important. That challenge is ChatGPT’s ability to generate short- and long-form written documents. Anyone can ask ChatGPT to spit out, say, a homework assignment or a college admissions essay — and the platform will immediately comply.
As has been widely reported, the documents that ChatGPT produces are usually very well done. In fact, it’s often impossible to tell whether a given piece was written by a human being or generated by an AI algorithm. In short, ChatGPT makes cheating infinitely easy — and often difficult to detect. (Incidentally, some brilliant techies are working on “countermeasures” designed to detect whether a document was generated by AI.)
One of the many philosophical questions that such technology raises is whether submitting something from ChatGPT is, technically speaking, plagiarism. On a related note, one can debate whether such documents are the result of a truly “creative process,” at least as we have historically understood that phrase. In addition to ethical questions, there are ethereal and aesthetic ones. For example, could AI ever produce works as beautiful and poignant as those written by Toni Morrison or James Baldwin? And who gets to judge?
Margaret Wolfe Hungerford famously observed that “beauty is in the eye of the beholder.” Should it matter whether the eye is judging a work that was generated by a machine?
AI, perhaps more than any invention in history, has transformed science fiction into science fact. Technological innovation has virtually always created a tension between “Can we?” and “Should we?” Perhaps most importantly, there is the question of whether we’re close to reaching “the singularity,” the hypothetical point at which AI surpasses human intelligence and, in the popular imagination, becomes “self-aware.” (Think of “The Terminator.”)
In the end, all of these deep ethical, moral, and philosophical issues raise the primordial question of what it means to be human. Incidentally, AI also tells fibs. Yes, AI will occasionally fabricate an answer to a question that is posed to it. (Perhaps this is the most important indicator that we have reached the singularity.)
Celebrated sci-fi writer Philip K. Dick, who died in 1982, presciently wrote: “There will come a time when it isn’t ‘They’re spying on me through my phone’ anymore. Eventually, it will be ‘My phone is spying on me.’”
Let’s hope that we have not quite gotten to that point.
Larry Smith is a community leader. The views expressed are his own. Contact him at larry@leaf-llc.com.