There’s an exchange early in the classic '80s movie TRON. Some scientists are talking shop:
ALAN: Ever since he got that Master Control Program set up, system’s got more bugs than a bait store.
GIBBS: Well, you have to expect some static. Computers are just machines after all, they can’t think…
ALAN: They’ll start to soon enough.
GIBBS: (wryly) Yes, won’t that be grand – the computers will start thinking, and people will stop.
Gibbs has a point. The modern vision of a utopian future is one where people are relieved of work and free to pursue leisure, or to exercise their creativity in art, writing, and poetry. Thinking computers are here now, in the form of “large language models” (LLMs) like ChatGPT. Setting aside the irony that creative works are the first and most visible applications of LLM technology: is that imagined future actually a good one?
Mom is always right
When I was a kid, I remember a day going to yard sales with my mom in the family minivan. It was early summer, a hot day. The windows were down, and I complained that if the van had good air conditioning, we should be using it. What was the point in getting all hot? “To get used to the warm weather,” came the answer. What an injustice! We were sweating back there! Later in life, I took a short trip to Arizona in August. Everyone scurried from building to building. Where the sun was doubled by reflection off glass skyscrapers, the temperature jump was alarming; it was genuinely unsafe to spend long stretches outside unprepared. But when I returned to Massachusetts, 85 or 90 degrees Fahrenheit felt like nothing for the rest of the summer.
All that to say, the work that LLM technology offers to relieve isn’t just about achieving a result. The effort maintains and builds our abilities. Work pushes us to connect to each other for help, or to persevere in doing something difficult. Outsourcing that work eventually means losing the ability to do it yourself.
Attention must be paid
Simply put, an LLM is a document completion engine: you give it text, and it extends it. The result doesn’t have to be true; it just has to be convincing. No amount of pre-training or guardrails will make it truthful. It does often say true things, but that’s not the point; it’s more of a happy accident.
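To make “completion engine” concrete, here is a minimal sketch using the Hugging Face transformers text-generation pipeline, with GPT-2 standing in for a much larger model. The model choice and prompt are illustrative assumptions, not a description of how any particular chatbot is actually served.

```python
# Minimal sketch of an LLM as a document completion engine.
# Assumes the `transformers` package is installed; GPT-2 stands in for a
# much larger model purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The benefits of letting a machine do your thinking are"
result = generator(prompt, max_new_tokens=30, do_sample=True)

# The continuation is chosen because it is statistically plausible given the
# training data: convincing-sounding text, with no check that it is true.
print(result[0]["generated_text"])
```

The point of the sketch is only that the model extends whatever text you hand it; nothing in that loop verifies the output against reality.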
Because they are built from essentially the whole public internet, LLMs also have a strong connection to The Algorithm. The algorithms that run social media feeds and online advertising are designed to capture human attention, a precious thing, and LLMs are oriented the same way. The foundational LLM paper is even called “Attention Is All You Need”. A prescient title. LLM intelligence is not like ours. It can’t know what it’s like to be a human.
If this were a person, someone who wanted your attention and had this kind of indifference to truth, they would be considered a con man or a bullshitter [1]. Untrustworthy.
Don’t create the torment nexus
LLMs clearly manifest a type of intelligence. Sure, it’s “just” some linear algebra and a ton of data, but the intelligence is real. It is also an intelligence without empathy. Not being human, an LLM can’t have empathy, and intelligence without empathy can be dangerous [2] [3].
Science fiction is littered with cautionary tales about inhuman intelligence. For that matter, so is myth: genies give people whatever they want, but because people have self-destructive desires (like the desire to avoid work), it goes wrong. In TRON, ENCOM has the MCP (Master Control Program), an overgrown chess program given access to whatever information it can consume, until its intelligence and capabilities are seemingly endless. The company’s leadership comes to rely on the program so completely that it becomes their entire interface for understanding and operating the business. There is also the irony that ENCOM’s success was built on the misuse of intellectual property, much as the success of LLM companies has been [4] [5].
I don’t think I am wise enough to safely use a genie in a bottle. And I don’t want to outsource my creative efforts to an addictive, bullshitting alien intellect, even if it might save time and effort in the short term.
[2] AI chatbot pushed teen to kill himself, lawsuit alleges | AP News
[3] Belgian man dies by suicide following exchanges with chatbot | Brussels Times
[4] AI, Copyright, and the Law: The Ongoing Battle Over Intellectual Property Rights | IP & Technology Law Society
[5] Generative AI Has an Intellectual Property Problem | Harvard Business Review