On AI and the Disappointment of LLMs
If you have spent any time on the internet in the last few months, you've heard about ChatGPT. ChatGPT is the most user-friendly and accessible version of the currently in-vogue form of artificial intelligence (AI): large language models (LLMs).
You may have heard people describe these models as beginning to show "sparks of real general intelligence," or predict that knowledge workers will all be replaced, or you may have heard this version of AI disparaged as "complicated next-word predictors."
I'm here to tell you it doesn't matter which side you're on, because either way, this isn't the future we imagined, but it's the one we deserve.
We thought that our AI would be omniscient, fact-filled, and analytical. If you have played with previous generations of AI experiments, you know they had the facts; they just stumbled at understanding and communicating with us. LLMs are the opposite.
LLMs are consummate bullshitters. They will take any position with equal aplomb. They will invent citations to papers never written, by authors who don't exist. They will claim to understand what you are saying and reply with confidence. Accuracy, understanding, a model of the world? Not important.
Instead of the calculating, inhuman precision of machines with human faces, our AI is a reflection of ourselves—jumbled facts strung together with slick used-car-salesperson patter. Anything to get you to the next prompt. We have created the liberal-arts business bro of AI—incredible breadth, and truly incomprehensible depth, or lack thereof.