Talking to AI has been around for a long time. At least via text.
Some early AI research at MIT in the 1960s and 70s involved the concept of a blocks world. It was basically a small room containing blocks, kind of like old-timey children’s toy blocks.
Presumably this was influenced by or cross-pollinated with the AI Lab directors’ ideas. Those directors would be Marvin Minsky and Seymour Papert.
Papert had worked with the famous child-mental-development researcher Jean Piaget, so it’s not a big surprise they’d come up with a simplified world of childlike blocks to test a childlike artificial intelligence—although the experiments were clearly never very intelligent at all, nowhere near a human child. That said, there were many good ideas generated in those days, some of which are still not fully explored.
Papert split off to work on advancing the learning of real human children, helping create what became the Lego Mindstorms robotics kits. Meanwhile Minsky kept on with AI theory and referred to the blocks world concept and mental development a lot in his 1980s book Society of Mind.
Blocks worlds could be virtual, with 3D renderings on a computer monitor, or physical, with an actual robot manipulating real blocks.
An AI program called SHRDLU used a virtual blocks world. What was SHRDLU? From Terry Winograd’s home page:
SHRDLU is a program for understanding natural language, written by Terry Winograd at the M.I.T. Artificial Intelligence Laboratory in 1968-70. SHRDLU carried on a simple dialog (via teletype) with a user, about a small world of objects (the BLOCKS world) shown on an early display screen (DEC-340 attached to a PDP-6 computer).
Sounds impressive. And may have influenced computer games. But did it really have any understanding?
Via Jonathan Blow:
Except as far as I ever heard, nobody was ever able to run SHRDLU in a state where it would work as well as implied in the paper, and neither was anything after ever able to do that, and these facts raise a lot of questions.
Via Nintil:
SHRDLU was either very finicky or it just didn’t work.
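To get a feel for what a blocks-world dialog looks like, and how easily one can be faked, here is a minimal sketch in Python. To be clear, it is entirely hypothetical and has nothing to do with SHRDLU’s actual code or internal models; it just shows that a few canned patterns over a tiny world state can produce a convincing-looking exchange:

```python
import re

# A toy "blocks world" dialog loop. Entirely hypothetical: this is not
# SHRDLU's code or architecture, just a sketch of how far a handful of
# hard-coded patterns can carry a scripted demo.
world = {
    "red block": {"on": "table"},
    "green block": {"on": "red block"},
    "blue pyramid": {"on": "table"},
}
holding = None  # what the virtual "hand" is currently holding


def reply(sentence: str) -> str:
    global holding
    s = sentence.lower().strip(" ?.!")

    # "pick up the green block"
    m = re.match(r"pick up the (.+)", s)
    if m and m.group(1) in world:
        obj = m.group(1)
        if any(info["on"] == obj for info in world.values()):
            return f"I can't, something is on top of the {obj}."
        world[obj]["on"] = "hand"
        holding = obj
        return "OK."

    # "what is on the table"
    m = re.match(r"what is on the (.+)", s)
    if m:
        items = [name for name, info in world.items() if info["on"] == m.group(1)]
        return (", ".join(items) + ".") if items else "Nothing."

    # "put it on the table" -- "it" only ever means the held object
    m = re.match(r"put it on the (.+)", s)
    if m and holding:
        world[holding]["on"] = m.group(1)
        holding = None
        return "OK."

    return "I don't understand."  # everything off-script falls over


if __name__ == "__main__":
    for line in ["What is on the table?",
                 "Pick up the red block",
                 "Pick up the green block",
                 "Put it on the table",
                 "Why did you do that?"]:
        print(">", line)
        print(reply(line))
```

A demo built this way handles its rehearsed sentences perfectly and collapses on anything else, which is exactly the worry the quotes above point at.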
But was it always acknowledged to be just a fragile demo? Later, Winograd made it sound like it didn’t really work so much as it was jury-rigged to handle just the specific dialogue(s) used for the demos, which raises questions about how far he actually got in coding SHRDLU’s internal models and other logic:
Key quote from that:
Instead of “perish or publish” it’s “demo or die.”
What is a Potemkin Village?
Potemkin village, in its original meaning, any of a number of fake villages designed to impress the Russian empress Catherine the Great.
…
Despite the dubious nature of the origin story, the phrase “Potemkin village” has endured and is used to describe a situation in which an undesirable reality is hidden behind an impressive facade designed to deceive observers into thinking the reality is better than it actually is.
https://www.britannica.com/topic/Potemkin-village
The tragedy of demo culture, and of AI specifically: do the demos, get degrees, get jobs to pay the bills, leave the code to gather dust and largely be forgotten; contemporary commercial AI efforts can’t match what people envisioned; AI winters; and so on…until now…
But actually lots of AI research in the past several decades has resulted in working solutions that ended up being used in all kinds of industries and everyday scenarios. It’s just that once something works, they stop calling it AI; it’s just normal computer science.
Up until recently.
And then some absolute son of a bitch created ChatGPT, and now look at us. Look at us, resplendent in our pauper’s robes, stitched from corpulent greed and breathless credulity, spending half of the planet’s engineering efforts to add chatbot support to every application under the sun when half of the industry hasn’t worked out how to test database backups regularly. This is why I have to visit untold violence upon the next moron to propose that AI is the future of the business – not because this is impossible in principle, but because they are now indistinguishable from a hundred million willful fucking idiots.
https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/
In the past decade, marketing and big money have latched onto the term “AI” in ways that are perhaps unprecedented in AI history. And this time the stuff that works is keeping the AI label. But we still have the problem of hype and demos (sometimes even really pathetic generative AI demos described as the future of everything) promising us things that clearly aren’t going to happen with that technology. And even if they did happen, would they be accepted?
The townsfolk are slowly grabbing the pitchforks and torches again…