The algorithms around us | MIT Technology Review

Lofty predictions aside, the book is a useful guide to navigating AI. That includes understanding its downsides. Anyone who’s played around with ChatGPT or its ilk, for instance, knows that these models frequently make stuff up. And if their accuracy improves in the future, Mollick warns, that shouldn’t make us less wary. As AI becomes more capable, he explains, we are more likely to trust it and therefore less likely to catch its mistakes.

The risk with AI is not only that we might get things wrong; we could lose our ability to think critically and originally.

Ethan Mollick, professor, the Wharton School of the University of Pennsylvania

In a study of management consultants, Mollick and his colleagues found that when participants had access to AI, they often just pasted the tasks they were given into the model and copied its answers. This strategy usually worked in their favor, giving them an edge over consultants who didn’t use AI, but it backfired when the researchers threw in a trick question with misleading data. In another study, job recruiters who used high-quality AI became “lazy, careless, and less skilled in their own judgement” than recruiters who used low-quality or no AI, causing them to overlook good candidates. “When AI is very good, humans have no reason to work hard and pay attention,” Mollick laments.

He has a name for the allure of the AI shortcut: The Button. “When faced with the tyranny of the blank page, people are going to push The Button,” he writes. The risk is not only that we might get things wrong, he says; we could lose our ability to think critically and originally. By outsourcing our reasoning and creativity to AI, we adopt its perspective and style instead of developing our own. We also face a “crisis of meaning,” Mollick points out. When we use The Button to write an apology or a recommendation letter, for example, these gestures—which are valuable because of the time and care we put into them—become empty.

Mollick is optimistic that we can avoid many of AI’s pitfalls by being deliberate about how we work with it. AI often surprises us by excelling at things we think it shouldn’t be able to do, like telling stories or mimicking empathy, and failing miserably at things we think it should, like basic math. Because there is no instruction manual for AI, Mollick advises trying it out for everything. Only by constantly testing it can we learn its abilities and limits, which continue to evolve.

And if we don’t want to become mindless Button-pushers, Mollick argues, we should think of AI as an eccentric teammate rather than an all-knowing servant. As the humans on the team, we’re obliged to check its lies and biases, weigh the morality of its decisions, and consider which tasks are worth giving it and which we want to keep for ourselves.


Beyond its practical uses, AI evokes fear and fascination because it challenges our beliefs about who we are. “I’m interested in AI for what it reveals about humans,” writes Hannah Silva in My Child, the Algorithm, a thought-provoking mix of memoir and fiction cowritten with an early precursor of ChatGPT. Silva is a poet and performer who writes plays for BBC Radio. While navigating life as a queer single parent in London, she begins conversing with the algorithm, feeding it questions and excerpts of her own writing and receiving long, rambling passages in return. In the book, she intersperses its voice with her own, like pieces of found poems.

My Child, the Algorithm: An Alternatively Intelligent Book of Love

Hannah Silva

FOOTNOTE PRESS, 2023

Silva’s algorithm is less refined than today’s models, and so its language is stranger and more prone to nonsense and repetition. But its eccentricities can also make it sound profound. “Love is the expansion of vapor into a shell,” it declares. Even its glitches can be funny or insightful. “I’m thinking about sex, I’m thinking about sex, I’m thinking about sex,” it repeats over and over, reflecting Silva’s own obsession. “These repetitions happen when the algorithm stumbles and fails,” she observes. “Yet it’s the repetitions that make the algorithm seem human, and that elicit the most human responses in me.”

In many ways, the algorithm is like the toddler she’s raising. “The algorithm and the child learn from the language they are fed,” Silva writes. They both are trained to predict patterns. “E-I-E-I-…,” she prompts the toddler. “O!” he replies. They both interrupt her writing and rarely do what she wants. They both delight her with their imaginativeness, giving her fresh ideas to steal. “What’s in the box?” the toddler asks her friend on one occasion. “Nothing,” the friend replies. “It’s empty.” The toddler drops the box, letting it crash on the floor. “It’s not empty!” he exclaims. “There’s a noise in it!”
