This week I’d like to explore the question of whether LLMs could be considered a new form of intelligence through a reading of Ethan Mollick’s latest book, Co-Intelligence. Mollick is an Associate Professor of Management at Wharton, specialising in entrepreneurship and innovation, and he has thought deeply about how AI can be effectively implemented in the workplace. The Economist reviewed Mollick’s Co-Intelligence alongside my book Feeding the Machine as “two sides of the debate” on AI, so I decided to see what the other side has to say.
There are two different arguments about the nature of AI running through the book. The weaker claim is that AI is a useful tool for summarising information, generating ideas and helping with simple administrative tasks. The stronger claim is that AI is a new form of intelligence, “something remarkably close to an alien co-intelligence”. While the book has some useful tips for beginners on how to get the most out of LLMs, I was disappointed that the stronger claim was not better defended. It felt like an afterthought, or a marketing gimmick for what might otherwise have been an extended blog post on ‘prompt engineering’.
When Mollick mentions AI he is almost exclusively talking about one particular type of generative AI: large language models. His “four rules for working with artificial intelligence” can be summed up as: try AI on most of your tasks to see what it can do, give it specific instructions and personas, but don’t rely on it for final products because it hallucinates. Good advice, but nothing beyond what most users have discovered on their own.
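For readers who haven’t gone beyond typing into a chat box, the “instructions and personas” rule maps directly onto how chat APIs separate a system prompt from the user’s request. Here is a minimal sketch using the OpenAI Python SDK; the persona, task and model name are my own illustrative choices, not Mollick’s:

```python
# A minimal sketch of persona-plus-instructions prompting with the
# OpenAI Python SDK. The persona, task text and model name below are
# illustrative assumptions, not examples taken from the book.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute as needed
    messages=[
        # Persona: tell the model who it should be.
        {"role": "system",
         "content": "You are an experienced startup mentor who gives "
                    "blunt, practical feedback."},
        # Specific instructions: constrain format and scope rather than
        # asking an open-ended question.
        {"role": "user",
         "content": "List three concrete risks in a subscription box "
                    "business for pet owners. One sentence each."},
    ],
)
print(response.choices[0].message.content)
```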
What drew me to the book was the stronger thesis: that we might very soon conceive of LLMs as a unique form of intelligence, similar to but distinct from our own. This idea is gestured at but never properly backed up by evidence. Here’s an example from the book of how the two versions of the argument are entwined. Mollick discusses how he introduces his business students to LLMs:
“I put on a show, demonstrating how AI can help generate ideas, write business plans, turn those business plans into poems (not that there is a lot of demand for that), and generally fill the role of company cofounder” (my italics).
That’s quite a leap from producing generic business plans that no serious entrepreneur would rely on to fulfilling all of the varied roles of a company founder. The strange thing is that Mollick provides a lot of sensible advice about the limitations of LLMs and seems well aware of how they work and what they can be used for. He also stresses that any sense of consciousness, sentience or self-awareness is an illusion. When we type questions to ChatGPT we encounter a responsive agent that writes nuanced, persuasive prose, and this convinces us that it may have its own thoughts and feelings. But at base, LLMs are just next-token prediction machines that bear little resemblance to human thinking.
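To make the next-token prediction point concrete, here is a toy sketch of my own (not from the book): a bigram model that generates text purely by sampling a statistically likely next word. LLMs do the same thing at enormous scale with learned neural weights instead of raw counts, which is why fluent output need not imply any inner life.

```python
# Toy next-token prediction: count which word follows which in a tiny
# corpus, then generate text by repeatedly sampling a likely continuation.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count follower frequencies for each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    tokens, counts = zip(*follows[word].items())
    return random.choices(tokens, weights=counts)[0]

word = "the"
output = [word]
for _ in range(8):
    word = next_token(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and the cat"
```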
Far from showing an original, creative partner in intelligence, many of the examples of how ChatGPT could be integrated into the workplace and classroom emphasise its mediocrity. Take the following learning simulation for a business school, meant to help teach skills like negotiation. ChatGPT offers:
“You are a salesperson trying to sell 100 pens to a customer. The pens are usually $1, but the customer is trying to negotiate the price down. The customer starts by offering to buy the pens for $0.50 each. How do you respond?”
I won’t be quitting my day job as a professor just yet. Beyond these pointers and practical examples, we never really arrive at a deeper examination of the nature of AI or intelligence, or of how LLMs’ capabilities compare to humans’. What I took from the book was that when you don’t know anything about a subject, or want something generic produced in a hurry, ChatGPT is a great time saver. But its use cases remain much more limited than its most vociferous advocates claim.
By the end of the book, it’s almost as if the argument for AI as a co-intelligence was never really the point. Anthropomorphising AI is just something that will happen, even though it is ultimately wrong: “Treating AI as a person, then, is more than a convenience; it seems like an inevitability, even if AI never truly reaches sentience. We seem to be willing to fool ourselves into seeing consciousness everywhere, and AI will certainly be happy to help us do so.”
Perhaps more accurately, investors in AI will be happy to play along with this false narrative, even though it is profoundly inaccurate and misleading. What I didn’t get was why the rest of us should play along with the ruse.