If you’re casually interested in AI, I think Ethan Mollick’s “Co-Intelligence: Living and Working with AI” is worth your time. It’s not a technical book, and it should be an easy read for someone not deeply involved in this world. It offers a general introduction to using Large Language Models (LLMs) and to what it means to live and work alongside these new tools.

“Co-Intelligence” unpacks the arrival and impact of LLMs, including tools like ChatGPT, Claude, and Google’s Gemini models. Mollick, a professor of management at Wharton, approaches AI not as a computer scientist but as someone focused on its practical applications and societal implications. In his own classroom, he has made AI use mandatory, designing assignments that require students to engage with AI for tasks ranging from critiquing AI-generated essays to tackling ambitious projects that might otherwise seem impossible (like non-coders developing working app prototypes or creating websites with original AI-generated content). He guides the reader through understanding AI as a new form of “co-intelligence” that can be harnessed to improve our own productivity and knowledge.

One concept I found interesting is what Mollick calls the “jagged frontier” of AI: the unpredictable, uneven shape of its abilities. It might perform complex tasks with ease, like drafting a sophisticated marketing plan, and then struggle with something that seems simple to us. He gives an example of an AI easily writing code for a webpage but then giving a clearly wrong answer to a simple tic-tac-toe problem. This is why we can’t blindly trust AI, and why understanding its specific strengths and weaknesses through experimentation is key.

Mollick also delves into AI’s creative ability. He discusses how AI can excel in creative tasks, sometimes outperforming humans on subjective tests. This leads to interesting discussions about the future of creative work and education. The “Homework Apocalypse” he describes, where AI can effortlessly complete traditional school assignments, is a challenge educators and parents are currently facing. Mollick suggests this doesn’t mean the end of learning, but rather a shift in how and what we learn, emphasizing the need for human expertise to guide and evaluate AI.

The sheer volume of AI-generated content being posted on the internet is also becoming a problem, and something we need to figure out how to navigate.

Even if AI doesn’t advance further, some of its implications are already inevitable. The first set of certain changes from AI is going to be about how we understand, and misunderstand, the world. It is already impossible to tell AI-generated images from real ones, and that is simply using the tools available to anyone today.

[…]

Our already fragile consensus about what facts are real is likely to fall apart, quickly.

Well, that’s just downright cheery! If anything, it highlights the importance of developing our ability to think critically and analytically in an AI-influenced information age.

Mollick lays out ways we can work better with AI and leverage its strengths to help us, calling them the “four rules of co-intelligence”: always give AI tools a seat at the table to participate in tasks, maintain a human in the loop throughout the process to validate and verify AI work, treat AI as a specific kind of collaborator by telling it what persona to adopt, and remember that current AI is likely the “worst” version we’ll ever use, given how quickly it is improving.

The bit on assigning personas was interesting. In my own experience, I’ve seen the benefits of giving AI a persona through system prompts (there’s a small sketch of what this looks like after the quote below). There’s also this fun example from the book:

To make the most of this relationship, you must establish a clear and specific AI persona, defining who the AI is and what problems it should tackle. Remember that LLMs work by predicting the next word, or part of a word, that would come after your prompt.

[…]

Telling it to act as a teacher of MBA students will result in a different output than if you ask it to act as a circus clown. This isn’t magical—you can’t say Act as Bill Gates and get better business advice—but it can help make the tone and direction appropriate for your purpose.
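Since the book keeps this at the conceptual level, here’s roughly what assigning a persona looks like in practice. This is just a minimal sketch assuming the OpenAI Python SDK; the model name and prompts are illustrative, and other providers expose an equivalent system/persona field.

```python
# Minimal sketch: assigning a persona via a system prompt.
# Assumes the OpenAI Python SDK; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The system message defines who the AI is and what problems it should tackle.
        {
            "role": "system",
            "content": "You are a teacher of MBA students. Be rigorous and use business examples.",
        },
        {"role": "user", "content": "Explain price elasticity of demand."},
    ],
)
print(response.choices[0].message.content)
```

Swap the system message for “You are a circus clown” and you’ll get a very different answer to the same question, which is exactly Mollick’s point about tone and direction.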

The idea behind these rules is that they can (theoretically) make working with AI feel less like a technical challenge and more like a collaborative effort.

Mollick also examines some of the philosophical questions AI raises, such as a “crisis of meaning” in creative work of all kinds. One specific example:

Take, for example, the letter of recommendation. Professors are asked to write letters for students all the time, and a good letter takes a long time to write. You have to understand the student and the reason for the letter, decide how to phrase the letter to align with the job requirements and the student’s strengths, and more. The fact that it is time-consuming is somewhat the point. That a professor takes the time to write a good letter is a sign that they support the student’s application. We are setting our time on fire to signal to others that this letter is worth reading.

Or we can push The Button.

The Button, of course, is AI.

Then The Button starts to tempt everyone. Work that was boring to do but meaningful when completed by humans (like performance reviews) becomes easy to outsource—and the apparent quality actually increases. We start to create documents mostly with AI that get sent to AI-powered inboxes, where the recipients respond primarily with AI. Even worse, we still create the reports by hand but realize that no human is actually reading them.

Side note: this exact scenario is something I’ve recently joked about with a manager at work. For our yearly performance reviews, we each have to write a self-assessment. Everyone now feeds a list of bullet points into their favorite LLM to turn them into polished prose. The manager then takes this overly verbose text and feeds it into an LLM to boil it back down.
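Just for fun, here’s what that loop looks like when you spell it out. This is a tongue-in-cheek sketch assuming the OpenAI Python SDK; the model name and prompts are hypothetical.

```python
# Tongue-in-cheek sketch of the performance-review loop described above.
# Assumes the OpenAI Python SDK; model name and prompts are hypothetical.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

bullets = "- shipped the reporting dashboard\n- mentored two new hires"

# Step 1 (employee): inflate bullet points into a verbose self-assessment.
verbose = ask(f"Expand these accomplishments into a polished self-assessment:\n{bullets}")

# Step 2 (manager): compress the verbose text right back into bullet points.
summary = ask(f"Summarize this self-assessment as a few brief bullet points:\n{verbose}")

print(summary)  # roughly where we started, with extra steps
```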

On top of all this, Mollick also points out the need to stay skeptical of AI-generated output, citing a famous 2023 case in which an attorney used ChatGPT to prepare a legal brief and was caught when defense lawyers could not find any record of six of the cases cited in the filing.

There’s an interesting website I recently heard about that tracks fake citations used in court filings; at the time of writing, 121 instances have been identified!

All in all, it’s a clear reminder of AI’s capacity for hallucination and the critical need for human oversight. The book frames AI not as a replacement, but as a powerful, though sometimes flawed, partner that can augment our abilities.

Overall, “Co-Intelligence” offers a decent overview for anyone curious about using current AI tools and thinking about how they will integrate into our lives. Readers already deeply familiar with LLMs may find the exploration surface-level, but it still provides useful insights into the shifts AI is bringing to work and creativity. For someone looking for a general, non-technical introduction to the topic, it’s a solid read.