Book Review: The Coming Wave by Mustafa Suleyman

Mustafa Suleyman’s The Coming Wave is a book in two parts: the first details how technological advancements have propelled humanity forward in waves, natural forces that reshape everything in their path (think of massive floods and tsunamis). He argues that these metaphorical waves of innovation are both unstoppable and transformative. The second part serves as a warning about the potential dangers of artificial intelligence and other rapidly developing technologies, questioning whether humanity can harness these creations or whether they will spiral beyond our control.

Suleyman co-founded DeepMind, an AI research company acquired by Google in 2014. DeepMind was known for its work in artificial intelligence, particularly for developing systems like AlphaGo, which defeated human world champions at the game of Go (once thought an impossible task for AI). Suleyman illustrates how these innovations have reshaped industries, improved lives, and spread rapidly throughout society:

“General-purpose technologies become waves when they diffuse widely. Without an epic and near-uncontrolled global diffusion, it’s not a wave; it’s a historical curiosity. Once diffusion starts, however, the process echoes throughout history, from agriculture’s spread throughout the Eurasian landmass to the slow scattering of water mills out from the Roman Empire across Europe.”

He gives a number of interesting examples to support this, such as:

“Or take electricity. The first electricity power stations debuted in London and New York in 1882, Milan and St. Petersburg in 1883, and Berlin in 1884. Their rollout gathered pace from there. In 1900, 2 percent of fossil fuel production was devoted to producing electricity, by 1950 it was above 10 percent, and in 2000 it reached more than 30 percent. In 1900 global electricity generation stood at 8 terawatt-hours; fifty years later it was at 600, powering a transformed economy.”

However, the book shifts dramatically in tone as it progresses, focusing on the challenges of controlling and regulating these emerging technologies. Suleyman presents a case for why containment is necessary, and why he believes it is even possible, to ensure these technologies positively serve humanity rather than disrupt it. He acknowledges, though, that this will be difficult, especially in today’s highly charged political environment:

“Going into the coming wave, many nations are beset by a slew of major challenges battering their effectiveness, making them weaker, more divided, and more prone to slow and faulty decision-making. The coming wave will land in a combustible, incompetent, overwrought environment. This makes the challenge of containment—of controlling and directing technologies so they are of net benefit to humanity—even more daunting.”

Well, that’s fun! But I think he’s mostly right.

In my opinion, though, trying to contain these technologies is no longer possible. Pandora’s box has already been opened, and given the pace at which these advancements are occurring, it’s likely too late for any meaningful containment or regulation. It’s effectively an arms race as various AI laboratories build upon each other’s work and compete to outdo one another. An earlier passage in the book says as much:

“Of course, behind technological breakthroughs are people. They labor at improving technology in workshops, labs, and garages, motivated by money, fame, and often knowledge itself. Technologists, innovators, and entrepreneurs get better by doing and, crucially, by copying. From your enemy’s superior plow to the latest cell phones, copying is a critical driver of diffusion. Mimicry spurs competition, and technologies improve further. Economies of scale kick in and reduce costs. Civilization’s appetite for useful and cheaper technologies is boundless. This will not change.”

Looking at the reviews of this book on Goodreads, I noticed a lot of 1-star reviews. They seem to mostly be from those who dislike, fear, or otherwise loathe this technology. While I can understand their concerns, I think The Coming Wave offers a balanced take from someone on the inside, someone who is working (and has worked) on creating these AI models. Some of the arguments made in these reviews call to mind Neo-Luddism, which Suleyman has an answer for:

“The Luddites were no more successful at stopping new industrial technologies than horse owners and carriage makers were at preventing cars. Where there is demand, technology always breaks out, finds traction, builds users.”

Overall, I thought The Coming Wave was a good read, balancing optimism with caution. Suleyman’s first-hand expertise in developing state-of-the-art AI models lends credibility to his arguments and makes this an interesting read for anyone who wants to understand the potential societal impacts of AI tools.

TokenFlow: Visualize LLM token streaming speeds

Have you ever wondered how fast your favorite LLM really is compared to other SoTA models? I recently saw a Reddit post where someone got a distilled version of DeepSeek R1 running on a Raspberry Pi! It could generate output at a whopping 1.97 tokens per second. That sounds slow. Is that even usable? I don’t know!

Meanwhile, Mistral announced that their Le Chat platform can output tokens at 1,100 per second! That sounds pretty fast? How fast? I don’t know!

So, that’s why I put together TokenFlow. It’s a (very!) simple webpage that lets you see the speed of different LLMs in action. You can select from a few preset models and services or enter a custom speed, and boom! You watch it spit out tokens in real time, showing you exactly what a given inference speed feels like in practice.
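
Under the hood the idea is dead simple. Here’s a minimal sketch of the core loop in JavaScript (an illustration of the concept, not TokenFlow’s actual source): emit placeholder “tokens” on a timer so you can feel what a given tokens-per-second rate looks like.

  // Minimal sketch (not TokenFlow's actual source): emit placeholder
  // "tokens" on a timer at a chosen tokens-per-second rate.
  function streamTokens(text, tokensPerSecond, onToken) {
    const tokens = text.split(" "); // crude stand-in for real tokenization
    let i = 0;
    const timer = setInterval(() => {
      if (i >= tokens.length) { clearInterval(timer); return; }
      onToken(tokens[i++] + " ");
    }, 1000 / tokensPerSecond);
  }

  // Compare the Raspberry Pi's 1.97 tok/s to Le Chat's claimed 1,100 tok/s:
  streamTokens("The quick brown fox jumps over the lazy dog", 1.97,
    (t) => document.body.append(t));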

Check it out: https://dave.ly/tokenflow/

The code is also available on GitHub.

It’s AI all the way down

Back in November, I went with some friends to play paintball for the first time. We had booked a three-hour session that would feature multiple matches. None of us had ever played before, and we were all pretty nervous about getting hit.

Lo and behold, within the first 30 seconds of the game, I took a paintball to the knee (cue the “I used to be an adventurer like you…” meme from Skyrim). Somehow, I twisted my leg as I ragdolled into the ground.

Of course, you can’t just give up after 30 seconds, right? So, on I played. The result: I tore my ACL (the doc said he had no idea how this could have happened), suffered a bone contusion, and will likely need reconstructive surgery at some point. Fun!

Anyway, the point of all of this — for funsies, I tried to create a song about the situation using Suno’s generative music service (see previously). I used ChatGPT to come up with some initial lyrics and then did some work to refine them.

Then! I decided to use OpenAI’s generative video tool, Sora, to attempt to create a bunch of clips. I strung everything together in iMovie and the result is this rowdy music video: “This is What I Get”.

Apps I like: LocalSend

In line with yesterday’s post about how I use AI, here is a post on an app I find useful.

LocalSend has become my go-to app for sharing files between devices. If you’ve ever been frustrated by the limitations of AirDrop or struggled to move files between devices without using the cloud, then this app is a game-changer. It’s like AirDrop, but for everything under the sun.

LocalSend is an open-source, cross-platform file-sharing app that lets you send files and text between devices on the same local network. No internet connection or third-party server required.

It works across all major platforms: Windows, macOS, Linux, Android, and iOS. I also find it to be more reliable than AirDrop, which can be extremely finicky. LocalSend just works. It’s fast, finds devices quickly, and transfers files without random drop-offs.

Another big plus: privacy. Since LocalSend operates over a local network, your files never leave your devices. There’s also no file size limit, making it perfect for transferring large files without needing a USB drive or cloud service.

I use LocalSend for everything from moving files between my laptop and phone to transferring books to my e-reader. If you’re looking for a fast, reliable, and private way to share files, LocalSend is worth checking out. It’s replaced AirDrop for me in many situations!

Using AI: Extracting reading list from GoodReads

I want to start creating some posts about how I, as a software engineer, personally use generative AI tools. I think they are a huge boon for increasing productivity, exploring new ideas, and even learning new things.

Reading Reddit, Hacker News, and various other forums, there’s a lot of anxiety among software engineers about how AI is going to steal our jobs. It’s not without merit.

A recent blog post by Dustin Ewers adds some needed sanity to the discussion. In a post titled “Ignore the Grifters – AI Isn’t Going to Kill the Software Industry”, he argues that we should ignore the grifters:

“I feel like half of my social media feed is composed of AI grifters saying software developers are not going to make it. Combine that sentiment with some economic headwinds and it’s easy to feel like we’re all screwed. I think that’s bullshit. The best days of our industry lie ahead.

It’s highly unlikely that software developers are going away any time soon. The job is definitely going to change, but I think there are going to be even more opportunities for software developers to make a comfortable living making cool stuff.”

I am inclined to agree. Hey, I will drink this Kool-Aid!

Well, let’s get to the real reason I’m making this post. I posted about my 2024 reading list and shared all the books I had read during the year. Trying to compile that list and add links by hand would be a huge pain. There has to be an easier way. (Cue superhero music)

There is!

If you go to my GoodReads “read” list, it looks like this. And it keeps going. It’s a lot of data.

If we open up the browser console, we can see that it’s just a good old-fashioned HTML table.

So, using an AI tool like ChatGPT or Claude, how do you get this data in a manageable way? One area where I’ve personally seen people struggle is writing a prompt in a way that actually helps them. You need to provide context. For example:

  1. Describe the problem: “I want to output a list of books from an HTML table into a JSON object using a JavaScript function that I can paste into the browser console.”
  2. Provide some example data: “Here is the table’s head with names for each column: [paste block of code]. Oh! Here is also an example row of data: [paste block of code]”
  3. Provide an example of the output: “Can you create a JSON object in the following shape?”

Using Claude as an example, here is what that looks like and you can also see the generated output:
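
For illustration, the generated function looked something along these lines. This is a reconstruction rather than Claude’s verbatim output, and the selectors are assumptions about GoodReads’s table markup that may need adjusting for your page:

  // Reconstruction of the kind of function Claude generates here (not its
  // verbatim output). The selectors are assumptions about GoodReads's table
  // markup and may need adjusting.
  function extractBooks() {
    const rows = document.querySelectorAll("#books tbody tr");
    return Array.from(rows).map((row) => ({
      title: row.querySelector("td.field.title a")?.textContent.trim(),
      author: row.querySelector("td.field.author a")?.textContent.trim(),
      link: row.querySelector("td.field.title a")?.href,
      dateRead: row.querySelector("td.field.date_read span")?.textContent.trim(),
    }));
  }

  console.log(JSON.stringify(extractBooks(), null, 2));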

Moment of truth — does it work? Let’s paste it into the browser console and see the result:

Yes! Victory! One problem, though. I did not read 60 books in 2024. Oh, no. We are pulling every book visible on the page. This isn’t a big deal, though. We can fix it by simply asking a follow-up question: “Can we modify the function so that it only returns books where date read is in the year 2024?”

Claude modifies the function to add a filter for 2024. If we paste that into the browser console, we now get the correct number of books: 30!
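
The tweak amounts to something like this (again a sketch, and it assumes the dateRead field parses as a date string like “Dec 31, 2024”):

  // Keep only books whose "date read" falls in 2024. Assumes dateRead is a
  // parseable date string like "Dec 31, 2024".
  const books2024 = extractBooks().filter(
    (b) => b.dateRead && new Date(b.dateRead).getFullYear() === 2024
  );
  console.log(books2024.length); // 30, in my case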

There is still another thing to do. I want to make this into a nice, unordered list that I can just add into my blog post. Again, we follow the steps outlined above:

  1. Can you create an unordered HTML list that shows links to each book? Please add a link around the title, but keep the author name unlinked.
  2. Here is my JSON object: [paste block of code]
  3. I essentially want a list that looks like this: <li><a href="[book link]">Book title</a> by Author</li>
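
What Claude hands back is along these lines (a reconstruction, not its exact output):

  // Turn the JSON array into the <li> markup described in step 3.
  const html = "<ul>\n" + books2024.map(
    (b) => `  <li><a href="${b.link}">${b.title}</a> by ${b.author}</li>`
  ).join("\n") + "\n</ul>";
  console.log(html);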

Hot diggity! It works. It generates a block of code that I can just paste into my blog’s text editor. Pretty neat. It took a total of 5 minutes. (Hey, writing this post took a lot longer than that.)

Anyway, this has been a production of “How I use AI”. Stay tuned for more exciting updates, coming to a blog near you.

Book Review: The Alignment Problem by Brian Christian

The Alignment Problem (released in 2020 but still highly relevant today, especially in the age of generative AI hype) is a fascinating exploration of one of the most interesting issues in artificial intelligence: how to ensure AI systems safely align with human values and intentions. The book is based on four years of research and over 100 interviews with experts. Despite the technical depth, I feel that this book is written to be accessible to both newcomers and seasoned AI enthusiasts alike. A word of warning, though: this book has A LOT of info.

Before we get too deep into this review, let’s talk about safety and what it means in the context of AI. When we talk about AI safety, we’re referring to systems that can reliably achieve their goals without causing unintended harm. This includes:

  • It must be predictable, behaving as expected even in novel situations.
  • It must be fair, avoiding the amplification of existing societal biases.
  • It must be transparent, allowing users and developers to understand its decision-making process.
  • It must be resilient against failures and misuse.

Creating safe AI tools is both a technical and a psychological challenge: it requires understanding human cognition, ethics, and social systems, as these elements become encoded in AI behavior.

The book is divided into three main sections: Prophecy, Agency, and Normativity, each tackling different areas of aligning artificial intelligence with human values.

Prophecy explores the historical and technical roots of AI and highlights examples of unintended outcomes, such as the biased COMPAS recidivism prediction tool. COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a risk assessment algorithm used in the criminal justice system to predict the likelihood of a defendant reoffending. However, investigations revealed that the tool disproportionately flagged Black defendants as higher risk compared to white defendants, raising critical questions about fairness and bias in that AI system.

Agency delves into reinforcement learning and its parallels with reward-seeking behavior in humans, showcasing innovations like AlphaGo and AlphaZero. Christian’s explanation of reinforcement learning, and its connection to dopamine studies, is particularly insightful. He dives into psychological experiments from the 1950s that revealed the brain’s pleasure centers and their connection to dopamine. Rats in these studies would press a lever to stimulate these areas thousands of times per hour, foregoing food and rest. Later research established that dopamine serves as the brain’s “reward scalar,” which helps influence decision-making and learning. This biological mechanism has parallels in reinforcement learning, where AI agents maximize reward signals to learn optimal behaviors.
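
To make that parallel concrete, here’s a toy sketch of my own (not from the book): an epsilon-greedy agent that learns which of two levers pays out more, guided by nothing but a scalar reward, much like the rats chasing dopamine.

  // Toy illustration (mine, not the book's): an epsilon-greedy agent learns
  // which of two levers pays out more, guided only by a scalar reward.
  const payout = [0.3, 0.7];  // true (hidden) reward probabilities
  const estimates = [0, 0];   // the agent's learned value estimates
  const counts = [0, 0];
  for (let step = 0; step < 10000; step++) {
    const arm = Math.random() < 0.1
      ? Math.floor(Math.random() * 2)           // explore: pull a random lever
      : (estimates[0] > estimates[1] ? 0 : 1);  // exploit: pull the best lever
    const reward = Math.random() < payout[arm] ? 1 : 0;
    counts[arm] += 1;
    estimates[arm] += (reward - estimates[arm]) / counts[arm]; // running mean
  }
  console.log(estimates); // converges toward [0.3, 0.7]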

Normativity examines philosophical debates and techniques like inverse reinforcement learning, which enables AI to infer human objectives by observing behavior. Christian connects these discussions to ethical challenges, such as defining fairness mathematically and balancing accuracy with equity in predictive systems. He also highlights key societal case studies, including biases in word embeddings and historical medical treatment patterns that skew AI decisions.

Christian interweaves these sections with interviews, anecdotes, and historical case studies that breathe life into the technical and ethical complexities of AI alignment.

He also delivers numerous warnings, such as:

“As we’re on the cusp of using machine learning for rendering basically all kinds of consequential decisions about human beings in domains such as education, employment, advertising, health care and policing, it is important to understand why machine learning is not, by default, fair or just in any meaningful way.”

This observation underscores the important implications of deploying machine learning systems in critical areas of human life. When algorithms are used to make decisions about education, employment, or policing, the stakes are insanely high. These systems, often trained on historical data, can perpetuate or amplify societal biases, leading to unfair outcomes. This calls for deliberate oversight and careful design to ensure these technologies promote equity and justice rather than exacerbate existing inequalities. (Boy, oh boy — fat chance of that in light of current events in January 2025)

Christian also highlights some of the strengths of machine learning. These systems can detect patterns in data that are invisible to human eyes, uncovering insights that were previously thought impossible. For example:

“They (doctors) were in for an enormous shock. The network could almost perfectly tell a patient’s age and sex from nothing but an image of their retina. The doctors on the team didn’t believe the results were genuine. ‘You show that to someone,’ says Poplin, ‘and they say to you, “You must have a bug in your model. ‘Cause there’s no way you can predict that with such high accuracy.” . . . As we dug more and more into it, we discovered that this wasn’t a bug in the model. It was actually a real prediction.”

Examples like this show the real-world potential of machine learning to revolutionize fields such as healthcare by identifying patterns that humans might overlook. However, these benefits are accompanied by significant challenges, such as the “black box” nature of AI decision-making, where it remains difficult to determine what features a model is actually using.

Christian shows how understanding these technical challenges, alongside ethical frameworks, can lead to more robust and equitable AI systems. These considerations emphasize the multifaceted nature of AI safety, which requires combining insights from cognitive science, social systems, and technical innovation to address both immediate and long-term risks.

While the book is dense (very dense!) and information-rich, this strength can also be a drawback. Some sections felt overly detailed, and the pacing, especially in the latter half, left me feeling fatigued.

Despite this, The Alignment Problem remains a compelling and optimistic exploration of how researchers are tackling AI safety challenges. I think this book is an insightful read for anyone interested in AI and will leave you thinking about our future AI overlords long after you’ve turned the last page.

My 2024 Reading List

Here’s another “year-in-review” post (I’m done, I swear). Over the course of 2024, I read 30 books. My favorite books this year were Bury My Heart at Wounded Knee and The Cuckoo’s Egg (I wrote about visiting the author at his Oakland house). My least favorite was easily Palo Alto (it was one of the few reviews I wrote this past year).

EDIT: Fixed hyperlinks. GoodReads changed how their reading challenge page is displayed and I did not update my parsing tool to account for this.

Morning coffee prevents death, say researchers

Add this to my coffee confusion post from last year. A new study published in the European Heart Journal concludes that greater coffee intake (in the morning) was “significantly associated with a lower risk of all-cause mortality.”

Hey, that’s pretty cool!

From the journal article:

In their study published in this issue of the European Heart Journal, Wang et al.8 analysed the time of the day when coffee is consumed in 40 725 adults from the NHANES and of 1463 adults from the Women’s and Men’s Lifestyle Validation Study. They noticed two distinct patterns of coffee drinking, i.e. the morning-type pattern, present in around a third of participants, and a less common all-day-type pattern present in 14% of the participants. During a median follow-up of almost a decade, and after adjustment for caffeinated and decaffeinated coffee intake, the amounts of cups per day, sleep hours, and other confounders, the morning-type, rather than the all-day-type pattern, was significantly associated with lower risks of all-cause mortality with a hazard ratio of 0.84 and of cardiovascular mortality of even 0.69 as compared with non-coffee drinkers.

This is fantastic news — wait.

I am one of those “all-day” coffee drinkers.

My top music of 2024

Last.fm has been diligently cataloging my music listening habits for nearly 20 (!!) years. Now that we’ve said goodbye to 2024, it’s time to look back at what I’ve been digging into. Compared to previous years, there are some interesting surprises. And stuff that is just absolutely the same as always.

  1. Dispatch
  2. Social Distortion
  3. Hot Water Music
  4. The Interrupters
  5. Red Hot Chili Peppers
  6. Angie Mattson
  7. Aesop Rock
  8. Guts
  9. Natural Incense
  10. The Juliana Theory

Dispatch and Hot Water Music have consistently been in my top 3 (except for last year, when neither even made my top 10; weird). It’s no surprise that both of them rank up there as my favorite bands. I saw HWM earlier this year when they made their way back to the Bay Area.

Thanks to some iPhone photo memories, I was reminded of Angie Mattson early in the year. This is an artist I loved about 20 years ago who then dropped off the face of the Earth. Her music is no longer available on Spotify or Apple Music. I found a few videos that are still up on YouTube (who knows for how long), but other than the albums in my local library that Last.fm has logged, she apparently doesn’t exist anymore.

Social Distortion was coming back to town and I was so excited to see them. And then a few days before the show, I tore my ACL in a paintballing incident with friends (go figure, it was my first time ever playing paintball), and I could barely walk.

Fun times all around, really. Here’s hoping 2025 is even better — even though this year starts off with the letters W(ednesday) T(hursday) F(riday).

Previous years in music: