Category: ai

Using AI: Extracting reading list from GoodReads

I want to start writing some posts about how I, as a software engineer, personally use generative AI tools. I think they are a huge boon for increasing productivity, exploring new ideas, and even learning new things.

Reading Reddit, Hacker News, and various other forums, there’s a lot of anxiety among software engineers about how AI is going to steal our jobs. That anxiety is not without merit.

A recent blog post by Dustin Ewers adds some needed sanity to the discussion. In a post titled “Ignore the Grifters – AI Isn’t Going to Kill the Software Industry”, he argues that we should ignore the grifters:

I feel like half of my social media feed is composed of AI grifters saying software developers are not going to make it. Combine that sentiment with some economic headwinds and it’s easy to feel like we’re all screwed. I think that’s bullshit. The best days of our industry lie ahead.

It’s highly unlikely that software developers are going away any time soon. The job is definitely going to change, but I think there are going to be even more opportunities for software developers to make a comfortable living making cool stuff.

I am inclined to agree. Hey, I will drink this Kool-Aid!

Well, let’s get to the real reason I’m writing this post. I recently posted my 2024 reading list and shared all the books I had read during the year. Compiling that list and adding links by hand would have been a huge pain. There has to be an easier way. (Cue superhero music)

There is!

If you go to my GoodReads “read” list, it looks like this. And it keeps going. It’s a lot of data.

If we open up the browser console, we can see that it’s just a good old-fashioned HTML table.

So, using an AI tool like ChatGPT or Claude, how do you get this data into a manageable format? One area where I’ve personally seen people struggle is writing a prompt in a way that actually helps them. You need to provide context. For example:

  1. Describe the problem: “I want to output a list of books from an HTML table into a JSON object using a JavaScript function that I can paste into the browser console.”
  2. Provide some example data: “Here is the table’s head with names for each column: [paste block of code]. Oh! Here is also an example row of data: [paste block of code]”
  3. Provide an example of the output: “Can you create a JSON object in the following shape?”

Using Claude as an example, here is what that looks like and you can also see the generated output:
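The function Claude produced looked roughly like this sketch. Fair warning: the CSS selectors below are assumptions based on how my GoodReads “read” page happened to be marked up, so treat them as illustrative rather than gospel.

```javascript
// Sketch of the kind of function Claude generated for me. The selectors
// (table#books, td.field.title, etc.) are assumptions from my page's markup.
function extractBooks(doc = document) {
  const rows = doc.querySelectorAll('table#books tbody tr');
  return Array.from(rows).map((row) => {
    const titleLink = row.querySelector('td.field.title a');
    const authorLink = row.querySelector('td.field.author a');
    const dateRead = row.querySelector('td.field.date_read span');
    return {
      title: titleLink ? titleLink.textContent.trim() : '',
      author: authorLink ? authorLink.textContent.trim() : '',
      link: titleLink ? titleLink.href : '',
      dateRead: dateRead ? dateRead.textContent.trim() : '',
    };
  });
}
// In the browser console you could then run:
// copy(JSON.stringify(extractBooks(), null, 2));
```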

Moment of truth — does it work? Let’s paste it into the browser console and see the result:

Yes! Victory! One problem, though: I did not read 60 books in 2024. Oh, no. We are pulling every book visible on the page. It’s an easy fix, though. We simply ask a follow-up question: “Can we modify the function so that it only returns books where the date read is in the year 2024?”

Claude modifies the function to add a filter for 2024. If we paste that into the browser console, we now get the correct number of books: 30!
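The change amounts to a small date filter, something along these lines (the dateRead string format here is an assumption based on how GoodReads displayed dates on my page):

```javascript
// Sketch of the follow-up change: keep only books whose "date read"
// falls in a given year. Assumes dateRead is a string like "Jan 05, 2024".
function booksReadIn(books, year) {
  return books.filter((book) => {
    const parsed = new Date(book.dateRead);
    // Skip rows with no parseable date (e.g. "not set").
    return !Number.isNaN(parsed.getTime()) && parsed.getFullYear() === year;
  });
}
```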

There is still one more thing to do. I want to turn this into a nice, unordered list that I can just drop into my blog post. Again, we follow the steps outlined above:

  1. Can you create an unordered HTML list that shows links to each book? Please add a link around the title, but keep the author name unlinked.
  2. Here is my JSON object: [paste block of code]
  3. I essentially want a list that looks like this: <li><a href="[booklink]">Book title</a> by Author</li>
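The code Claude returned boiled down to a small formatting function, something like this sketch (the property names are assumed to match the JSON shape from the earlier steps):

```javascript
// Sketch of the final step: turn the JSON array into an unordered HTML
// list, linking the title but leaving the author name unlinked.
function toBookList(books) {
  const items = books.map(
    (book) => `  <li><a href="${book.link}">${book.title}</a> by ${book.author}</li>`
  );
  return ['<ul>', ...items, '</ul>'].join('\n');
}
```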

Hot diggity! It works. It generates a block of code that I can just paste into my blog’s text editor. Pretty neat. It took a total of 5 minutes. (Hey, writing this post took a lot longer than that.)

Anyway, this has been a production of “How I use AI”. Stay tuned for more exciting updates, coming to a blog near you.

Book Review: The Alignment Problem by Brian Christian

The Alignment Problem (released in 2020 but still highly relevant today, especially in the age of generative AI hype) is a fascinating exploration of one of the most interesting issues in artificial intelligence: how to ensure AI systems safely align with human values and intentions. The book is based on four years of research and over 100 interviews with experts. Despite the technical depth, I feel that this book is written to be accessible to newcomers and seasoned AI enthusiasts alike. A word of warning, though: this book has A LOT of info.

Before we get too deep into this review, let’s talk about safety and what it means in the context of AI. When we talk about AI safety, we’re referring to systems that can reliably achieve their goals without causing unintended harm. This includes:

  • The AI must be predictable, behaving as expected even in novel situations.
  • It must be fair, avoiding the amplification of existing societal biases.
  • It needs transparency, allowing users and developers to understand its decision-making process.
  • It must be resilient against failures and misuse.

Creating safe AI tools is both a technical and a psychological challenge: it requires understanding human cognition, ethics, and social systems, as these elements become encoded in AI behavior.

The book is divided into three main sections: Prophecy, Agency, and Normativity, each tackling different areas of aligning artificial intelligence with human values.

Prophecy explores the historical and technical roots of AI and highlights examples of unintended outcomes, such as the biased COMPAS recidivism prediction tool. COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a risk assessment algorithm used in the criminal justice system to predict the likelihood of a defendant reoffending. However, investigations revealed that the tool disproportionately flagged Black defendants as higher risk compared to white defendants, raising critical questions about fairness and bias in that AI system.

Agency delves into reinforcement learning and its parallels with reward-seeking behavior in humans, showcasing innovations like AlphaGo and AlphaZero. Christian’s explanation of reinforcement learning, and its connection to dopamine studies, is particularly insightful. He dives into psychological experiments from the 1950s that revealed the brain’s pleasure centers and their connection to dopamine. Rats in these studies would press a lever to stimulate these areas thousands of times per hour, foregoing food and rest. Later research established that dopamine serves as the brain’s “reward scalar,” which helps influence decision-making and learning. This biological mechanism has parallels in reinforcement learning, where AI agents maximize reward signals to learn optimal behaviors.
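To make that reward-maximizing loop concrete, here is a toy “two-lever” agent using epsilon-greedy exploration. To be clear, this is entirely my own sketch, not code from the book, and the reward numbers are made up:

```javascript
// A toy epsilon-greedy bandit: the agent pulls levers, tracks a running
// estimate of each lever's reward, and gradually favors the better one.
function runBandit(rewards, steps, epsilon = 0.1, rng = Math.random) {
  const estimates = rewards.map(() => 0); // running reward estimate per lever
  const counts = rewards.map(() => 0);    // pulls per lever
  for (let t = 0; t < steps; t++) {
    // Explore a random lever with probability epsilon; otherwise exploit.
    const arm = rng() < epsilon
      ? Math.floor(rng() * rewards.length)
      : estimates.indexOf(Math.max(...estimates));
    const reward = rewards[arm]; // deterministic reward, to keep it simple
    counts[arm] += 1;
    estimates[arm] += (reward - estimates[arm]) / counts[arm]; // running mean
  }
  return { estimates, counts };
}
```

Much like the rats at the lever, the agent ends up pulling the high-reward lever almost exclusively.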

Normativity examines philosophical debates and techniques like inverse reinforcement learning, which enables AI to infer human objectives by observing behavior. Christian connects these discussions to ethical challenges, such as defining fairness mathematically and balancing accuracy with equity in predictive systems. He also highlights key societal case studies, including biases in word embeddings and historical medical treatment patterns that skew AI decisions.

Christian interweaves these sections with interviews, anecdotes, and historical case studies that breathe life into the technical and ethical complexities of AI alignment.

He also delivers numerous warnings, such as:

“As we’re on the cusp of using machine learning for rendering basically all kinds of consequential decisions about human beings in domains such as education, employment, advertising, health care and policing, it is important to understand why machine learning is not, by default, fair or just in any meaningful way.”

This observation underscores the important implications of deploying machine learning systems in critical areas of human life. When algorithms are used to make decisions about education, employment, or policing, the stakes are insanely high. These systems, often trained on historical data, can perpetuate or amplify societal biases, leading to unfair outcomes. This calls for deliberate oversight and careful design to ensure these technologies promote equity and justice rather than exacerbate existing inequalities. (Boy, oh boy — fat chance of that in light of current events in January 2025)

Christian also highlights some of the strengths of machine learning. These systems can detect patterns in data that are invisible to human eyes, uncovering insights that were previously thought impossible. For example:

“They (doctors) were in for an enormous shock. The network could almost perfectly tell a patient’s age and sex from nothing but an image of their retina. The doctors on the team didn’t believe the results were genuine. ‘You show that to someone,’ says Poplin, ‘and they say to you, “You must have a bug in your model. ‘Cause there’s no way you can predict that with such high accuracy.” . . . As we dug more and more into it, we discovered that this wasn’t a bug in the model. It was actually a real prediction.”

Examples like this show the real-world potential of machine learning to revolutionize fields such as healthcare by identifying patterns that humans might overlook. However, these benefits are accompanied by significant challenges, such as the “black box” nature of AI decision-making, where it remains difficult to determine what features a model is actually using.

Christian shows how understanding these technical challenges, alongside ethical frameworks, can lead to more robust and equitable AI systems. These considerations emphasize the interdisciplinary nature of AI safety, which requires combining insights from cognitive science, social systems, and technical innovation to address both immediate and long-term risks.

While the book is dense (very dense!) and information-rich, this strength can also be a drawback. Some sections felt overly detailed, and the pacing, especially in the latter half, left me feeling fatigued.

Despite this, The Alignment Problem remains a compelling and optimistic exploration of how researchers are tackling AI safety challenges. I think this book is an insightful read for anyone interested in AI and will leave you thinking about our future AI overlords long after you’ve turned the last page.

Comparing reasoning in open-source LLMs

Alibaba recently released their “QwQ” model, which they claim is capable of chain-of-thought reasoning comparable to OpenAI’s o1-mini model. It’s pretty impressive — even more so because we can run this model on our own devices (provided you have enough RAM).

While testing its chain-of-thought reasoning abilities, I decided to try my test prompt on Llama 3.2 as well and was kind of shocked at how good it was. I had to come up with ever more ridiculous scenarios to try to break it.

That is pretty good, especially for a non-chain-of-thought model. Okay, come on. How do we break it? Can we?

Alright, magical unicorns for the win.

“Nexus” by Yuval Noah Harari

Yuval Noah Harari’s latest book, Nexus: A Brief History of Information Networks from the Stone Age to AI, was a fascinating (if sometimes overwhelming) journey through human history that explores the power (and the peril) of information. From the first markings inscribed on stone walls to the potential all-seeing eye of artificial intelligence, Harari takes readers on a sweeping tour of how information and stories have shaped human networks — and, by extension, civilization.

The central idea in Nexus is that information is one of the key forces that connects people, enabling us to cooperate on a massive scale. Harari illustrates this point with a bunch of historical examples, from the canonization of the Bible to the use of propaganda under totalitarian regimes. He argues that information doesn’t merely represent reality; rather, it creates new realities through the power of shared stories, myths, and ideologies. This gives us some insight into the forces that have shaped society—sometimes for the better, sometimes for the worse.

One interesting part of the book is Harari’s thoughts on the relationship between information and truth. Harari references a Barack Obama speech in Shanghai in 2009, where Obama said, ‘I am a big believer in technology and I’m a big believer in openness when it comes to the flow of information. I think that the more freely information flows, the stronger the society becomes.’

Harari calls this view naive, pointing out that while openness is important, the reality of how information is used is much more complicated. He argues that information isn’t inherently the same as truth; it’s been manipulated countless times throughout history to serve those in power. This kind of manipulation is especially evident in the recent rise of populism, which, as Harari explains, is all about the belief that there’s no objective truth and that power is the only reality.

He explains, ‘In its more extreme versions, populism posits that there is no objective truth at all and that everyone has “their own truth,” which they wield to vanquish rivals. According to this worldview, power is the only reality. All social interactions are power struggles, because humans are interested only in power. The claim to be interested in something else—like truth or justice—is nothing more than a ploy to gain power.’

Harari warns that when populism uses information purely as a weapon, it ends up eroding the very concept of language itself. Words like ‘facts,’ ‘accurate,’ and ‘truthful’ lose their meaning, as any mention of ‘truth’ prompts the question, ‘Whose truth?’ This theme feels especially relevant today, with misinformation and propaganda shaping public opinion in big ways.

Harari gives a sobering take on the rise of AI and how it could impact our information networks. He says, “silicon chips can create spies that never sleep, financiers that never forget and despots that never die” and goes on to warn that AI, with its power for massive surveillance and data processing, could lead to levels of control and manipulation we’ve never seen before—potentially an existential threat we need to face.

For me, Nexus was a thought-provoking and engaging read, though at times it felt very alarmist. While Harari’s concerns are definitely worth thinking about, I think adaptation is key: these AI systems and tools are here, and we have to learn how to use them and live with them — like right now — today!

Overall, I’d give Nexus 4 out of 5 stars. Harari offers a sweeping narrative that makes you think about the role of information in our lives, and the choices we need to make as we stand on the brink of the AI era. It’s a worthy read for anyone interested in understanding the historical roots of our current information age and what it might mean for our future.