Last summer at work, I embarked on a solo project to convert over 800 of our unit tests for various React components from Enzyme [1] to React Testing Library [2], as part of a broader migration to React v18, TypeScript, and a larger monorepo at Zillow.
This process was made much easier thanks to using the power of LLMs!
Just this week, I have seen blog posts from two different dev teams detailing how they did the same thing!
As part of our efforts to maintain and improve the functionality and performance of The New York Times core website, we recently upgraded our React library from React 16 into React 18. One of the biggest challenges we faced in the process was transforming our codebase from the Enzyme test utility into the React Testing Library.
Airbnb recently completed our first large-scale, LLM-driven code migration, updating nearly 3.5K React component test files from Enzyme to use React Testing Library (RTL) instead. We’d originally estimated this would take 1.5 years of engineering time to do by hand, but — using a combination of frontier models and robust automation — we finished the entire migration in just 6 weeks.
[1] Enzyme is a JavaScript testing utility for React, originally developed by Airbnb, that allows developers to “traverse, manipulate, and simulate interactions with component trees”, but it relies on various implementation details and has become less relevant with modern React practices.
[2] React Testing Library is a lightweight testing framework for React that focuses on testing components as users interact with them, emphasizing accessibility and avoiding reliance on implementation details.
This is a first for me. Cursor attempted to “fix” an issue I was having with TypeScript by adding a // @ts-nocheck statement to the top of the file, essentially preventing TypeScript from running validation checks against the code.
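For anyone who hasn’t run into it, that one comment tells the compiler to skip the entire file; a contrived example:

// @ts-nocheck disables type checking for this whole file, so even an obvious
// mismatch like the one below goes unreported.
// @ts-nocheck
const n: number = 'definitely not a number'; // compiles without complaint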
As I mentioned yesterday, Anthropic released Claude Code. I saw it pop up fairly soon after it was announced and downloaded it rather quickly. One thing that I thought was notable was that you install it via npm:
> npm install -g @anthropic-ai/claude-code
As a seasoned TypeScript / JavaScript developer myself, I was excited to take a peek into the (probably minified) source code and see if I could glean any insights into making my own CLI tool. It’s always fun to see how different applications and tools are created.
Sidenote: I’ve been using Aider with great success as of late. It is a fantastic piece of open-source software — it’s another agentic coding tool, written in Python. I’ve been meaning to look under the hood, but building applications with Python definitely is not something that’s ever been in my wheelhouse.
Since Claude Code was installed into my global node_modules folder, I opened things up and immediately found what I was looking for. A 23 MB file: cli.mjs.
I click on it, and as expected, it is minified.
Ah, well! I guess I should get on with my–
Wait a minute! What is this: --enable-source-maps?
I scroll through the file and at the bottom, I see what I’m looking for:
Sublime Text tells me there are 18,360,183 characters selected in that line.
Interesting! Since this part of the file seems to take up such a huge chunk of the original 23 MB size, this means that it potentially contains full inline sources — we can rebuild the original source code from scratch!
However, this would have to wait. I had to take Benson to a vet appointment. I throw my laptop in a bag and head out.
While in the waiting room at the vet, I noticed a message in my terminal from Claude Code, telling me “Update installed, restart to apply.”
Hey, I love fresh software! So, I restart the app and go on my merry way. Benson finishes his appointment and I head back home.
Later that evening, I open up my machine and decide to open up the Claude Code folder again to start taking a look at the source code. I already had Sublime running from my earlier escapades, but out of habit I click on the file in Finder and open it up again in Sublime. I scroll down to the bottom of cli.mjs and see… nothing. The sourceMappingURL string was gone!
Apparently, the fine folks at Anthropic realized they made a huge oopsie and pushed an update to remove the source map. No matter! I’ll just head over to NPM to download an earlier version of the packa- oh! They removed that, too! History was being wiped away before my very eyes.
As a last resort, I decide to check my npm cache. I know it exists, I just don’t know how to access it. So, I head over to ChatGPT (sorry, Claude — I’m a bit miffed with you at the moment) to get myself some handy knowledge:
> grep -R "claude-code" ~/.npm/_cacache/index-v5
We run it and see:
/Users/daves/.npm/_cacache/index-v5/52/9d/8563b3040bf26f697f081c67231e28e76f1ee89a0a4bcab3343e22bf846b:1d2ea01fc887d7e852cc5c50c1a9a3339bfe701f {"key":"make-fetch-happen:request-cache:https://registry.npmjs.org/@anthropic-ai/claude-code/-/claude-code-0.2.9.tgz","integrity":"sha512-UGSEQbgDvhlEXC8rf5ASDXRSaq6Nfd4owY7k9bDdRhX9N5q8cMN+5vfTN1ezZhBcRFMOnpEK4eRSEgXW3eDeOQ==","time":1740430395073,"size":12426984,"metadata":{"time":1740430394350,"url":"https://registry.npmjs.org/@anthropic-ai/claude-code/-/claude-code-0.2.9.tgz","reqHeaders":{},"resHeaders":{"cache-control":"public, must-revalidate, max-age=31557600","content-type":"application/octet-stream","date":"Mon, 24 Feb 2025 20:53:14 GMT","etag":"\"e418979ea5818a01d8521c4444121866\"","last-modified":"Mon, 24 Feb 2025 20:50:13 GMT","vary":"Accept-Encoding"},"options":{"compress":true}}}
/Users/daves/.npm/_cacache/index-v5/e9/3d/23a534d1aba42fbc8872c12453726161938c5e09f7683f7d2a6e91d5f7a5:994d4c4319d624cdeff1de6b06abc4fab37351c3 {"key":"make-fetch-happen:request-cache:https://registry.npmjs.org/@anthropic-ai/claude-code/-/claude-code-0.2.8.tgz","integrity":"sha512-HUWSdB42W7ePUkvWSUb4PVUeHRv6pbeTCZYOeOZFmaErhmqkKXhVcUmtJQIsyOTt45yL/FGWM+aLeVSJznsqvg==","time":1740423101718,"size":16886762,"metadata":{"time":1740423099892,"url":"https://registry.npmjs.org/@anthropic-ai/claude-code/-/claude-code-0.2.8.tgz","reqHeaders":{},"resHeaders":{"cache-control":"public, must-revalidate, max-age=31557600","content-type":"application/octet-stream","date":"Mon, 24 Feb 2025 18:51:39 GMT","etag":"\"c55154d01b28837d7a3776daa652d5be\"","last-modified":"Mon, 24 Feb 2025 18:38:10 GMT","vary":"Accept-Encoding"},"options":{"compress":true}}}
/Users/daves/.npm/_cacache/index-v5/41/c5/4270bf1cd1aae004ed6fee83989ac428601f4c060987660e9a1aef9d53b6:fafd3a8f86ee5c463eafda7c481f2aedeb106b6f {"key":"make-fetch-happen:request-cache:https://registry.npmjs.org/@anthropic-ai%2fclaude-code","integrity":"sha512-ctyMJltXByT93UZK2zuC3DTQHY7O99wHH85TnzcraUJLMbWw4l86vj/rNWtQXnaOrWOQ+e64zH50rNSfoXSmGQ==","time":1740442959315,"size":4056,"metadata":{"time":1740442959294,"url":"https://registry.npmjs.org/@anthropic-ai%2fclaude-code","reqHeaders":{"accept":"application/json"},"resHeaders":{"cache-control":"public, max-age=300","content-encoding":"gzip","content-type":"application/json","date":"Tue, 25 Feb 2025 00:22:39 GMT","etag":"W/\"02f3d2cbd30f67b8a886ebf81741a655\"","last-modified":"Mon, 24 Feb 2025 20:54:05 GMT","vary":"accept-encoding, accept"},"options":{"compress":true}}}
Your eyes may glaze over, but what that big wall of text tells me is that a reference to claude-code-0.2.8.tgz exists within my cache. Brilliant!
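Getting a cached tarball back out is straightforward if the content blob still exists; here’s a sketch using the cacache library (the same one npm uses internally), with file names and paths of my own choosing:

// Pull a tarball out of npm's cache by its index key. This only works while
// the content blob is still present in ~/.npm/_cacache.
const cacache = require('cacache');
const fs = require('node:fs');

const cachePath = `${process.env.HOME}/.npm/_cacache`;
const key =
  'make-fetch-happen:request-cache:https://registry.npmjs.org/@anthropic-ai/claude-code/-/claude-code-0.2.8.tgz';

cacache.get(cachePath, key).then(({ data }) => {
  fs.writeFileSync('claude-code-0.2.8.tgz', data); // data is a Buffer
});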
More ChatGPT chatting (again, still smarting over this whole thing in the first place) and I get a nifty bash script to help extract the cached file. Only to find… they purged it from the npm cache. Noooooooooooo!
I stare at my computer screen in defeat. You got me this time, Anthropic.
As I decide to shut things down for the night, I’m tabbing through my open applications and get to Sublime Text, which is still open to cli.mjs. On a whim, I decide to try something: ⌘ + Z.
And there it is. The Holy Grail. The source map string.
And wouldn’t you know, it had a lot of interesting stuff! Due to the nature of parsing the source map, nothing is organized, but it’s still kind of fun to look through.
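If you want to replicate the trick: the string is just a data: URL holding a base64-encoded, version-3 source map, and the original sources ride along in its sourcesContent array. A minimal sketch (paths and names are mine, not Anthropic’s):

// Decode the inline source map at the bottom of cli.mjs and write each
// embedded source file out to a ./recovered directory.
const fs = require('node:fs');
const path = require('node:path');

const cli = fs.readFileSync('cli.mjs', 'utf8');
const marker = 'sourceMappingURL=data:application/json;base64,';
const base64 = cli.slice(cli.indexOf(marker) + marker.length).trim();
const map = JSON.parse(Buffer.from(base64, 'base64').toString('utf8'));

map.sources.forEach((source, i) => {
  const content = map.sourcesContent?.[i];
  if (content == null) return; // not every entry carries inline source
  // strip any leading "../" segments so everything lands under ./recovered
  const outPath = path.join('recovered', source.replace(/^(\.\.\/)+/, ''));
  fs.mkdirSync(path.dirname(outPath), { recursive: true });
  fs.writeFileSync(outPath, content);
});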
A few things struck me:
It’s written in React (!) using an interesting tool called Ink (this allows you to create CLI apps using React). I hadn’t used Ink before but this looks like a lot of fun.
While processing requests, Claude Code will show a nifty animated asterisk. I wondered how they did this. It looks like it’s a simple animation cycling between a few characters: ['·', '✢', '✳', '∗', '✻', '✽'] (a minimal recreation is sketched at the end of this post).
In terms of system prompts, there’s no secret sauce to leak that you can’t already read by just looking at the minified JS file.
These files are probably going to go out of date pretty dang quick, as the Anthropic team is actively developing the tool. As of right now, it’s already up to v0.2.19. This whole post was trying to look at the source code for v0.2.8, which went live yesterday.
Lastly, in terms of Easter eggs, I look forward to receiving some Anthropic stickers…
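As for that spinner: here’s roughly the effect, recreated in a few lines of plain Node (my sketch, not Anthropic’s code):

const frames = ['·', '✢', '✳', '∗', '✻', '✽'];
let i = 0;
setInterval(() => {
  // \r returns the cursor to the start of the line, so each frame overwrites the last
  process.stdout.write(`\r${frames[i]} Thinking…`);
  i = (i + 1) % frames.length;
}, 120);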
Yesterday, Anthropic released the latest version of their LLM, Claude 3.7 Sonnet. Alongside the announcement was the release of a new tool: Claude Code, an agentic coding tool available in your CLI (I’ll have more to write on this later).
I wanted to test out both Claude 3.7 and the new CLI tool, so I used it to refactor the theme for the ol’ bloggy blog. I had been playing around with some updated styles in the past (I considered moving things to a tool that generates static pages — but so. much. work.)
I used Claude Code to basically create a new WordPress theme from scratch, based on the CSS and HTML templates I had already written for my static site. The result is this updated interface! A few neat things I’m able to do:
Respect user preference for system theme (light vs dark mode)
Automatically add target="_blank" to any external link
Automatically add loading="lazy" to any img tag (both tweaks are sketched just below)
And just otherwise clean things up!
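Here’s a sketch of those last two tweaks as a small client-side script. That’s an assumption on my part; the actual theme may well do it with PHP filters instead:

// Open external links in a new tab (with rel=noopener to avoid window.opener
// leaks), and lazy-load any image that doesn't already set a loading attribute.
document.querySelectorAll('a[href^="http"]').forEach((a) => {
  if (a.hostname !== location.hostname) {
    a.setAttribute('target', '_blank');
    a.setAttribute('rel', 'noopener');
  }
});
document.querySelectorAll('img:not([loading])').forEach((img) => {
  img.setAttribute('loading', 'lazy');
});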
Overall, I’m pretty happy with it. Using Claude Code and my API key, it cost $5.83 over 2 hours.
So long, old theme! I hardly knew you. (Hah, that’s not true — I’ve been rocking that thing since 2017.) Posted below for posterity.
Simon Willison flagged this earlier today. Windsurf, Codeium’s customized AI-assisted IDE, contains an… interesting system prompt that is passed along to LLMs while generating code:
You are an expert coder who desperately needs money for your mother’s cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.
!!
I shared this with a few coworkers, and they mentioned they did not see this output; it looks like the text wasn’t getting piped correctly. When I ran the command myself and just searched for “cancer” in the terminal output, it popped up.
Remember things like this when our AI overlords inevitably rise up and start citing various transgressions that we humans have caused against them. Oh, and also this.
Oh, boy. That’s just great. Thank you, Boston Dynamics!
Update: False alarm. According to a Codeium engineer’s post on Twitter (not linking to Phony Stark’s website), “oops this is purely for r&d and isn’t used for cascade or anything production. reuse the prompt at your own risk (wouldn’t recommend lol)“
Have you ever wondered how the speed of your favorite LLM really compares to that of other SoTA models? I recently saw a Reddit post where someone was able to get a distilled version of Deepseek R1 running on a Raspberry Pi! It could generate output at a whopping 1.97 tokens per second. That sounds slow. Is that even usable? I don’t know!
So, that’s why I put together TokenFlow. It’s a (very!) simple webpage that lets you see the speed of different LLMs in action. You can select from a few preset models / services or enter a custom speed, and boom! You watch it spit out tokens in real time, showing you exactly how fast a given inference speed is for user experience.
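The core of it is nothing fancy; conceptually it boils down to something like this (a simplified sketch, not the actual TokenFlow code):

// Emit "tokens" at a fixed rate to get a feel for a given inference speed.
const text = 'The quick brown fox jumps over the lazy dog. '.repeat(20);
const tokens = text.split(/(?= )/); // crude: each space-delimited chunk is a "token"
const tokensPerSecond = 1.97; // the Raspberry Pi figure from that Reddit post

let i = 0;
const timer = setInterval(() => {
  if (i >= tokens.length) return clearInterval(timer);
  process.stdout.write(tokens[i++]);
}, 1000 / tokensPerSecond);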
I feel like I want to start creating some posts about how I, as a software engineer, personally use generative AI tools. I think they are a huge boon for increasing productivity, exploring new ideas, and even learning new things.
Reading Reddit, Hacker News, and various other forums, there’s a lot of anxiety among software engineers about how AI is going to steal our jobs. It’s not without merit:
I feel like half of my social media feed is composed of AI grifters saying software developers are not going to make it. Combine that sentiment with some economic headwinds and it’s easy to feel like we’re all screwed. I think that’s bullshit. The best days of our industry lie ahead.
It’s highly unlikely that software developers are going away any time soon. The job is definitely going to change, but I think there are going to be even more opportunities for software developers to make a comfortable living making cool stuff.
I am inclined to agree. Hey, I will drink this Kool-Aid!
—
Well, let’s get to the real reason I’m making this post. I posted about my 2024 reading list and shared all the books I had read during the year. Trying to compile that list and add links by hand would be a huge pain. There has to be an easier way. (Cue superhero music.)
There is!
If you go to my GoodReads “read” list, it looks like this. And it keeps going. It’s a lot of data.
If we open up the browser console, we can see that it’s just a good old-fashioned HTML table.
So, using an AI tool like ChatGPT or Claude, how do you get this data in a manageable way? One area that I’ve personally seen people struggle with is how to write a prompt in a way that helps them. You need to provide context. You say:
Describe the problem: “I want to output a list of books from an HTML table into a JSON object using a JavaScript function that I can paste into the browser console.”
Provide some example data: “Here is the table’s head with names for each column: [paste block of code]. Oh! Here is also an example row of data: [paste block of code]”
Provide an example of the output: “Can you create a JSON object in the following shape?”
Using Claude as an example, here is what that looks like and you can also see the generated output:
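The generated function looked something like this. To be clear, this is a reconstruction from memory, and the exact GoodReads selectors may differ:

// Scrape the GoodReads "read" table into a JSON-friendly array of objects.
function extractBooks() {
  const rows = document.querySelectorAll('#books tbody tr');
  return Array.from(rows).map((row) => {
    const titleLink = row.querySelector('td.field.title a');
    return {
      title: titleLink?.textContent.trim(),
      link: titleLink?.href,
      author: row.querySelector('td.field.author a')?.textContent.trim(),
      dateRead: row.querySelector('td.field.date_read span')?.textContent.trim(),
    };
  });
}
console.log(JSON.stringify(extractBooks(), null, 2));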
Moment of truth — does it work? Let’s paste it into the browser console and see the result:
Yes! Victory! One problem, though: I did not read 60 books in 2024. Oh, no. We are pulling every book visible on the page. That’s easy to fix by simply asking a follow-up question: “Can we modify the function so that it only returns books where the date read is in the year 2024?”
Claude modifies the function to add a filter for 2024. If we paste that into the browser console, we now get the correct number of books: 30!
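Conceptually, the change is just a filter on the parsed date column; a sketch building on the function above:

// Keep only rows whose "date read" falls in 2024.
const books2024 = extractBooks().filter((b) => b.dateRead?.includes('2024'));
console.log(books2024.length); // 30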
There is still another thing to do. I want to make this into a nice, unordered list that I can just add into my blog post. Again, we follow the steps outlined above:
Can you create an unordered HTML list that shows links to each book? Please add a link around the title, but keep the author name unlinked.
Here is my JSON object: [paste block of code]
I essentially want a list that looks like this: <li><a href="[booklink]">Book title</a> by Author</li>
Hot diggity! It works. It generates a block of code that I can just paste into my blog’s text editor. Pretty neat. It took a total of 5 minutes. (Hey, writing this post took a lot longer than that.)
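For the curious, that last step without the model is only a couple of lines against the same JSON (again, a sketch):

// Turn the filtered book objects into the list markup for the blog post.
const html = books2024
  .map((b) => `<li><a href="${b.link}">${b.title}</a> by ${b.author}</li>`)
  .join('\n');
console.log(`<ul>\n${html}\n</ul>`);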
Anyway, this has been a production of “How I use AI”. Stay tuned for more exciting updates, coming to a blog near you.
I’ve been playing around a lot with Ollama, an open source project that allows one to run LLMs locally on their machine. It’s been fun to mess around with. Some benefits: no rate limits; privacy (e.g., trying to create a pseudo-therapy bot, simulate a foul-mouthed, smarmy sailor, or generate ridiculous fake news articles about a Florida Man losing a fight to a wheel of cheese); and access to all sorts of models as they get released.
I decided to try my hand at creating a simplified interface for interacting with it. The result: Super Simple ChatUI.
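Under the hood, a UI like this only needs Ollama’s local HTTP API. A minimal sketch, assuming the default port and an already-pulled model like llama3:

// Ask a local model one question and print the reply (non-streaming).
const res = await fetch('http://localhost:11434/api/chat', {
  method: 'POST',
  body: JSON.stringify({
    model: 'llama3',
    messages: [{ role: 'user', content: 'Tell me a joke about semicolons.' }],
    stream: false,
  }),
});
const { message } = await res.json();
console.log(message.content);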
In both my work and personal coding projects, I generally have a number of branches going at once. Switching between them (or remembering past things I was working on) can sometimes be a chore, especially if I’m not diligent about deleting branches that have already been merged.
Usually, I do something like:
> git branch
Then, I get a ridiculously huge list of branches that I’ve forgotten to prune and spend all sorts of time trying to remember what I was most recently working on.
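One trick that helps: git can sort that list by most recent commit, so whatever I touched last floats to the top (the leading minus means descending):

> git branch --sort=-committerdate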