Tag: coding

“Use git worktrees,” they said. “It’ll be fun!” they said.

Earlier this week, we had an “AI Days” event at work where a bunch of engineers got together to share our AI workflows with the wider engineering organization. I ran a small session on some of my AI workflows. One thing that stuck out: a surprising number of us had independently come up with our own way to manage git worktrees. We all had different names for it. “Replace.” “Recycle.” “Warm worktrees.” But we were all describing the same thing. That felt like a blog post.

On top of that, there’s been a lot of buzz about running multiple AI coding agents in parallel. Simon Willison has been writing about agentic engineering patterns and even wrote about embracing the parallel coding agent lifestyle. The idea is simple: spin up multiple instances of Claude Code (or Codex, or whatever) across different branches, let them work simultaneously, and review the results when they’re done.

The enabling technology for all of this? Git worktrees.

For the uninitiated (hey, I didn’t know what a git worktree was 3 or 4 months ago): a git worktree lets you check out multiple branches of the same repo into separate directories, all sharing a single .git history. Instead of git stash && git checkout other-branch, you just cd into another folder. Each agent gets its own isolated workspace. No conflicts. No stashing. No context switching headaches.

In theory, it’s a superpower. In practice, at least in a large monorepo, it’s been one of my most frustrating developer experience problems as of late.

Every blog post and tutorial about git worktrees shows something like this:

git worktree add ../feature-branch feature/my-feature
cd ../feature-branch
# start coding!

And that works great if you’re in a small repo. The worktree itself is created almost instantly. Git is just setting up a new working directory that points at the same .git folder.

But I work on a large monorepo powered by Yarn workspaces. Our node_modules situation involves 750,000+ files. So the actual workflow looks more like this:

git worktree add ../feature-branch feature/my-feature
cd ../feature-branch
yarn install --immutable                # ~10 minutes
# ...go get coffee, check Slack, forget what you were doing, grow old

Ten minutes. Every time. For what sometimes might be a 5-minute Claude Code task.

This is the part that none of the “git worktrees for AI agents!” articles mention. They’re all written from the perspective of small-to-medium repos where dependencies aren’t a factor. When your dependency tree generates three quarters of a million files, the worktree itself isn’t the bottleneck. node_modules is.
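If you’re curious how bad your own repo is, counting is easy enough. A quick sketch (the helper name is mine, and the path argument is whatever your repo root is):

```shell
# Count every file that lives under any node_modules directory in a repo.
count_node_modules_files() {
  find "$1" -path '*/node_modules/*' -type f | wc -l
}

# e.g. count_node_modules_files ~/workspace/my-monorepo
```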

So, this led me down a rabbit hole. I spent a solid couple of weeks trying to make worktree creation fast. Here’s my graveyard of failed approaches.

Symlinked node_modules:

My first instinct. Symlink all the node_modules directories from the main repo checkout into the new worktree. In a Yarn workspaces monorepo, that’s not just one node_modules folder. It’s the root one plus nested ones inside individual packages.
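In sketch form, the idea was something like this (hypothetical paths and a simplified `packages/*` layout; the real workspace globs are messier):

```shell
# Symlink the root node_modules, plus each nested packages/*/node_modules,
# from the main checkout into a fresh worktree.
link_node_modules() {
  local main="$1" tree="$2" pkg name
  ln -s "$main/node_modules" "$tree/node_modules"
  for pkg in "$main"/packages/*/node_modules; do
    [ -d "$pkg" ] || continue
    name=$(basename "$(dirname "$pkg")")
    mkdir -p "$tree/packages/$name"
    ln -s "$pkg" "$tree/packages/$name/node_modules"
  done
}
```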

This sort of worked. Until I tried to run tests. Vitest and Vite both choked on the symlinked paths. Module resolution in Node follows symlinks and then gets confused about what’s where. After a bunch of debugging, I ripped it all out.

Yarn’s hardlinks-global mode:

Our .yarnrc.yml has nmMode: hardlinks-global configured. This tells Yarn to store packages in a global cache and hardlink them into each project’s node_modules. In theory, this should make yarn install much faster because you’re just creating hardlinks instead of copying files.

In practice? It’s still creating 750K+ filesystem entries. The number of bytes copied is lower, sure. But the bottleneck was never the bytes. It was the sheer number of file operations. Even with hardlinks, you’re asking the filesystem to create hundreds of thousands of directory entries, and that takes time.
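To make that concrete: a hardlink really is just a second directory entry pointing at an existing inode. Cheap on bytes, but still one filesystem operation per file:

```shell
# Two names, one inode: content is shared, but each `ln` is still a
# separate filesystem operation -- times 750K in my case.
demo_hardlink() {
  local dir="$1"
  echo "package contents" > "$dir/cached-file"
  ln "$dir/cached-file" "$dir/hardlinked-copy"
  # -ef tests whether two paths refer to the same inode.
  [ "$dir/cached-file" -ef "$dir/hardlinked-copy" ] && echo "same inode"
}
```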

APFS Copy-on-Write (cp -c):

This one felt clever. macOS’s APFS filesystem supports copy-on-write cloning. You can duplicate a file instantly at the filesystem level with zero extra disk usage until someone modifies it. The cp -c command does this.

But again: 750K files. Even a copy-on-write clone has to create all the directory entries and metadata for each file. The filesystem operation count is the bottleneck, not the bytes. A cp -c of the entire node_modules tree still took way too long to be practical.

The solution? Recycled worktrees:

Here’s what I landed on. Instead of creating and destroying worktrees on demand, I keep a fixed pool of 6 worktree slots (tree-1 through tree-6). Each one already has node_modules installed. They sit there, detached from any branch, ready to go (…but taking up disk space).

When I need a worktree, I don’t create one. I activate one. Under the hood, this:

  1. Finds the oldest idle slot (detached HEAD, clean working tree)
  2. Checks out the new branch in that slot
  3. Checks if yarn.lock changed between the old HEAD and the new branch
  4. Only runs yarn install if the lockfile actually differs

That third step is the key insight. Most of my branches are based on a recent main. The yarn.lock rarely changes between them. So in the common case, “activating” a worktree means checking out a branch and… that’s it. Seconds, not minutes.
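The check itself is simple. A minimal sketch (the function name and arguments are mine, not wt’s actual code):

```shell
# Only reinstall when yarn.lock differs between the slot's old HEAD
# and the branch being activated.
maybe_install() {
  local old_head="$1" branch="$2"
  if git diff --quiet "$old_head" "$branch" -- yarn.lock; then
    echo "yarn.lock unchanged, skipping install"
  else
    echo "yarn.lock changed, running install"
    yarn install --immutable
  fi
}
```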

I built a little CLI called wt to manage all of this, and it’s become one of my favorite tools as of late.

Here’s what that looks like in practice, using wt create after I pick up a Jira ticket:

$ wt create daves/HP-123/some-feat

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Activating slot: tree-3
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  Slot path: /Users/daves/workspace/rentals-js.worktrees/tree-3

[1/3] Checking out branch...
      git checkout -b daves/HP-123/some-feat main
      ✓ On branch: daves/HP-123/some-feat

[2/3] Checking dependencies...
      ✓ yarn.lock unchanged, skipping install

[3/3] Summary
      ✅ Slot ready!

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Next steps:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  cd /Users/daves/workspace/rentals-js.worktrees/tree-3

  # Start working (deps ready)
  cd /Users/daves/workspace/rentals-js.worktrees/tree-3/apps/hotpads-web
  yarn dev

Current branch: daves/HP-123/some-feat
Slot path:      /Users/daves/workspace/rentals-js.worktrees/tree-3
Slot name:      tree-3

The whole thing takes a few seconds. No yarn install (well, usually…). No waiting. I’m in a fully functional worktree and ready to fire up Claude Code.

It also supports checking out existing branches (wt create --checkout daves/HP-6841) and branching from something other than main (wt create daves/HP-6841 --base some-other-branch).

When I’ve got a few active worktrees going, I don’t want to type out full paths. wt go uses fuzzy matching against directory names and branch names:

$ wt go daves/HP-345/other-feat
# cd's into /Users/daves/workspace/rentals-js.worktrees/tree-3

It matches partial strings, so wt go 6841 works just as well. If your query matches multiple worktrees, it tells you:

$ wt go daves
Ambiguous match for 'daves'. Did you mean:
  tree-2 (daves/HP-6839)
  tree-3 (daves/HP-6841)
  tree-5 (daves/HP-6850)

And if there’s no match, it lists what’s available:

$ wt go some-nonexistent-branch
No worktree matching 'some-nonexistent-branch'

Available worktrees:
  tree-2 (daves/HP-6839)
  tree-3 (daves/HP-6841)
  tree-5 (daves/HP-6850)

The tricky part with wt go is that a script can’t change your shell’s working directory. So it’s backed by a shell function in my .zshrc that catches the output path and runs cd for you. Without the shell function, it just prints the path and tells you to cd manually.
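The wrapper is tiny. Here’s the shape of the idea (I’m assuming the underlying binary prints the resolved path on success; the real protocol in my .zshrc differs in the details):

```shell
# Shell function shadows the wt binary so `wt go` can cd in *this* shell.
wt() {
  if [ "$1" = "go" ]; then
    local dest
    if dest=$(command wt "$@"); then
      cd "$dest"
    else
      # On failure, the binary prints its own error message.
      return 1
    fi
  else
    command wt "$@"
  fi
}
```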

Another command that I added was wt list. It gives you a quick overview of all your worktrees:

$ wt list

Git Worktrees for rentals-js

  BRANCH                 LAST MODIFIED   PATH
  main                   2 hours ago     ~/workspace/rentals-js (main repo)
  daves/HP-6841          5 mins ago      ~/workspace/rentals-js.worktrees/tree-3
  daves/HP-6839          3 hours ago     ~/workspace/rentals-js.worktrees/tree-2
  (detached HEAD)        2 days ago      ~/workspace/rentals-js.worktrees/tree-1
  (detached HEAD)        5 days ago      ~/workspace/rentals-js.worktrees/tree-4
  (detached HEAD)        1 week ago      ~/workspace/rentals-js.worktrees/tree-5
  (detached HEAD)        2 weeks ago     ~/workspace/rentals-js.worktrees/tree-6

  7 worktrees total  ·  Use --full for git status and commit age

At a glance, I can see that tree-3 and tree-2 are active (they have branches), and tree-1, tree-4, tree-5, and tree-6 are idle (detached HEAD) and available for the next task. The list is sorted by most recent activity, so the stuff I’m actively working on floats to the top.

If you want more detail, wt list --full runs git status and git log across all worktrees (in parallel, so it’s still fast) and shows you clean/dirty status with colored indicators plus commit ages.
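One way to do that fan-out (a sketch, not wt’s actual implementation): list the worktrees, then let xargs -P run one git status per tree in parallel:

```shell
# Print "<path> <dirty-file-count>" for every worktree, up to 8 at a time.
status_all() {
  git worktree list --porcelain | awk '/^worktree /{print $2}' |
    xargs -P 8 -I{} sh -c 'echo "{} $(git -C "{}" status --porcelain | wc -l)"'
}
```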

After a PR is merged and I’m done with a branch, I release the slot back to the pool with wt release:

$ wt release HP-6841

Releasing slot: tree-3
  Branch: daves/HP-6841

[1/2] Detaching HEAD...
      ✓ Slot is now idle

[2/2] Delete branch 'daves/HP-6841'?
  [y/N] y
      ✓ Branch deleted

✓ 'tree-3' returned to pool

The slot goes back to detached HEAD with its node_modules intact and ready for the next task. If you’ve got uncommitted changes, it warns you before proceeding.
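Under the hood, releasing is not much more than a detach plus a branch delete. A rough sketch of my simplified mental model (not wt’s actual code):

```shell
# Return a slot to the pool: refuse if dirty, detach HEAD, delete branch.
release_slot() {
  local slot="$1" branch
  branch=$(git -C "$slot" symbolic-ref --short -q HEAD) || return 0  # already idle
  if [ -n "$(git -C "$slot" status --porcelain)" ]; then
    echo "uncommitted changes in $slot; not releasing" >&2
    return 1
  fi
  git -C "$slot" checkout -q --detach
  git -C "$slot" branch -q -D "$branch"
}
```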

There are a few more details that make this all feel smooth:

  • Tab completion: The shell integration includes zsh completions for wt go and wt release. It autocompletes branch names and slot directory names, so you rarely have to type the full thing.
  • Named worktrees never get recycled by accident: I also keep a couple of named worktrees around (like dev and review) that aren’t part of the tree-N pool. The recycler only considers slots matching the tree-* pattern, so my persistent worktrees are safe.
  • It’s locked to the monorepo: The tool checks that you’re in the right repo before doing anything. If you accidentally run it from your personal projects folder, it tells you to navigate to the monorepo. This might seem overly cautious, but when you’re automating things with AI agents, guardrails matter.

The pool approach doesn’t just save time on individual tasks. It makes entire categories of automated workflows feasible.

I’ve been building an orchestrator that processes a backlog of bug tickets through sequential Claude Code phases: root cause analysis, write tests, fix, verify, push. Each ticket gets assigned a worktree from the pool. Without the pool, the 10-minute yarn install overhead would make this kind of automation completely impractical. With the pool, each ticket just grabs a slot, does its work, and releases it when it’s done.

It also changes the economics of “should I spin up a parallel agent for this?” If creating a worktree takes 10 minutes, you’re only going to do it for substantial tasks. If it takes 5 seconds, you’ll start doing it for everything. Quick refactor? Throw it in a worktree. Lint fix? Worktree. Experiment with an approach you might throw away? Worktree. The overhead drops below the threshold where you even think about it.

Of course, there are still a few pain points.

Disk space is one issue that comes to mind. Six copies of node_modules at 750K files each is… a lot. The hardlinks-global mode helps with actual disk usage (files in the Yarn cache are shared), but the filesystem metadata overhead is real.

Also, for worktrees that have been sitting idle for a while, the best approach is git pull origin main && yarn install before checking out a new branch. This keeps the slot’s dependencies current and avoids the lockfile-changed install. I do this periodically, but it would be nice to automate.
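If I do automate it, it will probably look something like this (a sketch of my assumed shape, not shipped code; since idle slots sit on a detached HEAD, I fetch and re-detach onto origin/main rather than literally pull, and the install is commented out to keep the shape clear):

```shell
# Refresh every idle tree-* slot: move it to the tip of origin/main and
# (in real life) reinstall dependencies so the slot stays warm.
refresh_idle_slots() {
  local root="$1" slot
  for slot in "$root"/tree-*; do
    # Idle slots have a detached HEAD (no symbolic ref).
    if ! git -C "$slot" symbolic-ref -q HEAD >/dev/null; then
      git -C "$slot" fetch -q origin main
      git -C "$slot" checkout -q --detach origin/main
      # (cd "$slot" && yarn install --immutable)
    fi
  done
}
```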

That said, git worktrees really are a superpower for agentic engineering. Running multiple Claude Code instances across isolated branches, firing off tasks in parallel, building automated pipelines… all of it requires worktree-level isolation.

But if you work in a large JavaScript monorepo, the naive approach of creating fresh worktrees on demand is going to be painful. The worktree itself is instant. The dependency install is not.

The pre-warmed pool pattern sidesteps the problem entirely. Keep a handful of worktrees alive with their dependencies intact, rotate branches through them, and only reinstall when the lockfile changes. It took a few weeks of failed experiments to get here and numerous discussions with fellow engineers, but I’m really happy with the workflow as of today.

claude-sounds: better notifications for claude code

I use Claude Code a lot, for both work and play. One thing I noticed was that I’d often start a long query and then get distracted, ultimately forgetting to check if Claude was waiting for input.

So naturally, instead of just setting a timer like a normal person, I decided to build a ridiculous solution: claude-sounds

It’s a stupidly simple bash script that plays random sound effects whenever Claude Code’s notification hook triggers. Set it up once, and now every time Claude finishes a response or starts using a tool, you get a little audio notification.

The setup is pretty straightforward:

  1. Clone the repo
  2. Add it to your PATH
  3. Configure it as a Claude Code notification hook in your settings

The sounds themselves were generated using ElevenLabs’ Archer persona.

The whole thing is just a few lines of bash that randomly selects an MP3 file and plays it with afplay. Add your own sounds, remove the ones you don’t like, customize it however you want.
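Roughly the shape of it (a paraphrase from memory, not the repo’s exact script; SOUNDS_DIR and the PLAYER override are my own illustrative names):

```shell
# Pick a random .mp3 from the sounds directory and play it. PLAYER defaults
# to macOS's afplay, but can be overridden (e.g. for Linux or testing).
SOUNDS_DIR="${SOUNDS_DIR:-$HOME/.claude-sounds}"
play_random_sound() {
  local files=("$SOUNDS_DIR"/*.mp3)
  local pick="${files[RANDOM % ${#files[@]}]}"
  "${PLAYER:-afplay}" "$pick"
}
```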

Is this necessary? Absolutely not. Is it fun to hear a little message when Claude finishes helping you debug something? Yes! (At least at first… I imagine this might drive someone insane after hearing these about 100 times. I’m still having fun with it though!)


Project: Super Simple ChatUI

I’ve been playing around a lot with Ollama, an open source project that lets you run LLMs locally on your own machine. It’s been fun to mess around with. Some benefits: no rate limits, privacy (e.g., trying to create a pseudo therapy bot, trying to simulate a foul-mouthed smarmy sailor, or trying to generate ridiculous fake news articles about a Florida Man losing a fight to a wheel of cheese), and access to all sorts of new models as they get released.

I decided to try my hand at creating a simplified interface for interacting with it. The result: Super Simple ChatUI.

As if I need more side projects. So it goes!

TIL: List git branches by recent activity

In both my work and personal coding projects, I generally have a number of branches going at once. Switching between them (or remembering past things I was working on) can sometimes be a chore. Especially if I’m not diligent about deleting branches that have already been merged.

Usually, I do something like:

> git branch

Then, I get a ridiculously huge list of branches that I’ve forgotten to prune and spend all sorts of time trying to remember what I was most recently working on.

daves/XXXX-123/enable-clickstream
daves/XXXX-123/impression-events
daves/XXXX-123/tracking-fixes
daves/XXXX-123/broken-hdps
daves/XXXX-123/fix-contacts
daves/XXXX-123/listing-provider
daves/XXXX-123/revert-listing-wrapper-classname
daves/XXXX-123/typescript-models
daves/XXXX-123/inline-contact-form
daves/XXXX-123/clickstream_application_event
daves/XXXX-123/unused-file
daves/XXXX-123/convert-typescript
daves/XXXX-123/convert-typescript-v2
daves/XXXX-123/similar-impressions
daves/XXXX-123/update-node-version

At least 75% of those have already been merged and should have been pruned.

There has to be a better way, right?

Thanks to the power of the Google machine (and Stack Overflow), I found out there is!

> git branch --sort=-committerdate

Hot diggity dog!

daves/XXXX-123/clickstream-filter-events
main
daves/XXXX-123/convert-typescript-v2
daves/XXXX-123/update-node-version
daves/XXXX-123/similar-impressions
daves/XXXX-123/convert-typescript
daves/XXXX-123/clickstream_application_event
daves/XXXX-123/unused-file
daves/XXXX-123/typescript-models
daves/XXXX-123/listing-provider
daves/XXXX-123/inline-contact-form
daves/XXXX-123/revert-listing-wrapper-classname

That list is now sorted by most recent activity on the branch.

Alright. Even though this is better, that’s still a lot of typing to remember. Fortunately, we can create an alias:

> git config --global alias.recent "branch --sort=-committerdate"

Now all I need to do is just type git recent and it works!

Nice.

TIL: How to change your default editor for git commits

A recent post on Hacker News highlighted the benefits of detailed commit messages in git.

Usually, my git commits look something like this:

> git commit -m "fix: component missing configuration file"

…which isn’t all that helpful. (Related: see XKCD on git commit messages)

I decided to put this newfound knowledge to use in my own git commits and quickly ran into an obstacle. Simply running > git commit opens up vim, which I really don’t want to use. (I’m sorry!)

This is something I should probably already know how to do, but I had to do a Google search to learn more. It turns out you can change the default editor in git, which makes writing longer messages much more convenient! How do you do it?

git config --global core.editor "nano"

Replace “nano” with your preferred editor of choice. Now, running > git commit opens up your editor and you can make detailed commit messages to your heart’s content!

Upgrading Mr. RossBot’s image model and prompt template

My Mastodon landscape painting bot, Mr. RossBot, keeps kicking along, generating some fun landscape art. It’s been powered by the AI Horde (the open source project behind ArtBot) and has tried to utilize whatever image models the API provides to the best of its abilities.

For the most part, the code behind it is a bunch of spaghetti.

An update to the AI Horde late last year added support for SDXL. However, the SDXL model on the Horde did not use a refiner. Because of this, images tended to come out a bit soft and lacked texture.

You can see examples of this in my announcement post about Mr. RossBot being back.

More recently, the Horde added support for a new image model: AlbedoBaseXL. It’s an SDXL model that has a refiner baked in. Now images will come out a lot sharper looking.

Coincidentally, I was also playing around with various prompts and discovered I could get much better image results that look more painterly (rather than simple digital renderings) by utilizing the following prompt:

A beautiful oil painting of [LITERALLY_ANYTHING], with thick messy brush strokes.

And that is it! No more messy appending various junk to the end of the prompt to attempt to get what I want. The results speak for themselves and are pretty awesome, I think!

TIL: Local overrides in Chrome

I’ve been doing web development professionally for about 10 years now and just discovered something new. (I love it when this happens!)

Today, I learned about local overrides in Chrome. Local overrides are a powerful feature within Chrome’s Developer Tools that allow developers to make temporary changes to a web page’s files (CSS, JavaScript, images, etc.) directly within the browser.

These changes are saved to your local filesystem, allowing you to experiment with modifications without affecting the live website. This is especially useful for testing, debugging, and experimenting with different designs or functionalities.

Here’s how you can use local overrides in Chrome:

  1. Open Chrome Developer Tools:
    – Right-click on any webpage and select “Inspect” or press `Ctrl+Shift+I` (Windows/Linux) or `Cmd+Opt+I` (Mac).
  2. Enable Local Overrides:
    – Go to the “Sources” tab.
    – In the navigation pane, click on the “Overrides” tab (you may need to click on the “>>” to see it).
    – Click on “Select folder for overrides” and choose a folder on your local system. This is where your changes will be saved.
    – Allow Chrome to access the folder if prompted.
  3. Start Editing:
    – Find the file you want to edit in the page file navigator pane. You can navigate through the website’s file structure or find the file in the “Network” tab.
    – Right click on a file and select “Override content”
    – Once you open a file, you can modify it directly in the editor pane. Your changes will be reflected in real-time on the webpage.
  4. Save Changes:
    – After editing, press `Ctrl+S` or `Cmd+S` to save your changes. These changes are saved to the selected local folder and will override the network resource until you disable overrides or delete the local file.
  5. Disable Overrides:
    – To stop using local overrides, simply uncheck the “Enable Local Overrides” option in the Overrides tab.

Local overrides are a temporary way to experiment with web page modifications. They don’t affect the actual files on the web server, so other users won’t see these changes. This feature is highly useful for developers and designers to test changes without deploying them to a live server.

Interesting uses of a Steam Deck

My Steam Deck has to be one of my favorite gadgets in the last few years. Gaming aside, the fact that it’s running Linux opens up all sorts of interesting possibilities.

For example, I used it to add a new feature to ArtBot… while on an airplane. The screen is tiny, but oh man, it actually worked.

ArtBot written up in PC World!

Hah! This is pretty awesome. My nifty side project, ArtBot, has been written up in PC World as part of a larger article about Stable Horde (the open source backend that powers my web app):

Stable Horde has a few front-end interfaces to use to create AI art, but my preferred choice is ArtBot, which taps into the Horde. (There’s also a separate client interface, with either a Web version or downloadable software.)

Interestingly enough, ArtBot just passed 2,000,000 images generated!