TIL: How to change your default editor for git commits

A recent post on Hacker News highlighted the benefits of detailed commit messages in git.

Usually, my git commits look something like this:

> git commit -m "fix: component missing configuration file"

…which isn’t all that helpful. (Related: see XKCD on git commit messages)

I decided to try to apply this newfound knowledge to my own git commits, and I quickly ran into an obstacle. Simply running > git commit opens up vim, which I really don’t want to use. (I’m sorry!)

This is something I should probably already know how to do, but I had to resort to a Google search. It turns out you can change the default editor git uses, which makes writing longer commit messages much more convenient. How do you do it?

git config --global core.editor "nano"

Replace “nano” with your editor of choice. Now, running > git commit opens your preferred editor, and you can write detailed commit messages to your heart’s content!
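One caveat: GUI editors need a flag that tells them to block until you close the file; otherwise git sees an empty message. For VS Code, that’s the --wait flag:

git config --global core.editor "code --wait"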

Upgrading Mr. RossBot’s image model and prompt template

My Mastodon landscape painting bot, Mr. RossBot, keeps kicking along, generating some fun landscape art. It’s powered by the AI Horde (the open source project behind ArtBot) and tries to make the best use of whatever image models the API provides.

For the most part, the code behind it is a bunch of spaghetti for picking whichever model the Horde currently offers, along the lines of the hypothetical sketch below:
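// Hypothetical sketch of the model-picking spaghetti; the model names
// here are illustrative only, not the bot's actual list.
function pickModel(availableModels) {
  if (availableModels.includes('SDXL 1.0')) return 'SDXL 1.0';
  if (availableModels.includes('Deliberate')) return 'Deliberate';
  if (availableModels.includes('stable_diffusion')) return 'stable_diffusion';
  // Otherwise, fall back to whatever the Horde happens to have today.
  return availableModels[0];
}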

An update to the AI Horde late last year added support for SDXL. However, the SDXL model on the Horde did not use a refiner. Because of this, images tended to come out a bit soft and lacked texture.

You can see examples of this in my announcement post about Mr. RossBot being back.

More recently, the Horde added support for a new image model: AlbedoBaseXL. It’s an SDXL model with a refiner baked in, so images now come out looking a lot sharper.

Coincidentally, I was also playing around with various prompts and discovered I could get much better results that look more painterly (rather than like flat digital renderings) by using the following prompt:

A beautiful oil painting of [LITERALLY_ANYTHING], with thick messy brush strokes.

And that is it! No more messily appending various junk to the end of the prompt to try to get what I want. The results speak for themselves, and they’re pretty awesome, I think!
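In code, the whole template now amounts to something like this one-liner (a sketch; the bot’s actual helper may differ):

// Hypothetical helper implementing the template above.
const paint = (subject) =>
  `A beautiful oil painting of ${subject}, with thick messy brush strokes.`;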

Implementing and testing a “poor man’s prompt expansion” model for Stable Diffusion

Various Stable Diffusion models benefit massively from verbose prompt descriptions that contain a variety of additional descriptors. Much recent research has gone into training text generation models that expand existing Stable Diffusion prompts with relevant, context-appropriate descriptors.

Since it isn’t feasible to run LLMs and text generation models inside most users’ web browsers at this time, I present my “Poor Man’s Prompt Expansion Model”. It uses a number of examples I’ve acquired from Fooocus and Hugging Face to generate completely random (and absolutely not context-appropriate) prompt expansions.

(For those interested in following along at home, you can check out the gist for this script on GitHub.)

How does it work?

We iterate through an absolute crap ton of prompt descriptors that I’ve sourced from other (smarter) systems, ones that tokenize user prompts and attempt to come up with context-appropriate responses. We’re not going to do that, because we’re going to go full chaos mode (see the code sketch after this list):

  1. Iterate through a list of source material and split up everything separated by a comma.
  2. Add the resulting strings to a new one-dimensional array.
  3. Build a new descriptive prompt by looping through the array, grabbing random descriptors until we have a string between 175 and 220 characters long.
  4. Append that string to the user’s base prompt to create the new prompt.
  5. Return the result to the user.
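Here’s a minimal sketch of those steps in JavaScript (the descriptor sources below are illustrative; the real script uses a much larger list pulled from Fooocus and Hugging Face):

// Poor man's prompt expansion: split source strings on commas, then
// randomly chain descriptors until the expansion is 175-220 characters.
const DESCRIPTOR_SOURCES = [
  '4K UHD image, 8k, professional photography, dramatic light, cinematic lighting',
  'masterpiece, trending on artstation, award winning, excellent composition'
];

// Steps 1-2: split everything on commas and flatten into one array.
const descriptors = DESCRIPTOR_SOURCES.flatMap((line) =>
  line.split(',').map((s) => s.trim())
);

// Steps 3-5: randomly grab descriptors until we land between 175 and 220
// characters, then append the expansion to the base prompt.
function expandPrompt(basePrompt) {
  let expansion = '';
  let attempts = 0;
  while (expansion.length < 175 && attempts++ < 1000) {
    const pick = descriptors[Math.floor(Math.random() * descriptors.length)];
    const next = expansion ? `${expansion}, ${pick}` : pick;
    if (next.length <= 220) expansion = next;
  }
  return `${basePrompt}, ${expansion}`;
}

console.log(expandPrompt('Happy penguins having a beer'));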

For our experiment, we’re going to lock the seed and all image generation parameters, so the only thing that changes between runs is the prompt itself.

Ready?

Here is our base prompt and the result:

Happy penguins having a beer

Not bad! Now, let’s go full chaos mode with a new prompt using the above rules and check out the result:

Happy penguins having a beer, silent, 4K UHD image, 8k, professional photography, clouds, gold, dramatic light, cinematic lighting, creative, pretty, artstation, award winning, pure, trending on artstation, airbrush, cgsociety, glowing

That’s fun! (I’m not sure what the “silent” descriptor means, but hey!) Let’s try another:

Happy penguins having a beer, 8k, redshift, illuminated, clear, elegant, creative, black and white, masterpiece, great power, pinterest, photorealistic, award winning, vray, enchanted, complex, excellent composition, beautiful composition

I think we just created an advertisement for a new type of beverage! It nailed the “black and white”, though I’m not sure how that penguin turned into a bottle. What else can we make?

Happy penguins having a beer, volumetric lighting, Digital, intricate, awesome, futuristic, cartoon artstyle, vector, solid, detailed, dramatic light, realistic photograph, wonderful colors, dramatic atmosphere

The dude in the middle is planning on having a good night. Definitely some “wonderful colors”. Not so much realistic photo or vector, but fun! One last try:

Happy penguins having a beer, 35mm, surreal, amazing, Trending on Artstation HQ, matte painting hyperrealistic, full focus, very inspirational, pixta.jp, aesthetic, 8k, black and white, reflected on the matrix studio background, awesome

As you can see, you can get a wide variety of image styles simply by mixing a bunch of descriptive elements into an image prompt.

I’ve wanted to implement a feature like this on ArtBot for a long time. (Essentially, if the user allows it, these descriptors would be automatically appended behind the scenes whenever an image is requested.) Perhaps this will come soon.

DALL-E 3: Adding text to your text-to-image creations

I recently got access to DALL-E 3 through OpenAI’s ChatGPT+ interface. One of the key features and improvements in their image model is the ability to generate coherent text within the image.

Let’s give it a try, based on one of the most popular Stack Overflow questions: How do I exit Vim?

Using the following prompt: Oil painting of a hacker furiously typing commands into an old computer and muttering to himself, “how does one exit vim?”

That… is pretty good!

Mr. RossBot is back!

Alrighty, I updated the logic this weekend and have Mr. RossBot operating on the hairy elephant website (Mastodon). (It’s also posting on Threads, if you’re into that sort of thing.)

I also updated the image model to use Stability.ai’s swanky new SDXL model. I’m pretty impressed with the results.

Interesting uses of a Steam Deck

My Steam Deck has to be one of my favorite gadgets in the last few years. Gaming aside, the fact that it’s running Linux opens up all sorts of interesting possibilities.

For example, let’s use it to add a new feature to ArtBot… while I’m on an airplane. The screen is tiny, but oh man, it actually worked.

New side project: ArtBot, a way to create images using Stable Diffusion

Thanks to Reddit, I recently stumbled upon a cool project called Stable Horde. It essentially lets you generate images using a distributed cluster of GPUs donated by community members.

I had been creating my own web interface to remotely interact with a Stable Diffusion instance running on my own machine. I decided to quickly repurpose the web app and connect to the Stable Horde API. The result?

ArtBot, a Stable Diffusion demonstration that allows you to generate images using the power of the Stable Horde. It is awesome!
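For the curious, talking to the Horde boils down to a submit-then-poll flow. Here’s a hedged sketch against the v2 API as I understand it (the endpoint paths, field names, and anonymous API key are from my reading of the docs, so double-check before relying on them):

// Submit a generation request to the Stable Horde v2 API, then poll
// until the distributed workers have finished the job.
async function generate(prompt) {
  const submit = await fetch('https://stablehorde.net/api/v2/generate/async', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      apikey: '0000000000', // anonymous key
    },
    body: JSON.stringify({ prompt, params: { width: 512, height: 512 } }),
  });
  const { id } = await submit.json();

  // Poll the check endpoint until the job is done.
  let done = false;
  while (!done) {
    await new Promise((r) => setTimeout(r, 3000));
    const check = await fetch(`https://stablehorde.net/api/v2/generate/check/${id}`);
    ({ done } = await check.json());
  }

  // Fetch the finished generations.
  const status = await fetch(`https://stablehorde.net/api/v2/generate/status/${id}`);
  const { generations } = await status.json();
  return generations[0].img;
}

generate('A beautiful oil painting of a mountain lake').then(console.log);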

This dev tools update is going to take a while

Oops. I guess I accidentally typed in “git status” (no, I didn’t — I do this all the time!)

Now, macOS needs to redownload all the dev tools again. It looks like it’s going to be a while.

Fun fact: The time between when Tyrannosaurus Rex existed and now is less than the time between now and when Git will finally be installed on this machine.
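(For future reference: if the prompt ever fails to appear, running xcode-select --install kicks off the same Command Line Tools download manually.)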

Quickly bootstrapping a new Node.js project

A problem that often happens to me: I get the inspiration to whip up something in Node.js (for fun, for experimentation, for a side project, etc.), but then I realize I need to go through the process of actually setting things up before I can even start writing code.

Usually, I have to dig through previous projects and copy over my eslint and prettier config files, read through documentation to remember how to set up TypeScript again, and install the correct dependencies for running tests. Before I know it, I’m bored, tired, and no longer interested in doing whatever it was I was going to do.

I decided to experiment with some command line tools and created a Node.js script that can help me quickly bootstrap a new project with common configuration parameters that I use. It’s available on GitHub: Bootstrap Node Project.

The GIF above shows this tool in action. I’m able to get the scaffolding for a new project up and running within about 20 seconds! After running, the project structure looks like this (with associated npm start and test scripts, all ready to go). That is pretty awesome.

my-cool-project/
├─ .husky/
├─ node_modules/
├─ src/
│ ├─ index.js (.ts)
│ ├─ index.test.js (.ts)
├─ .eslintrc.json
├─ .gitignore
├─ .prettierrc
├─ package-lock.json
├─ package.json
├─ README.md
├─ tsconfig.json (optional)

Obviously, it’s highly opinionated and caters to the configuration options I personally like to use. But I figure it’s a great resource for anyone who wants to roll their own utility for quickly bootstrapping projects.
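If you do roll your own, the core is just a bit of Node gluing together mkdir, npm, and some template files. A minimal sketch (the file contents and dependency list here are illustrative, not what my tool actually writes):

// Scaffold a new Node project in one shot: create the directories,
// initialize package.json, install dev tooling, drop in starter files.
const { execSync } = require('node:child_process');
const fs = require('node:fs');
const path = require('node:path');

const name = process.argv[2] || 'my-cool-project';
fs.mkdirSync(path.join(name, 'src'), { recursive: true });
process.chdir(name);

// Initialize package.json and pull in the usual dev dependencies.
execSync('npm init -y', { stdio: 'inherit' });
execSync('npm install --save-dev eslint prettier jest husky', { stdio: 'inherit' });

// Drop in starter files; a real tool would copy these from templates.
fs.writeFileSync('src/index.js', 'console.log("hello, world");\n');
fs.writeFileSync('src/index.test.js', 'test("works", () => expect(true).toBe(true));\n');
fs.writeFileSync('.prettierrc', '{}\n');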


Fun with GPT-3

Earlier this evening, I was playing around with OpenAI’s GPT-3 language model for generating “human-like text.”

I gave it the following prompt:

Write a plot about a movie that involves rogue unicorns who want to conquer Earth and the planet’s only hope is an Australian Cattle dog named Benson who sleeps a lot.

The response?

The unicorns, led by their king, Uni, have been planning their conquest of Earth for years. They have been observing the humans and have concluded that they are weak and vulnerable. The unicorns have developed a powerful magic that will allow them to take over the planet.

Benson, an Australian Cattle dog, is the only one who can stop them. Benson is a lazy dog who loves to sleep. He is content to spend his days lounging around and eating food. However, when the unicorns begin their attack, Benson is the only one who can stop them. He rallies the other dogs and leads them in a fight against the unicorns.

The unicorns are powerful, but they are no match for Benson and the other dogs. Benson is able to defeat Uni and the other unicorns, and the Earth is saved.

Summer blockbuster coming soon to a theater near you!

Creating an automated Twitter bot about gun violence

The school shooting in Uvalde last week was horrible. As a parent, I feel so powerless to protect my kids from something like that. Taking them to school the next day was extremely emotional.

It’s clear that we, as a country, are going to continue to do nothing about guns and gun violence. I channeled some of my emotion into building an automated bot for Twitter. I call it SABS: Stochastic Analysis for Ballistics Superfans (alternative title: “Second Amendment Bullshit”).

If you’re so technically inclined, you can download and run it yourself. It’s powered by Node and was a fun little experiment with Twitter’s API.

It automatically replies to any congressional member who tweets.

Which of course includes unhinged Republicans.
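Mechanically, it boils down to a filtered stream plus a reply call. A hedged sketch using the twitter-api-v2 package (the handles and reply text are placeholders, not the bot’s actual rules):

// Watch for tweets from a list of accounts and reply to each one.
// Requires app-only auth for streaming and user-context auth for replies.
const { TwitterApi } = require('twitter-api-v2');

const appClient = new TwitterApi(process.env.TWITTER_BEARER_TOKEN);
const userClient = new TwitterApi({
  appKey: process.env.API_KEY,
  appSecret: process.env.API_SECRET,
  accessToken: process.env.ACCESS_TOKEN,
  accessSecret: process.env.ACCESS_SECRET,
});

async function run() {
  // Filtered stream rule: match tweets from the accounts we're watching.
  await appClient.v2.updateStreamRules({
    add: [{ value: 'from:SenExample OR from:RepExample', tag: 'congress' }],
  });

  const stream = await appClient.v2.searchStream();
  for await (const { data } of stream) {
    // Reply to the matching tweet.
    await userClient.v2.reply('Since this tweet was posted…', data.id);
  }
}

run().catch(console.error);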

Debugging problems as of late

I’ve been trying to fix a particularly tricky bug that only ever manifests in our production environments, which means I’m unable to reproduce it on my local machine. It’s been fun. Let me tell you…

I’ve been stuck somewhere between stage 4 and stage 6 of the stages of debugging for weeks now.

Book Review: Deep Work by Cal Newport

The pandemic forced a change in the way many knowledge workers work. Many of us have shifted to working from home, and for some roles the change is permanent.

I’m fortunate to be in such a position, but it’s been both a blessing and a difficult adjustment.

Distractions are frequent: regular Zoom meetings, Slack messages, various alert notifications, email. I think a number of people (myself included) are overcompensating in our communication styles.

For software engineers, this causes a lot of context switching. And that’s generally a bad thing.

Context switching can lower productivity, increase fatigue, and, ultimately, lead to developer burnout. Switching tasks requires energy and each switch depletes mental focus needed for high cognitive performance. Over an entire workday, too many context switches can leave developers feeling exhausted and drained.

The impact of context switching lingers even after switching tasks. Cognitive function declines when the mind remains fixated on previous tasks, a phenomenon known as attention residue.

I’ve recently been feeling drained and less productive than usual. While browsing a thread on Hacker News, I came across a comment suggesting Deep Work by Cal Newport for ideas on how to regain focus and minimize distractions. It was the first I’d heard of the book.

It was pretty enlightening, and I was hooked!

It offers a number of self-help style steps (somewhat obvious, in hindsight) that you can take to improve your situation and increase productivity: carve out set times when no one can bother you (early in the morning or late at night, say), keep those times consistent, set reasonable expectations, and have a plan rather than winging it.

But it also shared some interesting research on how our brains have been rewired to have shorter attention spans, thanks to all our fancy pants technology.

“Once your brain has become accustomed to on-demand distraction, Nass discovered, it’s hard to shake the addiction even when you want to concentrate. To put this more concretely: If every moment of potential boredom in your life—say, having to wait five minutes in line or sit alone in a restaurant until a friend arrives—is relieved with a quick glance at your smartphone, then your brain has likely been rewired to a point where, like the “mental wrecks” in Nass’s research, it’s not ready for deep work—even if you regularly schedule time to practice this concentration.”

Yeah… guilty.

Anyway, I definitely want to put some of these ideas into practice. It was a quick read with some concrete steps for improving attention and focus that I can start using immediately. Excited to try it!

Deep Work by Cal Newport

Experimenting with parallel computing using node worker_threads

I’ve wanted to play around with worker threads in Node.js for a while, so I put together a little repository that demonstrates how it all works. Check it out here.

To simulate multiple threads each processing data, every worker thread uses a randomly generated timeout between 100 and 700 milliseconds. In addition, each has a random number of loops (between 10 and 1000) that must complete before the worker is terminated.
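Stripped down, the pattern looks something like this (a sketch, not the repo’s exact code):

// Sketch of the worker_threads pattern described above. One file acts
// as both the main thread and the worker.
const { Worker, isMainThread, parentPort, workerData } = require('node:worker_threads');

if (isMainThread) {
  // Spawn a handful of workers and log their progress reports.
  for (let i = 0; i < 4; i++) {
    const worker = new Worker(__filename, { workerData: { id: i } });
    worker.on('message', (msg) => console.log(msg));
    worker.on('exit', () => console.log(`worker ${i} terminated`));
  }
} else {
  // Simulate work: a random delay (100 to 700 ms) plus a random number
  // of loops (10 to 1000) before reporting back and exiting.
  const delay = 100 + Math.floor(Math.random() * 600);
  const loops = 10 + Math.floor(Math.random() * 990);
  setTimeout(() => {
    let total = 0;
    for (let i = 0; i < loops; i++) total += i;
    parentPort.postMessage(`worker ${workerData.id}: ${loops} loops done (after ${delay}ms)`);
  }, delay);
}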

It’s kind of fun to watch the tasks run and automatically complete inside the terminal (check out the screenshot of the output up top).

Advent of Code: The most wonderful time of year…

December is upon us! That means the latest edition of the Advent of Code is here.

The Advent of Code is essentially a programming challenge featuring a new problem each day through Christmas. The problems all relate to a common theme, and this year it sounds like we’re going on an undersea adventure.

— Day 1: Sonar Sweep —

You’re minding your own business on a ship at sea when the overboard alarm goes off! You rush to see if you can help. Apparently, one of the Elves tripped and accidentally sent the sleigh keys flying into the ocean!

Before you know it, you’re inside a submarine the Elves keep ready for situations like this. It’s covered in Christmas lights (because of course it is), and it even has an experimental antenna that should be able to track the keys if you can boost its signal strength high enough; there’s a little meter that indicates the antenna’s signal strength by displaying 0-50 stars.

Your instincts tell you that in order to save Christmas, you’ll need to get all fifty stars by December 25th.

Collect stars by solving puzzles. Two puzzles will be made available on each day in the Advent calendar; the second puzzle is unlocked when you complete the first. Each puzzle grants one star. Good luck!

I’ll be partaking using my preferred language of choice: x86 assembly.

I kid, I kid. JavaScript.