Exploring the Mount St. Helens blast zone using Google Earth

May 18th marked the 44th anniversary of the 1980 eruption of Mount St. Helens. Over on Threads, someone started an account posting pseudo-realtime updates of the lead-up to the eruption and its aftermath. It’s been really fascinating to follow, and it stoked my interest in learning more about the eruption (no surprise, given my geology background, eh?).

Like most things that I start digging into, I ended up finding a book!

Eruption: The Untold Story of Mount St. Helens by Steve Olson. It details events surrounding the eruption and explores how a number of victims ended up around the mountain on the fateful Sunday morning. Reading it sent me down a rabbit hole of Wikipedia entries, USGS reports and Google Earth sleuthing…

In the summer of 2009, I visited Johnston Ridge Observatory and was able to see the volcano firsthand (see image below). Johnston Ridge Observatory is located on the site of the Coldwater II observation post, where volcanologist David Johnston famously radioed his last words, “Vancouver, Vancouver! This is it!”, moments before the lateral blast swept over the ridge and destroyed his encampment. (Johnston’s body was never found.)

Source: Me

The lateral blast was set off by a M5.1 earthquake that triggered the largest landslide in recorded history, shearing 1,300 feet off the top of the mountain. The collapse released a violent pyroclastic surge northward, scouring the landscape for miles. You can still see the results of the blast to this day.

When we visited in 2009, 29 years after the eruption, the lateral blast was still evident in obvious signs of tree fall (below image): gigantic trees snapped over in the direction of the blast as if they were toothpicks.

Source: Me

Johnston Ridge (and the site of the Coldwater II observation post) sits about 5 miles from Mount St. Helens. Looking out over this grand vista, your sense of scale is completely messed up. The mountain is so huge that it looks like you can reach out and touch it — you swear to yourself that it’s just right there, a short hop and skip away.

“I’m going to go on a quick hike to the volcano. I’ll be back by lunchtime,” you say.

Everyone else: “lol”

The shockwave and pyroclastic surge produced by the lateral blast were estimated to have reached upwards of 670 miles per hour. At that speed, it would have taken roughly 30 seconds to cross the five miles from the volcano and overtop the ridge.
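As a quick back-of-the-envelope check on that figure (using the approximate 5-mile distance and the 670 mph estimate from above):

```js
// Back-of-the-envelope: how long does a ~670 mph blast take to cover ~5 miles?
const distanceMiles = 5;   // approximate distance from the volcano to Johnston Ridge
const speedMph = 670;      // estimated peak speed of the lateral blast

const seconds = (distanceMiles / speedMph) * 3600;
console.log(`~${seconds.toFixed(0)} seconds`); // ~27 seconds, call it about half a minute
```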

Looking at my own photos from the observation post, I can’t help but wonder what David Johnston was thinking as he watched the shockwave and pyroclastic surge rapidly spread across the valley below, approaching his location. It was probably an awesome sight to see, quickly followed by “Oh. Shit.”

Thanks to the wonders of modern technology, we have some fantastic exploration tools. I loaded up the Google Earth web app and set about exploring the area.

One of the first things I notice is how huge the mountain is (err… was?) and how small and insignificant Johnston Ridge seems, especially in the face of the resulting landslide and pyroclastic blast.

Via Google Earth

Zooming in on the Spirit Lake area, you can still see floating tree trunks grouped together, covering the northern part of the lake (I assume due to prevailing southerly winds in the area).

Via Google Earth

If we turn toward the west and look at Johnston Ridge, we can see deposits left behind as the pyroclastic blast topped the ridge. They are the lighter grey outcrops scattered around the map. (I’ve made a rough attempt to outline them below.)

Via Google Earth

Let’s pop over to the valley just to the north of Johnston Ridge (where Spirit Lake Highway runs). We can zoom in and see a mess of tangled tree trunks along the banks of South Coldwater Creek.

Via Google Earth

At the top of that valley, we can see more evidence of pyroclastic blast deposits. As in the image of Johnston Ridge above, look for the light grey outcrops and exposures.

Via Google Earth

Alright, let’s check out how far the effects of the lateral blast were felt. If we zoom out a bit and go to the top of the ridge (the next ridge north of Johnston Ridge — I am unsure of the name), we see more evidence of blast zone tree fall. At this point, we’re about 6.5 miles from the volcano.

Via Google Earth

If we skip north across the next valley that contains Coldwater Lake, we get to the third ridge we’re going to look at. Again, at the top, we see evidence of blast zone tree fall. This is 8 miles from the volcano.

Via Google Earth

Now that we’re getting a sense of the scale of the blast, we can zoom out and start putting things together. Wherever this sort of tree fall exists, it almost looks like the landscape was scoured (it was!).

Let’s see if we can find anything else interesting. We zoom out and see some scour marks on ridges way off to the north.

Via Google Earth

The area I circled looks interesting. It’s called Goat Mountain and it’s nearly 12 miles from the volcano. Let’s zoom in… ah, yes. There is the distinct “hash mark” pattern we keep seeing that marks blast zone tree fall.

Via Google Earth

From our computer screen, it’s hard to get a proper sense of scale. If we use Google Earth to measure the length of one of these “match sticks” (a big dead tree!), we get about 33 feet!

Via Google Earth

A USGS report on the lateral blast documented 100-foot-tall trees knocked over 19 miles from the volcano! Try as I might, I am unable to find evidence of this via Google Earth, as the margins of the blast zone seem to merge with areas where loggers have clear-cut the forest.

Below is an example of a clear-cut logging area about 30 miles away from the volcano (this area was not affected by the blast).

Via Google Earth

“But Dave,” I hear you say, “how do you know some of those are from the blast and some are from logging?”

You’re right! In a way, I don’t. However, one potentially easy way to tell is the presence of logging roads. In my example from Goat Mountain above (12 miles from the volcano), the tree fall is located on a ridge, away from any sort of easily accessible logging road.

There was one section of Steve Olson’s book that I found particularly fascinating, especially because I hadn’t heard about it before. At the exact time the mountain erupted, a small plane was flying overhead with two geologists as passengers — Keith and Dorothy Stoffel.

They were on their fourth pass over the north rim of the crater, flying west to east, when Keith noticed something moving. “Look,” he said, “the crater.” Judson tipped the Cessna’s right wing so they could get a better view. Some of the snow on the south-facing side of the crater had started to move. Then, as they looked out the plane’s windows, an incredible thing happened. A gigantic east-west crack appeared across the top of the mountain, splitting the volcano in two. The ground on the northern half of the crack began to ripple and churn, like a pan of milk just beginning to boil. Suddenly, without a sound, the northern portion of the mountain began to slide downward, toward the north fork of the Toutle River and Spirit Lake. The landslide included the bulge but was much larger. The whole northern portion of the mountain was collapsing. The Stoffels were seeing something that no other geologist had ever seen.

A few seconds later, an angry gray cloud emerged from the middle of the landslide, and a similar, darker cloud leapt from near the top of the mountain. They were strange clouds, gnarled and bulbous; they looked more biological than geophysical. The two clouds rapidly expanded and coalesced, growing so large that they covered the ongoing landslide. “Let’s get out of here,” shouted Keith as the roiling cloud reached toward their plane.

Excerpt From Eruption by Steve Olson

Now, wait a minute! You’re telling me that at the exact time the volcano erupted, there were people flying overhead? I know this happened in 1980, but there just has to be photos of this, right?

Yes, there are photos!

Via Dorothy Stoffel

Via Dorothy Stoffel

Via Dorothy Stoffel

Via Dorothy Stoffel

The photos correlate well with a famous series of images captured by Gary Rosenquist as the initial moments of the landslide and eruption unfolded.

Via USGS / Gary Rosenquist

Here’s a fun aside (if you can call something related to an epic natural disaster “fun“). A YouTuber took the series captured by Rosenquist and ran some magical AI frame interpolation on it (essentially, an AI generates content to fill in the missing information between frames of a video). The result is a near real-time simulation of what those initial moments of the blast may have looked like.

After taking the photos, Rosenquist and his friends correctly decided it was time to leave. Immediately.

He took one last photo (this is another one I don’t remember seeing before).

Via Gary Rosenquist

Do you like geology? Want more? Here’s a post I wrote in 2010 that took a deep dive into earthquake frequency.

Project: Super Simple ChatUI

I’ve been playing around a lot with Ollama, an open source project that lets you run LLMs locally on your own machine. It’s been fun to mess around with. Some benefits: no rate limits, privacy (useful if you’re, say, trying to create a pseudo therapy bot, trying to simulate a foul-mouthed smarmy sailor, or trying to generate ridiculous fake news articles about a Florida Man losing a fight to a wheel of cheese), and access to all sorts of new models as they get released.
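For a taste of what talking to Ollama looks like under the hood, here’s a minimal sketch of a request against its local HTTP API (the model name is just an example; swap in whatever you’ve pulled locally):

```js
// Minimal sketch: send a prompt to a locally running Ollama instance and print the reply.
// Assumes Ollama is listening on its default port (11434) and that you're on Node 18+
// (or a browser), where fetch is available globally.
const response = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3", // example model name; use whatever you've pulled
    prompt: "Tell me a joke about having too many side projects.",
    stream: false,   // return a single JSON object instead of a stream
  }),
});

const data = await response.json();
console.log(data.response); // the generated text
```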

I decided to try my hand at creating a simplified interface for interacting with it. The result: Super Simple ChatUI.

As if I need more side projects. So it goes!

Adventures in topology: The Cuckoo’s Egg and meeting Cliff Stoll

I recently finished reading “The Cuckoo’s Egg” by Cliff Stoll. It’s a fascinating story detailing some of the first documented examples of computer hacking and computer forensics.

This post isn’t a review of his book, however! It’s more to document some adventures that resulted after reading it.

First, a quick summary:

In 1986, Cliff Stoll was an astronomer working at Lawrence Berkeley Laboratory when he was tasked with looking into a $0.75 discrepancy in compute time billed to physicists and other scientists who remotely connected to the lab’s machines.

What resulted was a year-long wild-goose chase that ended in the arrest of a hacker in Germany, working on behalf of the KGB, who had been remotely connecting to university computers in the United States in order to gain access to military networks through ARPANET (the precursor to today’s Internet).

Cliff wrote a book about his experience that went on to become a best seller. For fans of esoteric computer history, this was one of the first documented examples of hacking and marked the beginning of computer forensics. This book was published 35 years ago and deals with (now) antiquated technology that the young ones around here know nothing about — but oh wow, did I thoroughly enjoy this!

Anyway! That’s not why I’m here. I’m here because I keep seeing his name pop up in various places (most recently on Hacker News). A post mentioned his TED talk from 2008. It’s a hoot — and pretty inspiring, too!

One person mentioned that he makes Klein bottles (an interesting manifold that, as a container, encloses zero volume, since it has only one side) out of his home in… North Oakland. Oh, and he enjoys visitors.

Oh, really?!

The Klein bottle is a really interesting object and has been a fun talking point with friends. I ended up purchasing one from Cliff and asked if I could pick it up, since I live nearby. He happily obliged.

I ended up bringing our oldest kiddo and we had an absolute blast. He spent an hour with us, showing some of the artistic stuff he’s been working on (mathematical quilts!), showing off various gadgets he’s made (a fun device that draws images on his shipping boxes using Sharpies — an automated personal touch), and letting my kiddo drive the remote controlled robot he built that runs under his crawl space (!).

Just an absolutely memorable time. Thanks so much, Cliff!

Tracking the total eclipse shadow

I didn’t get a chance to make it out to see the total eclipse in person this time. (Really bummed… 2017 turned me into a legit umbraphile!)

Earlier today, I pulled down a number of images from NOAA’s GOES-East satellite and compiled this video. The imagery comes in at 10-minute intervals, and you can clearly see the Moon’s shadow as it makes its way across North America.

(Protip: Set the image quality to 720p. YouTube’s compression makes that video look like garbage otherwise!)

Pretty awesome!

Somewhat related — in 2020, I compiled a bunch of NOAA imagery that encompassed 3 weeks. I need to get that project up and running again…
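For future me, here’s a rough sketch of the shape of that pipeline: download a sequence of timestamped frames, then stitch them together with ffmpeg. The URL below is a placeholder, not the actual NOAA endpoint I pulled from.

```js
// Rough sketch: download a sequence of satellite images so they can be stitched into a video.
// BASE_URL is a placeholder -- substitute the actual image source you're pulling from.
import { writeFile } from "node:fs/promises";

const BASE_URL = "https://example.com/goes-east"; // placeholder, not the real NOAA endpoint
const frames = ["1200", "1210", "1220", "1230"];  // timestamps, one every 10 minutes

for (const [i, time] of frames.entries()) {
  const res = await fetch(`${BASE_URL}/${time}.jpg`);
  const buffer = Buffer.from(await res.arrayBuffer());
  // Zero-padded names keep the frames in order for ffmpeg (frame-000.jpg, frame-001.jpg, ...)
  await writeFile(`frame-${String(i).padStart(3, "0")}.jpg`, buffer);
}
// Then something like: ffmpeg -framerate 12 -i frame-%03d.jpg eclipse.mp4
```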

Ever-changing communication

There was a time (really, the past 15 years or so) when responding to things with an animated GIF was so perfect and encapsulated so much (e.g., if a picture is worth 1,000 words, what is a series of pixelated images moving at 8 frames per second worth?).

For example, see the rise of services like Giphy. I even have a random 10-year-old project myself that involves animated GIFs!

Now though, it’s becoming generative AI all the way down.

For example, I just received a meeting invite that increases the frequency of meetings I’m having related to a certain project to… every single day.

Me: Hey, robot! Please create a meme image of a programmer jumping up on a desk and excitedly cheering “MOAR MEETINGS!”

Robot:

Now to figure out a way to send it in my place…

Hometown tidbits: The first modern hydroelectric plant

I’m currently reading California: An American History, by John Mack Faragher. There is an interesting historical tidbit that calls out the area where I grew up.

A robust economy pulled migrants to California. That had not always been the case. The economy had grown slowly in the last quarter of the nineteenth century, held back in part by the absence of coal deposits on the Pacific coast. In the 1890s, however, Californians began exploiting other forms of energy that would power a takeoff into sustained economic development.

They first harnessed the power of the water that coursed down the watercourses draining the state’s many mountain ranges. In 1893, utilizing technology developed for the mining industry, the first modern hydroelectric plant in the nation began operation on a fast-flowing creek near the southern California town of Redlands. Local orange growers needed a source of power that would enable them to pump water up into the hills, where they wanted to lay out more groves. The Redlands generating station became the model for dozens of others, many in the Sierra Nevada, designed to provide power for both domestic and industrial use.

Hey, that’s neat! I grew up on a property with a creek near the town of Redlands (and have even done a small bit of research on it back in the ol’ university days).

I wonder… is it the same creek (or rather, the bigger creek near the small creek I grew up on)? To the Google machine!

Search: “redlands first hydroelectric plant”

Yup!

Built by the Redlands Electric Light and Power Company, the Mill Creek hydroelectric generating plant began operating on 7 September 1893. This powerhouse was foremost in the use of three-phase alternating current power for commercial application and was influential in the widespread adoption of three-phase power throughout the United States.

[…]

The success of the 3-phase generators at the Mill Creek No. 1 was apparent, for these original generators were used until 1934. Although the original units have been replaced, this plant is still in operation to this day. Today, more than 100 years after Mill Creek’s completion, 3-phase generators are still the primary form of power generation around the world.

Hah, that is pretty cool! I distinctly remember this building from playing nearby and exploring the “wash” (as we called the area). You can see it via Google Street View, here, just to the north of Highway 38.

This is just one of the many wonders about this area.


DNS issues days after moving domain registrars

(Writing this for my future self and for future people who might have similar problems.)

Quite a while ago, I made the decision to move all my domains from GoDaddy to a mix of Google Domains and Name.com. I enjoyed managing my domains through the Google interface and thought it was one of the better UIs available. It made things easy!

Sadly, like most beloved Google projects (RIP Google Reader), they decided to shut it down and transfer all domains to Squarespace. Well, I didn’t really want to use them. So, I decided to transfer many of my domains to Name.com.

The process to transfer was pretty easy. I figured I’d have to wait a day or two before I could see the changes.

One day goes by. Two days go by. Three days go by. It’s been four days and I’m still getting this when attempting to view my blog and a few other domains of mine from my home network.

What the heck is going on?! If we check some domain propagation tools, I see that my site is pretty much unreachable throughout most of the world.
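(If you want to do a quick propagation check yourself, here’s a small sketch using Node’s built-in dns module to ask a few public resolvers for the same record; the domain below is a placeholder.)

```js
// Quick propagation check: ask several public resolvers for the same A record.
// A resolver that errors out while others answer is a hint that something other than
// "the records are missing" is going on (for example, failing DNSSEC validation).
import { Resolver } from "node:dns/promises";

const DOMAIN = "example.com"; // placeholder; use your own domain
const resolvers = {
  "Google (8.8.8.8)": "8.8.8.8",
  "Cloudflare (1.1.1.1)": "1.1.1.1",
  "Quad9 (9.9.9.9)": "9.9.9.9",
};

for (const [name, ip] of Object.entries(resolvers)) {
  const resolver = new Resolver();
  resolver.setServers([ip]);
  try {
    const addresses = await resolver.resolve4(DOMAIN);
    console.log(`${name}: ${addresses.join(", ")}`);
  } catch (err) {
    console.log(`${name}: lookup failed (${err.code})`);
  }
}
```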

Interestingly, if I popped off my home network and used my phone, I could reach the site.

Okay! Now we’re getting somewhere. Sort of.

After much Googling, I found a post on the Cloudflare forums where someone had a similar issue. The solution was that the “DNSSEC” settings were incorrect. I don’t use Cloudflare, but it seemed like a step in the right direction.

2-3 days ago I changed the nameservers towards those of cloudflare, and since I cannot reach the website anymore. I’ve added all the DNS records that should be relevant. However in the dashboard it keeps saying pending nameserver update and the website can’t be reached. After 2-3 days still… The hosting company says that the NS points to those of cloudflare, and I’m at a loss at what I did wrong to make it go through cloudflare…and be able to be reached again. 

A solution to this issue stated:

Your domain’s DNSSEC setup is broken

Interesting. I hadn’t touched anything related to DNSSEC at all, as Name.com said it would auto-import all settings during the transfer process. In fact, at first I couldn’t find anything related to DNSSEC management.

Oh, wait. It turns out, it’s waaaaaayyyy down at the bottom of the page when managing your domain!

Let’s see what happens if we click on it.

Oh! There is a value there:

Interestingly, there is an option to remove this entry. Let’s see what happens. I mean, the site is already broken and unreachable, right? So, I click remove and wait a few minutes.

And then…

the websites are accessible again!

Wow. Lesson learned — double check everything when transferring domain registrars. (In hindsight, the failure makes sense: the stale DS record at the registry no longer matched the keys served by the new nameservers, so any resolver that validates DNSSEC, like the one on my home network, rejected the lookups entirely.)

Implementing and testing a “poor man’s prompt expansion” model for Stable Diffusion

Various Stable Diffusion models massively benefit from verbose prompt descriptions that contain a variety of additional descriptors. Much recent research has gone into training text generation models for expanding existing Stable Diffusion prompts with relevant and context appropriate descriptors.

Since it isn’t feasible to run LLMs and text generation models inside most users’ web browsers at this time, I present my “Poor Man’s Prompt Expansion Model”. It uses a number of examples I’ve acquired from Fooocus and Hugging Face to generate completely random (and absolutely not context-appropriate) prompt expansions.

(For those interested in following along at home, you can check out the gist for this script on GitHub.)

How does it work?

We iterate through a list of an absolute crap ton of prompt descriptors that I’ve sourced from other (smarter) systems that tokenize user prompts and attempt to come up with context-appropriate responses. We’re not going to do that, because we’re going to go into full chaos mode (there’s a rough code sketch of this after the list):

  1. Iterate through a list of source material and split up everything separated by a comma.
  2. Add the resulting list to a new 1-dimensional array.
  3. Now, build a new descriptive prompt by looping through the list until we get a random string of descriptors that are between 175 and 220 characters long.
  4. Once that’s done, return the result to the user.
  5. Create a new prompt.
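Here’s that rough sketch in code (not the actual gist, just the idea, with a tiny stand-in descriptor list):

```js
// Rough sketch of the "full chaos mode" prompt expansion described above.
// `sourceMaterial` stands in for the descriptor lists pulled from Fooocus / Hugging Face.
const sourceMaterial = [
  "8k, cinematic lighting, trending on artstation, award winning",
  "volumetric lighting, intricate, dramatic atmosphere, photorealistic",
  // ...imagine an absolute crap ton more of these
];

// Steps 1 & 2: split everything on commas and flatten into a single 1-D array.
const descriptors = sourceMaterial
  .flatMap((line) => line.split(","))
  .map((d) => d.trim())
  .filter(Boolean);

// Step 3: keep appending random descriptors until we land between 175 and 220 characters.
function expandPrompt(basePrompt) {
  let expansion = "";
  while (expansion.length < 175) {
    const pick = descriptors[Math.floor(Math.random() * descriptors.length)];
    const next = expansion ? `${expansion}, ${pick}` : pick;
    if (next.length > 220) continue; // too long, try a different descriptor
    expansion = next;
  }
  // Step 4: hand the result back to the user.
  return `${basePrompt}, ${expansion}`;
}

// Step 5: create a new prompt.
console.log(expandPrompt("Happy penguins having a beer"));
```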

For our experiment, we’re going to lock all image generation parameters and seed, so we theoretically get the same image given the exact same parameters.

Ready?

Here is our base prompt and the result:

Happy penguins having a beer

Not bad! Now, let’s go full chaos mode with a new prompt using the above rules and check out the result:

Happy penguins having a beer, silent, 4K UHD image, 8k, professional photography, clouds, gold, dramatic light, cinematic lighting, creative, pretty, artstation, award winning, pure, trending on artstation, airbrush, cgsociety, glowing

That’s fun! (I’m not sure what the “silent” descriptor means, but hey!) Let’s try another:

Happy penguins having a beer, 8k, redshift, illuminated, clear, elegant, creative, black and white, masterpiece, great power, pinterest, photorealistic, award winning, vray, enchanted, complex, excellent composition, beautiful composition

I think we just created an advertisement for a new type of beverage! It nailed the “black and white”, though I’m not sure how that penguin turned into a bottle. What else can we make?

Happy penguins having a beer, volumetric lighting, Digital, intricate, awesome, futuristic, cartoon artstyle, vector, solid, detailed, dramatic light, realistic photograph, wonderful colors, dramatic atmosphere

The dude in the middle is planning on having a good night. Definitely some “wonderful colors”. Not so much realistic photo or vector, but fun! One last try:

Happy penguins having a beer, 35mm, surreal, amazing, Trending on Artstation HQ, matte painting hyperrealistic, full focus, very inspirational, pixta.jp, aesthetic, 8k, black and white, reflected on the matrix studio background, awesome

As you can see, you can get a wide variety of image styles by simply appending a bunch of descriptive elements to an image prompt.

I’ve wanted to implement a feature like this on ArtBot for a long time. (Essentially, if the user allows it, automatically append these descriptions behind the scenes when an image is requested). Perhaps this will come soon.

Banned from Facebook Marketplace without a reason and without recourse

As much as technology improves our lives (and is integrated into literally everything we do), it really fucking sucks when the algorithm gets it wrong.

Earlier this summer, I posted a shop vac for sale, as I’ve done a number of times before (err, posting things for sale, not specifically shop vacs).

Soon after, I was banned for “violating community standards.” I have literally no idea what happened. But! Apparently you could appeal the decision if you felt it was incorrect.

So I did.

And was rejected.

So I appealed again.

And was rejected.

I appealed again. And now it looks like I am permanently banned from Facebook Marketplace. And there’s no way to appeal the decision. No way to contact customer support. Cool.

Anyway, here’s an image of Mark Zuckerberg wearing clown makeup, created using Stable Diffusion.

ArtBot mentioned again in PC World!

ArtBot got another callout in PC World in the article: “The best AI art generators: Bring your wildest dreams to life.”

There was a bit of (fair) criticism at the end of the blurb, though:

Why use Artbot? The vast number of AI models, and the variance in style those images produce. Otherwise, generating images via Artbot can be a bit of a crapshoot, and you may expend a great number of kudos simply exploring all the options. Since there’s no real setup besides figuring out the API key, Stable Horde (Artbot) can be worth a try.

Hey, I’ll take it!

ArtBot written up in PC World!

Hah! This is pretty awesome. My nifty side project, ArtBot, has been written up in PC World as part of a larger article about Stable Horde (the open source backend that powers my web app):

Stable Horde has a few front-end interfaces to use to create AI art, but my preferred choice is ArtBot, which taps into the Horde. (There’s also a separate client interface, with either a Web version or downloadable software.)

Interestingly enough, ArtBot just passed 2,000,000 images generated!

Woe is Twitter…

To the tune of R.E.M.’s “End of the World”:

“It’s the end of the (Twitter) as we know it, and I feel fiiiiiiinnnnneeee!” via… me.

I don’t have high hopes for the future of Twitter, pending Elon’s acquisition. It’s a service I’ve long loved, been frustrated with, but also found immense value in.

I’ve gotten jobs because of it, made new friends because of it, learned a lot because of it. Granted, it’s gotten much more toxic and I long for the days when it was fun.

But I don’t think having this service in control of a self-proclaimed internet troll who has lurched evermore rightward is going to improve things. Alas.

Punk Rock Obama

I think it’s time to end my AI art career on this high note. Generated with Stable Diffusion, running on my local machine.

The prompt:
“beautiful portrait painting of Barack Obama with a purple mohawk on top of his head shredding on an electric guitar at a punk rock show, concept art, makoto shinkai, takashi takeuchi, trending on artstation, 8k, very sharp, extremely detailed, volumetric, beautiful lighting, wet-on-wet”

Punk Rock Obama

MidJourney – AI Art Madness

A few short weeks ago, I downloaded a simplified model for generating AI-created images on my local machine. The internet (myself included) had a lot of fun with it, but the quality was definitely lacking, especially when compared to the more serious AI image platforms being created by some big companies.

I recently received my invite to the MidJourney beta and I am just blown away!

For now, I’ve just been putting in ridiculous prompts that mimic the styles of various artists (oh, man. I have a feeling this is going to piss off a lot of artists in the future…)

For example: “Apocalyptic wasteland with crumbling buildings and debris, thomas kinkade painting”

The potential here is pretty crazy — people who aren’t artistically inclined can start generating images and scenes based on whatever they come up with. Some people can probably use this as a base to rapidly start iterating on new ideas. And of course, others are going to be mad.

A lot of the craft in creating these images is in how you write the prompt. You’re already seeing the phrase “prompt engineering” used in various places — check out this Twitter search.

For me though, I’m excited about this new technology and it’s something I’ve been eager to play with.

Generating art using AI

Earlier this year, OpenAI announced DALL-E 2, the latest version of their AI tool that can generate images by simply providing text input.

For example, give it “people in togas taking a selfie in front of a volcano”, and it will get to work attempting to create an image that includes all of those elements.

The Verge has an interesting article with more details. You can see an example of what is possible on the DALL-E 2 subreddit. It’s honestly insane.

For now (sadly), the service is invite only.

More recently, an ambitious engineer named Boris Dayma created an open source version of the service called DALL-E mini. While it isn’t able to generate results as impressive as DALL-E 2, it’s still pretty crazy!

It has recently taken the internet by storm, and you can see people posting DALL-E mini generated images and memes everywhere. The official website has been under heavy load, so it’s been pretty tough to try out the service.

Fortunately, you can download the model from GitHub and get the service set up on your local machine (provided you have a graphics card beefy enough to run the models).

Who has two thumbs and a graphics card just begging to be used? Hello.

I was able to get the service set up on my machine and start playing around with it.

In this example, I used a prompt to essentially create a Bob Ross painting generator. “Alpine forest with river running through the middle, snow capped peaks in the background, Bob Ross style painting.”

Dalle mini forest

Pretty neat! The images that services like DALL-E 2 and Midjourney can create are miles better, and I’ve applied to both services.

While I anxiously await my acceptance, I’ll have to continue generating various memes on my own machine.

Monkeys

Book Review: Deep Work by Cal Newport

The pandemic forced a change in the way many knowledge workers work. Many of us have shifted to working from home — for some, the arrangement is now permanent.

I’m fortunate to be in such a position, but it’s been both a blessing and a difficult adjustment.

Distractions are frequent: regular Zoom meetings, Slack messages, various alert notifications, and email. I think a number of people (myself included) are overcompensating in our communication styles.

For software engineers, this causes a lot of context switching. And that’s generally a bad thing.

Context switching can lower productivity, increase fatigue, and, ultimately, lead to developer burnout. Switching tasks requires energy and each switch depletes mental focus needed for high cognitive performance. Over an entire workday, too many context switches can leave developers feeling exhausted and drained.

The impact of context switching lingers even after switching tasks. Cognitive function declines when the mind remains fixated on previous tasks, a phenomenon known as attention residue.

I’ve recently been feeling drained and less productive than usual. While browsing a thread on Hacker News, I came across a comment suggesting that people read Deep Work by Cal Newport for ideas on how to regain focus and minimize distractions. It was the first I’d heard of the book.

It was pretty enlightening, and I was hooked!

It has a number of self-help style steps (that are somewhat obvious, in hindsight) that you can take to improve your situation and increase productivity (e.g., carve out set times when no one can bother you, like early in the morning or late at night; keep consistent times; set reasonable expectations and have a plan; don’t wing it).

But it also shared some interesting research on how our brains have been rewired to have shorter attention spans, thanks to all our fancy pants technology.

“Once your brain has become accustomed to on-demand distraction, Nass discovered, it’s hard to shake the addiction even when you want to concentrate. To put this more concretely: If every moment of potential boredom in your life—say, having to wait five minutes in line or sit alone in a restaurant until a friend arrives—is relieved with a quick glance at your smartphone, then your brain has likely been rewired to a point where, like the “mental wrecks” in Nass’s research, it’s not ready for deep work—even if you regularly schedule time to practice this concentration.”

Yeah… guilty.

Anyway, definitely want to put some of these ideas into practice. It was a quick read and had some concrete steps on how to improve attention and focus that I can start using immediately. Excited to try it!

Deep Work by Cal Newport

Experimenting with parallel computing using node worker_threads

I’ve wanted to play around with worker threads in Node JS, so I put together this little repository that demonstrates how it all works. Check it out here.

In order to simulate multiple threads that are each processing data, each worker thread uses a randomly generated timeout between 100 and 700 milliseconds. In addition, it has a random number of loops (between 10 and 1000) that must be completed before the worker is terminated.
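Here’s a stripped-down sketch of the pattern (not the repository’s actual code): the main script spawns a few workers from the same file, and each worker fakes some work using those random parameters.

```js
// Stripped-down sketch of the single-file worker_threads pattern (run as an ES module).
// The main thread spawns a few workers; each worker "processes data" by looping a random
// number of times, sleeping a random 100-700 ms between iterations.
import { Worker, isMainThread, parentPort, workerData } from "node:worker_threads";

const randomInt = (min, max) => Math.floor(Math.random() * (max - min + 1)) + min;

if (isMainThread) {
  // Main thread: spin up a handful of workers, each re-running this same file.
  for (let id = 1; id <= 4; id++) {
    const worker = new Worker(new URL(import.meta.url), { workerData: { id } });
    worker.on("message", (msg) => console.log(msg));
    worker.on("exit", () => console.log(`Worker ${id} finished`));
  }
} else {
  // Worker thread: fake some work with a random loop count and a random per-loop delay.
  const loops = randomInt(10, 1000);
  const delayMs = randomInt(100, 700);
  for (let i = 0; i < loops; i++) {
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    parentPort.postMessage(`Worker ${workerData.id}: loop ${i + 1}/${loops}`);
  }
}
```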

It’s kind of fun to watch the tasks run and automatically complete inside the terminal (check out the screenshot of the output up top).