Buy. Buy. Buy. Buy. Buy. Buy.
It is your duty to consume. Buy stuff. More stuff. Please.
(Ugh. I thought this sort of push notification spam was against Apple guidelines if permission was never granted?)
As much as technology improves our lives (and is integrated into literally everything we do), it really fucking sucks when the algorithm gets it wrong.
Earlier this summer, I posted a shop vac for sale, as I’ve done a number of times before (err, posting things for sale, not specifically shop vacs).
Soon after, I was banned for “violating community standards.” I have literally no idea what happened. But! Apparently you could appeal the decision if you felt it was incorrect.
So I did.
And was rejected.
So I appealed again.
And was rejected.
I appealed again. And now it looks like I am permanently banned from Facebook Marketplace. And there’s no way to appeal the decision. No way to contact customer support. Cool.
Anyway, here’s an image of Mark Zuckerberg wearing clown makeup, created using Stable Diffusion.
Just got the email from Phony Stark that my API access to Twitter has been suspended. It was a fun dozen years or so.
Time to find Mr. RossBot and JustTriangles a new home.
Just finished up our solar installation. Here’s to reducing the amount of money we give to the fiasco known as PG&E.
I cannot stop staring at this screen. Give me all the photons!
To the tune of R.E.M.'s "End of the World":
“It’s the end of the (Twitter) as we know it, and I feel fiiiiiiinnnnneeee!” via… me.
I don’t have high hopes for the future of Twitter, pending Elon’s acquisition. It’s a service I’ve long loved, been frustrated with, but also found immense value in.
I’ve gotten jobs because of it, made new friends because of it, learned a lot because of it. Granted, it’s gotten much more toxic and I long for the days when it was fun.
But I don’t think having this service in control of a self-proclaimed internet troll who has lurched evermore rightward is going to improve things. Alas.
I think it’s time to end my AI art career on this high note. Generated with Stable Diffusion, running on my local machine.
The prompt:
“beautiful portrait painting of Barack Obama with a purple mohawk on top of his head shredding on an electric guitar at a punk rock show, concept art, makoto shinkai, takashi takeuchi, trending on artstation, 8k, very sharp, extremely detailed, volumetric, beautiful lighting, wet-on-wet”
A few short weeks ago, I downloaded a simplified model for generating AI-created images on my local machine. The internet (myself included) had a lot of fun with it, but the quality was definitely lacking, especially when compared to the more serious AI image platforms being created by some big companies.
I recently received my invite to the Midjourney beta and I am just blown away!
For now, I’ve just been putting in ridiculous prompts that mimic the styles of various artists (oh, man, I have a feeling this is going to piss off a lot of artists in the future…)
For example: “Apocalyptic wasteland with crumbling buildings and debris, thomas kinkade painting”
The potential here is pretty crazy: people who aren’t artistically inclined can start generating images and scenes based on whatever they come up with. Some will probably use this as a base to rapidly iterate on new ideas. And of course, others are going to be mad.
Much of the craft in creating these images lies in how you write the prompt. You’re already seeing the phrase “prompt engineering” used in various places — check out this Twitter search.
For me, though, this new technology is exciting, and it’s something I’ve been eager to play with.
Earlier this year, OpenAI announced DALL-E 2, the latest version of their AI tool that can generate images by simply providing text input.
Give it something like “people in togas taking a selfie in front of a volcano”, and it will attempt to create an image that includes all of those elements.
The Verge has an interesting article with more details. You can see an example of what is possible on the DALL-E 2 subreddit. It’s honestly insane.
For now (sadly), the service is invite only.
More recently, an ambitious engineer named Boris Dayma created an open source version of the service called DALL-E mini. While it isn’t able to generate results as impressive as DALL-E 2, it’s still pretty crazy!
It’s recently taken the internet by storm, and you can see people posting DALL-E mini-generated images and memes everywhere. The official website has been under heavy load, so it’s been pretty tough to try out the service.
Fortunately, you can download the model from GitHub and get the service set up on your local machine (provided you have a graphics card beefy enough to run the models).
Who has two thumbs and a graphics card just begging to be used? Hello.
I was able to get the service set up on my machine and start playing around with it.
In this example, I used a prompt to essentially create a Bob Ross painting generator. “Alpine forest with river running through the middle, snow capped peaks in the background, Bob Ross style painting.”
Pretty neat! The images that services like DALL-E 2 and Midjourney can create are miles better and I’ve applied to both services.
While I anxiously await my acceptance, I’ll have to continue generating various memes on my own machine.
The pandemic forced a change in how many knowledge workers do their jobs. Many of us have shifted to working from home, and for some roles the shift is permanent.
I’m fortunate to be in such a position, but it’s been both a blessing and an adjustment.
Distractions are frequent: regular Zoom meetings, Slack messages, assorted alert notifications, and email. I think a number of people (myself included) are overcompensating in our communication styles.
For software engineers, this causes a lot of context switching. And that’s generally a bad thing.
Context switching can lower productivity, increase fatigue, and, ultimately, lead to developer burnout. Switching tasks requires energy and each switch depletes mental focus needed for high cognitive performance. Over an entire workday, too many context switches can leave developers feeling exhausted and drained.
The impact of context switching lingers even after switching tasks. Cognitive function declines when the mind remains fixated on previous tasks, a phenomenon known as attention residue.
I’ve recently felt drained and less productive than usual. While browsing a thread on Hacker News, I came across a comment suggesting Deep Work by Cal Newport for ideas on how to regain focus and minimize distractions. It was the first I’d heard of the book.
It was enlightening, and I was hooked!
It lays out a number of self-help-style steps (somewhat obvious, in hindsight) that you can take to improve your situation and increase productivity: carve out set times when no one can bother you, like early in the morning or late at night; keep those times consistent; set reasonable expectations; and have a plan rather than winging it.
But it also shared some interesting research on how our brains have been rewired to have shorter attention spans, thanks to all our fancy-pants technology.
“Once your brain has become accustomed to on-demand distraction, Nass discovered, it’s hard to shake the addiction even when you want to concentrate. To put this more concretely: If every moment of potential boredom in your life—say, having to wait five minutes in line or sit alone in a restaurant until a friend arrives—is relieved with a quick glance at your smartphone, then your brain has likely been rewired to a point where, like the “mental wrecks” in Nass’s research, it’s not ready for deep work—even if you regularly schedule time to practice this concentration.”
Yeah… guilty.
Anyway, definitely want to put some of these ideas into practice. It was a quick read and had some concrete steps on how to improve attention and focus that I can start using immediately. Excited to try it!
I’ve wanted to play around with worker threads in Node.js, so I put together this little repository that demonstrates how it all works. Check it out here.
To simulate multiple threads each processing data, every worker thread uses a randomly generated timeout between 100 and 700 milliseconds. In addition, each has a random number of loops (between 10 and 1,000) that must complete before the worker is terminated.
It’s kind of fun to watch the tasks run and automatically complete inside the terminal (check out the screenshot of the output up top).