Advent of Code: The most wonderful time of year…

December is upon us! That means the latest edition of the Advent of Code is here.

The Advent of Code is essentially a series of daily programming challenges, with a new problem each day through Christmas. The problems all relate to a single theme; this year, it sounds like we're going on an undersea adventure.

— Day 1: Sonar Sweep —

You’re minding your own business on a ship at sea when the overboard alarm goes off! You rush to see if you can help. Apparently, one of the Elves tripped and accidentally sent the sleigh keys flying into the ocean!

Before you know it, you’re inside a submarine the Elves keep ready for situations like this. It’s covered in Christmas lights (because of course it is), and it even has an experimental antenna that should be able to track the keys if you can boost its signal strength high enough; there’s a little meter that indicates the antenna’s signal strength by displaying 0-50 stars.

Your instincts tell you that in order to save Christmas, you’ll need to get all fifty stars by December 25th.

Collect stars by solving puzzles. Two puzzles will be made available on each day in the Advent calendar; the second puzzle is unlocked when you complete the first. Each puzzle grants one star. Good luck!

I’ll be partaking using my preferred language of choice: x86 assembly.

I kid, I kid. JavaScript.

Generating terrain maps

Recently, I went down a rabbit hole learning how to generate interesting-looking maps for games (insert mind-blown GIF here). While I'm not going to be using Voronoi diagrams anytime soon, I am still interested in trying to generate Civilization-like maps.

Most map generation algorithms rely on Perlin noise. I wanted to go a different route.

So far, these are the rules that I've come up with for my algorithm (a rough JavaScript sketch follows the list):

1. Create a grid of some given size (e.g., 100 x 50). Set all tiles to “ocean”.
2. Randomly pick 8 tiles (variable) across the map to seed as “land”.
3. Now iterate across all tiles on the map and build an array of tiles that are “ocean” tiles but have a land tile N/W/S/E of them.
4. Once you have this array, randomly pick a single value. Flip it from “ocean” to “land”.
5. Repeat steps 3 and 4 until a given number of tiles have been filled across the grid (e.g., for an “Earth-like planet”, 30% of tiles will be land).
6. Now, to add some additional randomness — iterate across all tiles and build an array containing all “land” tiles next to water (N/W/S/E).
7. Randomly pick one and flip it from land to ocean.
8. Repeat steps 6 and 7 a given number of times (e.g., 100).
9. Done! (Maybe)
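
Here's a minimal JavaScript sketch of those rules. It's unoptimized and not the exact code behind the maps in this post, just the general idea:

// Rough sketch of the rules above (unoptimized, and not the exact code behind these maps).
function generateMap(width, height, seeds, landRatio, erosionPasses) {
  // 1. Create a grid of the given size, all ocean.
  var grid = [];
  for (var y = 0; y < height; y++) {
    grid.push(new Array(width).fill('ocean'));
  }

  function randomInt(max) {
    return Math.floor(Math.random() * max);
  }

  // 2. Randomly seed some land tiles (seed collisions are ignored for brevity).
  for (var i = 0; i < seeds; i++) {
    grid[randomInt(height)][randomInt(width)] = 'land';
  }

  // Helper: collect all tiles of `type` that have an N/W/S/E neighbor of `neighborType`.
  function tilesTouching(type, neighborType) {
    var result = [];
    for (var y = 0; y < height; y++) {
      for (var x = 0; x < width; x++) {
        if (grid[y][x] !== type) continue;
        var touching = [[0, -1], [0, 1], [-1, 0], [1, 0]].some(function (d) {
          var row = grid[y + d[1]];
          return row && row[x + d[0]] === neighborType;
        });
        if (touching) result.push([x, y]);
      }
    }
    return result;
  }

  // 3-5. Grow the coastline until the target share of land is reached.
  var targetLand = Math.floor(width * height * landRatio);
  var landCount = grid.reduce(function (count, row) {
    return count + row.filter(function (t) { return t === 'land'; }).length;
  }, 0);
  while (landCount < targetLand) {
    var coastline = tilesTouching('ocean', 'land');
    var pick = coastline[randomInt(coastline.length)];
    grid[pick[1]][pick[0]] = 'land';
    landCount++;
  }

  // 6-8. Add some randomness: flip a few coastal land tiles back to ocean.
  for (var pass = 0; pass < erosionPasses; pass++) {
    var coast = tilesTouching('land', 'ocean');
    var spot = coast[randomInt(coast.length)];
    grid[spot[1]][spot[0]] = 'ocean';
  }

  // 9. Done! (Maybe)
  return grid;
}

// e.g., a 100 x 50 grid, 8 seeds, 30% land, 100 erosion passes
var map = generateMap(100, 50, 8, 0.3, 100);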

Height maps, biomes and all that can come later. To quote Amit, whom I linked to earlier:

The most realistic approach would have been to define elevation first, and then define the coastline to be where the elevation reaches sea level. Instead, I’m starting with the goal, which is a good coastline, and working backwards from there.

Anyway, it’s been kind of neat to figure out.

Another example map, created using the above rules:


For those who appreciate clean air…

It seems like every year, late in the summer or early in the fall, the air in the Bay Area fills with thick smoke from raging infernos around Northern California. The air is hazardous to breathe, preventing you from taking the kids to the park, walking your dog, or even opening your windows.

Last year, we made the wise decision to purchase an air purifier, which admittedly, looks like a giant iPod shuffle.

As fires continue to burn around these parts, we've started to rely on air quality data from PurpleAir, which collects readings from a network of IoT sensors that people can purchase for their homes or businesses.

You can view a map featuring realtime data collected from their sensors. Here in the Bay Area, the sensors are quite ubiquitous and can give a more realistic picture of air quality near your home.

For example, here is the current air quality around the Bay Area from PurpleAir while I write this post.

For comparison, here is the current air quality map from the Bay Area Air Quality Management District (BAAQMD), which we used to reference when trying to determine local air quality.

The fidelity you get from PurpleAir is pretty amazing.

Knowing this, I decided to write a Node app that periodically queries PurpleAir for air quality data from a sensor located a few blocks from our house. It runs continuously on a Raspberry Pi set up in our entertainment center and sends me a text message to close our windows when the AQI crosses above 100.
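
The actual code is linked below, but here's a rough sketch of the core loop. The sensor URL, the response field names, the polling interval, and the sendText helper are all placeholder assumptions for illustration, not the real app or the real API contract:

// Rough sketch only. The sensor URL, response shape, and sendText() are assumptions.
var https = require('https');

var SENSOR_URL = 'https://www.purpleair.com/json?show=SENSOR_ID'; // hypothetical sensor endpoint
var AQI_THRESHOLD = 100;

// Convert a PM2.5 reading (µg/m³) to an AQI value using the EPA breakpoint formula.
function pm25ToAqi(pm) {
  var breakpoints = [
    [0.0, 12.0, 0, 50],
    [12.1, 35.4, 51, 100],
    [35.5, 55.4, 101, 150],
    [55.5, 150.4, 151, 200],
    [150.5, 250.4, 201, 300],
    [250.5, 500.4, 301, 500]
  ];
  for (var i = 0; i < breakpoints.length; i++) {
    var b = breakpoints[i];
    if (pm >= b[0] && pm <= b[1]) {
      return Math.round(((b[3] - b[2]) / (b[1] - b[0])) * (pm - b[0]) + b[2]);
    }
  }
  return 500; // off the chart
}

function sendText(message) {
  // Placeholder: the real app sends an SMS here via a texting service.
  console.log('SMS:', message);
}

function checkAir() {
  https.get(SENSOR_URL, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      var data = JSON.parse(body);
      var pm25 = parseFloat(data.results[0].PM2_5Value); // assumed field name
      var aqi = pm25ToAqi(pm25);
      if (aqi > AQI_THRESHOLD) {
        sendText('AQI is ' + aqi + '. Close the windows!');
      }
    });
  }).on('error', function (err) {
    console.error(err);
  });
}

// Poll the sensor every 15 minutes.
checkAir();
setInterval(checkAir, 15 * 60 * 1000);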

I've made the source code available on GitHub. Check it out!

A simple dark-mode hook for React

I recently wrote a simple hook for React to automatically detect a device’s dark mode preference (as well as any changes to it) and style your web app accordingly, using something like ThemeProvider from styled-components.
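
The repo linked below has the real thing; as a rough sketch of the idea, the hook boils down to watching the prefers-color-scheme media query and re-rendering when it changes (the names and details here are my approximation, not the actual code):

// Sketch of the idea; the real hook lives in the repo linked below.
import { useEffect, useState } from 'react';

const DARK_QUERY = '(prefers-color-scheme: dark)';

export function useDarkMode() {
  // Guard against server-side rendering, where window doesn't exist.
  const [isDark, setIsDark] = useState(
    () => typeof window !== 'undefined' && window.matchMedia(DARK_QUERY).matches
  );

  useEffect(() => {
    const mediaQuery = window.matchMedia(DARK_QUERY);
    const onChange = (event) => setIsDark(event.matches);
    mediaQuery.addListener(onChange); // older Safari; newer browsers use addEventListener('change', ...)
    return () => mediaQuery.removeListener(onChange);
  }, []);

  return isDark;
}

From there, the app root can call the hook and hand isDark ? darkTheme : lightTheme to ThemeProvider, where darkTheme and lightTheme are whatever theme objects your project defines.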

It was developed as part of a side project I was hacking around on using my personal React Starter Kit, which is my own React project for quickly getting prototypes and side projects up and running.

I’ve released this as a standard GitHub repo instead of an NPM module due to the simplicity of this hook, especially in light of one-line packages breaking the Internet. To use it, just copy it into your project where needed.

I’ve released this under an MIT license. Feel free to use as-is, fork, copy, tear apart, profit, and share as you wish.

You can check out the code on GitHub.

Fixing rendering issues with React and IE11

Happy April Fools’ Day. This post is no laughing matter because it deals with IE11. 🙀

A while back, we had an issue where visitors using Internet Explorer 11 would hit our landing page and not see anything. The skeleton layout would appear on the screen and that was it.

Debugging the issue using IE11 on a Windows box proved to be difficult. Normally, we open up the inspector window, look for an error stack and start working backwards to see what is causing the issue.

However, the moment I opened the inspector, the site would start working normally again. This led me down a rabbit hole, and I eventually found a relevant post on Stack Overflow: “Why does my site behave differently when developer tools are open in IE11?”

The suggested solution was to implement a polyfill for console.log, like so:

// IE can leave window.console missing (or empty) until the developer tools are opened,
// so stub out these methods so calls to them don't throw.
if (!window.console || Object.keys(window.console).length === 0) {
  window.console = {
    log: function() {},
    info: function() {},
    error: function() {},
    warn: function() {}
  };
}

Interestingly, we didn't have console.log statements anywhere in our production build, so I figured the culprit must be some third-party library that we were importing. We added this snippet at the top of our web app's entry point to try to catch any instances of this. For us, that was located at the following path: src/app/client/index.js

After rebuilding the app, things were still broken, so the investigation continued.

We eventually concluded that the issue had to do with how our app was built. Our web app is server-side rendered, and we use Babel and Webpack to transpile and bundle things up. It turns out Babel wasn't transpiling code included in third-party libraries, so any library that used console.log for one reason or another would cause our site to break.
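
A typical babel-loader rule looks something like the following (not necessarily our exact config): everything under node_modules is excluded, so third-party code ships in the bundle exactly as published.

// Typical webpack + babel-loader rule (not necessarily our exact config):
// node_modules is excluded, so third-party code is bundled as-is, untranspiled.
module.exports = {
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: 'babel-loader'
      }
    ]
  }
};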

(The fact that IE11 treats console.log statements differently when the inspector is open vs. not is an entirely separate issue and is frankly ridiculous.)

Knowing this, we were eventually able to come up with a fix. We added the polyfill I posted above directly into the HTML template we use to generate our app, as one of the first things in the head block. This patches console.log so that it's available to any subsequent scripts (external or inline) that use it.

<!doctype html>
  <html>
    <head lang="en">
      <meta content="user-scalable=no, width=device-width, initial-scale=1.0, maximum-scale=1.0" name="viewport"/>
      <meta charSet="UTF-8"/>
      <script>
        if (!window.console || Object.keys(window.console).length === 0) {
          window.console = {
            error: function() {},
            log: function() {},
            warn: function() {},
            info: function() {},
            debug: function() {},
          };
        }
      </script>

After that, everything started working again in IE11.

#TheMoreYouKnow

Next level code review skills

I’m always searching for better ways to improve my workflow, increase productivity, and just generally learn new and exciting things. (Besides, it’s part of having a healthy growth mindset.)

We've had some big changes on our team during the past year, and I've felt the need to step up when it comes to reviewing code that my colleagues write. While searching for ideas on how to improve my code review skills, I discovered a blog post from 2018 entitled “Code Review from the Command Line”.

This blew my mind and really helped reframe how we engineers should approach code reviews:

When I ask that other people review my code, it’s an opportunity for me to teach them about the change I’ve just made. When I review someone else’s code, it’s to learn something from them.

Jake, the author of the above post, goes on to describe his setup and custom tooling for conducting an interactive code review.

He uses custom git aliases to see which files have changed and how much, a custom script that visualizes how often the files in the pull request have changed over time, and another custom script that visualizes the relationships between changed files.
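
His exact aliases and scripts are in the post, but even the built-in commands get you part of the way there. For example (the branch and file names below are just placeholders):

# Files changed and overall churn on a branch:
git diff --stat master...my-feature-branch

# How often a particular file has changed over time:
git log --oneline -- path/to/file.js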

All of these go above and beyond the call of duty for reviewing code, but it’s stuff that decreases friction and can make a sometimes tedious process much more enjoyable.

I’ll be implementing some of these ideas into my own workflow in the near future.

Want to learn to code?

A friend of mine recently asked for some suggestions on which language she should use to learn to code.

There are so many different routes to go! Python and JavaScript might be easiest because they’re dynamically typed languages (basically they’re more forgiving with how you use variables and pass them around — it’s one less thing to worry about as you start out).

I find Python to be super fun and easy to pick up. Plus, there are tons of neat libraries available for manipulating data. One bonus: down the road, you can start playing with some of the many machine learning libraries that are available. Then you can build a model that will predict names of Android phones.

I'm partial to JavaScript. It's probably one of the most popular languages right now. People are building web apps, desktop apps, native mobile apps, and backend servers with it thanks to the abundance of tools available. Plus, there's a really addictive feedback loop: when you're first starting out, you code something, refresh your browser, and boom, there it is!

I’ve played only a little bit with Swift. I like it and I think Apple is doing some good work trying to provide tools to help people learn. For now, you’re mostly going to be limited to building mobile apps, though there are more tools being built that expand its uses (e.g., servers).

There are tons of great resources. Codecademy, Code School, Udemy, free tutorials, etc. When I started out, I started learning by trying to build a JavaScript app that could solve Sudoku puzzles. Somehow, I eventually did it! I was pretty hooked!

Using the Mongo CLI to find your data.

We've been working on many projects lately that use MongoDB as the primary means of data storage. My previous experience is with building and using MySQL databases, so NoSQL databases are a new concept for me.

I’m not one to shy away from new technologies, so I’ve been trying to embrace MongoDB and learn how to use it.

One of the most important things I’ve been learning is how to view the databases, collections, and records that I’ve saved in my various applications through the MongoDB command line interface.

Let’s do a quick walk through and pretend I have a database dedicated to baseball.

Once you have Mongo installed on your machine, you run the interface by typing mongo in your terminal. Now, you can bring up a list of databases by typing show databases.

[Screenshot: output of show databases]

How do we use a particular database? Easy! Just type use [database_name]


use baseball

Awesome! Of course, you'll want to do more than just “use” the database. We want to see what's inside it. This is accomplished by telling mongo to show us all collections (think of these as “tables” in a traditional SQL database).


show collections

[Screenshot: output of show collections]

Awesome! Now we have a collection of teams and a collection of players. Let's display everything within a particular collection; in this case, let's print out all the teams that we have stored in our database.


db.teams.find()

[Screenshot: output of db.teams.find()]

Great!

Now, let’s say you’re looking for a particular record. How do you limit your search to just one thing? Like this:


db.teams.find({team: "dodgers"})

[Screenshot: output of db.teams.find({team: "dodgers"})]

Now, you can imagine that if we had more data, there are a lot more things that we could search for and find. It’s pretty powerful!
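
For instance, with richer documents you could combine criteria, limit results, or sort them. (The fields below are hypothetical, not part of the baseball data shown above.)

db.players.find({team: "dodgers", position: "pitcher"})   // multiple criteria
db.players.find().limit(5)                                 // only the first five documents
db.players.find().sort({lastName: 1})                      // sorted by a (hypothetical) lastName field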

Anyway, this was a quick tutorial on how to use the MongoDB CLI. I hope you found it helpful!

nodeEbot: A bot for Twitter that generates tweets from pseudo Markov chains

Current Version: 0.1.4

Say hello to NodeEBot (pronounced as “nodey bot” or even “naughty bot”, if you prefer). It stands for Node E-books Bot.

It’s a Nodejs package for creating Twitter bots which can write their own tweets and interact with other users (by favoriting, replying, and following). This project draws heavy inspiration from the twitter_ebooks gem for Ruby.
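
The “pseudo Markov chain” part boils down to a simple idea: build a table mapping each word in the source text to the words that can follow it, then walk that table at random to produce new sentences. Here's a stripped-down sketch of that idea, not the actual nodeEbot implementation:

// Stripped-down word-level Markov chain sketch (not the actual nodeEbot code).
function buildChain(sentences) {
  var chain = {};
  sentences.forEach(function (sentence) {
    var words = sentence.trim().split(/\s+/);
    for (var i = 0; i < words.length - 1; i++) {
      if (!chain[words[i]]) chain[words[i]] = [];
      chain[words[i]].push(words[i + 1]);
    }
  });
  return chain;
}

function generateTweet(chain, maxWords) {
  var keys = Object.keys(chain);
  var word = keys[Math.floor(Math.random() * keys.length)];
  var result = [word];
  while (result.length < maxWords && chain[word]) {
    var followers = chain[word];
    word = followers[Math.floor(Math.random() * followers.length)];
    result.push(word);
  }
  return result.join(' ');
}

// e.g., feed it one sentence per line from your source material:
var chain = buildChain(['I went to the game today', 'I went home early']);
console.log(generateTweet(chain, 20));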

You can see two examples of this bot in action at @daveleeeeee and @roboderp.

Installation and Usage

This project requires Nodejs v0.10.35+. If you're looking for a place to host Nodejs projects, I've had success setting up a free Ubuntu virtual server through the Amazon Web Services dashboard and installing Nodejs on it.

To run, copy the project into your preferred directory and then install the required dependencies using:

npm install

You can edit various configuration settings in the bot.js file. Before you can begin, you'll need Twitter API credentials, which can be set up right here. Once you have your consumer API key and secret, as well as your access token and secret, add them to the top of the bot.js file:

// Twitter API configuration
var client = new Twitter({
  consumer_key: 'xxxx',
  consumer_secret: 'xxxx',
  access_token_key: 'xxxx',
  access_token_secret: 'xxxx'
});

You’ll also need to add the Twitter username of your bot (without the @ symbol) to the config file. (This is for tracking mentions as well as making sure the bot ignores actions from itself so it doesn’t get caught in a loop).

// Your robot’s Twitter username (without the @ symbol)
// We use this to search for mentions of the robot and to prevent it from replying to itself
robotName = "xxxx";

Once that's done, the bot is almost ready to go. You can modify a few other settings that influence how chatty the bot is and how often it will interact with other users or use random hashtags and emojis.

In order to run the bot, I use the forever npm package. This allows us to automatically restart the server in case of a crash, as well as force restart the server in order to reload the Twitter stream (added in v 0.1.2).
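
If you go the same route, starting the bot looks something like this:

npm install -g forever
forever start bot.js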

Source material

The last thing you'll need to do is give it some source material to generate text from. I use my own Twitter archive as source material.

Right now, I haven't implemented a way to parse the CSV data that Twitter generates when you request your history. In the meantime, I've simply opened up tweets.csv in a spreadsheet app, copied the contents of the ‘text’ column into a new file, and used that as the source material. This script will treat each line as a separate and unique sentence.

I've added some basic ability to strip out Twitter usernames and URLs from the archive. That means it will treat something like:

@davely That’s great. I’ve seen something like that before. 
http://flickr.com/…

as

That’s great. I’ve seen something like that before.

Running multiple bots

If you want to run multiple bots for different Twitter accounts, copy this project into separate folders (e.g., ~/MyBot1, ~/MyBot2, ~/MyBot3, etc.) and make sure you input the proper Twitter API credentials at the top of each bot.js file. Then spin up separate node instances and load the relevant bot files.

Future things to do.

  • Better modularization of our script. Right now it’s in one ginormous .js file.
  • Turn it into a proper npm module.
  • Better regex handling to clean up source material (e.g., links, usernames, etc.).
  • Send direct messages back to users who DM our robot.
  • Keyword ranking of our source material. (Sort of implemented but disabled right now since performance is SLOW.)
  • Allow the robot to reply with relevant content (e.g., if someone asks what it thinks about ‘baseball,’ it tries to compose a reply that mentions ‘baseball’).
  • Retweet various tweets that it finds interesting based on keywords and interests.
  • Let it potentially upload images or GIFs.

Changelog

v 0.1.4 (2015/05/07)

  • Simple change to load and require underscore. This is going to help simplify some of my functions in future development.

v 0.1.3 (2015/04/28)

  • Fixed a bug that caused the bot to think every user replying to it was in our otherBots array, which kept applying a temporary timeout on replies even when it wasn't needed.

v 0.1.2 (2015/04/27)

  • Implemented a hacky fix for an issue I'm having with the Twitter Streaming API randomly dying without an error. If we're running under the forever npm package, the bot now kills the server and restarts it every few hours.

v 0.1.1 (2015/04/19)

  • Initial public release!

Other stuff

If you end up using this script in your own Twitter bots, let me know! I’d love to know how it works out for you and please let me know about any improvements or suggestions you might have.

Thanks for checking it out!

You can download the source code for nodeEbot on GitHub.