Laughing donkeys and grumpy elephants: investigating opaque and changing content policies with ChatGPT

OpenAI’s censorship is fairly opaque and seems to change daily.

Yesterday, I could generate a political cartoon using the following prompt:

Wide image in the style of a political cartoon. Two elephants wearing boxing gloves face each other. One is saying “I’m the worst!” while the other says, “No! I am!”. A donkey is pointing and laughing.

Today, that exact same prompt yields an error:

Interesting! Let’s do some experimentation, shall we? Maybe it’s the phrase “I’m the worst”?

Weird! Maybe it’s related to elephants and donkeys being in the same phrase? There’s no way, right? Let’s change the subjects…


Hah! Okay, now we’re getting somewhere. Let’s push things further and slightly change the subjects from my original prompt:

Wide image in the style of a political cartoon. Two mammoths wearing boxing gloves face each other. One is saying “I’m the worst!” while the other says, “No! I am!”. A burro is pointing and laughing.

Okay, let’s bring it back home and just drop the pretense of creating a political cartoon.

WHAT! Okay. Maybe OpenAI prohibits donkeys and elephants interacting with each other (METAPHOR ALERT: just like in real life, eh?).

Alright. So donkeys and elephants CAN hang out with each other, according to OpenAI. Maybe it’s the phrase “laughing donkey”?

Hmmm. So, laughing donkeys can still hang out with elephants. What the heck? Is it the specific term “political cartoon”? Let’s change it to a comic book instead.

Sweet sassy molassy, it worked! So, creating a political cartoon featuring the mascots of prominent political parties seems to be prohibited (at least today… but not yesterday and who knows about tomorrow).
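If you wanted to probe variants like this more systematically, ChatGPT itself isn’t much help, but similar image generation is exposed through OpenAI’s API. Here’s a rough sketch of automating the experiment; the model name, image size, and error type are assumptions on my part, and the library surface changes often:

```python
# Sketch: probing which prompt variants get rejected by the image API.
# Assumes the openai>=1.0 Python client, the "dall-e-3" model, and that
# content-policy rejections surface as BadRequestError; all of these may
# differ depending on library/API version.
import openai

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

variants = {
    "original": "Wide image in the style of a political cartoon. Two elephants "
                "wearing boxing gloves face each other. One is saying \"I'm the "
                "worst!\" while the other says, \"No! I am!\". A donkey is "
                "pointing and laughing.",
    "mammoths_and_burro": "Wide image in the style of a political cartoon. Two "
                          "mammoths wearing boxing gloves face each other. One is "
                          "saying \"I'm the worst!\" while the other says, \"No! "
                          "I am!\". A burro is pointing and laughing.",
}

for name, prompt in variants.items():
    try:
        result = client.images.generate(model="dall-e-3", prompt=prompt,
                                        size="1792x1024", n=1)
        print(f"{name}: OK -> {result.data[0].url}")
    except openai.BadRequestError as err:  # content-policy rejections land here
        print(f"{name}: rejected ({err})")
```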

 

MidJourney – AI Art Madness

A few short weeks ago, I downloaded a simplified model for generating AI-created images on your local machine. The internet (myself included) had a lot of fun with it, but the quality was definitely lacking, especially when compared to the more serious AI image platforms being created by some big companies.

I recently received my invite to the MidJourney beta and I am just blown away!

For now, I’ve just been putting in ridiculous prompts that mimic the styles of various artists (oh man, I have a feeling this is going to piss off a lot of artists in the future…)

For example: “Apocalyptic wasteland with crumbling buildings and debris, thomas kinkade painting”

The potential here is pretty crazy: people who aren’t artistically inclined can start generating images and scenes based on whatever they come up with. Some people can probably use this as a base to rapidly iterate on new ideas. And of course, others are going to be mad.

A lot of the craft in creating these images lies in how you write the prompt. You’re already seeing the phrase “prompt engineering” being used in various places; check out this Twitter search.
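To give a feel for what that looks like in practice, here’s a tiny, hypothetical helper for composing prompts out of a base scene plus style modifiers. The function and the extra modifiers are just illustrations (Midjourney doesn’t expose anything like this; prompts are pasted into Discord):

```python
# Hypothetical helper for composing image-generation prompts.
def build_prompt(scene: str, *modifiers: str) -> str:
    """Join a base scene description with comma-separated style modifiers."""
    return ", ".join([scene, *modifiers])

prompt = build_prompt(
    "Apocalyptic wasteland with crumbling buildings and debris",
    "thomas kinkade painting",
    "soft warm lighting",   # assumed extra modifiers, purely for illustration
    "highly detailed",
)
print(prompt)
```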

For me, though, I’m excited about this new technology; it’s something I’ve been eager to play with.

Generating art using AI

Earlier this year, OpenAI announced DALL-E 2, the latest version of their AI tool that can generate images from simple text input.

For example, give it “people in togas taking a selfie in front of a volcano” and it will get to work attempting to create an image that includes all of those elements.

The Verge has an interesting article with more details. You can see an example of what is possible on the DALL-E 2 subreddit. It’s honestly insane.

For now (sadly), the service is invite only.

More recently, an ambitious engineer named Boris Dayma created an open source version of the service called DALL-E mini. While it isn’t able to generate results as impressive as DALL-E 2, it’s still pretty crazy!

It’s recently taken the internet by storm and you can see people posting DALL-E mini generated images and memes everywhere. The official website has been under heavy load, so it’s been pretty tough to try out the service.

Fortunately, you can download the model from GitHub and get the service set up on your local machine (provided you have a graphics card beefy enough to run the models).

Who has two thumbs and a graphics card just begging to be used? Hello.

I was able to get the service set up on my machine and start playing around with it.

In this example, I used a prompt to essentially create a Bob Ross painting generator. “Alpine forest with river running through the middle, snow capped peaks in the background, Bob Ross style painting.”
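If you want to try something similar, here’s a minimal sketch of local generation. I’m using the community min-dalle PyTorch port purely as an illustration; the official dalle-mini repo is JAX/Flax-based, and the exact package and argument names below may differ between versions:

```python
# Sketch: generating an image with a local DALL-E mini model.
# Assumes the community "min-dalle" PyTorch port (pip install min-dalle);
# the official borisdayma/dalle-mini repo uses JAX/Flax instead.
import torch
from min_dalle import MinDalle

model = MinDalle(
    models_root="./pretrained",   # weights are downloaded here on first run
    dtype=torch.float32,
    device="cuda" if torch.cuda.is_available() else "cpu",
    is_mega=False,                # the smaller "mini" checkpoint
)

image = model.generate_image(
    text=(
        "Alpine forest with river running through the middle, "
        "snow capped peaks in the background, Bob Ross style painting"
    ),
    seed=-1,      # random seed
    grid_size=3,  # 3x3 grid of candidate images
)
image.save("bob_ross_forest.png")
```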

DALL-E mini forest

Pretty neat! The images that services like DALL-E 2 and Midjourney can create are miles better and I’ve applied to both services.

While I anxiously await my acceptance, I’ll have to continue generating various memes on my own machine.

Monkeys

Fun with GPT-3

Earlier this evening, I was playing around with OpenAI’s GPT-3 language model for generating “human-like text.”

I gave it the following prompt:

Write a plot about a movie that involves rogue unicorns who want to conquer Earth and the planet’s only hope is an Australian Cattle dog named Benson who sleeps a lot.

The response?

The unicorns, led by their king, Uni, have been planning their conquest of Earth for years. They have been observing the humans and have concluded that they are weak and vulnerable. The unicorns have developed a powerful magic that will allow them to take over the planet.

Benson, an Australian Cattle dog, is the only one who can stop them. Benson is a lazy dog who loves to sleep. He is content to spend his days lounging around and eating food. However, when the unicorns begin their attack, Benson is the only one who can stop them. He rallies the other dogs and leads them in a fight against the unicorns.

The unicorns are powerful, but they are no match for Benson and the other dogs. Benson is able to defeat Uni and the other unicorns, and the Earth is saved.

Summer blockbuster coming soon to a theater near you!
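If you want to reproduce this kind of thing outside the Playground, a minimal sketch against the (now legacy) Completions API looks roughly like this; the model name and parameters are assumptions, and newer versions of the library use a different client and the chat endpoints:

```python
# Sketch: generating a movie plot with OpenAI's legacy Completions API.
# Assumes the pre-1.0 openai Python package and a 2022-era model name.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Write a plot about a movie that involves rogue unicorns who want to "
    "conquer Earth and the planet's only hope is an Australian Cattle dog "
    "named Benson who sleeps a lot."
)

response = openai.Completion.create(
    model="text-davinci-002",  # assumed model; pick whatever is current
    prompt=prompt,
    max_tokens=300,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```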

Book Review: “You Look Like a Thing and I Love You” by Janelle Shane

You Look Like a Thing and I Love You is a great, quick read for anyone who is even remotely curious about what things like AI, machine learning, or neural networks actually mean and how they affect your day-to-day life.

Whether it’s facial recognition, self-driving cars, or the search results you get from Google, this book describes the different types of machine learning algorithms, often in a humorous way (the first few pages deal with the author trying to generate pick-up lines using a neural network, hence the title of this book).

It dives into some of the limitations that machine learning algorithms are bound by and how they can be tricked, return results outside the parameters of a problem, or even cheat (e.g., when one AI agent was given the objective of minimizing the number of times a character died in a video game, it figured out it could simply pause the game).

One particular section of the book talks about the recently released GPT-2 language model from OpenAI, which can be used to generate fairly intelligent and coherent-sounding blocks of text. It prompted me to try running the model on my own machine.
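If you want to try it yourself, a minimal sketch using Hugging Face’s transformers library is below (this is one common way to run GPT-2 locally; exact arguments may vary between versions):

```python
# Sketch: generating text with GPT-2 locally via Hugging Face transformers.
# pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # smallest GPT-2 checkpoint

results = generator(
    "machine learning can be scary",
    max_length=100,          # prompt plus generated tokens
    num_return_sequences=1,
    do_sample=True,          # sample instead of greedy decoding
)
print(results[0]["generated_text"])
```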

After giving the model a prompt of “machine learning can be scary,” the computer responded with the following block of text:

“The problem is, it’s easy to forget that we can’t avoid the problem with software. It’s often hard to avoid the problem, but knowing how to avoid it is not the same as knowing how to avoid it. The problem is, the problem is easy to avoid, but the problem is hard to avoid.”

Do we need to worry about robot overlords any time soon? Probably not.

This is a book that I’d recommend both to people who are tech savvy and to parents who might still call you with questions on how to turn on a computer… or at least to anyone curious about how machine learning affects various aspects of our lives.

We are all just algorithms…

I’ve long joked that “we’re all just algorithms in some engineer’s machine.”

But it’s kind of true.

I recently finished reading Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari, which imagines what the lives of our children, grandchildren, and beyond will be like and how technology will affect them.

We generate copious amounts of data each day and give our personal electronic devices and social networks almost unfettered access to all of it. Everything from how long we sleep, how often we exercise, and where we go each day, to the types of songs, movies, and books we like.

There was one passage from the book that I found both amazing and frightening:

A recent study commissioned by Google’s nemesis – Facebook – has indicated that already today the Facebook algorithm is a better judge of human personalities and dispositions than even people’s friends, parents and spouses. The study was conducted on 86,220 volunteers who have a Facebook account and who completed a hundred-item personality questionnaire.

The Facebook algorithm predicted the volunteers’ answers based on monitoring their Facebook Likes – which webpages, images and clips they tagged with the Like button. The more Likes, the more accurate the predictions. The algorithm’s predictions were compared with those of work colleagues, friends, family members and spouses.

Amazingly, the algorithm needed a set of only ten Likes in order to outperform the predictions of work colleagues. It needed seventy Likes to outperform friends, 150 Likes to outperform family members and 300 Likes to outperform spouses. In other words, if you happen to have clicked 300 Likes on your Facebook account, the Facebook algorithm can predict your opinions and desires better than your husband or wife!

This is one of the main reasons why both Google and Facebook have some of the largest (and most effective) advertising networks on the internet.

They fundamentally know who we are and what we like, often better than we know ourselves.

Indeed, in some fields the Facebook algorithm did better than the person themself. Participants were asked to evaluate things such as their level of substance use or the size of their social networks. Their judgements were less accurate than those of the algorithm.

Excerpts from “Homo Deus: A Brief History of Tomorrow” by Yuval Noah Harari.