Tag: ai-music

Using Suno AI to cover your own music

One of the things that is pretty cool about being a human is that we get to express ourselves through a wide variety of creative outlets: writing, music, drawing, painting, sculpting and all sorts of other art forms.

Like everything else though, AI is coming for our creative pursuits. And apparently I’m just going along for the ride. Especially since I’ve been at the forefront of contributing to this through ArtBot, which has so far generated about 34.4 million images over the 3 years it has existed.

Anyway, Suno, a music generation tool that I’ve previously mentioned, recently updated their music model to v5.

They allow you to upload your own source music as inspiration and then use the v5 model to create a cover song.

So, here is an absolutely poor recording of my cousin and me playing some rock and roll to a backing drum machine way back in like 2002. No singing, just pure instrumental (we were in the process of trying to write a song, I think).

Well… what happens if you take this song and upload it into Suno? First, it creates a style description (similar to how multi-modal LLMs can now accurately describe an image):

A high-energy instrumental track featuring a driving rock drum beat with prominent snare and kick, a distorted electric guitar playing a fast, melodic riff, and a bass guitar providing a solid rhythmic and harmonic foundation. The tempo is fast, creating an urgent and exciting mood. The production is clean with a strong emphasis on the guitar and drums, suggesting a live band feel. The song structure is repetitive, focusing on the main guitar riff throughout. There are no vocals.

Hey, sure! I’ll take it. That description sounds a lot better than our music.

Alright, let’s feed it to Suno:

Honestly, that sounds pretty awesome! In my original recording, I play a pretty simple guitar solo at about 1:40. Suno used that for inspiration in a number of spots.

I’m pretty impressed! It nailed my rhythm guitar and lead guitar tracks perfectly, while also cleaning them up and adding some extra flourishes. And it kept the same tone and mood throughout the whole thing!

Maybe I’ll have to dig up more of our old recordings. The Velvet Sundown better watch out!

They went viral, amassing more than 1m streams on Spotify in a matter of weeks, but it later emerged that hot new band the Velvet Sundown were AI-generated – right down to their music, promotional images and backstory.

The episode has triggered a debate about authenticity, with music industry insiders saying streaming sites should be legally obliged to tag music created by AI-generated acts so consumers can make informed decisions about what they are listening to.

One thing I do notice about AI-generated music: in the past, we used to joke that AI artists could not draw hands. Well, AI guitarists cannot (currently) do pick scrapes. So, we still have that going for us!

It’s AI all the way down

Back in November, I went with some friends to play paintball; it was the first time I had ever played. We had booked a 3-hour session that would feature multiple matches. I don’t think any of us had played before, and we were all pretty nervous about getting hit.

Lo and behold, within the first 30 seconds of the game, I took a paintball to the knee (cue the “I used to be an adventurer like you…” meme from Skyrim). Somehow, I twisted my leg as I ragdolled into the ground.

Of course, you can’t just give up after 30 seconds, right? So, on I played. The result: I tore my ACL (the doc said he had no idea how that could have happened), got a bone contusion, and will likely need reconstructive surgery at some point. Fun!

Anyway, the point of all of this — for funsies, I tried to create a song about the situation using Suno’s generative music service (see previously). I used ChatGPT to come up with some initial lyrics and then did some work to refine them.

Then! I decided to use OpenAI’s generative video tool, Sora, to attempt to create a bunch of clips. I strung everything together in iMovie and the result is this rowdy music video: “This is What I Get”.

It’s Friday afternoon, so let’s write a song

My latest generative AI obsession: Suno. You provide it some lyrics, give it a musical style to emulate and hit the create button. It’s pretty wild.

I wrote some fun lyrics about deploying code on Fridays, set to some catchy ’80s pop. The result is pretty crazy.

[Verse]
Testing in production (oh yeah)
That is how we roll (whoa)
Testing in production
using my flawless code

[Bridge]
Why should I write tests (what?)
My code is never a mess (oh no)
Did I just rhyme,
Tests and a mess (yeah he did)

[Chorus]
It’s Friday afternoon.
It’s time to deploy my code. (whoa yeah)
The weekend is almost here.
It’s time to deploy my code. (watch out)

[Verse]
It’s Friday afternoon.
I don’t have anything to fear
It’s time to deploy my code.
The weekend is almost here.

[Bridge]
Why should I write tests (what?)
My code is never a mess (oh no)
Did I just rhyme,
Tests and a mess (yeah he did)

[Verse]
It’s Friday afternoon. (Whoa)
It’s Friday afternoon. (Whoaaa)
It’s Friday afternoon. (Yeah!)
It’s time to deploy my code. (WAIT WHAT)

[Bridge]
Why should I write tests (what?)
My code is never a mess (oh no)
Did I just rhyme,
Tests and a mess (yeah he did)

[Chorus]
It’s Friday afternoon.
It’s time to deploy my code. (whoa yeah)
The weekend is almost here.
It’s time to deploy my code. (watch out)

[Chorus]
It’s Friday afternoon.
It’s time to deploy my code. (whoa yeah)
The weekend is almost here.
It’s time to deploy my code. (watch out)