I've been producing House music for a couple of decades. After playing the label game for the past 10 years, I'm now happily self-releasing, which gives me much more control over my brand and creative. The large majority of my target demo is on Instagram, and I regularly create IG Reels as part of each release's promotional assets. For my release "Sunrise," I decided to create a 90-second video using generative AI.
I wanted to be spontaneous, no storyboards. Still, I needed an idea, or at the very least a prompt. The track is called "Sunrise," so I took the most obvious route but with a twist: "sunrise on Mars." The first challenge was turning this into a Midjourney prompt that delivered the sophisticated results I had in mind, i.e. more cinematic, with specific lighting effects and particular camera angles. For this level of control I used The Filmic Look Prompt Generator by my fellow IBMer, David Avila. I pasted his script into ChatGPT-4 and, after some tweaks, settled on:
Sunrise on Mars, A surreal landscape where the rising sun casts a soft red glow on the dusty Martian surface, craters and mountains barely visible in the dim light, Close Up, Natural Light, Directed by Ridley Scott, Warm Color Palette, --ar 9:16
This prompt, along with several variations, generated a number of Martian sunrise images in Midjourney.
The next step was to generate video clips in Runway from the Midjourney stills. I used Runway's new Gen-2 Motion Brush feature to add movement to individual elements, then applied horizontal panning to give all my clips a consistent right-to-left motion.
Finally, I edited the clips in CapCut. After years of using After Effects and Premiere, working in this simple browser-based editor from ByteDance was a guilty pleasure. I also added several admittedly gratuitous effects; there are hundreds to choose from in CapCut.
This unboosted Reel got 2,494 plays, 617 replays, 52 likes, 17 comments, and 166 conversions to my Spotify. These aren't mind-blowing numbers compared to your average TikTok dance vid, but I'm happy with the results.
I think we're in the very early days of generative AI. I had to chuckle to myself when Runway created morphing, melting, and bizarre renders; I'm sure this distinctive look will eventually be made into a preset called "Prehistoric AI from 2024" or something similar. Still, the process was fluid and quick enough for me to iterate on the fly. I had to stop myself from Photoshopping some of the Midjourney images to correct "mistakes" and instead tried to revise my prompt. Anyway, after a while I felt I had a lot of control over the content, technique, and style of my imagery, and I could focus more on story and flow. If I decide to make another generative video, I'd like to develop even finer control through more precise prompt engineering.