You know those moments when tech feels more like magic than code? That’s exactly what happened during our latest workshop on AI video creation. This wasn’t some dry, theory-heavy session—we got hands-on, experimenting with tools like Runway and MidJourney, figuring out what worked (and what didn’t) through trial, error, and plenty of laughs.
While Runway handles the video side of things, MidJourney is our go-to for generating high-quality images; it consistently gave us the sharp, detailed stills we wanted. Once we had the right images, bringing them into Runway for animation turned out to be a solid strategy for creating polished, professional-looking content.
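In the workshop we did all of this through the web interfaces, but if you'd rather script the handoff, Runway also offers a developer API with an official Python SDK. Here's a rough sketch of what that could look like; the model name, parameters, and polling loop are assumptions to double-check against Runway's current docs, and the image URL is just a placeholder for wherever your MidJourney still lives.

```python
# A minimal sketch of the "MidJourney still -> Runway animation" handoff,
# assuming Runway's Python SDK (pip install runwayml) and an API key in the
# RUNWAYML_API_SECRET environment variable. Model name, ratio, and statuses
# are assumptions; check Runway's current API docs before relying on them.
import time

from runwayml import RunwayML

client = RunwayML()  # reads RUNWAYML_API_SECRET from the environment

task = client.image_to_video.create(
    model="gen3a_turbo",  # assumed image-to-video model name
    prompt_image="https://example.com/elephant_still.png",  # your MidJourney still
    prompt_text="elephant walking, busy city, sunset lighting, dynamic motion",
    ratio="1280:768",  # assumed aspect-ratio option; see the docs for valid values
)

# Poll until the render finishes, then grab the output.
while True:
    task = client.tasks.retrieve(task.id)
    if task.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

print(task.status, getattr(task, "output", None))
```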
If there’s one lesson we learned early on, it’s this: AI isn’t a mind reader. You need to be clear, sharp, and to the point. Forget long-winded explanations—this is where short, snappy prompts shine. Turns out, breaking down your ideas into quick bullet points gets way better results than trying to feed the AI a mini novel.
Instead of saying, “Make a video of an elephant walking through a bustling city during sunset,” something like “elephant walking, busy city, sunset lighting, dynamic motion” worked way better. Simple prompts helped the AI focus—and got us the shots we actually wanted.
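If you end up generating a lot of clips, it can also help to treat those keyword prompts as structured data instead of typing them out each time. Here's a tiny, tool-agnostic Python sketch of that idea; the field names are just our own convention, not anything Runway or MidJourney requires.

```python
# Build short, comma-separated prompts from a few structured fields instead of
# writing long sentences. The field names are our own convention, nothing
# tool-specific.
def build_prompt(subject: str, setting: str, lighting: str, motion: str) -> str:
    parts = [subject, setting, lighting, motion]
    return ", ".join(p.strip() for p in parts if p.strip())

prompt = build_prompt(
    subject="elephant walking",
    setting="busy city",
    lighting="sunset lighting",
    motion="dynamic motion",
)
print(prompt)  # -> "elephant walking, busy city, sunset lighting, dynamic motion"
```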
Once we got the hang of prompting, it was time to dive into Runway’s toolbox. Some features were exactly what we expected, and others? Let’s just say they surprised us—in both good and frustrating ways.
The Motion Brush feature was a game-changer for directing specific movements. Want a character’s arm to wave or an object to glide across the frame? Just brush it in. It wasn’t perfect, though—getting a bird to fly smoothly or a dolphin to dive naturally turned out to be trickier than we thought.
Then came Act-One, which we explored through some impressive examples. Watching pre-made content that mapped real human facial expressions onto AI-generated characters was both fascinating and a little eerie. The technology captured subtle nuances—like smirks, eyebrow raises, and even awkward blinks—that made the digital characters feel surprisingly lifelike.
One of the most satisfying moments came when we started playing with Runway’s camera controls. Being able to pan, zoom, and rotate the virtual camera gave our AI-generated videos a real cinematic feel.
We tried a fun challenge: animating an elephant wandering through a Manhattan street while the camera slowly orbited around it. It genuinely felt like we were directing a mini movie—complete with the depth, perspective, and movement you’d expect from an actual film shoot.
Runway offers a few different video generation models, and of course, we had to try them all. Here's what stood out the most.
One of the most unexpectedly brilliant moments? We tried animating that elephant-in-Manhattan scene and, completely unprompted, the AI decided to have a car swerve around the elephant. We didn’t tell it to do that—it just knew. That little touch of unpredictability made the whole scene feel real and spontaneous.
Not every experiment was a win. But hey, even the fails were valuable—each misstep showed us more about what AI could (and couldn't) do.
We got plenty of questions about the cost of using these tools. Both run on subscription plans with different tiers, and pricing changes often, so it's worth checking each tool's current pricing page before committing.
AI isn’t perfect—it’s unpredictable, and sometimes it feels like it’s just doing its own thing. But that’s part of the magic. The best results often came from moments we didn’t plan for, like that swerve around the elephant or an unexpected lighting effect.
The real takeaway? AI isn’t here to replace creativity—it’s here to boost it. These tools don’t do the thinking for you, but they help bring your wildest ideas to life faster (and sometimes better) than you thought possible.
This workshop was just the beginning. If you’re ready for more hands-on AI exploration, keep an eye out—we’ll be running more sessions soon. Until then, don’t be afraid to play around with these tools. The best ideas often come from unexpected places—and AI is full of surprises.
Let’s keep creating—and let’s see where the tech takes us next.