When I started creating this video, I knew I wanted to explore the existential crisis of a character. What I didn’t expect was how deeply the process itself would mirror that crisis—questioning creativity, connection, and control. Here’s a breakdown of how I brought this character to life and the tools I used to make it happen.
Starting with Midjourney
I began by generating the character’s image in Midjourney, an AI image generator that allows for remarkable creative freedom. With just a few prompts, I was able to design the look and feel of the character: her expression, her tone, even hints of personality. But this was just the start.
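For anyone curious what that looks like in practice, a Midjourney prompt for a character like this might read something like the snippet below. This is an illustrative example rather than my exact prompt; --ar (aspect ratio) and --stylize are standard Midjourney parameters.

```text
/imagine portrait of a contemplative young woman, cinematic lighting,
muted color palette, subtle melancholy in her eyes, photorealistic
--ar 16:9 --stylize 250
```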
Refining in Photoshop Beta
Next, I brought the Midjourney image into Photoshop Beta, where the real magic happened. Photoshop’s latest updates have introduced generative AI tools, most notably Generative Fill, that let you regenerate or modify parts of an image directly within the app. This gave me even more control to refine my character’s features and make adjustments that felt deliberate, not random.
For example, I could tweak the character’s facial expressions or add subtle details like lighting effects. Photoshop Beta’s AI tools provide a seamless way to experiment while maintaining a high level of artistic intent. The ability to “regenerate” elements within Photoshop opens up a whole new layer of creativity.
Balancing Ideas with Generative Exploration
When I create, I always start with an idea—who is my character, and what’s the point? Generative tools like these allow me to stay open to the process. They act like a creative collaborator, offering new directions I wouldn’t have thought of. However, I stay in control of the narrative.
To map out my ideas and find synergy, I often use ChatGPT (yes, right here!). It helps me refine my thoughts, expand on character arcs, and articulate my vision more clearly. It’s like having a brainstorming partner, but ultimately, I decide how everything comes together. It’s an extension of me, a way to translate my scattered ideas into something tangible, faster and with more clarity.
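I do this brainstorming in the chat interface, but the same loop can be scripted if that suits your workflow better. Here’s a minimal sketch using the OpenAI Python SDK; the model name and the prompts are placeholders, not my actual session.

```python
# Minimal brainstorming sketch using the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; the model name
# and prompts are illustrative placeholders, not my actual session.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model works here
    messages=[
        {"role": "system",
         "content": "You are a story-development partner. Ask sharp "
                    "questions and propose character-arc directions."},
        {"role": "user",
         "content": "My character is facing an existential crisis about "
                    "creativity and control. Suggest three possible arcs."},
    ],
)

print(response.choices[0].message.content)
```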
Voicing the Character with ElevenLabs
To bring the character’s voice to life, I turned to ElevenLabs (elevenlabs.io), an AI audio platform that specializes in highly realistic text-to-speech and voice generation.
Their voices are eerily natural, which gave my character depth and believability.
While I’d love to see more Middle Eastern accents available on platforms like this, I’m hopeful we’ll get there soon as AI expands its datasets to be more inclusive. For now, the results are still impressive.
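I generated my audio through their web interface, but ElevenLabs also exposes a REST text-to-speech endpoint if you want to script it. A minimal sketch, assuming you have an API key and have picked a voice ID from their voice library (both placeholders below):

```python
# Minimal ElevenLabs text-to-speech sketch using the public REST API.
# The API key and VOICE_ID are placeholders; choose a voice in the
# ElevenLabs voice library and copy its ID.
import os
import requests

VOICE_ID = "your-voice-id"  # placeholder
url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

response = requests.post(
    url,
    headers={
        "xi-api-key": os.environ["ELEVENLABS_API_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "text": "Who am I, if everything I make is generated?",
        "model_id": "eleven_multilingual_v2",
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    },
)
response.raise_for_status()

# The endpoint returns raw audio bytes (MP3 by default).
with open("character_voice.mp3", "wb") as f:
    f.write(response.content)
```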
Animating with Runway ML
With the voice ready, I needed to bring my character to life visually. That’s where Runway ML (runwayml.com) came in. Their image-to-video platform allows you to take still images and turn them into animations with realistic movements and expressions. This was the final touch that gave my character motion and emotion—making her feel alive.
Runway ML is intuitive and perfect for creatives who aren’t professional animators but have a clear vision of what they want to achieve. I was able to adjust the pacing, movements, and overall flow to match the tone of my video.
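I worked entirely in Runway’s browser interface, but they also offer a developer API with an official Python SDK for automating the same image-to-video step. A rough sketch, assuming the runwayml package and its Gen-3 turbo model; treat the model name and parameters as my assumptions, since I never touched the API for this project:

```python
# Rough image-to-video sketch using Runway's official Python SDK.
# Assumes `pip install runwayml` and RUNWAYML_API_SECRET in the
# environment; the model name, image URL, and prompt are assumptions
# on my part, since I used the browser UI rather than the API.
import time

from runwayml import RunwayML

client = RunwayML()  # reads RUNWAYML_API_SECRET from the environment

task = client.image_to_video.create(
    model="gen3a_turbo",
    prompt_image="https://example.com/character.png",  # placeholder URL
    prompt_text="Slow, contemplative head turn, soft lighting",
)

# Generation is asynchronous: poll the task until it finishes.
while True:
    task = client.tasks.retrieve(task.id)
    if task.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

print(task.status, getattr(task, "output", None))
```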
Piecing It All Together
Finally, I stitched everything together. While I’m not a professional video editor, I understand transitions and how to convey a message visually. This process helped me experiment with timing, sound, and visuals to create a cohesive final piece.
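I cut mine together in a regular video editor, but if all you need is to marry the Runway clip to the ElevenLabs voice track, a single ffmpeg call does the job. A sketch assuming ffmpeg is installed and on your PATH; the filenames are placeholders:

```python
# Quick mux of the Runway clip and the ElevenLabs voice track via ffmpeg.
# Assumes ffmpeg is installed and on PATH; filenames are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "runway_clip.mp4",      # video from Runway
        "-i", "character_voice.mp3",  # audio from ElevenLabs
        "-c:v", "copy",               # keep the video stream as-is
        "-c:a", "aac",                # re-encode audio for MP4
        "-map", "0:v:0",              # take video from the first input
        "-map", "1:a:0",              # take audio from the second input
        "-shortest",                  # stop at the shorter stream
        "final_video.mp4",
    ],
    check=True,
)
```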
Why This Process Matters
Throughout this journey, I realized that generative tools aren’t just a way to create—they’re a way to expand how I think and work. These tools help me visualize my ideas faster and make more informed creative decisions. They feel like an extension of me, allowing my thoughts to flow freely and take shape in new ways.
This process also made me reflect on the relationship between art and technology. Is the character I created just a product of algorithms, or is she a reflection of me? The lines are blurry, but that’s the beauty of it.
In the end, this project wasn’t just about creating a video—it was about exploring what it means to create, connect, and question everything along the way.
Feel free to check out the tools I used:
Midjourney for image generation
Photoshop Beta for refining images with AI tools
ElevenLabs for realistic text-to-speech audio
Runway ML for animating images into video
I’d love to hear your thoughts on this process and how AI is shaping creativity. Let’s keep questioning and creating together. 💡