
A speculative narrative about contact, consciousness, and collapse.
All made with Midjourney, Runway, Veo, ElevenLabs, After Effects and a bunch of other tools.
ROLE
Writer, Director, Editor, Effects Artist, AI Generalist
RESPONSIBILITIES
Creative Direction & Story Development
Wrote voiceover script and structured narrative arc; developed a speculative world rooted in emotional realism
AI Video & Audio Toolchains
Generated cinematic visuals and audio (voice, music and sound effects) with tools like Runway, Veo, Midjourney, ElevenLabs and Suno
Motion Design & Editing
Created unique motion graphics, designed and composited scenes in After Effects, assembled final film and audio in Premiere Pro
Experimental Design Process
Worked without a traditional storyboard, using a nonlinear, tool-led process to discover story through iteration
A movie created entirely by one human and many machines
This project started with a weird idea and spiraled into something bigger (like all great sci-fi… and all enthralling personal projects).
What if a futuristic package delivery system opened a portal to first contact with an alien intelligence?
What if you could tell that story using a mash-up of generative tools, motion graphics, and a very stubborn UX brain?
This film became a speculative design artifact, a cinematic experiment, and a crash course in editing, timing, motion graphic creation, narrative structure and AI-assisted visual production. No funding. No crew. Just me, a bunch of AI tools, and a lot of rendering time.
I used this fictional film concept to:
Explore narrative design and multi-modal prototyping using AI tools
Treat generative models like collaborators (with as many flaws as, or more than, their human counterparts)
Push my comfort zone into video editing, 3D motion graphics design, voiceover direction, and post-production
Build a finished product that asks big questions, and looks cool doing it
THE SETUP
A challenge, a curiosity, a level-up
It started, as these things do, with a desire to do something entirely different and learn new tools and skills. My grasp of A/V design was limited, so making a movie meant learning the new AI A/V tools alongside the tried-and-true ones that have been around for years.
The larger goal wasn't really to make a short film. I ultimately wanted to learn what the new wave of AI video and audio tools could do (and more importantly, what they couldn't). That meant wrangling inconsistencies across platforms, problem-solving where the tools failed, and designing critical pieces from scratch in After Effects to fill the gaps: orbital planet systems, holographic overlays, and layered HUD animations the tools simply couldn't produce (yet).
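Under the hood, an orbital planet system like the one described above comes down to simple parametric circles: each body's position is a function of time, radius, orbital period, and a phase offset. Here's a minimal Python sketch of that math (the function name, center point, and orbit values are mine for illustration, not taken from the project files):

```python
import math

def orbit_position(t, cx, cy, radius, period, phase=0.0):
    """Position of a body at time t (seconds) on a circular orbit
    around center (cx, cy), completing one revolution per period."""
    angle = 2 * math.pi * (t / period) + phase
    return (cx + radius * math.cos(angle),
            cy + radius * math.sin(angle))

# Three nested orbits around a shared screen center (1920x1080 frame),
# each with its own radius and period so they drift out of alignment.
for radius, period in [(100, 8), (180, 13), (260, 21)]:
    x, y = orbit_position(0.0, 960, 540, radius, period)
    print(round(x), round(y))
```

In After Effects the same idea lives in a per-layer position expression driven by the comp's `time` variable, which is what made the planets animate procedurally instead of being keyframed by hand.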
From there, it snowballed into a full-on cinematic experiment. A corporate ad. A glitch in the simulation. A woman on a roof holding a mysterious object. And a message from something...else.
This project became equal parts narrative world-building, visual effects sandbox, and stubborn refusal to give up once I’d made it past the halfway mark.
THE STACK
What powered the project
This project was stitched together using a Frankenstein’s monster of tools—some shiny and new, some old and reliable, all of them very opinionated.
AI Video + Visual Generation
Runway (Gen-2): video generation
Google Flow (Veo 2 + 3): video generation
Midjourney: concept imagery, first frames for video generation, some video clips
Photoshop: clean up and expansion of Midjourney imagery
Voice + Audio
ElevenLabs: all of the voiceovers, including Del's voice
Suno: music generation for all of the soundtrack and some sound effects
Adobe Audition: mixing, EQ, reverb, and all the "fix it in post" audio stuff
Editing + Effects
Adobe After Effects: motion graphics, 3D overlays, custom HUDs, planetary systems, glitch effects
Premiere Pro: master timeline, audio syncing, final edit
Media Encoder: batch rendering (a lot of it)
Organization + Scripts
ChatGPT: brainstorming, structure, script cleanup, moral support
THE CHALLENGES
What could go wrong did go weird
Consistency? AI's never heard of her
Tool-hopping and duct-taping it all together
Sound? A complete mystery
The tools simply couldn’t do what I wanted
Budget, obviously
My experience with some of the tools (AI and analog) wasn't great
Image Generation Bloopers
The wacky images generated while trying to create a believable evolution of the alien species are a great way to illustrate the magic, and the flaws, inherent in generative AI tools. I had one image as a starting point (check out my Galaxy X32p-19 project to see where the inspiration came from) and worked backwards from there.
THOUGHTS