becoming: behind the scenes

becoming:
a sci-fi short film

A speculative narrative about contact, consciousness, and collapse.
All made with Midjourney, Runway, Veo, ElevenLabs, After Effects and a bunch of other tools.

ROLE

Writer, Director, Editor, Effects Artist, AI Generalist

RESPONSIBILITIES

  • Creative Direction & Story Development
    Wrote voiceover script and structured narrative arc; developed a speculative world rooted in emotional realism

  • AI Video & Audio Toolchains
    Generated cinematic visuals and audio (voice, music and sound effects) with tools like Runway, Veo, Midjourney, ElevenLabs and Suno

  • Motion Design & Editing
    Created unique motion graphics, designed and composited scenes in After Effects, assembled final film and audio in Premiere Pro

  • Experimental Design Process
    Worked without a traditional storyboard, using a nonlinear, tool-led process to discover story through iteration

A movie created entirely by one human and many machines

This project started with a weird idea and spiraled into something bigger (like all great sci-fi… and all enthralling personal projects).

What if a futuristic package delivery system opened a portal to first contact with an alien intelligence?
What if you could tell that story using a mash-up of generative tools, motion graphics, and a very stubborn UX brain?

This film became a speculative design artifact, a cinematic experiment, and a crash course in editing, timing, motion graphic creation, narrative structure and AI-assisted visual production. No funding. No crew. Just me, a bunch of AI tools, and a lot of rendering time.

I used this fictional film concept to:

  • Explore narrative design and multi-modal prototyping using AI tools

  • Treat generative models like collaborators (with as many flaws as their human counterparts, or more)

  • Push my comfort zone into video editing, 3D motion graphics design, voiceover direction, and post-production

  • Build a finished product that asks big questions, and looks cool doing it

THE SETUP

A challenge, a curiosity, a level-up

It started, as these things do, with a desire to do something entirely different and to learn new tools and skills. My grasp of A/V design was limited, so making a movie would force me to learn the new AI A/V tools as well as the tried-and-true ones that have been around for years.

The larger goal wasn't really to make a short film. I ultimately wanted to learn what the new wave of AI video and audio tools could do (and more importantly, what they couldn’t). That meant wrangling inconsistencies across platforms, problem-solving where the tools failed, and designing critical pieces from scratch in After Effects to fill in the gaps—including orbital planet systems, holographic overlays, and layered HUD animations the tools couldn't come close to producing (yet).

From there, it snowballed into a full-on cinematic experiment. A corporate ad. A glitch in the simulation. A woman on a roof holding a mysterious object. And a message from something...else.

This project became equal parts narrative world-building, visual effects sandbox, and stubborn refusal to give up once I’d made it past the halfway mark.

Every tool had a limit. That’s where the real design work started.

THE STACK

What powered the project

This project was stitched together using a Frankenstein’s monster of tools—some shiny and new, some old and reliable, all of them very opinionated.

AI Video + Visual Generation

  • Runway (Gen-2): video generation

  • Google Flow (Veo 2 + 3): video generation

  • Midjourney: concept imagery, video generation first frames, some video clips

  • Photoshop: clean up and expansion of Midjourney imagery

Voice + Audio

  • ElevenLabs: all of the voiceovers and Del's voice

  • Suno: music generation for all of the soundtrack and some sound effects

  • Adobe Audition: mixing, EQ, reverb, and all the "fix it in post" audio stuff

Editing + Effects

  • Adobe After Effects: motion graphics, 3D overlays, custom HUDs, planetary systems, glitch effects

  • Premiere Pro: master timeline, audio syncing, final edit

  • Media Encoder: batch rendering (a lot of it)

Organization + Scripts

  • ChatGPT: brainstorming, structure, script cleanup, moral support

THE CHALLENGES

What could go wrong did go weird

This was not a plug-and-play project. It was a crash course in the realities of AI-assisted video creation, and a reminder that every tool has its strengths and weaknesses; it's all about learning how to use each one in the best way possible.

Consistency? AI's never heard of her

Characters changed outfits, hairstyles, and ethnicities; environments shifted vibe and photographic style; and continuity was basically impossible to achieve without some meticulous Photoshop editing of the reference images before video generation.

MY SOLUTION

Reverse-engineered consistency: used MJ’s omni reference parameter to kickstart image cohesion, composited elements in Photoshop, then used generative fill to smooth it all together. Having cohesive flat images to feed the AI video tools helped achieve a more congruous look overall.
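
For the curious, an omni-reference prompt looks roughly like this (the scene text, reference URL, and weight here are placeholders, not pulled from my actual prompts):

  cinematic still, woman on a rooftop at dusk holding a mysterious glowing object --oref https://example.com/heroine_ref.png --ow 400 --ar 16:9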

Tool-hopping and duct-taping it all together

Every AI tool offered something good, but none did everything:

  • Midjourney: my go-to for image generation since its release. It released its video generation feature just as I was making the movie! The V1 video model is good for basic stuff (or really conceptual stuff, depending on how you look at it—like energy waves drifting through space). It crushed for "filler" video and required almost zero coaxing.

  • Runway: it's not as well known for this, but its image generation tool can actually put a subject + object + environment together into one image. Not spectacularly, for sure, but it's the only thing I've seen that can do that. Its video-generation output was comparable to Veo's.

  • Flow/Veo: lets you supply a starting image and prompt from there, which gave me image-to-video continuity when I needed it most. It ended up being the tool I used the most. The biggest bonus: it can extend a shot from the initial eight seconds to 15 or so.

MY SOLUTION

After a good amount of trial and error I figured out which tool to use where, how to achieve cohesion between the varied outputs, and the best way to use the Adobe Suite tools to patch up any glitches or weirdness.

Sound? A complete mystery

From voiceover to sound design, I was starting at zero.

MY SOLUTION

I paid closer attention to movies and TV shows and observed how other people were doing it. Also, YouTube. ElevenLabs was a great starting point, and modifications in Audition added uniqueness and believability to the voices.

The tools simply couldn’t do what I wanted

Not even close. Want a character holding a glowing object? Speaking words? In the same outfit across three shots? Good luck. The subject matter (see: aliens) added another layer of difficulty, since the tools weren't really trained on that.

MY SOLUTION

I realized that for anything too technical, or anything that contained text, I would have to do it myself. So I learned a lot about creating motion graphics in After Effects: HUD overlays, sci-fi visual effects, and animated graphics like planetary systems and 3D holograms.
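
To give a flavor of the motion-graphics work: the planetary systems were essentially parametric orbits, keyframed and layered in After Effects. Here's the core math sketched in Python rather than as an AE expression (the names and numbers are illustrative, not lifted from my comps):

```python
# Parametric circular orbit: the same math that drives an orbiting
# body in an After Effects expression, sketched here in Python.
import math

def orbit_position(center, radius, period_s, t, phase=0.0):
    """Position of a body orbiting `center` at time t (seconds)."""
    angle = 2 * math.pi * (t / period_s) + phase
    return (center[0] + radius * math.cos(angle),
            center[1] + radius * math.sin(angle))

# Example: a moon on a 12-second orbit around a 1920x1080 frame's
# center, sampled at 30 fps. Nest several of these with different
# radii, periods, and phases and you have a planetary system.
for frame in (0, 90, 180, 270):
    x, y = orbit_position(center=(960, 540), radius=300,
                          period_s=12.0, t=frame / 30.0)
    print(f"frame {frame:3d}: x={x:7.1f}  y={y:7.1f}")
```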

Budget, obviously

Free and trial versions meant rationing credits like a Victorian orphan. Also, none of the tools I could afford did character lip-syncing—so I had to work around it.

MY SOLUTION

Got real creative and stingy about dialogue. Also embraced a “good enough” approach: built layered animations in AE, and reused scenes with time reversed to create shots that loop in and out.
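
For anyone who wants to reproduce the loop trick outside Premiere: play the clip forward, then append the same clip reversed, so the shot ends exactly where it began. A rough ffmpeg equivalent, wrapped in Python (filenames are placeholders, and this wasn't my actual pipeline):

```python
# Build a "boomerang" loop: original clip followed by its reverse.
# Requires ffmpeg on PATH; the reverse filter buffers the whole clip
# in memory, so keep inputs short.
import subprocess

def boomerang(src: str, dst: str) -> None:
    subprocess.run([
        "ffmpeg", "-i", src,
        "-filter_complex",
        "[0:v]split[fwd][tmp];[tmp]reverse[rev];[fwd][rev]concat=n=2:v=1[v]",
        "-map", "[v]", "-an",  # drop audio; reversed audio sounds wrong
        dst,
    ], check=True)

boomerang("shot_rooftop.mp4", "shot_rooftop_loop.mp4")
```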

My experience with some of the tools (AI and analog) wasn't great

For many of these tools I was a first-time user. Let me tell you, the learning curve was a rollercoaster.

MY SOLUTION

YouTube tutorials. Generating ten videos in order to get one decent one. Leaning on my innate design brain to make something beautiful even if I didn't know exactly how to do so.

Image Generation Bloopers

The wacky images generated while trying to create a believable evolution of the alien species are a great way to illustrate the magic, and the flaws, inherent in AI generative tools. I had one image as a starting point (check out my Galaxy X32p-19 project to see where the inspiration came from) and worked backwards from there.

Generative tools aren’t magic (although my emotional brain feels like they are)—they’re just probability engines with decent taste.

THOUGHTS

So, what was the point of all this?

(Besides late nights and weird dreams)

“Becoming” was never about perfection. It was about experimenting boldly, problem-solving creatively, and building something that felt real enough to work. I didn’t know how to do most of what this film required—but I learned. Quickly.

This project sharpened my design instincts, pushed me into unfamiliar mediums, and reminded me that I’m annoyingly stubborn when I want to make something work. So if you’re looking for someone who can think fast, adapt faster, and stay weird under pressure—well. You found her.

The point of this project wasn’t just to make a short film. It was to prove that I could thrive in ambiguity, navigate new tools, and bring a vision to life without waiting for someone to hand me the perfect brief.

If you made it all the way down here and you're thinking, “Wow, someone should pay this person to do wild cool stuff full-time”… let's talk.

If you missed the movie up top, you can check it out on YouTube by clicking the button.

© 2025 Maggie Zukowski. All rights reserved. Portfolio content is displayed for illustrative purposes only and may be subject to confidentiality restrictions. Please contact Maggie Zukowski for detailed information regarding specific projects.