AI Product Video Workflow
This workflow takes a single brand-approved product still all the way to a finished, multi-aspect ad bundle on Martini's canvas. Image generation feeds background removal, image-to-video, audio, sequencing, and NLE export — every step lives on the same graph. Use it to ship the next SKU drop without booking a studio shoot, then save the canvas as a template so the second SKU is four times faster than the first.
When to use this workflow
- Shipping a DTC SKU launch on Meta, TikTok, and YouTube without a studio shoot
- Producing 1:1, 9:16, and 16:9 cutdowns of one hero concept on a 24-hour turnaround
- Standing up an Amazon-listing video for the marketplace-required ASIN spec
- Replacing per-SKU production days with a templated canvas the agency reuses every drop
- Running a paid-social A/B sweep across three motion treatments before committing budget
Required inputs
- A brand-approved product still at the highest resolution available (4K preferred)
- Brand color hex codes and typography, plus any logo lockup for end-card reference
- Voiceover script (15s / 30s / 60s) if the cut needs an explainer beat
- Aspect-ratio targets per platform (1:1 Meta, 9:16 TikTok / Reels / Shorts, 16:9 YouTube)
- Final-cut frame rate the editor expects (24 / 25 / 30 fps for ads)
Steps
1. Drop the product still into an image node
Open a new canvas, add an image node, and drop the brand-approved product photograph as the source asset. This single still becomes the upstream anchor for every downstream cut, so use the highest-resolution file available — 4K minimum if the deliverable touches a Connected TV placement. Label the node "hero-still" so it shows up cleanly in the bin after NLE export. Keep the background and packaging untouched at this stage; you will refine and recompose downstream rather than re-uploading later.
2. Remove or swap the background
Wire the hero still into a background-removal pass, then into a Nano Banana 2 node for a lifestyle scene swap. Use multi-anchor reference — drop the product still on one input slot, the brand color script on another, and a scene reference (kitchen, vanity, gym, plinth) on a third. This is the differentiator versus single-prompt tools: each anchor stays addressable as its own node. Generate three to five backdrop variants in parallel and pick the strongest before continuing to the motion stage.
3. Fan out to image-to-video nodes per shot type
Wire the chosen still into three video nodes simultaneously: Seedance 2 for label-locked hero spins, Runway Gen-4 for editorial lifestyle cutaways, and Kling 3 for cinematic camera moves. One model per shot intent is where this approach wins — a six-shot ad benefits from each model doing what it does best rather than forcing one generator across all coverage. The canvas runs the three branches in parallel, so total wall time tracks the slowest model rather than the sum of all three.
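The wall-time claim is easy to see in miniature. This sketch stands in for the three branches with simulated latencies — the function names and timings are illustrative, not the canvas API, which runs the fan-out for you:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a generation branch; the real calls are made
# by the canvas runtime, not by user code.
def generate(model: str, seconds: float) -> str:
    time.sleep(seconds)  # simulate generation latency
    return f"{model}-clip"

# Three branches with different (simulated) generation times.
branches = {"seedance-2": 0.3, "runway-gen4": 0.2, "kling-3": 0.1}

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    clips = list(pool.map(lambda kv: generate(*kv), branches.items()))
elapsed = time.perf_counter() - start

# Wall time tracks the slowest branch (~0.3s), not the sum (~0.6s).
print(clips, round(elapsed, 1))
```

Serializing the same three calls would cost the sum of the latencies; fanning out costs only the maximum, which is why splitting shot intents across models is close to free in turnaround terms.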
4. Write motion-only prompts
Each video node takes its own prompt, but every prompt must be motion-only. Do not re-describe the bottle, label, color, or material — the reference image already carries that information, and re-describing it causes label drift and identity loss across frames. Write camera and motion grammar instead: "slow orbit, 60-degree arc, soft rim-light reveal" beats "shiny aluminum can on white background spinning slowly." Cap durations at six to ten seconds for paid social; longer takes drift past coherent length on most models.
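The motion-only rule can be enforced mechanically before prompts reach the video nodes. A minimal lint pass, assuming an illustrative banned-word list (the canvas has no such checker built in — this is a pre-flight sketch):

```python
# Hypothetical lint: flag prompts that re-describe the product instead of
# describing only camera and motion. The term list is illustrative only.
PRODUCT_TERMS = {"bottle", "label", "can", "aluminum", "logo", "packaging"}

def motion_only(prompt: str) -> bool:
    """True if the prompt carries no product-describing terms."""
    words = {w.strip(".,").lower() for w in prompt.split()}
    return words.isdisjoint(PRODUCT_TERMS)

# The good prompt from the step above passes; the bad one fails.
assert motion_only("slow orbit, 60-degree arc, soft rim-light reveal")
assert not motion_only("shiny aluminum can on white background spinning slowly")
```

Running every prompt through a check like this before generation catches the re-description habit early, when a reword is free, instead of after a drifted render.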
5. Upscale and run audio in parallel
Branch the strongest video output into an upscale node so the final master holds up on a 4K timeline. In parallel, add an ElevenLabs node for the voiceover with inline emotion tags where the brand allows it, plus a second audio node for the brand SFX bed and any Foley. Audio runs on its own track on the canvas — it does not block video iteration. By the time the final upscale finishes, the VO take and SFX bed are already mixed and ready for the sequencer downstream.
6. Assemble the cut in the sequence builder
Drop a sequence-builder node and wire the video, VO, and SFX outputs into it in cut order: hero spin, lifestyle insert, macro detail, end-card with logo lockup. The sequence builder lets you trim, reorder, and preview without leaving the canvas — which means director feedback like "swap shots two and three" stays a one-minute task instead of a round trip to the editor. Lock the sequence only after the team signs off on the rough cut.
7. Export to NLE for grade and delivery
Add an NLE export node and select the editor's deliverable spec — H.264 for cutting proxies, ProRes 422 or DNxHD for masters that survive a Lumetri / DaVinci color pass. Match the editor's frame rate (24 / 25 / 30 fps depending on territory) and ship the bundle with cut order and node labels preserved. Premiere, DaVinci Resolve, and Final Cut all accept the bundle the same way: drop into the bin, drag onto the timeline, and the cuts land in order with no manual re-organization.
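If you hand-finish outside the canvas, the same spec maps onto real ffmpeg codec flags (`libx264` for H.264 proxies, `prores_ks -profile:v 2` for ProRes 422). This sketch only assembles the command — file names are placeholders, and it assumes you invoke ffmpeg yourself rather than through the export node:

```python
# Deliverable specs mapped to real ffmpeg codec options.
SPECS = {
    "proxy":  ["-c:v", "libx264", "-crf", "18"],         # H.264 cutting proxy
    "master": ["-c:v", "prores_ks", "-profile:v", "2"],  # ProRes 422 master
}

def export_cmd(src: str, spec: str, fps: int, out: str) -> list:
    """Build an ffmpeg argv matching the editor's deliverable spec."""
    return ["ffmpeg", "-i", src, *SPECS[spec], "-r", str(fps), out]

cmd = export_cmd("hero-cut.mov", "master", 25, "hero-master.mov")
print(" ".join(cmd))
```

Keeping the spec table in one place means the proxy and the master are guaranteed to share the frame rate the editor asked for — the only thing that changes between them is the codec block.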
8. Duplicate for platform variants and save as template
Duplicate the finished sequence three times — one fork per aspect ratio (1:1 Meta, 9:16 TikTok / Reels / Shorts, 16:9 YouTube) — and re-export each at the matching spec. Then save the entire canvas as a SKU template. The next product launch reuses the wiring; you swap the upstream still, hit run, and the chain produces the new bundle without re-building nodes. Director revisions on the current campaign are equally cheap: regenerate the video node, re-export, relink in the NLE, no manual rebuild.
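The reframe behind the three aspect forks is plain center-crop arithmetic. The canvas handles it for you; this sketch just makes the math explicit for a 4K 16:9 master:

```python
# Center-crop dimensions per platform aspect from one master frame.
# Sketch only: the duplicated sequences reframe for you on the canvas.
def center_crop(src_w: int, src_h: int, aspect_w: int, aspect_h: int):
    target = aspect_w / aspect_h
    if src_w / src_h > target:        # master is wider: trim the sides
        return int(src_h * target), src_h
    return src_w, int(src_w / target)  # master is taller: trim top/bottom

master = (3840, 2160)                 # 4K 16:9 master
print(center_crop(*master, 1, 1))     # 1:1 Meta feed
print(center_crop(*master, 9, 16))    # 9:16 TikTok / Reels / Shorts
print(center_crop(*master, 16, 9))    # 16:9 YouTube (no crop)
```

Note what the 9:16 result implies: a vertical crop from a 16:9 master keeps only about a third of the width, which is why the hero product should sit center-frame in every generated shot if one master has to feed all three aspects.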
Martini canvas notes
- The product still is a single image node with multiple downstream connections — drop once, wire to every video node, do not re-upload per branch.
- Multi-anchor reference works because product, brand color, and scene each occupy a separate input slot on the Nano Banana 2 node — the prompt does not have to carry that data.
- Three video nodes run in parallel, so wall time tracks the slowest generation, not the sum of three. Use this to fan out shot intents instead of serializing one generator.
- The sequence builder previews and reorders on the canvas — no MP4 round-trip to the editor for shot reorder feedback.
- NLE export packages the sequence as a labeled bundle (cut order + node names preserved) so Premiere / DaVinci / FCP bins drop in clean.
Variations
15-second TikTok cutdown
Three shots, 9:16, voiceover replaced by a music bed and on-screen text. Use Hailuo for fast iteration on motion variants and pick the punchiest take.
60-second YouTube hero
Eight shots, 16:9, full ElevenLabs voiceover with an SFX bed. Sequence in three acts (problem, product, payoff) and finish in DaVinci Resolve for color.
Amazon ASIN listing video
30 seconds, 16:9 marketplace spec, no voiceover (sound off by default), text overlays for benefits. Use Seedance 2 for label-locked hero spins.
Multi-SKU campaign canvas
One template canvas, four SKU branches, each running the same wiring with a different upstream product still. Ship four ads in the time the first one took.
Related features
AI Product Video Generator — From Product Image to Ad Video
Create product ads and demos from product images on Martini's canvas — chain product photo to multi-shot video across Seedance, Runway Gen-4, and GPT Image.
AI Image to Video — Animate Stills Into Production-Ready Shots
Turn still images into production-ready video shots on Martini's canvas — multi-model, reference-aware, NLE-export ready.
AI Ad Creative Generator — Multi-Format Ad Visuals and Video
Generate ad visuals and videos across Ideogram, Flux, Seedance, and Runway on Martini — every aspect ratio, every variant, one canvas.
AI Video NLE Export — From Generation to Premiere, DaVinci, Final Cut
Move AI-generated sequences from Martini into Premiere Pro, DaVinci Resolve, and Final Cut Pro.
AI Product Photography — Studio-Quality Product Images on Martini
Generate studio-quality product photos for e-commerce on Martini's canvas.
Frequently asked questions
Why does the workflow split across three video models instead of one?
One model per shot intent gives you the strongest take per coverage type — Seedance 2 holds product labels, Runway Gen-4 handles editorial lifestyle, Kling 3 sells cinematic camera moves. Forcing a single model across all six shots produces a bundle where two-thirds of the cuts feel close-but-not-right. The canvas runs the three branches in parallel, so the cost is wall time on the slowest generation, not on all three combined.
Do I have to re-describe the product in every video prompt?
No, and you should not. The reference image already carries the product details. Re-describing the bottle, label, color, or material in the prompt causes label drift and identity loss frame to frame. Write motion grammar only — camera move, lighting cue, action — and let the upstream image node anchor the visual identity.
What aspect ratios should I generate for paid social?
Plan for three at minimum: 1:1 for Meta feed, 9:16 for TikTok, Reels, and YouTube Shorts, and 16:9 for YouTube and Connected TV. Duplicate the finished sequence per aspect rather than re-generating the source video at each ratio — the cut is the same, only the framing changes. Save the multi-aspect canvas as a template once it works.
Where does NLE export fit in?
NLE export is the last canvas step before grade and delivery. Pick H.264 for cutting proxies, ProRes 422 or DNxHD for masters that survive a Lumetri or DaVinci color pass. Match the editor's frame rate (24 / 25 / 30 fps depending on territory and platform). The bundle preserves cut order and node labels so Premiere, DaVinci Resolve, and Final Cut bins drop in clean — no manual re-organization.
How do I handle director revisions without rebuilding the canvas?
Regenerate the affected video node, re-export at the same spec, and relink in the NLE. The canvas wiring stays intact — only the regenerated clip swaps in. This is the lasting win of building the chain on Martini rather than dumping raw MP4s: the canvas is the source of truth, the NLE is the finishing room, and revisions never force a rebuild.
Can I save the workflow as a reusable template across SKUs?
Yes — that is the point. Save the canvas after the first SKU. The second SKU swaps the upstream product still, leaves all wiring intact, and produces the matching bundle without re-building any nodes. A four-SKU quarter that took eight days the first time becomes a one-day templated job by the next quarter.
Build it on the canvas
Open Martini and wire this workflow up in minutes. Free to start — no card required.