AI Product Video Generator
You have a clean product photo and a brief that asks for moving creative — a hero spin, a lifestyle insert, a cutaway — usually before the next sprint deadline. Martini pins the product image in a node, fans it across Seedance 2, Runway Gen-4, GPT Image 2, and Nano Banana 2 on one canvas, and exports the cut your ads team can ship the same afternoon.
What this feature solves
Product video for paid social and ecommerce is a volume problem dressed up as a creative problem. Each SKU needs a hero spin, a lifestyle moment, a benefit insert, and at least three aspect-ratio cutdowns — and most teams are still shooting locked-off product photography that does not move. Hiring a motion studio per SKU does not scale, and stock motion graphics never match the brand. Single-prompt AI video tools move the bottleneck a little, but they still leave you with label distortion, mismatched lighting, and a single model's quirks stamped across the entire campaign.
The deeper issue is reference fidelity. The product photo has been color-graded, the label is legal-approved, and the packaging shape is non-negotiable. The moment a video model starts moving, those exact details drift — labels warp, materials lose their finish, and the highlight pattern no longer matches the brand book. Without a way to lock the product image as a reference and run several models against it, every clip becomes a guess and the legal review cycle balloons.
Then comes the format problem. One winning take is not enough — you need 9:16 for TikTok and Reels, 1:1 for feed, 16:9 for YouTube, plus shorter cutdowns at 6, 10, and 15 seconds. Recreating each variant in a separate browser session burns the speed advantage AI was supposed to deliver. Product teams need a pipeline that produces a campaign, not a clip.
Why Martini is different
Martini chains the product photo into video on the canvas. Drop the brand-approved still into an image node, then wire it into multiple video nodes — Seedance 2 for label-locked hero spins, Runway Gen-4 for editorial cutaways, Nano Banana 2 for lifestyle scene swaps. Every branch reads the same reference, so the product looks identical across cuts and the legal-approved still stays the source of truth. Fanout on one canvas replaces tab-switching, and the lineage shows exactly which prompt produced which clip.
Multi-model chaining is the unlock for product video specifically. Use GPT Image 2 to generate a clean lifestyle scene around the product, push it back into a video node for motion, then chain into a sequence builder that orders the spin, the lifestyle, and the cutaway into a single timeline. The same canvas handles the 1:1 feed cut, the 9:16 vertical, and the 16:9 hero by duplicating the sequence and swapping the aspect — no rebuilding from scratch.
Export lands at frame rates and codecs your editor opens natively. NLE export drops the multi-cut campaign into Premiere Pro, DaVinci Resolve, or Final Cut Pro as a real timeline — each shot, each version, each aspect — without a HandBrake round trip. Save the canvas as a template and the next SKU launch swaps the inputs and re-runs. That is the difference between a one-off creative tool and a real product-marketing pipeline.
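A quick way to verify that claim on a delivered file before building the campaign timeline: ffprobe, which ships with ffmpeg, reports the codec and frame rate of the video stream. A minimal check, wrapped in Python with a placeholder filename:

```python
import subprocess

# Verify codec, resolution, and frame rate on an exported clip before NLE import.
# "hero_16x9.mov" is a placeholder filename; ffprobe ships with ffmpeg.
out = subprocess.run(
    [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=codec_name,width,height,avg_frame_rate",
        "-of", "default=noprint_wrappers=1",
        "hero_16x9.mov",
    ],
    capture_output=True, text=True, check=True,
).stdout
print(out)  # expect e.g. codec_name=prores ... avg_frame_rate=24/1
```

If the codec reads h264 or prores and the frame rate is one of the standard values, the file will open natively in Premiere Pro, DaVinci Resolve, or Final Cut Pro with no transcode step.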
Common use cases
Hero spin and detail loop for ecommerce PDPs
Animate a packshot into a 360-degree spin and a label close-up loop, then export both to your product page CMS without commissioning a motion studio.
SaaS demo and product walkthrough cuts
Turn a product UI screenshot into a pan-and-zoom demo video chained with voiceover for your homepage hero or sales deck.
Paid social ads across every aspect ratio
Fan one hero shot into 1:1 feed, 9:16 vertical, and 16:9 in-stream variants for Meta, TikTok, and YouTube from one canvas.
Fashion lookbook motion for DTC drops
Take an outfit still, generate a runway-style camera move on Runway Gen-4, and chain into a lifestyle cutaway for the launch reveal.
Marketplace product videos for Amazon and Shopify
Ship the marketplace-required video asset (1:1 or 16:9, under 60 seconds) without booking a shoot per SKU.
A/B test creative variants for campaign optimization
Run the same product across five visual treatments in parallel, export all five, and let the ad platform pick the winner.
Recommended model stack
seedance-2 (video)
Strongest reference adherence for branded product stills — labels, materials, and packaging stay accurate.
runway-gen4 (video)
Director-level controls for editorial product shots and lifestyle cutaways.
gpt-image-2 (image)
Generate clean lifestyle scenes and contextual backgrounds before pushing into video.
nano-banana-2 (image)
Reference-locked image edits to vary scene, color, or composition without losing the product.
kling-3 (video)
Cinematic camera moves for hero shots and brand reveals.
hailuo (video)
Fast iteration cycles when testing scene treatments at volume.
How the workflow works in Martini
1. Drop the product photo into an image node
Use the brand-approved, color-graded still as the source. The cleaner the input, the cleaner the output — high resolution, neutral background, no compression artifacts.
2. Add scene context with an image-edit node
For lifestyle shots, wire the product into Nano Banana 2 or GPT Image 2 to swap the background or place it in a contextual scene without losing the product itself.
3. Wire into video nodes per shot type
Connect the still or the scene image into video nodes — Seedance 2 for label-locked spins, Runway Gen-4 for editorial moves, Kling 3 for hero cinematics. Run them in parallel.
4. Write motion-only prompts
Tell the model what should move — slow push-in, orbit, parallax, rim-light reveal. Avoid restating the product; the reference does that for you.
5. Assemble cuts in the sequence builder
Order the hero spin, lifestyle cut, and detail loop in a sequence node. Preview the cut before exporting.
6. Duplicate for aspect ratios and export
Fork the sequence into 1:1, 9:16, and 16:9 variants. Use NLE export to send each timeline into Premiere or DaVinci ready to grade and ship. The sketch after these steps shows the whole graph in miniature.
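Martini's canvas is visual, not code, but the six steps read naturally as a graph. The sketch below is illustrative pseudocode (plain Python dicts, not Martini's API); the node shapes, model ids, and field names are hypothetical, and the prompts show the motion-only style step 4 calls for.

```python
# Illustrative sketch of the canvas graph, not Martini's API.
# Node shapes, model ids, and fields are hypothetical.
source = {"type": "image", "asset": "bottle_hero.png"}  # step 1: brand-approved still

scene = {  # step 2: scene context via an image-edit model
    "type": "image_edit", "model": "nano-banana-2",
    "input": source, "prompt": "place on a marble vanity, soft morning light",
}

shots = [  # steps 3-4: parallel video branches, motion-only prompts
    {"type": "video", "model": "seedance-2", "input": source,
     "prompt": "slow 360-degree orbit, product static, rim light sweep"},
    {"type": "video", "model": "runway-gen4", "input": scene,
     "prompt": "gentle push-in with shallow parallax"},
    {"type": "video", "model": "kling-3", "input": scene,
     "prompt": "cinematic crane move settling on the label"},
]

sequence = {"type": "sequence", "shots": shots}  # step 5: order the cut

exports = [  # step 6: fork per aspect ratio, then NLE export
    {**sequence, "aspect": ar} for ar in ("1:1", "9:16", "16:9")
]
```

The point of the sketch is the shape: one reference fans out to every branch, so the legal-approved still stays the single source of truth for the whole campaign.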
Example workflow
A DTC haircare brand launches a new serum and needs paid-social creative across Meta, TikTok, and YouTube within the week. The team drops the legal-approved bottle photo into an image node. From there, they wire into three branches — Seedance 2 for a slow hero push-in, Runway Gen-4 for an editorial pour shot, and Nano Banana 2 to relight the bottle on a marble vanity scene that then chains into Kling 3 for an orbit. Voiceover from ElevenLabs wires into the hero cut. The sequence builder orders the four shots into one cut, then the team duplicates the sequence for 1:1, 9:16, and 16:9. NLE export drops three timelines into Premiere ready for trim and grade. Concept to ad-platform-ready creative in one afternoon, on one canvas, with the bottle label identical across every shot.
Tips and common mistakes
Tips
- Use the highest-resolution product photo you have. Video models inherit and amplify every artifact in the source. (A quick pre-flight check follows these tips.)
- Run two or three models in parallel for the hero shot. Label fidelity and camera move rarely come from the same model.
- Use Nano Banana 2 to swap scenes before going to video — it is faster and cheaper than re-prompting a video model for context.
- Keep cuts short. Most paid-social product videos win at 6-10 seconds per shot, not 30-second epics.
- Save the canvas as a template after the first SKU. The second product launch should take a quarter of the time.
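As a concrete version of the first tip, a short Pillow check catches low-resolution or JPEG-compressed sources before they reach the canvas. The filename and the 1080px floor are illustrative choices, not Martini requirements:

```python
from PIL import Image  # pip install Pillow

# Pre-flight a source still before dropping it into an image node.
# "bottle_hero.png" and the 1080px floor are illustrative, not Martini rules.
img = Image.open("bottle_hero.png")
w, h = img.size
if min(w, h) < 1080:
    print(f"warning: {w}x{h} source; video models amplify softness")
if img.format == "JPEG":
    print("warning: JPEG source; re-export as PNG to avoid compression artifacts")
```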
Common mistakes
- Uploading a JPG with visible compression. The video output amplifies every artifact in every frame, 24 times a second.
- Writing prompts that re-describe the bottle, the label, the color. The model already sees the reference — tell it what should move.
- Picking one model for every shot type. Hero spins, lifestyle cuts, and detail loops each have a strongest model.
- Exporting raw MP4 and rebuilding the timeline in your NLE. Use sequence + NLE export so the campaign lands as one timeline.
- Skipping the aspect-ratio duplicate step. Generate the cut once, fork it three ways — do not re-prompt for each platform.
Related models and tools
Tool
AI Image Upscaling
Upscale images and keyframes before final video generation on Martini.
Tool
AI Background Removal
Remove backgrounds from images for assets and compositing on Martini.
Tool
AI Video Upscaling
Upscale generated video outputs on Martini's canvas.
Provider
Google
Google's Veo video, Imagen image, and Nano Banana model workflows on Martini.
Provider
OpenAI
OpenAI's GPT Image and Sora video model workflows available on Martini.
Provider
ByteDance
ByteDance's Seedance video and Seedream image model families on Martini.
Provider
Runway
Runway's Gen-4, Aleph, and image model workflows on Martini.
Provider
Luma
Luma's Ray video model workflows and alternatives on Martini.
Provider
Vidu
Vidu's reference-driven video and character consistency workflows on Martini.
Related features
AI Image to Video — Animate Stills Into Production-Ready Shots
Turn still images into production-ready video shots on Martini's canvas — multi-model, reference-aware, NLE-export ready.
AI Ad Creative Generator — Multi-Format Ad Visuals and Video
Generate ad visuals and videos across Ideogram, Flux, Seedance, and Runway on Martini — every aspect ratio, every variant, one canvas.
AI Storyboard Generator — Plan Shots, Generate Frames, Then Animate
Plan shots, generate storyboard frames, and convert frames into video on Martini's canvas.
AI Video Workflow — Node-Based Production From Concept to Final Sequence
Build node-based AI video production pipelines on Martini's canvas — from concept and storyboard to final NLE-ready sequence.
Multi-Shot AI Video — Build Connected Scenes, Not Isolated Clips
Plan, generate, and sequence multi-shot AI video on Martini — keep characters, style, and motion consistent across shots.
AI Influencer Video Generator — Repeatable Character Pipeline
Design, generate, and scale AI influencer videos on Martini — character library, voice cloning, lip-synced video, all on one canvas.
AI Avatar Video Generator — Talking Avatars from Image and Audio
Create talking avatar videos from image and audio on Martini's canvas — Kling Avatar, OmniHuman, ElevenLabs, locked identity across every clip.
AI Talking Head Video — Spokesperson, Course, and Narration
Produce spokesperson, course, and narration videos on Martini's canvas — Kling Avatar, OmniHuman, ElevenLabs, Fish Audio, locked identity end to end.
AI Video Reference Images — Preserve Subject and Style
Lock subject, character, and style across every video generation on Martini's canvas — Vidu, Kling O3, Seedance 2, Nano Banana 2 reference workflows.
Video to Video AI — Restyle, Edit, Transform Source Footage
Restyle, transform, and edit source video on Martini's canvas — Runway Aleph, Kling O3, Wan chained into multi-shot pipelines.
AI Video Generator — Multi-Model AI Video Production on Martini
Multi-model AI video generation with text, image, reference, and editing workflows on Martini's canvas.
Text to Video AI — Generate Video From Prompts on Martini
Generate video from prompts and chain outputs into scenes on Martini's multi-model canvas.
Consistent Character AI Video — Reference-Driven Video on Martini
Preserve character identity through reference-driven video models on Martini.
AI Explainer Video — Educational and B2B Demo Videos
Generate explainer videos, B2B demos, and educational content on Martini's canvas.
Frequently asked questions
Which model should I start with for product hero spins?
Start with Seedance 2 — it has the strongest reference adherence for branded packshots and tends to keep labels, materials, and highlights accurate across the take. For more editorial or lifestyle moves, fan out to Runway Gen-4 in parallel and pick the take that holds the brand best.
Can I keep my legal-approved label exactly the same?
Yes. Pin the brand-approved photo in an image node and feed it as the reference into every downstream video node. Seedance 2 and Nano Banana 2 are the strongest choices for label and packaging fidelity. For long takes, chain shorter clips with the same source instead of running one long generation that drifts.
How do I get 1:1, 9:16, and 16:9 variants without re-running everything?
Build the sequence once, then duplicate it on the canvas and swap the aspect ratio on the sequence node. Each fork inherits the same shots and re-renders only what the new aspect requires. NLE export ships all three timelines in one pass.
How long can a single product video clip be?
Most video models cap individual generations at 5-10 seconds. For ecommerce loops or longer hero cuts, chain shorter clips on the canvas using the same product reference so the product, lighting, and brand stay consistent across cuts.
Will the video import cleanly into my NLE for finishing?
Yes. NLE export renders at standard frame rates (24, 25, 30, 60) and codecs (H.264 and ProRes) that Premiere Pro, DaVinci Resolve, and Final Cut Pro open natively. No transcode round trip, no codec mismatch when you build the campaign timeline.
How does this compare to Runway directly?
Runway gives you one model per generation. Martini lets you fan a single product reference across Seedance 2, Runway Gen-4, Kling 3, and image edit models on one canvas — and chain the chosen takes into a sequence and an NLE export. For one-off clips, Runway direct is fine. For SKU campaigns, the canvas saves hours per launch.
Build it on the canvas
Open Martini and wire this workflow up in minutes. Free to start — no card required.