Seedance 2 Handbook: Variants, Best Workflows, and How to Use It on Martini
Hands-on guide to Seedance 2 — variants, strengths, and the production workflows it fits on Martini's canvas.
Key takeaways
- Seedance 2 is the second-generation ByteDance video model, with Pro for cinematic single-takes, Lite for fast iteration, and Omni for tagged image, video, and audio references.
- Its strongest use case is image-to-video with controlled motion and physically plausible camera moves; its weakest is long dialogue scenes (use Kling Avatar instead).
- On Martini, Seedance 2 reads from any upstream image node, which means you can iterate prompt or reference image without re-uploading.
- Chain Seedance 2 into a Runway Aleph or Wan continuation node when you need to extend a single take past the model's length cap.
- For multi-shot sequences, generate each shot from a shared character image, then assemble in the NLE export node — do not ask Seedance to plan the cut for you.
What Seedance 2 actually does
Seedance 2 is ByteDance's second-generation diffusion-transformer video model, released as a successor to Seedance 1 with three variants you will see exposed across the API surface: Pro (the flagship, slower, cinematic), Lite (faster, cheaper, less polish on hair and water), and Omni (the reference-aware variant that accepts tagged inputs for character, motion, audio, and style). The 2.0 line keeps the same prompting grammar as 1.0 but adds noticeably better adherence to physical motion cues and a much cleaner handling of human gait, hands, and reflection passes.
In practice, the model behaves like a competent live-action camera operator with a junior gaffer: it will obey camera direction such as "slow dolly in" or "rack focus from the foreground glass to the actor's eyes," but it will improvise lighting and atmosphere unless you constrain those with a reference image. The 2.0 cohort also handles transparent surfaces, smoke, and steam without the rubbery look that flagged Seedance 1 outputs from a distance, which is the single biggest reason teams are migrating their image-to-video shot lists onto it.
Where Seedance 2 falls short is sustained dialogue. Lip movement is reasonable for two or three words, but for a thirty-second monologue you should switch to Kling Avatar or a dedicated lip-sync pass. Likewise, do not expect Seedance to plan a multi-shot sequence; the model thinks in single takes, and you should too when you build a Martini canvas around it.
Pro, Lite, and Omni — when to pick which
Pick Seedance 2 Pro when the shot will end up in a finished piece. The Pro variant is the one that justifies the cost difference against Veo or Kling 3 in head-to-head A/B tests on motion realism. Use Pro for hero shots, product reveals, anything cinematic, and any frame that will be color-graded in a downstream NLE. Pro tolerates longer prompts (typically up to several hundred tokens) and gives you the most reliable response to camera-direction language.
Pick Seedance 2 Lite when you are prototyping. Lite gives you a much faster iteration loop on prompt phrasing, reference image choice, and motion intensity. The visual gap between Lite and Pro shrinks dramatically once you reduce target resolution and remove transparent surfaces from the frame, so for board pitches and motion tests Lite is usually the right node to drop.
Pick Seedance 2 Omni when your shot needs a specific person, vehicle, or branded prop to be recognizable. Omni reads tagged image references and lets you pin character identity across multiple shots without retraining. On Martini's canvas this matters because you can wire the same reference image into ten Omni nodes and get ten shots that share an identity — exactly the workflow you need for an AI-influencer reel or a recurring spokesperson video.
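The decision rule across these three paragraphs is simple enough to write down. Here is a minimal sketch in Python; the function and the variant strings are our own illustration, not part of any Martini or ByteDance SDK:

```python
def pick_seedance_variant(finished_deliverable: bool,
                          needs_pinned_identity: bool) -> str:
    """Encode the Pro / Lite / Omni rule from this section.

    Hypothetical helper for illustration only; not a real API.
    """
    if needs_pinned_identity:
        # A specific person, vehicle, or branded prop must stay
        # recognizable across shots: Omni reads tagged references.
        return "seedance-2-omni"
    if finished_deliverable:
        # Hero shots, product reveals, anything graded downstream.
        return "seedance-2-pro"
    # Prototyping: faster loop, less polish on hair and water.
    return "seedance-2-lite"
```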
Prompt structure that actually works
Treat each Seedance 2 prompt as a single shot in a shot list, not a paragraph of mood. The structure that consistently outperforms in our internal QA is: subject + action + camera move + lens + lighting + atmosphere. For example, "A barista in a navy apron sliding a flat white across a marble counter, slow dolly in from medium-wide to medium close-up, 35mm anamorphic look, warm late-afternoon window light, faint steam rising." That single line tells the model who, what, how the camera behaves, what optical character to use, and what mood the air has.
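If you build shot lists programmatically, the six-slot structure is easy to enforce. A minimal sketch; the Shot class and its field names are our own convention, not a Seedance requirement:

```python
from dataclasses import dataclass

@dataclass
class Shot:
    """One Seedance 2 prompt, one field per slot of the structure."""
    subject: str
    action: str
    camera: str
    lens: str
    lighting: str
    atmosphere: str

    def prompt(self) -> str:
        # Comma-joined clauses reproduce the one-line shot-list style.
        return ", ".join([f"{self.subject} {self.action}",
                          self.camera, self.lens,
                          self.lighting, self.atmosphere])

barista = Shot(
    subject="A barista in a navy apron",
    action="sliding a flat white across a marble counter",
    camera="slow dolly in from medium-wide to medium close-up",
    lens="35mm anamorphic look",
    lighting="warm late-afternoon window light",
    atmosphere="faint steam rising",
)
print(barista.prompt())  # reproduces the example prompt above
```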
Two prompting habits hurt Seedance 2 specifically. First, do not stack adjectives in front of the subject; "cinematic, beautiful, atmospheric, hyper-realistic" wastes prompt budget the model spends on motion. Second, do not give Seedance multiple actions in the same shot ("she pours the coffee then turns and waves") — the model will compress them into a half-second blur. Split actions into separate generations, then cut on the canvas.
When you have an image input wired in, you can drop most of the visual description from the prompt and lean on the reference. In that mode, the prompt becomes pure motion direction: "subject begins still, then slow head turn to camera left, micro smile at the end." This is the most controllable mode of Seedance 2 and the one you should use for any shot that needs to match other shots in a sequence.
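In the same spirit, a pure motion prompt is just an ordered list of beats. A tiny sketch, again our own convention rather than anything the model mandates:

```python
def motion_prompt(*beats: str) -> str:
    """Join motion beats into an image-to-video prompt.

    The wired reference image carries all visual description,
    so only motion direction belongs here. Hypothetical helper.
    """
    return ", then ".join(beats)

print(motion_prompt("subject begins still",
                    "slow head turn to camera left",
                    "micro smile at the end"))
```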
Using Seedance 2 on the Martini canvas
On Martini, Seedance 2 lives as a video node that accepts an optional image input from any upstream image node — Nano Banana 2, Flux Kontext, GPT Image 2, Imagen 4, or a user-uploaded image. The canvas pattern that makes Seedance 2 productive is image-first: generate or upload the still you want, wire it into the Seedance 2 node, write a one-shot motion prompt, and iterate the prompt while the image stays pinned. You never have to re-upload, and the canvas remembers every variant in the version history.
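Expressed as data, the image-first pattern looks something like the sketch below. The structure is our invention for illustration; Martini's canvas is a visual editor, not a config file:

```python
# Hypothetical node-graph description of the image-first pattern.
canvas = {
    "nodes": {
        "hero_still": {"type": "image", "model": "nano-banana-2",
                       "prompt": "barista at a marble counter", "seed": 42},
        "shot_01": {"type": "video", "model": "seedance-2-pro",
                    # Motion-only prompt: the look lives in the image.
                    "prompt": "slow dolly in, medium-wide to medium close-up"},
    },
    # The image stays pinned; only shot_01's prompt changes per iteration.
    "edges": [("hero_still", "shot_01")],
}
```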
For multi-shot work, duplicate the Seedance 2 node and re-wire each duplicate to the same character image. Vary only the prompt across the duplicates — different camera moves, different actions, different micro-emotions. Then drop an NLE export node downstream of all of them. The export node assembles the takes in the order you wire them, which is a cleaner workflow than re-rendering inside an editor.
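The multi-shot version of that pattern is a fan-out from one reference plus an export node that respects wiring order. A self-contained sketch in the same hypothetical notation:

```python
canvas = {
    "nodes": {"hero": {"type": "image", "ref": "character.png"}},
    "edges": [],
}

# One Seedance 2 Omni node per camera move, all fed by the same still.
moves = [
    "slow dolly in from medium-wide to medium close-up",
    "static camera, subject turns to camera left",
    "slow 90-degree orbit left at eye level",
]
for i, move in enumerate(moves, start=1):
    shot = f"shot_{i:02d}"
    canvas["nodes"][shot] = {"type": "video", "model": "seedance-2-omni",
                             "prompt": move}
    canvas["edges"].append(("hero", shot))

# The export node assembles takes in the order they are wired.
canvas["nodes"]["cut"] = {"type": "nle-export"}
canvas["edges"] += [(f"shot_{i:02d}", "cut") for i in range(1, len(moves) + 1)]
```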
When you need shots that are longer than the model's native cap, chain Seedance 2 into a Runway Aleph continuation node. Aleph is the cleanest extender we have measured for keeping Seedance's tonal grade intact. If Aleph is unavailable, Wan is the fallback. Do not chain Seedance into another Seedance node hoping for continuation — the per-frame seed shifts and you will see a visible cut.
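In the same notation, extension is one extra node wired downstream of the take, never a second Seedance node. A hedged sketch:

```python
def extend_take(canvas: dict, take: str, extender: str = "runway-aleph") -> str:
    """Wire a continuation node after a Seedance take.

    Hypothetical helper. Use extender="wan" as the fallback;
    never chain Seedance into Seedance, because the per-frame
    seed shifts and the join reads as a visible cut.
    """
    ext = f"{take}_ext"
    canvas["nodes"][ext] = {"type": "video", "model": extender,
                            "prompt": "continue the take, hold grade and pacing"}
    canvas["edges"].append((take, ext))
    return ext
```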
When to swap Seedance 2 for Kling 3 or Google Veo
Swap to Kling 3 when the shot is dominated by a human face speaking. Kling's lip-sync and micro-expression handling, especially through the Avatar variant, is meaningfully ahead of Seedance 2 for any shot longer than a couple of words of dialogue. If you have already generated the still in Martini, drop a Kling 3 or Kling Avatar node in parallel and compare; the canvas keeps both takes in the version tray so you can A/B without losing work.
Swap to Google Veo when the shot is a wide environmental establish — landscapes, weather, crowds in a plaza. Veo's long-range motion coherence is the strongest in this category right now, and it handles depth-of-field falloff better than Seedance for very wide shots. The tradeoff is cost; Veo is the most expensive of the three and worth reserving for hero environment frames.
Stay on Seedance 2 for everything in between — character close-ups without dialogue, product motion, kinetic typography over plates, and any shot where you have a strong reference image and want fast, faithful motion that respects the source frame.
The bottom line
Seedance 2 has earned its place as the default image-to-video node on the Martini canvas for most shot types. Pro for finished work, Lite for iteration, Omni when identity matters across shots. Treat each generation as a single take, write tight one-shot prompts, and lean on image input whenever you can. Reach for Kling Avatar for talking heads and Veo for long environmental establishes, but otherwise leave Seedance 2 in the slot.
The biggest unlock of running Seedance 2 on Martini specifically is the version tray and the wired-image pattern: you iterate the prompt twenty times against the same reference, keep every take, and assemble the chosen takes downstream without re-uploading anything. That loop is what turns Seedance 2 from "neat model" into a production tool.
Workflow example
A typical Seedance 2 workflow on Martini for a product reveal: drop a Nano Banana 2 image node, generate the hero still of the product on a marble surface, and lock the seed. Wire that image into a Seedance 2 Pro video node and write the prompt "product remains static for the first second, then slow dolly in from medium wide to extreme close-up, anamorphic 35mm look, soft window light from frame left." Render two or three takes and pick the strongest from the version tray, then duplicate the Seedance node, swap the prompt to a different camera move, and repeat. Wire all chosen takes into an NLE export node downstream and you have a finished sequence without touching an editor.
Related models and tools
- AI Video Upscaling (tool): upscale generated video outputs on Martini's canvas.
- AI Video Frame Extraction (tool): extract frames from video for reference and image-to-video workflows.
- AI Camera Control (tool): camera movement and angle control for AI video on Martini.
- ByteDance (provider): ByteDance's Seedance video and Seedream image model families on Martini.
- Kling (provider): Kling 3, O3, and Avatar video model workflows on Martini.
- Google (provider): Google's Veo video, Imagen image, and Nano Banana model workflows on Martini.
- Marble 3D AI (3D model): Marble 3D and world generation workflows on Martini.
- World Labs (world model): World Labs image/text-to-navigable-world workflows on Martini.
Related reading
- Awesome Seedance 2 Prompts: Curated Open-Source Prompt Library. A curated, open-source library of Seedance 2 prompts — copy-paste recipes for product spins, image-to-video motion, character shots, and cinematic camera moves. Maintained on GitHub, paired with the Martini canvas.
- Kling 3 Guide: Variants, Use Cases, and How to Choose. Kling 3, O3, and Avatar variants — when to use each, on Martini.
- How to Turn an Image Into Video With AI. End-to-end image-to-video workflow on Martini — model choice, motion control, and chaining shots.
Frequently asked questions
- Is Seedance 2 better than Seedance 1?
- For motion realism, hands, gait, and reflective or transparent surfaces, yes — Seedance 2 is a clear step up. For pure cost-per-take on a prototype, Seedance 1 is still a defensible pick. Most teams that have migrated to 2.0 do not go back.
- Should I pick Seedance 2 Pro or Lite?
- Pro for finished shots that will end up in a deliverable; Lite for prompt and reference exploration. The visual gap shrinks at lower resolutions and on simpler subjects, so Lite is usually fine for boards and tests.
- Does Seedance 2 do lip-sync?
- Roughly, for two or three words. For a real talking head, switch the node to Kling Avatar or run a dedicated lip-sync pass over the Seedance take. Do not rely on Seedance 2 for sustained dialogue.
- How do I keep one character consistent across multiple Seedance shots?
- Generate the character once as a still in Nano Banana 2 or Flux Kontext, then wire that single image into every Seedance 2 Omni node on the canvas. Vary only the motion prompt across nodes. The canvas keeps all takes in the version tray.
- How do I extend a Seedance 2 take past the length cap?
- Chain the Seedance node into a Runway Aleph continuation node. Aleph holds the grade and pacing of the source clip more cleanly than re-rolling Seedance. Wan is the second-best fallback if Aleph is not available for the shot.
- Can I use Seedance 2 for multi-shot sequences directly?
- No — Seedance 2 thinks in single takes. Build the shot list as separate Seedance nodes on the canvas, share a character reference image across them, and assemble the cut in the NLE export node downstream.
Ready to try it on the canvas?
Open Martini and fan your prompt across every frontier model in one workflow.