ByteDance
Pixverse Extend is the specialist for one job and one job only: take a video clip and seamlessly continue it. It does not generate from text, it does not animate a still image, and it does not edit content; its single job is to continue the footage while preserving the original motion, style, and lighting. For an editor who inherits a 5-second AI clip that needs to be 12 seconds, Pixverse drops onto the canvas as a downstream V2V node and extends the footage without re-prompting from scratch.
Pixverse Extend is V2V-only — it cannot produce footage from text or a single image. On Martini's canvas, route the source clip from a Seedance 2, Sora 2, Kling 3, or Hailuo 02 video node into a Pixverse Extend V2V node downstream. The source can also be a real-world clip uploaded into a video reference node.
Pixverse continuation reads cleanest when the input ends on a stable, readable frame rather than at a hard accent or an unfinished gesture. Use the frame-extraction tool node to scrub to a clean cut point, then pass that trimmed segment into Pixverse Extend. Cutting at the apex of a motion, where movement momentarily settles, produces the smoothest extensions.
Pixverse takes the source plus a target additional duration. Common pattern: a 5s source extended to 12s = a 7s extension target. The model preserves motion vector and style automatically, so no prompt is required for the basic extend operation. For the smoothest result, keep individual extension calls under 8s — chain two Pixverse Extend nodes if you need to add more than that.
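The duration math above can be sketched in Python. This is an illustrative helper, not a Martini or Pixverse API; the 8-second per-call ceiling is the rule of thumb from this guide, passed in as a parameter.

```python
def plan_extension_chain(source_s: float, target_s: float, max_call_s: float = 8.0):
    """Split the total extension (target minus source duration) into a
    chain of Pixverse Extend calls, each at or under the per-call cap."""
    remaining = target_s - source_s
    if remaining <= 0:
        return []  # source already meets or exceeds the target
    calls = []
    while remaining > 0:
        step = min(remaining, max_call_s)
        calls.append(step)
        remaining -= step
    return calls

# A 5s source extended to 12s needs one 7s call;
# a 5s source extended to 20s needs two chained nodes (8s + 7s).
print(plan_extension_chain(5.0, 12.0))
print(plan_extension_chain(5.0, 20.0))
```

Each element of the returned list maps to one Pixverse Extend node on the canvas, wired in series.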
If the extension needs to introduce a new beat (e.g. character starts walking after standing in source), Pixverse accepts a short motion hint. Keep it minimal: "character begins walking forward, same lighting." Avoid radical prompt changes — Pixverse is built for continuity, not transformation. Heavy prompts will fight the preservation step.
For B-roll inserts that need to loop seamlessly, generate the extension and route it into a loop builder downstream. Pixverse extensions chain into seamless loops cleanly because the start frame of the extension is identical to the end frame of the source — perfect cross-fade material.
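The loop builder's internals are not documented here, but the cross-fade it performs over the matching frames can be sketched as a linear blend ramp. This is a generic illustration, not the loop builder's actual implementation.

```python
def crossfade_weights(overlap_frames: int):
    """Per-frame blend weights for the incoming clip across the overlap
    region: 0.0 (all outgoing clip) ramping linearly to 1.0 (all incoming).
    Because the extension's first frame equals the source's last frame,
    even a short ramp here reads as seamless."""
    if overlap_frames <= 1:
        return [1.0] * overlap_frames
    return [i / (overlap_frames - 1) for i in range(overlap_frames)]

print(crossfade_weights(5))
```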
On the canvas, route the original source and the Pixverse extension into the sequence builder as adjacent clips. Because the extension's first frame matches the source's last frame, the cut between them is effectively invisible. Export the timeline to your NLE; the editor sees a single longer clip in the sequence.
Pixverse extends with no prompt for pure motion-preservation continuation. The cleanest result.
(no prompt — basic extension, 7 second target duration)
Minimal new-beat hint — only when the extension needs to introduce action that was not in the source.
character begins walking forward, same lighting
Continuation hint that reinforces the original motion vector. Useful when source motion was subtle.
camera continues slow dolly forward, same scene, same light
Pixverse Extend is V2V-only — it requires an input clip and cannot generate from text or single image.
Trim the input to end at a stable mid-motion frame for the smoothest continuation.
Keep individual extension calls under 8 seconds; chain multiple Pixverse Extend nodes for longer needs.
Skip the prompt for pure motion-preservation; only add a hint if you need to introduce a new beat.
Pixverse extensions chain into seamless loops because the extension start frame matches the source end frame exactly.
Pixverse Extend outputs at the source resolution and frame rate, preserving motion, style, and lighting in the continuation. Render time is 60-120 seconds for a 7-second extension. There is no audio, no resolution upgrade, and no internal reframing — the model's job is to extend, not transform. For B-roll loops or hero shot lengthening before a 15s ad cut, Pixverse is the cleanest pipeline. If you also need to restyle the footage, route through Runway Aleph or Wan VACE Video Edit instead.
Connect Pixverse Extend with other AI models on Martini's infinite canvas. No GPU required — start free.
Get Started Free
Alibaba
Wan 2.6 extends clips by chaining its general I2V mode off the source clip's last frame — a budget-friendly approach when Pixverse Extend is overkill or you want creative control over the extension. Drop the source into a frame-extraction tool node, pipe the last frame into Wan 2.6 with a continuation prompt, and the model generates new footage that picks up where the original left off. For high-volume production where credits matter and the extension can absorb a small style shift, Wan is the practical pick.
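Grabbing "the last frame" means seeking one frame interval before the clip's end. A minimal sketch of that arithmetic, assuming a constant frame rate (the function name is hypothetical, not a tool-node API):

```python
def last_frame_time(duration_s: float, fps: float) -> float:
    """Timestamp of the final frame in a constant-frame-rate clip.
    Frames sit at 0, 1/fps, 2/fps, ..., so the last one starts one
    frame interval before the clip's end time."""
    return duration_s - 1.0 / fps

# For a 5-second clip at 25 fps, the last frame starts at 4.96s.
print(last_frame_time(5.0, 25.0))
```

Outside the canvas, ffmpeg's `-sseof` option offers the same idea on the command line by seeking relative to end-of-file before decoding.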
View guide
Runway
Runway Aleph is V2V-only — it operates on existing footage and preserves the original camera and timing exactly. While its primary use is style transfer and reference-driven re-render, Aleph can also be used for extension by pairing the source clip with a continuation reference image and a directional prompt. The output picks up the source's motion vector while honoring the new reference. For an editor who needs a hero shot lengthened with a specific look (a brand seasonal pivot mid-clip), Aleph is the most controllable extend option.
View guide