Frame extraction on Martini pulls individual frames out of a video clip — first frame, last frame, or any timestamp — so you can reuse them as image-to-video starting frames, character reference, or storyboard panels. Use it to bridge takes, lock character identity across shots, or harvest cleanup-ready stills from existing footage without leaving the canvas.
Frame extraction takes a video clip — generated on Martini or uploaded — and outputs one or more still frames at specified timestamps. The most common pull is the last frame of a take, used as the starting frame for the next take so motion appears continuous across cuts. You can also pull the first frame as a thumbnail, midpoint frames as story beats, or any custom timestamp as a reference still for downstream image generation.
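Under the hood this is just a seek and a decode. Here is a rough sketch of the operation in Python with OpenCV, not Martini's actual implementation; the function name and file paths are illustrative:

```python
import cv2

def extract_frame(video_path: str, position: str = "last",
                  timestamp_s: float | None = None):
    """Return a single frame (BGR array): 'first', 'last', 'midpoint',
    or any timestamp in seconds."""
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        raise IOError(f"cannot open {video_path}")
    fps = cap.get(cv2.CAP_PROP_FPS)
    # Frame count is approximate for some codecs; clamp the index to be safe.
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if timestamp_s is not None:
        index = min(int(timestamp_s * fps), total - 1)
    elif position == "first":
        index = 0
    elif position == "midpoint":
        index = total // 2
    else:  # "last"
        index = total - 1
    cap.set(cv2.CAP_PROP_POS_FRAMES, index)  # seek, then decode one frame
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise IOError(f"could not decode frame {index} of {video_path}")
    return frame
```

Calling `extract_frame("take_01.mp4", "last")` returns the still you would seed the next take with.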
On the canvas, frame extraction is a tool node that accepts a video input and emits image outputs. From there, the extracted frame flows into another video model for image-to-video continuation, into Nano Banana 2 or Flux Kontext for character editing, into background removal for compositing, or into the export bundle as a deliverable still.
Generate or upload your source video first. Frame extraction works on any video on the canvas — Seedance 2, Kling 3, Runway, Sora, or an upload. The cleaner the source, the more usable the extracted frames.
Drop a Frame Extraction tool node and wire your video into its input. Pick the timestamp or named position (first frame, last frame, midpoint) you want. Most workflows only need the first and last frames, which extractor variants expose as one-click presets.
Submit and the extracted frames return as image nodes on the canvas. Inspect them at full resolution; if the source video has motion blur on a critical frame, pick an adjacent timestamp where the subject is sharper.
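If you want to automate the "pick a sharper neighbor" step, variance of the Laplacian is a cheap, standard blur score. A sketch, reusing OpenCV as above; the window size is an arbitrary choice:

```python
import cv2

def sharpest_near(video_path: str, index: int, window: int = 5):
    """Return (frame_index, frame) with the highest variance-of-Laplacian
    sharpness score within +/- `window` frames of `index`."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    best_score, best = -1.0, None
    for i in range(max(0, index - window), min(total, index + window + 1)):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()  # higher means sharper
        if score > best_score:
            best_score, best = score, (i, frame)
    cap.release()
    return best
```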
Wire the extracted frame into the next stage. For continuation, feed the last frame as the starting image of a new Seedance 2 or Kling 3 take. For editing, send the frame to Nano Banana 2 to swap costume or background, then use the edited frame as the starting frame for a fresh video. For deliverables, drop frames straight into the export bundle.
Repeat per take. Multi-shot sequences typically cycle through generate, extract the last frame, regenerate from that frame; chaining takes is the simplest way to build longer-form AI video without expensive single-call generations.
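The loop itself is simple. A sketch of the chaining pattern, assuming the `extract_frame` helper from the earlier sketch; `generate_take` is a hypothetical stand-in for whichever video model node you call, not a real Martini API:

```python
def generate_take(prompt: str, start_image=None) -> str:
    # Hypothetical stand-in for an image-to-video model call (a Seedance 2
    # or Kling 3 node, say); returns the path of the rendered clip.
    raise NotImplementedError("wire this to your video model of choice")

def build_sequence(prompt: str, shots: int, first_image=None) -> list[str]:
    """Generate a take, extract its last frame, and seed the next take with it."""
    takes, start = [], first_image
    for _ in range(shots):
        clip = generate_take(prompt, start_image=start)
        takes.append(clip)
        start = extract_frame(clip, "last")  # helper from the earlier sketch
    return takes
```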
Pull the last frame of a Seedance 2 take and feed it back in to continue the motion seamlessly.
Bridge Kling 3 takes by extracting the final frame and starting the next take from it.
Edit an extracted frame in Nano Banana 2 to change wardrobe, background, or product before regenerating the next take.
You can extract any frame by timestamp. First and last are presets because they are the most common in continuation workflows.
For image-to-video continuation, extract at the source resolution. For deliverable stills or for editing in Nano Banana 2, upscale first so the next stage has cleaner input.
Feeding the exact last frame as the starting image is the strongest identity bridge between takes; it outperforms re-prompting the character from scratch.
Frame extraction is cheap relative to generation; the cost is essentially server compute, not a model call. Use it freely while iterating.
Extracted frames are returned as PNG by default, preserving full quality. They drop into the canvas as image nodes that any downstream image or video tool can consume.
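Because PNG is lossless, the extracted pixels survive a write/read round trip bit for bit. A quick sanity check, again assuming the `extract_frame` sketch from above with an illustrative file path:

```python
import cv2
import numpy as np

frame = extract_frame("take_01.mp4", "last")  # helper from the earlier sketch
cv2.imwrite("last_frame.png", frame)          # PNG write is lossless
assert np.array_equal(cv2.imread("last_frame.png"), frame)
```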
Chain AI Video Frame Extraction with other AI models on Martini's infinite canvas. No GPU required, start for free.