Video breakdown on Martini analyses an existing video — generated or uploaded — and decomposes it into shots, keyframes, and reusable references you can drop back onto the canvas. Use it to reverse-engineer a reference film into a multi-shot project, audit a generated cut for missing coverage, or harvest stills and beats for downstream image and video generation.
Video breakdown takes a video clip and runs shot detection and keyframe extraction across it. The output is a timeline of detected cuts with a representative frame per shot, so a 30-second reference film resolves into six or eight panels you can study and reuse. Some breakdown variants also tag camera movement, scene type, and dominant subject so you can search the breakdown by intent.
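Under the hood, shot detectors of this kind typically compare consecutive frames and split wherever the difference spikes past a threshold, then pick one representative frame per segment. A minimal sketch of the idea (a hypothetical mean-difference detector, not Martini's actual implementation):

```python
import numpy as np

def detect_shots(frames, threshold=0.3):
    """Return shot boundaries: indices where a new shot begins.

    frames: list of HxW (or HxWxC) float arrays in [0, 1].
    threshold: lower values split more aggressively (higher sensitivity).
    """
    boundaries = [0]  # the first frame always starts a shot
    for i in range(1, len(frames)):
        diff = np.abs(frames[i] - frames[i - 1]).mean()
        if diff > threshold:
            boundaries.append(i)
    return boundaries

def keyframes(frames, boundaries):
    """Pick one representative frame per shot: here, simply the middle frame."""
    bounds = boundaries + [len(frames)]
    return [(b0 + b1) // 2 for b0, b1 in zip(bounds, bounds[1:])]

# Synthetic clip: 10 dark frames, a hard cut, then 10 bright frames.
clip = [np.zeros((8, 8))] * 10 + [np.ones((8, 8))] * 10
cuts = detect_shots(clip)
print(cuts)                    # [0, 10]
print(keyframes(clip, cuts))   # [5, 15]
```

Production detectors use richer signals (colour histograms, motion vectors, learned features), but the shape of the output is the same: boundaries plus one frame per shot.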
On the Martini canvas, video breakdown spawns multiple downstream nodes — one image node per detected shot — that you can wire into the next stage of your project. Common follow-ups include feeding each shot frame into Nano Banana 2 to generate stylised variants, into Seedance 2 or Kling 3 for image-to-video reproduction, or into Flux Kontext to swap subject and re-render in your own style.
Upload or select your source video on the canvas. Reference films, trailers, prior generations, or competitor ads all work. Cleaner cuts produce sharper shot detection — heavily edited or fast-cut content can over-segment.
Drop a Video Breakdown tool node and wire the source into its input. Configure thresholds if the model exposes them — sensitivity controls how aggressively the detector splits on motion or lighting changes. Default settings are tuned for typical narrative pacing.
Submit and the breakdown returns as a panel of image nodes (one per detected shot) plus optional metadata per shot. Walk the timeline left-to-right and decide which beats matter for your project. Delete shots you don't need to keep the canvas clean.
Send selected shot frames into the next stage. For multi-shot AI video, feed each frame into Seedance 2 or Kling 3 for image-to-video. For style transfer, route through Flux Kontext. For storyboards, drop the frames into the export bundle. For character consistency, lock identity per frame in Nano Banana 2 before passing forward.
Once each shot has been regenerated to spec, assemble the final cut in your editor. The breakdown panels become both your storyboard and your continuity reference for the rest of the project.
Regenerate broken-down shots with Seedance 2 for cinematic motion in your own style.
Pair with Kling 3 to recreate character-driven shots from the breakdown panels.
Edit individual breakdown frames before re-rendering, locking new characters or products into the recreated shots.
Frame extraction pulls a single frame at a specific timestamp. Video breakdown runs full shot detection across the clip and returns one representative frame per detected shot, plus shot boundaries.
Yes — feed any video on the canvas through the breakdown node, including ones you generated earlier. This is useful for auditing whether a multi-shot generation actually has the coverage you intended.
Hard cuts work best. Cross-dissolves, wipes, and morphs can fool shot detection — pre-trim or use the frame extraction tool for those segments.
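The reason gradual transitions evade detection: a hard cut concentrates the entire visual change between two adjacent frames, while a dissolve spreads the same change across many frames, so no single frame-to-frame difference clears the threshold. A quick illustration, assuming the same simple mean-difference detector sketched earlier:

```python
import numpy as np

def max_frame_diff(frames):
    """Largest mean absolute difference between any two adjacent frames."""
    return max(np.abs(b - a).mean() for a, b in zip(frames, frames[1:]))

dark, bright = np.zeros((8, 8)), np.ones((8, 8))

# Hard cut: the full dark-to-bright change lands between two frames.
hard_cut = [dark] * 5 + [bright] * 5
print(max_frame_diff(hard_cut))   # 1.0 -- well above a 0.3 threshold

# 10-frame cross-dissolve: the same change spread over nine small steps.
dissolve = [dark * (1 - t) + bright * t for t in np.linspace(0, 1, 10)]
print(max_frame_diff(dissolve))   # ~0.11 -- slips under a 0.3 threshold
```

Raising sensitivity (lowering the threshold) catches more dissolves but also over-segments on camera motion, which is why pre-trimming those segments is the safer fix.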
Lock the character look once with Nano Banana 2 or Flux, then use that character image plus each broken-down frame as combined inputs into Seedance 2 or Kling 3 image-to-video.
Absolutely. Breakdown is the fastest way to turn a reference cut into a panel of starting frames you can re-style, then regenerate end-to-end in your own visual language.
Chain AI Video Breakdown with other AI models on Martini's infinite canvas. No GPU required, and it's free to get started.
Start for free