AI video upscale on Martini takes a generated clip and re-renders it at higher resolution while sharpening detail, recovering edges, and smoothing temporal noise. Use it as the final step after Sora 2, Seedance, Kling, or Runway generations so your delivery files hit 1080p or 4K with cleaner faces, text, and texture instead of soft AI mush.
Video upscale takes a low or mid-resolution clip — typically 480p, 720p, or 1080p output from a base AI video model — and reconstructs each frame at a higher target resolution. Unlike a simple bicubic resize, the model hallucinates plausible high-frequency detail across faces, hair, fabric, and product surfaces, then enforces temporal consistency across frames so the upscaled footage doesn't shimmer or flicker.
On Martini's canvas, video upscale appears as a downstream tool node connected to the output of any video generation node. You feed the source clip in, choose a scale factor (commonly 2x or 4x), and the upscaled MP4 writes back to a new node. This keeps the original generation untouched, so you can A/B compare or rerun upscale with different settings without regenerating the underlying video.
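The scale-factor arithmetic is straightforward: each pass multiplies both dimensions. A quick sketch (the resolution table and helper are illustrative, not part of Martini's API):

```python
# Illustrative only: map common base resolutions to upscaled output sizes.
BASE_RESOLUTIONS = {
    "480p": (854, 480),
    "720p": (1280, 720),
    "1080p": (1920, 1080),
}

def upscaled_size(base: str, factor: int) -> tuple[int, int]:
    """Output dimensions after applying a uniform scale factor."""
    w, h = BASE_RESOLUTIONS[base]
    return (w * factor, h * factor)

print(upscaled_size("1080p", 2))  # (3840, 2160): a 2x pass on 1080p yields 4K UHD
```

A 2x pass on 720p lands at 1440p, so clips that must deliver at true 4K usually need either a 1080p base or a 4x factor.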
Generate your base clip first using any video model — Seedance 2 for cinematic motion, Sora 2 for narrative shots, Kling 3 for character work, or Runway Gen 4 for stylised aesthetics. Render at the model's native resolution and accept that base outputs are typically softer than a real camera capture.
Drop a Video Upscale tool node onto the canvas and connect it to the video output of your source generation. Choose a scale factor (2x is the safe default; reserve 4x for short hero shots, since it takes longer to render and costs more credits).
Submit the upscale job. Martini queues the task through its async job system, and the upscaled MP4 lands in the new node when ready. If the result still looks soft, try a lower scale factor with a sharper denoise, or regenerate the source clip at a higher base resolution before upscaling.
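Martini's job API isn't documented here, but the queue-then-poll pattern behind the async job system looks roughly like this sketch; `fetch_status` and the status fields are assumptions standing in for whatever the real endpoint returns:

```python
import time

def wait_for_upscale(fetch_status, poll_seconds=5, max_polls=120):
    """Poll an async job until it finishes; return the result URL.

    fetch_status is assumed to return a dict such as
    {"state": "queued" | "running" | "done" | "failed", "url": "..."}.
    """
    for _ in range(max_polls):
        status = fetch_status()
        if status["state"] == "done":
            return status["url"]
        if status["state"] == "failed":
            raise RuntimeError("upscale job failed; check the node for details")
        time.sleep(poll_seconds)
    raise TimeoutError("upscale job did not finish in time")
```

In practice the canvas handles this polling for you; the sketch only shows why the upscaled MP4 "lands in the new node when ready" rather than returning instantly.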
For multi-shot edits, batch upscale each shot in parallel by duplicating the upscale node per source. Once everything has been upscaled, export the assets and assemble in your NLE for color and audio finishing.
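Because each upscale node reads only its own source, the batch step parallelises naturally. A minimal thread-pool sketch, where `upscale` is a hypothetical stand-in for submitting one node's job (not Martini's SDK):

```python
from concurrent.futures import ThreadPoolExecutor

def upscale_all(shots, upscale, max_workers=4):
    """Run one upscale job per shot concurrently, preserving input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(upscale, shots))
```

Order preservation matters here: the results come back in the same sequence as the shot list, so the assembled edit stays in cut order.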
Pair upscale with Seedance 2 cinematic generations to deliver 4K hero shots.
Use upscale as the final pass on Sora 2 narrative clips for high-resolution delivery.
Sharpen Kling 3 character footage where faces or hair need extra fidelity.
Lift stylised Runway Gen 4 output to broadcast-grade resolution.
If a model offers a higher native resolution (some video models do), prefer that. Use upscale when the base model maxes out below your delivery spec, or when you're polishing a take you've already approved.
Will upscaling change the motion in my clip? No. Modern video upscalers preserve original motion and only redraw detail per frame; they won't change camera moves or character action.
Can I upscale a clip more than once? Technically yes, but quality degrades after the first pass. Aim for one well-tuned upscale per clip; don't stack passes.
How much does upscaling cost? Cost scales with resolution and clip length. Check the credit estimate Martini shows before submitting; 4x upscales on long clips can rival the cost of a fresh generation.
Does upscaling increase frame rate? No — upscale is spatial, not temporal. For frame interpolation, use a dedicated interpolator or regenerate at a higher fps if the base model supports it.
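As a rule of thumb, cost tracks output pixels times frame count, which is why a 4x pass costs roughly four times a 2x pass on the same clip. A back-of-envelope estimator (the per-megapixel-frame rate is an invented placeholder, not Martini's actual pricing):

```python
def estimate_credits(width, height, fps, seconds, factor,
                     rate_per_mp_frame=0.001):
    """Rough estimate: output megapixels per frame x frame count x rate.

    The rate is an illustrative assumption, not real pricing; only the
    scaling relationship is the point.
    """
    out_megapixels = (width * factor) * (height * factor) / 1e6
    frames = fps * seconds
    return out_megapixels * frames * rate_per_mp_frame

two_x = estimate_credits(1920, 1080, 24, 10, factor=2)
four_x = estimate_credits(1920, 1080, 24, 10, factor=4)
print(four_x / two_x)  # 4.0 -- cost grows with the square of the scale factor
```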
Chain AI Video Upscaling with other AI models on Martini's infinite canvas. No GPU required, and it's free to start.