AI camera control on Martini lets you direct virtual camera moves — push-in, pull-out, orbit, pan, tilt, dolly, and tracking shots — across video models like Sora 2, Kling 3, Runway Gen 4, and Google Veo. Instead of describing camera motion in prose and hoping for the best, you specify the move explicitly so each take matches the storyboard you planned.
Camera control parameters tell a video model how a virtual camera should move during the clip — direction, speed, and shape of the move. Supported moves typically include push-in, pull-out, pan left or right, tilt up or down, orbit around the subject, dolly forward or back, crane up or down, and roll. Some models accept named presets; others accept a combination of motion vectors that you blend per clip.
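To make the shape of such a directive concrete, here is a minimal sketch of what a per-clip camera parameter could look like. This is illustrative only: the field names (`move`, `intensity`) and the set of allowed moves are assumptions drawn from the moves listed above, not Martini's actual schema.

```python
from dataclasses import dataclass

# Hypothetical per-clip camera directive. Field names are illustrative,
# not Martini's documented API.
@dataclass
class CameraDirective:
    move: str          # e.g. "push_in", "orbit", "pan_left"
    intensity: float   # 0.0 (subtle) to 1.0 (dramatic)

    def validate(self) -> None:
        allowed = {
            "push_in", "pull_out", "pan_left", "pan_right",
            "tilt_up", "tilt_down", "orbit", "dolly", "crane", "roll",
        }
        if self.move not in allowed:
            raise ValueError(f"unknown move: {self.move}")
        if not 0.0 <= self.intensity <= 1.0:
            raise ValueError("intensity must be in [0, 1]")

directive = CameraDirective(move="push_in", intensity=0.3)
directive.validate()
```

Models that accept named presets would consume only `move`; models that blend motion vectors would map `move` plus `intensity` onto their vector inputs.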
On Martini, camera control surfaces directly inside compatible video nodes — Sora 2, Kling 3, Runway Gen 4, and Google Veo all expose camera-direction parameters in their node UI. Rather than fighting prompt phrasing, you pick the move from the node's camera control field and submit. The model then renders frames consistent with that virtual camera path while keeping subject identity, lighting, and environment locked.
Plan camera language at the storyboard stage. Decide which moves belong to which shots before generating — establishing wide with a slow push-in, medium with a static lock, close-up with a subtle orbit, etc. This planning pays back tenfold when assembling the final cut.
Pick a video model that exposes camera control. Sora 2 and Kling 3 are strong defaults; Runway Gen 4 is preferred for stylised work; Google Veo is a good fit for naturalistic photography. Drop the model node onto the canvas with your image input or text prompt.
In the node UI, find the camera control field. Pick the move from the dropdown — push-in, pull-out, pan-left, pan-right, orbit, dolly-zoom, etc. — and adjust intensity if the model exposes it. Lower intensity reads more cinematic; higher reads dramatic but can cause warping or instability.
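Since higher intensity risks warping or instability, one practical habit is to cap requested intensity at a conservative ceiling and raise it only when a take demands more drama. The helper below is a hypothetical sketch of that policy; the ceiling value is an assumption, not a documented Martini default.

```python
# Hypothetical helper: clamp a requested camera-move intensity to a
# conservative band, since higher values can warp or destabilise the shot.
def safe_intensity(requested: float, ceiling: float = 0.5) -> float:
    """Clamp a requested intensity to the range [0.0, ceiling]."""
    return max(0.0, min(requested, ceiling))

print(safe_intensity(0.9))  # clamped down to 0.5
print(safe_intensity(0.3))  # passed through unchanged
```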
Render the take, review it on the canvas, and iterate. If the move overshoots or feels off-axis, lower intensity or pick an adjacent move. Once each shot's camera move is locked, run video upscale on the finals and assemble the cut in your editor.
Sora 2 honours explicit camera control parameters for narrative and cinematic work.
Strong camera control with smooth motion across character and product shots.
Stylised camera moves with film-grade aesthetic — preferred for branded creative.
Photographic camera language for naturalistic shots; dolly and orbit feel grounded.
Sora 2, Kling 3, Runway Gen 4, and Google Veo are the strongest. Other models accept camera language in the prompt but with less reliability.

Most models prefer one dominant move per clip. Stacking a dolly with an orbit usually produces inconsistent results — split into two takes and cut between them instead.
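The "one dominant move per clip" rule can be expressed as a simple planning step: instead of stacking moves in one directive, expand the request into single-move takes to cut between. A minimal sketch, with an assumed take structure:

```python
# Hypothetical planner: expand a compound camera request into one
# single-move take per move, since stacking moves in one clip tends
# to produce inconsistent results.
def split_moves(moves: list[str]) -> list[dict]:
    """Return one single-move take per requested move."""
    return [{"take": i + 1, "move": m} for i, m in enumerate(moves)]

takes = split_moves(["dolly", "orbit"])
# two clean takes to cut between, rather than one stacked directive
```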
Pick the same model and similar intensity across the sequence. Mixing models within a single sequence makes camera language inconsistent because each model interprets the moves differently.
Yes — providing a starting frame plus a camera control directive is the most reliable way to get a controlled move. The first frame anchors composition while the control directs the move.
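Conceptually, that pairing of an anchor frame with an explicit move might look like the payload below. Every key here (`model`, `first_frame`, `prompt`, `camera`) is a hypothetical illustration of the idea, not a documented Martini request format, and the file path is a placeholder.

```python
# Hypothetical request shape: a starting frame anchors composition while
# the camera directive controls the move. Keys and path are illustrative.
request = {
    "model": "sora-2",
    "first_frame": "shots/establishing_wide.png",  # anchors composition
    "prompt": "slow reveal of the skyline at dusk",
    "camera": {"move": "push_in", "intensity": 0.25},
}
```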
Check that you set the dedicated camera-control field rather than just prose. Also ensure the prompt does not contradict the move (e.g., do not say "static shot" if you set push-in).
Chain AI Camera Control with other AI models on Martini's infinite canvas. No GPU required, start for free.
Start for free