AI Canvas Workflow on Martini
The canvas is the product. Most AI tools force you into a single linear prompt; Martini gives you a node graph — image, video, audio, world, 3D, tool, and script nodes wired through edges, fan-out, chain, and reference slots. This page is the meta-feature hub: how the canvas itself works, and why every other feature on Martini is built on top of it.
What this feature solves
Tab-based AI tools sell single models behind single prompts. You log into one tool for image, another for video, another for voice, another for upscale. Each tab has its own subscription, its own quirks, its own export format. To produce anything more complicated than a one-model output, creators move files manually between tools — download, upload, download, upload — losing time and fidelity at every handoff. The work itself becomes navigating tools rather than navigating creative decisions, which is the opposite of what the technology should enable.
Even tools that try to chain models — sequential prompt-tab interfaces, AI agents, automated pipelines — typically force a linear flow. You move from step A to step B to step C, with no way to fan out, no way to branch on a decision, no way to revisit step A without rebuilding step B. Real production work does not flow linearly. Producers compare options, iterate on upstream decisions, swap models mid-flow, and need every dependent downstream step to refresh from the new upstream choice. Linear pipelines collapse under that complexity.
And there is the collaboration problem. AI tools that target individual creators rarely work for teams. The work lives in one person's account, the references sit on their laptop, the iteration history disappears after the session, and the next teammate has no way to see what was tried, what was rejected, or how the final asset came to be. Production at any meaningful scale needs the workflow to be a shared, persistent, lineage-aware artifact — not a transient chat with an AI tab.
Why Martini is different
Martini's canvas is a node graph. Every node is a primitive — image node, video node, audio node, world node, 3D node, tool node, script node, text node. Every edge is a connection — a reference flowing into a generation, a generation flowing into the next step, a tool refining the previous output. Fan-out is native: one node feeds many in parallel. Chain is native: one node feeds the next sequentially. Reference slots accept multiple anchors with distinct roles. Lineage tracks which inputs fed which outputs. Templates capture proven chains for reuse. This is the canvas mental model.
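The mental model above can be sketched as a tiny data structure. This is an illustrative sketch only, not Martini's internals or any public API (Martini exposes no SDK); the `Node` and `Canvas` names are invented for illustration.

```python
# Hypothetical sketch of the canvas mental model -- not Martini's API.
# Nodes are primitives; each node records which upstream nodes feed it.
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    kind: str                                   # "image", "video", "audio", "tool", ...
    inputs: list = field(default_factory=list)  # upstream node ids

class Canvas:
    def __init__(self):
        self.nodes = {}

    def add(self, node_id, kind, inputs=()):
        self.nodes[node_id] = Node(node_id, kind, list(inputs))

    def lineage(self, node_id):
        """Every upstream node that fed this output, transitively."""
        seen, stack = set(), list(self.nodes[node_id].inputs)
        while stack:
            up = stack.pop()
            if up not in seen:
                seen.add(up)
                stack.extend(self.nodes[up].inputs)
        return seen

canvas = Canvas()
canvas.add("ref", "image")                    # one reference node...
canvas.add("gen_a", "image", inputs=["ref"])  # ...fans out to two
canvas.add("gen_b", "image", inputs=["ref"])  # parallel generations,
canvas.add("vid", "video", inputs=["gen_a"])  # one of which chains on

print(sorted(canvas.lineage("vid")))  # ['gen_a', 'ref']
```

One reference wired once feeds both generations (fan-out), `gen_a` feeds `vid` (chain), and `lineage` recovers exactly which inputs fed which output.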
Multi-model orchestration is the wedge. The canvas runs Sora 2, Seedance 2, Kling 3, Runway Gen-4, Veo, Hailuo, and Luma Ray for video; Nano Banana 2, Imagen 4, Flux Kontext, Midjourney, and Seedream for image; ElevenLabs and Fish Audio S2 for audio; world-labs and image-to-3d-world for 3D and world; tool nodes for upscale, background removal, lip-sync, frame extraction, and camera control. The same canvas reads them all and the chain is fluid across modalities. Image flows into video, video flows into audio, world flows into multi-shot video, all on one screen.
Workspace and collaboration are first-class. Real teams share canvases like Figma boards. Multiple editors work simultaneously, lineage persists across sessions, references are managed assets rather than per-tab uploads, and templates can be shared across the workspace. To be clear about the boundaries: Martini is a browser-cloud canvas — not a local model runner, not a Python-node ComfyUI replacement, not an SDK platform. The wedge is curated cloud models plus collaborative canvas plus NLE-native export, which is what production teams actually need.
Common use cases
Multi-model orchestration on one canvas
Run image, video, and audio models in parallel on the same project. Pick the right engine per shot rather than committing to one tool for everything.
Reference-anchored fan-out for hero campaigns
Drop one reference, fan out across multiple downstream models with consistent identity, scene, or style.
Linear chain from concept to export
Image generation → video generation → lip-sync → audio score → sequence builder → NLE export, all on one canvas.
Template reuse across campaigns
Lock a proven chain as a canvas template; future campaigns swap inputs and re-run rather than rebuilding from scratch.
Lineage-aware iteration on long projects
See exactly which references and prompts produced which outputs. Swap an upstream reference and watch the chain refresh downstream.
Real-time collaboration on a single project canvas
Workspace canvases let multiple editors work simultaneously — Figma-style multiplayer applied to AI production.
Branching version exploration
Try alternate creative directions on parallel branches of the same canvas without losing the previous direction.
NLE-export-aware production planning
Build the canvas with editor handoff in mind from step one — frame rate, codec, sequence order, and bundling all flow through to the editor cleanly.
Recommended model stack
nano-banana-2
Image — Reference-driven image generation that anchors the canvas chain across modalities.
sora-2
Video — Long-take video generation that pairs with image and audio nodes on the same canvas.
kling-3
Video — Cinematic video output that integrates with image, audio, and world nodes downstream.
seedance-2
Video — Reference-faithful video generation for product, brand, and character chains.
elevenlabs
Audio — Voiceover and dialogue nodes that wire into the video sequence on the same canvas.
flux-kontext
Image — Edit-aware image refinement that lives in the chain alongside generation and video models.
How the workflow works in Martini
1. Open a workspace canvas
Sign in, open a workspace, create a new canvas. Workspace canvases are shared across the team and persist across sessions.
2. Drop the upstream reference nodes
Image references, brand color script, character anchors, scene references — every upstream input lives as a labeled node at the top of the canvas.
3. Build the chain by wiring nodes
Connect references into image nodes, image nodes into video nodes, video nodes into audio nodes, audio nodes into sequence builder. Each edge is an explicit data flow.
4. Fan out for hero shots and decision points
Duplicate nodes and assign different models to compare takes against an identical input. Pick the winner and continue the chain.
5. Iterate with lineage preserved
Swap an upstream reference or prompt; the chain re-renders downstream against the new source. Lineage tracking shows which inputs fed which outputs.
6. Save the canvas as a template
Once the chain works, save the canvas. Future projects start from the template — a proven chain that scales across campaigns.
7. Export to NLE and ship
NLE export bundles the sequence at clean frame rates and codecs. Drop into Premiere Pro, DaVinci Resolve, or Final Cut Pro and finish.
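Step 5's downstream refresh can be sketched as staleness propagation over the graph. The edges map and node names below are hypothetical; this is a toy model of the behavior described, not Martini's implementation.

```python
# Toy sketch of downstream refresh -- illustrative only, not Martini internals.
# When an upstream node changes, every transitive downstream node goes stale
# and is re-rendered against the new source.

# edges maps each node to the nodes that consume its output
edges = {
    "reference": ["image_gen"],
    "image_gen": ["video_gen"],
    "video_gen": ["lipsync", "audio_score"],
    "lipsync": ["sequence"],
    "audio_score": ["sequence"],
    "sequence": [],
}

def downstream(node):
    """All nodes that must refresh when `node` changes."""
    stale, stack = set(), list(edges[node])
    while stack:
        n = stack.pop()
        if n not in stale:
            stale.add(n)
            stack.extend(edges[n])
    return stale

# Swap the upstream reference: everything below it refreshes.
print(sorted(downstream("reference")))
# Swap only the audio score: just the sequence rebuilds.
print(sorted(downstream("audio_score")))
```

The same propagation explains why iteration stays cheap: changing a late node like the audio score touches only the sequence, while changing the reference re-renders the whole chain.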
Example workflow
A small studio is producing a one-minute brand film for a tech client. They open a workspace canvas and three teammates collaborate live.
- Upstream nodes: brand color script, founder portrait reference, product reference, mood-board style anchor.
- Image chain: portrait reference + style anchor → Nano Banana 2 generating the founder in editorial style; product reference + style anchor → Flux Kontext generating four product placements.
- Video chain: each polished still chains into a Sora 2 or Kling 3 video node depending on the shot type — long-take establish on Sora, hero close-up on Kling.
- Audio chain: ElevenLabs node generating the voiceover from the script, plus a music bed brought in from Suno.
- Sequence and export: the sequence builder packages the cuts with voiceover and music aligned; NLE export delivers the sequence to Premiere as ProRes 24p, and the editor finishes the cut.

The entire production — concept references through editor handoff — sits on one shared canvas, with lineage preserved for future revisions and a saved template ready for the next client.
Tips and common mistakes
Tips
- Treat references as upstream nodes, not per-generation uploads. Wire once, fan out — that is the canvas advantage.
- Label nodes clearly. After ten nodes the canvas becomes unreadable without labels.
- Save canvases as templates the moment a chain works. Production scales on reuse, not on rebuilding.
- Use fan-out for hero decisions and templates for catalog volume. The two patterns serve different cost and iteration profiles.
- For team work, use workspace canvases — collaboration, lineage, and shared assets only show up at workspace scale.
Common mistakes
- Treating Martini as a single-prompt tab tool. The whole point is the graph; one node is a degenerate use of the canvas.
- Re-uploading references inside every node. Wire one reference into many downstream nodes instead.
- Expecting Python custom nodes or local model running. Martini is browser-cloud; the trade is curated models rather than extensible code.
- Skipping templates. Every proven chain becomes a future template; that compounding is the long-term canvas value.
- Building a flat grid of disconnected nodes. The edges are the value — the chain expresses the production logic.
Related models and tools
Tool
AI Lip Sync
Lip-sync tools on Martini for syncing voice and dialogue to portraits and video.
Tool
AI Video Upscaling
Upscale generated video outputs on Martini's canvas.
Tool
AI Image Upscaling
Upscale images and keyframes before final video generation on Martini.
Tool
AI Background Removal
Remove backgrounds from images for assets and compositing on Martini.
Tool
AI Camera Control
Camera movement and angle control for AI video on Martini.
Tool
AI Video Frame Extraction
Extract frames from video for reference and image-to-video workflows.
Tool
AI Video Breakdown
Analyze videos into shots and reusable frames on Martini's canvas.
Provider
OpenAI
OpenAI's GPT Image and Sora video model workflows available on Martini.
Provider
Google
Google's Veo video, Imagen image, and Nano Banana model workflows on Martini.
Provider
ByteDance
ByteDance's Seedance video and Seedream image model families on Martini.
Provider
Kling
Kling 3, O3, and Avatar video model workflows on Martini.
Provider
Runway
Runway's Gen4, Aleph, and image model workflows on Martini.
Provider
Minimax
Minimax's Hailuo video model and adjacent audio workflows on Martini.
Provider
ElevenLabs
ElevenLabs voiceover, lip-sync, and voice cloning workflows on Martini.
Provider
Vidu
Vidu's reference-driven video and character consistency workflows on Martini.
Provider
Luma
Luma's Ray video model workflows and alternatives on Martini.
3D model
Marble 3D AI
Marble 3D and world generation workflows on Martini.
3D model
Image to 3D
Convert images into 3D assets and scenes on Martini.
3D model
Gaussian Splat AI
Gaussian splat 3D outputs on Martini's canvas.
World model
World Labs
World Labs image/text-to-navigable-world workflows on Martini.
World model
Image to 3D World
Turn a visual reference into a reusable navigable 3D world on Martini.
Related features
AI Video Workflow — Node-Based Production From Concept to Final Sequence
Build node-based AI video production pipelines on Martini's canvas — from concept and storyboard to final NLE-ready sequence.
AI Video Generator — Multi-Model AI Video Production on Martini
Multi-model AI video generation with text, image, reference, and editing workflows on Martini's canvas.
AI Character Reference — Reference-Image Workflows on Martini
Use reference images to guide AI model outputs on Martini's canvas.
Multi-Shot AI Video — Build Connected Scenes, Not Isolated Clips
Plan, generate, and sequence multi-shot AI video on Martini — keep characters, style, and motion consistent across shots.
AI Storyboard Generator — Plan Shots, Generate Frames, Then Animate
Plan shots, generate storyboard frames, and convert frames into video on Martini's canvas.
AI Video NLE Export — From Generation to Premiere, DaVinci, Final Cut
Move AI-generated sequences from Martini into Premiere Pro, DaVinci Resolve, and Final Cut Pro.
AI Video to Premiere Pro — Export Workflow on Martini
Move AI-generated sequences from Martini into Adobe Premiere Pro for finishing.
AI Video to DaVinci Resolve — Export Workflow on Martini
Export AI sequences from Martini for color and finishing in DaVinci Resolve.
Frequently asked questions
How is Martini's canvas different from ComfyUI?
ComfyUI is a powerful local node-graph environment with extensive Python-based custom nodes; the trade is local install, custom-node maintenance, and self-hosted compute. Martini is a browser-cloud canvas with curated models, real-time collaboration, NLE-native export, and zero install. The wedges are different — ComfyUI optimizes for breadth and custom code; Martini optimizes for cloud production and team collaboration.
How is this different from OpenArt?
OpenArt is a tab-and-template tool that wraps individual model interactions. Martini is a node graph that connects every modality on one canvas with shared references, lineage, and chained NLE export. OpenArt is excellent for single-output asset generation; Martini is built for multi-step productions and team workflows.
Can I run my own custom Python nodes on Martini?
No. Martini is a browser-cloud platform with curated cloud models. We do not support arbitrary Python nodes, local model running, or SDK-level extensions today. The trade-off is zero-install, real-time collaboration, and curated model quality. For custom-code pipelines, ComfyUI or local environments fit better.
What modalities does the canvas support?
Image, video, audio, world, 3D, tools (upscale, background removal, lip-sync, frame extraction, camera control), and script and text nodes. The chain spans them all on one canvas; the same canvas can hold image generation feeding video generation feeding audio generation feeding NLE export.
How does collaboration work?
Workspace canvases support multiple editors working simultaneously, Figma-style. Lineage persists, references are workspace assets, and templates are shareable. Workspace billing tracks per-project usage. Real production teams use the workspace; individual creators can also work in personal canvases.
How do templates and lineage actually save time?
Once a chain proves out — references, models, prompts, and edges — saving the canvas as a template means the next project starts from the proven chain. New inputs swap in, the chain re-renders, the output stays on-brand. Lineage shows what fed what, so iteration is non-destructive and audit-friendly.
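The template mechanic can be sketched as a proven chain with open input slots. The dictionary shape and `instantiate` helper below are invented for illustration; they model the behavior described above, not Martini's actual template format.

```python
# Illustrative sketch of template reuse -- hypothetical, not Martini's format.
# A template is the proven chain (models, prompts, edges) with the input
# slots left open; a new project swaps in fresh inputs and re-runs.
import copy

template = {
    "nodes": {
        "ref":   {"kind": "input"},  # open slot, filled per project
        "still": {"kind": "image", "model": "nano-banana-2",
                  "prompt": "editorial portrait", "inputs": ["ref"]},
        "shot":  {"kind": "video", "model": "kling-3", "inputs": ["still"]},
    }
}

def instantiate(template, inputs):
    """Start a new project from a proven chain by filling the open slots."""
    project = copy.deepcopy(template)  # the template itself stays untouched
    for slot, asset in inputs.items():
        project["nodes"][slot]["asset"] = asset
    return project

campaign_a = instantiate(template, {"ref": "founder_photo.png"})
campaign_b = instantiate(template, {"ref": "product_shot.png"})
# Same chain, same models and prompts, different inputs.
```

Because instantiation copies the chain rather than mutating it, each campaign re-renders against its own inputs while the proven edges, models, and prompts stay locked.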
Build it on the canvas
Open Martini and wire this workflow up in minutes. Free to start — no card required.