Image
AI Character Reference on Martini
This page is the primitive — how the reference itself works on Martini's canvas. Drop the portrait once into a reference slot, label it, and fan it across every downstream image and video model. For the cross-modal outcome see ai-character-consistency; for the multi-shot video delivery see consistent-character-video. Here we go deep on the reference-node mechanics that make both possible.
What this feature solves
Most AI tools treat reference images as a one-time upload. You paste a portrait into a single tab, generate, and if the result is good you save the image elsewhere. Try the next prompt and you re-paste. Try a different model and you re-paste. Try video instead of image and you re-paste. The reference itself is not a managed asset — it is a transient ingredient. For a creator working on a recurring character, a brand spokesperson, an influencer persona, or an episodic series, that lack of asset management is the bottleneck. The reference becomes the source of truth, but the tool gives it nowhere to live.
The second break is multi-anchor. Real characters have multiple reference angles — front, side, three-quarter, expression range, wardrobe variations. Tab-based AI typically supports one reference per session, which forces creators to flatten the character down to a single best-portrait and hope. The result is identity that holds for some shots and falls apart on others, because the model never saw enough of the subject to generalize. Without a way to attach multiple anchors and label them, you lose fidelity at the foundation of every downstream generation.
The third break is lineage. When the reference improves — better lighting, better likeness, sharper output — there is no way to retroactively push that improvement through every downstream generation that depended on the old version. You manually re-run every prompt, re-curate every output, and accept that some assets in the campaign reflect an old reference and some reflect the new one. For series-grade and brand-grade work, that drift is unacceptable. The reference needs to be a node, not a file.
Why Martini is different
On Martini, the reference is a first-class node on the canvas. Drop your character portrait into an image node and label it — Mia front, Mia three-quarter, Mia wardrobe one. The node persists. Wire it into every downstream image node and every video node, and they all consume the same anchor. Swap the upstream reference for a sharper version and every dependent node refreshes from the new source. The reference becomes the asset; every generation becomes a derivation. That is the foundation of identity-driven AI work.
Multi-anchor is native. Drop multiple references — different angles, expressions, wardrobes — onto the canvas and wire them into the right downstream nodes for the shot type. Nano Banana 2 and Flux Kontext consume the per-shot anchor that suits the prompt. Vidu and Kling 3 anchor video shots to the still that matches the action. The character is no longer a single image — it is a labeled set of anchors, and the canvas keeps track of which anchor goes where. That fidelity is impossible in a single-tab tool.
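Conceptually, the multi-anchor setup behaves like a labeled registry mapped onto a shot plan — register each reference once, then look up the right anchor per shot instead of re-uploading a portrait. This is an illustrative sketch only, not Martini's API; the anchor labels and filenames are hypothetical:

```python
# Hypothetical anchor registry: each reference is labeled once on the canvas.
anchors = {
    "front": "mia_front.png",
    "three_quarter": "mia_three_quarter.png",
    "wardrobe_one": "mia_wardrobe_one.png",
}

# Each downstream shot is wired to the anchor that suits it,
# rather than flattening the character into one best portrait.
shot_plan = [
    ("close-up still", "front"),
    ("walking video", "three_quarter"),
    ("outfit change", "wardrobe_one"),
]

for shot, anchor_key in shot_plan:
    print(f"{shot} -> {anchors[anchor_key]}")
```

The point of the sketch: the mapping from shot type to anchor lives on the canvas, so no shot ever consumes the wrong reference.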
Templates make the reference reusable across projects. Once a character canvas works — anchors labeled, models tuned, chain proven — save the canvas as a template. Next campaign, drop in a new outfit prompt or new location and re-run; the reference, the model selection, and the chain are baked in. The character becomes a real asset across every campaign you ship. Combined with workspace collaboration, the team works on one shared reference rather than passing copies of a portrait around in chat.
Common use cases
Build the canonical character portrait once and reuse it
Refine one master portrait on Nano Banana 2, drop it as the canvas anchor, and wire it into every downstream image and video node going forward.
Manage multi-angle reference for a serialized character
Front, side, three-quarter, wardrobe variants — labeled as separate anchors, wired into the right downstream nodes per shot type.
Hand off a character canvas to a teammate
Workspace collaboration means the reference, the model setup, and the chain become a shared asset — no copying portraits in Slack.
Upgrade the reference and refresh every downstream output
When a sharper portrait lands, swap the upstream node and the downstream chain re-renders against the improved anchor.
Cross-modal fan-out from one reference
Wire the same reference into Nano Banana 2 image nodes for stills and Vidu video nodes for motion. Identity holds across modalities.
Lineage-aware iteration on a recurring persona
See exactly which anchors fed which generations. When the protagonist evolves, the canvas history shows what came from which reference version.
Recommended model stack
nano-banana-2
image · Strongest reference adherence for character identity in still generation.
flux-kontext
image · Outfit, pose, and scene changes that preserve the original face from the reference.
midjourney
image · Stylized character generation with reference influence and creative range.
vidu
video · Reference-driven video that anchors to the canvas character still.
kling-o3
video · Character-aware motion that preserves identity from a still anchor.
gpt-image-2
image · Edit-aware image generation that respects reference composition for refinement.
How the workflow works in Martini
1. Generate or upload the master reference
Build the canonical portrait — high-resolution, well-lit, neutral pose, sharp likeness. Generate on Nano Banana 2 or upload from your existing library.
2. Drop the reference as a labeled image node
Place it on the canvas with a clear label. This is the source of truth for every downstream generation. Treat it like a managed asset, not a transient upload.
3. Add multi-angle anchors as separate nodes
Front portrait, three-quarter, side, expression range, wardrobe variants — each is its own labeled image node. Wire each to the downstream node that matches the shot.
4. Fan out into downstream image and video models
Connect the right anchor to Nano Banana 2 for new poses, Flux Kontext for outfit changes, Vidu or Kling for video shots. The chain consumes the right reference per node.
5. Save the canvas as a character template
Once the chain is proven, save the entire setup — references, labels, model assignments, prompts — as a reusable template for future projects.
6. Iterate on the upstream reference, not the downstream outputs
When the master portrait can be improved, swap the upstream node and re-render the chain. Every dependent generation refreshes from the new anchor automatically.
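The swap-and-refresh behavior in step 6 is easiest to picture as a dependency graph: the reference is an upstream node, every generation is a dependent, and swapping the anchor marks everything downstream stale. A minimal conceptual sketch — not Martini's API, all names hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One canvas node: a reference image or a generation step."""
    label: str
    version: int = 1
    downstream: list["Node"] = field(default_factory=list)

    def wire(self, child: "Node") -> None:
        self.downstream.append(child)

    def swap(self) -> list[str]:
        """Replace this node's content (e.g. a sharper portrait) and
        collect every dependent node that must re-render."""
        self.version += 1
        dirty = []
        stack = list(self.downstream)
        while stack:
            node = stack.pop()
            dirty.append(node.label)
            stack.extend(node.downstream)
        return dirty

# Tiny chain: one anchor fanned into a still, which feeds a video cut.
mia = Node("Mia front")
still = Node("Nano Banana 2 still")
video = Node("Vidu shot")
mia.wire(still)
still.wire(video)

print(mia.swap())  # -> ['Nano Banana 2 still', 'Vidu shot']
```

Because the reference is a node rather than a file, one upstream swap invalidates the whole dependent chain — which is exactly why you iterate on the reference, not on the outputs.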
Example workflow
An agency is building a recurring AI brand persona named Theo for a financial-services campaign. They generate a canonical Theo portrait on Nano Banana 2, then build a multi-anchor canvas: front portrait, three-quarter, business casual wardrobe, formal wardrobe, expression range. Each anchor is a labeled image node. The campaign needs ten image placements and four video cuts. They wire the formal wardrobe anchor into six Flux Kontext nodes for the formal placements and the casual anchor into four Flux Kontext nodes for the lifestyle placements. The strongest stills feed into four Vidu video nodes for the motion cuts. Three weeks later, the agency lands a sharper photographic style and re-generates Theo's master portrait. They swap the upstream node and the entire campaign canvas refreshes against the new anchor — no manual rework, no curated drift, just a cleaner Theo across every asset.
Tips and common mistakes
Tips
- Invest in the master reference. Every downstream output inherits its quality — a sharper anchor produces sharper everything.
- Label every reference node clearly. "Mia front" beats "untitled" once the canvas has eight nodes on it.
- Use multi-angle anchors for shots that need range. One portrait cannot serve every camera angle equally.
- Wire the right anchor per shot type rather than wiring one generic anchor everywhere. The match between anchor and shot improves fidelity.
- Save the canvas as a template the moment it works. Future projects start from the template, not from scratch.
Common mistakes
- Re-uploading the same reference into every new node instead of wiring one canvas anchor everywhere.
- Mixing two different references in one chain. The downstream model averages them and identity collapses.
- Using a low-resolution or stylized reference. The chain inherits every flaw.
- Treating the reference as an afterthought. On Martini the reference is the upstream center of gravity — give it the same care a director gives to casting.
- Forgetting to label nodes. After ten generations the canvas becomes unreadable without clear node labels.
Related how-to guides
generate-consistent-character · nano-banana-2
/en/how-to/generate-consistent-character/nano-banana-2
create-video-with-reference-character · vidu
/en/how-to/create-video-with-reference-character/vidu
create-brand-visuals
/en/how-to/create-brand-visuals
edit-transform-photos · flux-kontext
/en/how-to/edit-transform-photos/flux-kontext
Related models and tools
Tool
AI Image Upscaling
Upscale images and keyframes before final video generation on Martini.
Tool
AI Background Removal
Remove backgrounds from images for assets and compositing on Martini.
Provider
Google
Google's Veo video, Imagen image, and Nano Banana model workflows on Martini.
Provider
Kling
Kling 3, O3, and Avatar video model workflows on Martini.
Provider
Vidu
Vidu's reference-driven video and character consistency workflows on Martini.
Provider
ByteDance
ByteDance's Seedance video and Seedream image model families on Martini.
Provider
OpenAI
OpenAI's GPT Image and Sora video model workflows available on Martini.
Related features
AI Character Consistency Across Images and Video
Keep a subject consistent across image and video generations on Martini using reference workflows.
Consistent Character AI Video — Reference-Driven Video on Martini
Preserve character identity through reference-driven video models on Martini.
AI Influencer Video Generator — Repeatable Character Pipeline
Design, generate, and scale AI influencer videos on Martini — character library, voice cloning, lip-synced video, all on one canvas.
AI Storyboard Generator — Plan Shots, Generate Frames, Then Animate
Plan shots, generate storyboard frames, and convert frames into video on Martini's canvas.
AI Photo Restoration — Restore Old Photos on Martini
Restore old, damaged, or low-quality photos with AI on Martini's canvas.
AI Style Transfer — Apply Artistic Styles to Images on Martini
Transfer artistic styles between images using AI on Martini.
AI Product Photography — Studio-Quality Product Images on Martini
Generate studio-quality product photos for e-commerce on Martini's canvas.
AI Headshot Generator — Professional Headshots in Minutes
Generate professional headshots for LinkedIn, resumes, and team pages on Martini's canvas.
AI Mockup Generator — Product, Device, and Brand Mockups
Generate product, device, and brand mockups for marketing on Martini's canvas.
AI Thumbnail Generator — YouTube and Social Thumbnails
Generate scroll-stopping thumbnails for YouTube, podcasts, and social on Martini.
AI Logo Generator — Brand Marks and Wordmarks on Martini
Generate logo concepts, brand marks, and wordmarks on Martini's canvas.
AI Emoji Generator — Custom Emoji on Martini
Generate custom emoji and stickers for Slack, Discord, and brand on Martini.
AI Sticker Generator — Telegram, WhatsApp, Discord Packs
Generate sticker packs for Telegram, WhatsApp, Discord, and iMessage on Martini.
AI Comic Strip Generator — Multi-Panel Comics on Martini
Generate multi-panel comic strips with consistent characters on Martini's canvas.
AI Presentation Slides — Pitch Decks and Slide Visuals
Generate slide visuals, pitch deck imagery, and presentation graphics on Martini.
AI Icon Generator — App and UI Icons on Martini
Generate app icons, UI icons, and brand icon sets on Martini's canvas.
AI Character Design — Game and Story Characters on Martini
Design original characters for games, stories, and animations on Martini's canvas.
AI Architecture Rendering — Building and Space Visualization
Generate architectural renderings, exterior visualizations, and concept art on Martini.
AI Interior Design — Room and Space Visualization on Martini
Visualize interior designs, room concepts, and decor schemes on Martini's canvas.
AI Game Asset Generator — Sprites, Concept Art, Backgrounds
Generate game-ready assets, sprites, concept art, and backgrounds on Martini.
Frequently asked questions
How is this different from ai-character-consistency?
ai-character-consistency is the outcome page — it explains the goal of identity preservation across image and video. ai-character-reference is the primitive — how the reference slot itself works. Use the primitive when you need to understand how to set up references on the canvas; use the outcome page when you need cross-modal continuity end-to-end.
How is this different from consistent-character-video?
consistent-character-video is the video-specific delivery — same person across every cut. ai-character-reference is the upstream slot mechanics. The two pair: build the reference correctly here, then ship multi-shot video on the consistent-character-video workflow.
Can I have more than one reference for the same character?
Yes. Drop multiple labeled image nodes — front, three-quarter, wardrobe variants, expression range — and wire each into the downstream node that suits the shot. Multi-anchor reference is one of the canvas advantages over single-tab tools.
Which model handles character reference best?
For stills, Nano Banana 2 has the strongest reference adherence for identity. Flux Kontext is best for outfit and scene changes that preserve the face. For video, Vidu and Kling O3 anchor most reliably to a clean still.
Do I need to train a custom model on my character?
No. Reference-driven generation skips the training step. Drop the canonical portrait on the canvas, wire it into the chain, and run. Custom training is still an option for advanced use cases but the reference workflow covers the majority of needs.
What if my reference is low quality?
Refine it first. Run the original through Nano Banana 2 with a refinement prompt, or upscale via the image-upscale tool, before treating it as the canonical anchor. The chain is only as strong as the upstream reference.
Build it on the canvas
Open Martini and wire this workflow up in minutes. Free to start — no card required.