AI Background Remover on Martini
Cut once, recompose on the canvas. The background-removal tool is the start of a recomposition pipeline — cut out the subject, drop it into a Flux Kontext or Nano Banana 2 node for a new scene, refine, upscale, export. The wedge is the chain, not the cutout itself.
What this feature solves
Background removal looks like a one-click utility, and many tools sell it that way. The reality is more complicated. A real production workflow rarely stops at the cutout — the cutout is step one in a chain that ends with the subject living in a new scene, on a new background, in a new lighting context. Generic cutout tools spit out a transparent PNG and dead-end there. The next steps live in another tool, often a manual Photoshop session, and the back-and-forth is where the workflow loses momentum.
The other half is fidelity. Hair, fur, glass, motion blur, and translucent fabric remain hard for any background-removal model. Generic tools advertise hair-perfect output but routinely fall short on exactly these difficult edges. Without a way to fix problem edges in the same canvas where the cutout lives, the user is forced to download the masked PNG, open Photoshop, fix it manually, save, and re-import. Multiply that by twenty product photos and the workflow becomes manual labor.
And there is the recompose gap. Once the cutout exists, the next move is usually placing the subject into a new background. Whether that means generating a new scene, dropping in a brand backdrop, or compositing into existing imagery, the work depends on the cutout integrating naturally — same lighting direction, same color temperature, same edge softness. Generic tools cannot help with that integration, so the cutout sits there waiting for a separate compositing pass.
Why Martini is different
Martini treats background removal as a node in a recomposition chain. Wire the source still into the background-removal tool node, get the cutout, and immediately chain into a Flux Kontext or Nano Banana 2 image node with a prompt for the new scene. The chained model sees the cutout as input and generates the new background or composite directly — no Photoshop session, no manual integration. The cutout becomes the first step, not the last step.
Edge cleanup happens in the chain. When hair or glass edges need refinement, route the cutout through Flux Kontext or Nano Banana 2 with an edit prompt to smooth, refine, or recompose the trouble area. The canvas keeps the lineage, so the original source still lives upstream and the refined cutout lives downstream — both available for further iteration. That iterative chain is impossible in a one-click cutout tool.
Downstream the cutout chains into product photography, ad creative, brand asset packs, or video reference inputs. The cutout-first pipeline becomes the foundation for the entire campaign rather than a single utility step. Combined with workspace billing and template reuse, the chain saves as a campaign template — every future product or character recompose runs through the same proven sequence.
Common use cases
Cut out a product and recompose it into a new lifestyle scene
Background-remove the product photo, then chain into Nano Banana 2 with a scene prompt to drop it into a kitchen, beach, or studio context.
Prepare hero stills for ad placements with brand backgrounds
Cut out the subject, then chain into Flux Kontext to drop in a brand-colored backdrop ready for the placement spec.
Build a product cutout library for catalog use
Run cutouts in batch as nodes on the canvas, then save the cutouts as a brand asset pack ready for ecommerce listings.
Refine difficult edges through edit-aware models
Route trouble edges (hair, glass, fur) through Flux Kontext or Nano Banana 2 for smoothing in the same canvas where the cutout lives.
Composite subjects into AI-generated environments
Cut out the talent or product and chain into a Nano Banana 2 node generating a new scene around the cutout for ad creative.
Prepare reference inputs for character or product video
Provide downstream video models with a clean cutout subject so the video generation anchors to the subject without scene noise.
Recommended model stack
nano-banana-2 (image)
Edit-aware recomposition that places the cutout into a new scene cleanly.
flux-kontext (image)
Outfit, scene, and edge edits applied to the cutout while preserving the subject.
gpt-image-2 (image)
Edit-aware refinement for difficult cutout edges and small composite fixes.
qwen-image (image)
Alternative image-edit model for variant compositions on a cutout subject.
How the workflow works in Martini
1. Drop the source still onto the canvas
Upload or generate the source image as an image node. The cutout will read this directly downstream.
2. Add the background-removal tool node and wire it in
Drop the background-removal tool, connect it to the source image. Run to produce the cutout output.
3. Review the cutout for edge fidelity
Hair, glass, and translucent edges are typical trouble spots. Identify any areas that need refinement before chaining downstream.
4. Chain into Flux Kontext or Nano Banana 2 for refinement or recompose
For edge fixes, prompt the model to smooth or refine the trouble area. For new scenes, prompt the new background or composite directly with the cutout as input.
5. Iterate with multiple background or scene variants
Fan out the cutout into several Nano Banana 2 nodes — different scenes, different lighting, different brand contexts. Pick the strongest variant.
6. Chain into upscale, asset pack, or video reference
Send the chosen composite into image-upscale for delivery resolution, into a brand asset pack for the campaign library, or into a video reference node for downstream motion shots.
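The six steps above can be compressed into a linear script. The function names here are stand-ins for canvas nodes, invented for illustration; they do not correspond to a real Martini API.

```python
# Illustrative walk-through of the six workflow steps as plain functions.
# Every function name is a placeholder for a canvas node, not a real SDK.

def background_removal(image):
    return {"subject": image, "alpha": True}          # step 2: cutout

def recompose(cutout, scene_prompt):
    return {"cutout": cutout, "scene": scene_prompt}  # step 4: new scene

def upscale(composite, factor=4):
    composite["scale"] = factor                       # step 6: delivery res
    return composite

source = "serum-bottle.png"                           # step 1: source still
cutout = background_removal(source)                   # steps 2-3

# Step 5: fan one cutout out into several scene variants.
prompts = ["bathroom shelf", "beach towel", "hotel vanity"]
variants = [recompose(cutout, p) for p in prompts]

# Pick the strongest variant (here simply the first) and finish the chain.
final = upscale(variants[0])
print(final["scene"], final["scale"])   # bathroom shelf 4
```

Note the shape of step 5: one cutout object feeds every variant, which is exactly the fan-out the canvas encourages instead of re-running the cutout per scene.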
Example workflow
A skincare brand is launching a new serum and needs ten lifestyle images placing the bottle in different scenes — bathroom, vanity, beach, kitchen, hotel suite, and so on. The team uploads the studio product photo to a canvas. They wire it into the background-removal tool and produce a clean cutout. The cutout chains into ten Nano Banana 2 nodes, each prompted for a different scene with consistent lighting direction. Two scenes show edge artifacts on the bottle's neck label — the team routes those two through Flux Kontext with a prompt to refine the edge. All ten composites land on the canvas. The team picks the strongest seven, upscales them through the image-upscale tool node for the campaign asset pack, and exports the bundle. One canvas, ten lifestyle scenes, no Photoshop session.
Tips and common mistakes
Tips
- Treat the cutout as step one. The recompose chain — cutout → Flux Kontext → upscale — is where the real value lives.
- Hair, fur, and glass edges are sensitive. Plan for an edge-refinement step in the chain rather than expecting one-click perfection.
- For brand-color critical work, set the new scene background to match the brand palette via prompt rather than fixing in post.
- Reuse the cutout across multiple downstream scenes by wiring one cutout node into many recompose nodes.
- Save the cutout → recompose → upscale chain as a campaign template. Catalog and lifestyle workflows scale on template reuse.
Common mistakes
- Treating background removal as the deliverable. A cutout PNG sitting alone is rarely the campaign asset — recompose is usually next.
- Expecting hair, fur, or glass edges to be perfect on first run. Plan an edit-aware refinement step into the chain.
- Re-uploading the cutout into Photoshop for refinement. Chain it through Flux Kontext or Nano Banana 2 instead and stay on the canvas.
- Mismatched lighting between cutout subject and recompose scene. Match the prompt to the source lighting direction for natural integration.
- Skipping the asset-pack export. Recomposed scenes belong in the brand asset library, not in a download folder.
Related models and tools
Tool
AI Background Removal
Remove backgrounds from images for assets and compositing on Martini.
Tool
AI Image Upscaling
Upscale images and keyframes before final video generation on Martini.
Provider
Google
Google's Veo video, Imagen image, and Nano Banana model workflows on Martini.
Provider
OpenAI
OpenAI's GPT Image and Sora video model workflows available on Martini.
Provider
ByteDance
ByteDance's Seedance video and Seedream image model families on Martini.
Related features
AI Product Photography — Studio-Quality Product Images on Martini
Generate studio-quality product photos for e-commerce on Martini's canvas.
AI Image Upscaler — Upscale Keyframes and Stills on Martini
Upscale keyframes, products, and still assets before video generation on Martini.
AI Style Transfer — Apply Artistic Styles to Images on Martini
Transfer artistic styles between images using AI on Martini.
AI Photo Restoration — Restore Old Photos on Martini
Restore old, damaged, or low-quality photos with AI on Martini's canvas.
AI Lip Sync — Sync Voice and Dialogue to Portraits and Video
Sync voiceovers, dialogue, and music to portraits and video on Martini using lip-sync models.
AI Camera Control — Orbit, Push, Pull, Pan, Crane
Direct AI video like a real DP — Sora 2, Kling 3, Runway Gen-4, Veo with director-level shot planning on Martini's canvas.
AI Video Editing — Transform and Extend Existing Clips
Restyle, replace, extend, and transform existing clips on Martini's canvas — Runway Aleph, Kling O3, Wan, Seedance 2 chained into a real edit.
AI Video Upscaler — Polish AI Video to 4K on Martini
Improve AI video resolution and polish outputs on Martini's canvas.
Frequently asked questions
Which model does Martini use for background removal?
The background-removal tool is a routed tool node — the underlying engine is determined by workspace defaults. Common defaults include modern industrial-grade matting models that handle most subject types reliably; difficult edges (hair, glass, fur) typically need a chained edit-aware refinement step.
Can I get a transparent PNG export?
Yes. The cutout output preserves the alpha channel for transparent backgrounds when downloaded directly. For composite delivery (cutout placed onto a new scene), use the recompose chain via Flux Kontext or Nano Banana 2.
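Whether a downloaded cutout actually carries transparency can be verified without opening an editor: a PNG declares its alpha support in the IHDR color-type byte (6 means truecolor with alpha). The sketch below builds a minimal in-memory header for illustration; the helper names are mine, not part of any Martini tooling.

```python
import struct
import zlib

# Minimal sketch: check whether a PNG carries an alpha channel by reading
# the IHDR color-type byte. Color type 6 = truecolor + alpha, 4 = gray +
# alpha. The PNG built here is header-only, just enough for the check.

def make_png(width, height, color_type):
    sig = b"\x89PNG\r\n\x1a\n"
    ihdr = struct.pack(">IIBBBBB", width, height, 8, color_type, 0, 0, 0)
    chunk = struct.pack(">I", len(ihdr)) + b"IHDR" + ihdr
    chunk += struct.pack(">I", zlib.crc32(b"IHDR" + ihdr))
    return sig + chunk   # truncated PNG: sufficient for the header check

def has_alpha(png_bytes):
    color_type = png_bytes[25]   # byte 25 of the file is the color type
    return color_type in (4, 6)  # grayscale+alpha or truecolor+alpha

cutout = make_png(64, 64, color_type=6)
print(has_alpha(cutout))   # True
```

Running `has_alpha` on a real downloaded cutout file's first bytes gives a quick sanity check that the alpha channel survived the export.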
How does the recomposition chain work?
Wire the cutout output into a Flux Kontext or Nano Banana 2 image node with a prompt describing the new scene. The model takes the cutout as input and generates the new background or composite directly. No Photoshop required.
Will the cutout handle hair and glass perfectly?
For most subjects yes; for trouble edges (fine hair, glass rims, motion blur), expect some refinement. Chain the cutout through Flux Kontext or Nano Banana 2 with an edit prompt to clean those edges in the same canvas.
Can I batch background removal across many images?
Yes. Drop multiple image nodes onto the canvas and wire each into its own background-removal tool node. The fan-out runs in parallel, and the cutout outputs sit on the canvas ready for downstream chaining.
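The parallel fan-out can be pictured as a thread pool mapping one removal job per image. `remove_background` below is a placeholder standing in for a background-removal node, not an actual Martini function:

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual sketch of batch fan-out: each image gets its own cutout job
# and the batch runs in parallel. remove_background is a placeholder for
# a background-removal tool node, not Martini's real API.

def remove_background(image_name):
    return f"{image_name}-cutout"

images = [f"product-{i}.png" for i in range(20)]

with ThreadPoolExecutor(max_workers=8) as pool:
    cutouts = list(pool.map(remove_background, images))

print(len(cutouts), cutouts[0])
```

`pool.map` preserves input order, which mirrors the canvas behavior described above: twenty cutout outputs land next to their twenty sources, ready for downstream chaining.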
How is this different from a one-click background removal site?
A one-click tool gives you a transparent PNG and a download. Martini puts the cutout into a recomposition chain on the same canvas — refine, recompose, upscale, export — so the workflow continues rather than dead-ending at the PNG.
Build it on the canvas
Open Martini and wire this workflow up in minutes. Free to start — no card required.