What's the difference between credits and tokens on Martini?
Short answer
Martini bills in credits — a unified unit that abstracts model token costs across providers. One credit covers one image generation up to a defined size; video and audio cost more credits because they require more compute. Tokens are the underlying provider unit; they stay behind the scenes in day-to-day use (a per-run breakdown is available in billing settings for reconciliation). Credits give you predictable pricing across dozens of models without doing per-provider math.
Why a single credit unit
Different AI providers price differently. OpenAI bills text models per million input and output tokens. Image and video providers bill per image, per second of video, per resolution, sometimes per inference step. Audio providers bill per minute or per character of input. If Martini exposed these raw units, every node on the canvas would show a different unit, and you would be doing per-provider math before every Run.
Credits abstract that. Every model, every provider, every modality maps to a credit cost. A simple image generation costs a small number of credits; a long high-resolution video costs more; a minute of voice synthesis costs somewhere in between. The exact cost is shown on the node before you run it, so you see the price in a single unit you already understand. Behind the scenes, Martini converts credits to provider tokens or per-second charges using a stable conversion table.
How credit cost is set
Credit cost reflects the actual compute cost from the provider plus a small overhead for orchestration, storage, and platform operations. Cheap models (Hailuo image, Luma Ray short video) cost few credits per run; flagship models (Sora 2 1080p, Veo 3 cinematic) cost many credits per run because the upstream compute is more expensive. Resolution, duration, and batch size are linear multipliers: doubling duration roughly doubles the cost, and the same goes for resolution and batch size.
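To make the linear-multiplier idea concrete, here is a minimal sketch of such a cost model. The function name and every rate are invented for illustration; Martini's real per-model costs are the ones published on each model card and the pricing page.

```python
# Sketch of a linear credit-cost model. All numbers are hypothetical,
# not Martini's actual rates.

def estimate_credits(base_rate: float,
                     duration_s: float = 1.0,
                     resolution_scale: float = 1.0,
                     batch_size: int = 1) -> float:
    """Cost scales linearly with duration, resolution scale, and batch size."""
    return base_rate * duration_s * resolution_scale * batch_size

# Doubling duration roughly doubles the cost:
short_clip = estimate_credits(base_rate=5.0, duration_s=4)  # 20.0 credits
long_clip = estimate_credits(base_rate=5.0, duration_s=8)   # 40.0 credits
assert long_clip == 2 * short_clip
```

The same multiplication applies when you raise resolution or batch size, which is why the node estimate moves predictably as you change those settings.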
The credit-cost table is published per model on the model card and on the pricing page. When a provider changes their underlying token or per-second pricing, Martini updates the credit cost on the affected models. Updates are announced in the changelog and the help icon notifications, and never apply retroactively to credits already spent.
Why you do not see tokens
Tokens are an implementation detail. They map cleanly to text models — input tokens, output tokens, context window — but they map awkwardly to image and video models, where the unit is per image, per second, per resolution. Showing tokens on a video node would be misleading; showing per-second on a text node would be wrong. Credits unify the surface so the same unit applies everywhere on the canvas.
If you need to inspect underlying token usage for billing reconciliation or research, the usage detail page under Settings > Billing shows the per-run breakdown, including the underlying provider unit (tokens for text, seconds for video, characters for voice). For most users the credits view is all you need; the token-level view is there for power users and finance teams.
How to estimate before you run
Every node shows a credit estimate before Run. The estimate updates as you change resolution, duration, batch size, or model. Use the estimate to compare options — switching from a 1080p flagship to a 720p mid-tier model can drop the cost by an order of magnitude. For multi-step workflows, the dashboard summarizes total credit spend per project so you can see how a sequence of nodes adds up.
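The comparison habit above can be sketched in a few lines. The model names and per-second rates below are invented purely to illustrate the order-of-magnitude gap between tiers; check the node estimate or model card for real figures.

```python
# Hypothetical per-second credit rates for two video tiers (invented
# numbers, not Martini's actual pricing).
HYPOTHETICAL_RATES = {
    "flagship-1080p": 20.0,
    "mid-tier-720p": 2.0,
}

def run_cost(model: str, seconds: float) -> float:
    """Estimated credits for one video run of the given length."""
    return HYPOTHETICAL_RATES[model] * seconds

draft = run_cost("mid-tier-720p", 5)    # 10.0 credits
final = run_cost("flagship-1080p", 5)   # 100.0 credits
print(f"The flagship run costs {final / draft:.0f}x the draft run")
```

With a gap like this, iterating ten drafts on the mid tier and promoting one final pass to the flagship still costs less than two flagship runs.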
Plan your iteration accordingly: cheap models for drafts and prompt-tuning, flagship models for the final pass. This is the single workflow change that saves the most credits over the course of a project. The do-credits-roll-over article explains why you should not bank on unused subscription credits, and the top-up-credits article explains how to add a buffer when you need it.
Examples
- A 1024x1024 image on a mid-tier model costs around 1-2 credits per output.
- A 5-second 720p Hailuo video costs roughly the same as 50-100 image generations on the same plan.
- Switching from 1080p Sora 2 to 720p Hailuo can cut the credit cost by an order of magnitude.
- A 30-second voice synthesis at standard quality is in the single-digit credit range on most plans.
- Doubling video duration from 4 to 8 seconds roughly doubles the credit cost on every video model.
Edge cases
- Promotional credits and grant credits behave the same as subscription credits for spend order.
- Workspace credit pools are denominated in the same credits unit as personal credits — pricing is consistent.
- Token-level detail on long text generations is available in the per-run breakdown for finance teams.
- Provider price changes take effect at the next monthly cycle and are never applied retroactively to credits already spent.
What to do next
- Read the credit estimate on each node before clicking Run to compare model options.
- Iterate cheap first (draft models) and promote final passes to flagship models to save credits.
- See the how-credits-work article for the full credit balance and spend-order model.
- See the top-up-credits article if you need credits beyond your monthly subscription.
Still need help? Contact support.