Why did I get an unsupported prompt or policy error?
Short answer
An unsupported prompt error means the upstream model provider's content-safety system blocked the request, most often because the prompt or a source asset depicts a public figure, sexual content, graphic violence, or a branded character. Credits are refunded automatically. Adjust the prompt and try again, or switch to a model with different safety policies.
What triggers a policy error
Each AI model provider runs its own content-safety filter on every prompt and source asset. The most common triggers are: real public figures named or visually referenced, sexual content of any kind on most flagship models, graphic violence and gore, hate speech or targeted harassment, branded characters or copyrighted IP without context, and weapons or self-harm references. The filter operates on both the prompt text and any uploaded reference images, so a borderline reference image can cause a rejection even with a clean prompt.
The error message names the category broadly (Content policy violation, Safety filter rejection, Unsupported prompt) but does not return the exact rule that fired. Model providers intentionally do not publish a banned-words list — the filter is dynamic and naming words would let users route around it. Treat the rejection as a category signal, not a precise diagnosis.
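The broad label is still actionable. As a rough sketch (the label strings below mirror the categories quoted above, but the mapping itself is an illustrative assumption, not a published rule set), you can translate the category into a first rewording step:

```python
# Hypothetical mapping from the broad rejection label to a first rewording step.
# The labels mirror the categories quoted above; the guidance is illustrative only.
REJECTION_HINTS = {
    "content policy violation": "Remove named people, brands, and copyrighted characters.",
    "safety filter rejection": "Soften violence, gore, and intimate descriptors.",
    "unsupported prompt": "Reword the scene generically and check the reference image.",
}

def rewording_hint(error_message: str) -> str:
    """Return a coarse rewording hint for a provider rejection message."""
    lowered = error_message.lower()
    for label, hint in REJECTION_HINTS.items():
        if label in lowered:
            return hint
    return "Remove identifying or explicit details from the prompt, then retry."
```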
Refund mechanics
Policy rejections that happen at the provider, after the request was submitted, refund credits automatically. The refund posts within seconds and the failed run is excluded from your billable usage. You do not need to file a support request. The Settings > Billing page shows the refund entry alongside the failed run, so you can confirm the credit was returned.
Policy rejections that happen before the request is sent — when Martini's own pre-send check catches a clearly disallowed prompt — never deduct credits in the first place. There is nothing to refund. Either way, the financial outcome is the same: you are not charged for a rejected generation. See the failed-generation refunds article for the full credit behavior on errors.
How to reword and retry
Replace named real people with generic descriptions: a politician becomes a public speaker, a celebrity becomes a person in a similar style. Remove brand names, copyrighted character names, and specific IP references. Reword scene descriptions that could be read as violent or sexual even when the intent is innocent: soften combat verbs, remove blood references, drop intimate physical descriptors. Swap the source reference image if it was the trigger.
After rewording, run again. Most legitimate creative briefs can be expressed in a way that passes the filter — the goal is to convey the same scene without naming the specific identifying triggers. If the request is genuinely about a real public figure or branded property, you may need an external license rather than a prompt change.
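If you script your retries, the same advice can be expressed as a small reword-and-retry step. This is a minimal sketch: the generate() stub, the status strings, and the prompts are placeholders for illustration, not Martini's actual API.

```python
# Illustrative reword-and-retry flow. generate() is a stand-in for whatever call
# submits your prompt; the status and error strings are assumptions.
def generate(prompt: str, model: str) -> dict:
    # Placeholder behavior: pretend the provider rejects prompts naming a real person.
    if "senator" in prompt.lower():
        return {"status": "policy_rejected", "error": "Content policy violation"}
    return {"status": "succeeded"}

def run_with_reword(original: str, reworded: str, model: str) -> dict:
    result = generate(original, model)
    if result["status"] == "policy_rejected":
        # Resubmitting the identical prompt will fail again, so send the reworded version.
        result = generate(reworded, model)
    return result

# Example rewording: drop the named person, keep the scene.
original = "Senator Jane Doe giving a speech at a rally, cinematic lighting"
reworded = "A confident public speaker addressing a large rally, cinematic lighting"
print(run_with_reword(original, reworded, model="video-model-a"))
```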
Model differences in safety strictness
Different providers calibrate their filters differently. Some video and image models are stricter on public figures and brands; others are more permissive but stricter on violence or sexual themes. If a prompt is rejected on one model, try the same prompt on a sibling model — the rejection often does not transfer. The model picker on every node lets you switch providers without rebuilding the workflow.
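If you automate this, the switch can be expressed as a simple fallback loop over sibling models. The model names and the generate callable below are placeholders, not Martini's real model IDs or API:

```python
# Illustrative fallback across sibling models; the names are placeholders.
SIBLING_MODELS = ["video-model-a", "video-model-b", "video-model-c"]

def run_with_model_fallback(prompt: str, generate) -> dict:
    """Try the same prompt on each sibling model until one accepts it."""
    last = {"status": "policy_rejected"}
    for model in SIBLING_MODELS:
        last = generate(prompt, model)
        if last["status"] != "policy_rejected":
            return last  # accepted, or failed for a non-policy reason
    return last  # every sibling rejected it; reword the prompt instead
```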
Models also differ on whether the safety pass runs once on input or twice (input + output). A two-pass model can return a rejection minutes after submission if the generated frame trips the second pass. This is rare but explains why some long-running video jobs fail late with a policy error rather than failing immediately.
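If you poll long-running jobs yourself, it is worth treating a late policy rejection as a normal terminal state rather than a transient failure. A minimal sketch, assuming a hypothetical poll_status callable and illustrative status strings:

```python
import time

# Illustrative polling loop for a long-running video job. poll_status() and the
# status strings are assumptions, not Martini's documented job states.
def wait_for_job(poll_status, job_id: str, interval_s: float = 10.0) -> str:
    while True:
        status = poll_status(job_id)
        if status == "succeeded":
            return status
        if status == "policy_rejected":
            # The output tripped the provider's second safety pass; the credits are
            # refunded, but the prompt still needs rewording before any retry.
            return status
        time.sleep(interval_s)  # still rendering; check again later
```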
Examples
- Prompt naming a politician — rephrase as a generic speaker and retry.
- Prompt referencing a copyrighted character — drop the name and describe the visual instead.
- Source image of a real celebrity rejected — swap to a different reference and retry.
- Combat scene with explicit gore — soften the description and lower violence cues.
- Same prompt rejected on Model A but accepted on Model B — switch model rather than reword.
Edge cases
- Re-submitting the exact rejected prompt will fail again — reword first, do not retry as-is.
- A clean prompt with a disallowed reference image still fails — swap the image, keep the prompt.
- Some models run a second safety pass after generation; a job can be rejected late, after you have already waited through part of the run, and the credits are still refunded.
- Repeated rejections on similar prompts may flag your account for manual review — keep prompts clearly within policy.
What to do next
- Read the rejection message — it names the category broadly so you know what to soften.
- Reword the prompt to remove named people, brands, and explicit content cues, then run again.
- Switch to a sibling model from the node's model picker if rewording does not help.
- See the failed-generation refunds article to confirm the credit behavior on policy errors.
Still need help? Contact support.