Debugging Seedance 2.0: Face Blocks, IP Warnings, and What to Do
Two Seedance 2.0 filters block a lot of first tries: the face guardrail on real-person references, and the copyright guardrail on trademarked characters. Both live at the model layer.
You ship a reference to bytedance/seedance-2.0/image-to-video, the call fails, and the error mentions a face filter or an IP warning. The instinct is to blame the API. But the filter is not at the API layer: both guardrails live inside the ByteDance model, so you hit identical behavior on any provider that hosts Seedance 2.0. A different host does not change the rules. A different reference does.

The face filter
Seedance 2.0 runs a classifier across every reference image and flags frames where a recognizable real person is the primary subject. The call returns a refusal without consuming a billable second, with a reason that reads "contains real-person reference."
The filter applies to:
- public figures
- identifiable real people with the face sharp and centered
- paparazzi-style crops
- crowd shots with one sharp foreground face

It does not apply to:
- wide establishing shots
- back-of-head or profile shots
- stylized illustrations
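For a pre-flight check in your own pipeline, the listed triggers can be approximated with a simple heuristic over detected faces. This is an illustrative sketch, not the actual classifier (which is inside the model); the `FaceRegion` shape and all thresholds are assumptions:

```typescript
// Hypothetical pre-flight heuristic mirroring the filter's listed triggers:
// a reference is high-risk when one sharp face dominates the center frame.
// Thresholds below are illustrative assumptions, not ByteDance values.
interface FaceRegion {
  cx: number;       // face center x, as a fraction of frame width (0..1)
  cy: number;       // face center y, as a fraction of frame height (0..1)
  areaFrac: number; // face area as a fraction of total frame area
  sharp: boolean;   // in focus, per your own face detector
}

function likelyFaceBlock(faces: FaceRegion[]): boolean {
  return faces.some(f =>
    f.sharp &&
    f.areaFrac > 0.08 &&            // face is a primary subject, not background
    Math.abs(f.cx - 0.5) < 0.25 &&  // roughly centered horizontally
    Math.abs(f.cy - 0.5) < 0.3      // roughly centered vertically
  );
}
```

A sharp, centered portrait-crop face trips this check; a small, soft face in a wide establishing shot does not, matching the lists above.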
The filter exists because likeness rights are a legal minefield ByteDance will not take on for arbitrary inputs. Other model families run similar classifiers. This is category-wide, not Seedance-only.
The copyright warning
The second filter checks references and prompts against trademarked characters and costumes. After the Disney and Paramount cease-and-desist round that hit multiple AI video vendors earlier this year, ByteDance tightened the net. The model refuses prompts that name Mickey Mouse, Darth Vader, Spider-Man, Elsa, a Transformer, a Starfleet uniform, and many peers. It refuses references where those costumes dominate the frame.
The refusal reads "potential IP conflict" or "contains protected content."
Describe the archetype, not the character. A black-cloaked villain with a masked helmet is fine. A red-and-blue suit swinging between skyscrapers is not.
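A cheap client-side pre-screen can catch obvious trademark hits and rewrite toward archetypes before a call is spent. The name list and substitutions below are illustrative assumptions, not ByteDance's actual blocklist, and the real filter still runs inside the model:

```typescript
// Illustrative archetype substitutions -- an assumed list, not an official one.
// This only saves a round trip on obvious hits; the model's filter is final.
const ARCHETYPES: Record<string, string> = {
  "darth vader": "a black-cloaked villain with a masked helmet",
  "mickey mouse": "a cheerful cartoon mouse in red shorts",
  "spider-man": "an acrobatic masked hero in a sleek bodysuit",
};

// Returns a rewritten prompt, or the original if no listed name appears.
function toArchetype(prompt: string): string {
  let out = prompt;
  for (const [name, archetype] of Object.entries(ARCHETYPES)) {
    out = out.replace(new RegExp(name, "gi"), archetype);
  }
  return out;
}
```

Keep the substitutions generic: a red-and-blue suit swinging between skyscrapers would still read as the character, so the replacement has to drop the signature look, not just the name.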

Recovery: generated portraits
Stop feeding in photos of real people. Generate a portrait with a still-image model, then use the synthetic portrait as your I2V reference. The classifier reads it as non-real.
- Write a brief (age, build, wardrobe, expression, lighting).
- Render four variants at 720p with a consistent seed family.
- Pick the match and use it as your reference.
- Reuse that portrait across the campaign.
This is provider-agnostic: the same pattern works on Kling I2V and Veo.
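The brief-and-variants steps above can be sketched as a pure request builder. The `PortraitBrief` shape, the prompt template, and the seed-offset scheme are all assumptions; swap in whatever still-image model and parameters your provider exposes:

```typescript
// Hypothetical sketch of the portrait workflow: one brief, four 720p
// variants sharing a seed family (base seed plus small offsets) so the
// renders stay consistent. All field names here are illustrative.
interface PortraitBrief {
  age: string;
  build: string;
  wardrobe: string;
  expression: string;
  lighting: string;
}

function portraitPrompt(b: PortraitBrief): string {
  return `portrait of a person, ${b.age}, ${b.build}, wearing ${b.wardrobe}, ` +
         `${b.expression} expression, ${b.lighting} lighting, photorealistic`;
}

function portraitVariants(b: PortraitBrief, baseSeed: number) {
  return [0, 1, 2, 3].map(i => ({
    prompt: portraitPrompt(b),
    image_size: { width: 1280, height: 720 }, // 720p stills
    seed: baseSeed + i, // seed family: adjacent seeds, identical prompt
  }));
}
```

Pick the best of the four renders, use it as the I2V reference, and keep the winning seed on file so the same face can be regenerated for the rest of the campaign.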
Recovery: licensed stock
If you need a real human and have budget, license from a rights-cleared library whose model releases cover AI use. Not every stock license covers AI-derived outputs; read before you buy. Licenses that do cover it spell it out under "generative AI inputs" or a similar clause. Note that licensed stock is legally safe but the classifier does not know that: it still flags high-risk compositions.
Recovery: identity-verified routes
For a real person you have the right to depict (a founder, a hired talent with a signed release), identity-verified routes exist: you upload consent documentation and a verification still, and ByteDance reviews each case individually. For recurring use, reach out to the provider's partnerships channel.
A working fallback
```typescript
import { fal } from "@fal-ai/client";

async function safeI2V(refUrl: string, prompt: string) {
  try {
    const r = await fal.subscribe("bytedance/seedance-2.0/image-to-video", {
      input: { image_url: refUrl, prompt, duration: 10, resolution: "720p" }
    });
    return r.data.video.url;
  } catch (err: any) {
    const msg = String(err?.message ?? err);
    if (msg.includes("real-person") || msg.includes("IP")) {
      // Reference triggered a guardrail: retry as text-to-video, no image.
      const fb = await fal.subscribe("bytedance/seedance-2.0/text-to-video", {
        input: { prompt, duration: 10, resolution: "720p" }
      });
      return fb.data.video.url;
    }
    throw err;
  }
}
```
The fallback drops the reference. If the prompt itself triggered the filter, rewrite toward an archetype.
What does not work
- Blurring the face. Still reads as real-person centered.
- Cropping to the jaw or one eye. Same outcome.
- Describing a celebrity without naming them. The classifier still triggers on the generated likeness.
- Routing through a different host. Filter is in the model weights.
Two filters, model layer, applied everywhere Seedance 2.0 runs. Generate synthetic references for characters. License stock for real people. Describe archetypes, not trademarks. Same recovery paths on Kling and Veo.