Gen Art Studios
AI Voice Cloning: When It's Useful, When It's Not, How We Use It
AI Music · 7 min read
Voice cloning is one of the most powerful — and most ethically loaded — tools in the AI production stack. Here's how we decide when to use it, how we get consent, and what guardrails we put in place.

The four uses we actually approve

We use voice cloning for four specific things, with a clear consent paper trail for each:

  1. Cloning the client's own voice for content they create themselves but don't have time to record. The voice belongs to them; the use case is them; the deletion right is theirs.
  2. Cloning a hired voice actor's voice under a contract that explicitly licenses cloning, with usage caps and revocation rights.
  3. Synthetic voices that don't claim to be anyone real — a "narrator" or "host" character with a unique synthetic identity used consistently across a brand.
  4. Voice cloning for accessibility — restoring or extending the voice of someone who has lost it, with their consent or their family's.
That's it. We turn down everything else.
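The consent paper trail behind each approved use can be made concrete. Here is a minimal sketch of what a consent record might capture; the class and field names are hypothetical, not tied to any particular tool or contract template:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical consent record for an approved cloning project.
# Fields mirror the paper trail described above: who owns the voice,
# what the clone may and may not be used for, and when consent ends.
@dataclass
class VoiceConsent:
    voice_owner: str            # person whose voice is cloned
    use_case: str               # one of the four approved categories
    permitted_uses: list[str]   # explicitly licensed uses
    prohibited_uses: list[str]  # explicitly excluded uses
    expires: date               # consent is time-bound, not open-ended
    revocable: bool = True      # owner can revoke and trigger deletion

consent = VoiceConsent(
    voice_owner="Client A",
    use_case="own-voice content",
    permitted_uses=["weekly podcast narration"],
    prohibited_uses=["advertising", "third-party content"],
    expires=date(2026, 1, 1),
)
```

The point of a structured record rather than a loose agreement is that revocation and expiry become checkable conditions, not things someone has to remember.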

What we won't do

  • Clone a public figure's voice, even for satire — risk-reward is too lopsided.
  • Clone a deceased person's voice without explicit estate authorization.
  • Produce content where a real person's cloned voice says things they would object to.
  • Build voice clones for impersonation, scams, or manipulation, ever.
These aren't just legal positions; they're business positions. The downside of getting one of these wrong is reputational damage that no project fee covers.

The workflow

For an approved cloning project our standard workflow is:

  1. Written consent and scope — what the voice will be used for, what it won't, how long, and the revocation process.
  2. Reference recording session — 5-10 minutes of clean audio in a quiet room, varied content (declarative, conversational, emotional).
  3. Train the clone — typically takes minutes with current tools.
  4. A/B test — generate 5-10 sample lines and have the voice owner approve the quality and accuracy.
  5. Production use — generate content with the clone, with the owner reviewing key deliverables.
  6. Audit log — every generated audio file is logged so the owner can review what was made on their behalf.
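Step 6 above can be as simple as an append-only log with one entry per generated file. This is an illustrative sketch, assuming a JSON Lines file; the path and field names are our own choices, not part of any vendor's API:

```python
import json
from datetime import datetime, timezone

# Minimal audit log: append one JSON line per generated audio file so
# the voice owner can later review everything made on their behalf.
def log_generation(log_path, voice_id, output_file, script_excerpt):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "voice_id": voice_id,
        "output_file": output_file,
        # Enough of the script to identify the content without storing it all.
        "script_excerpt": script_excerpt[:120],
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

entry = log_generation(
    "audit.jsonl", "client-a-voice", "ep01.mp3", "Welcome to the show..."
)
```

Append-only JSON Lines keeps the log trivially greppable and makes it easy to hand the owner a complete export on request.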

The honest tradeoff

Voice cloning saves enormous amounts of time. It also creates a real artifact — a model that exists, that could be misused, that someone might want to delete later. We treat that artifact with the same seriousness as any other identifiable data.

If your AI vendor doesn't have clear answers about consent, deletion, and audit, walk away from the engagement. The tools are widely available; integrity isn't.

Voice Cloning · Ethics · ElevenLabs · Production


Frequently Asked Questions

Is it legal to clone someone's voice?
Cloning a voice with the explicit consent of the person whose voice it is, for purposes they've agreed to, is legal in most jurisdictions. Cloning without consent is increasingly restricted — Tennessee, California, the EU AI Act, and several other jurisdictions now have specific laws against unauthorized voice replicas. Always start with written consent.

How convincing is a cloned voice?
For a trained voice with 3-5 minutes of clean reference audio, ElevenLabs and similar tools produce output that even close acquaintances often can't distinguish from the real person. The remaining tells are usually around emotion in long-form delivery, not in the voice timbre itself.

Is voice cloning worth it for creators?
If you create content at scale, yes — it can save hours per week. The catch is that you need to be comfortable with the existence of a model trained on your voice. If you ever want to delete it, get a tool with clear deletion / right-to-be-forgotten support.
