Overview: The Controversy That Won’t Go Quietly
The unveiling of an AI-generated performer named Tilly Norwood has ignited a fierce debate within Hollywood about the future of creativity, employment, and consent. SAG-AFTRA, the actors’ union, condemned the project, insisting that the artificial talent cannot replicate human emotion and that its development relied on performances by real actors without permission. As studios weigh the potential of AI to expand their creative horizons, industry leaders and fans are asking: where does the line between art and automation lie?
What Is Tilly Norwood?
Created by Eline Van der Velden, an actor and producer, Tilly Norwood was unveiled at the Zurich Film Festival as part of Xicoia, an AI talent studio project. Van der Velden described Norwood as a potential standout, “the next Scarlett Johansson or Natalie Portman,” and claimed multiple studios had expressed interest in using the AI performer. She argued that AI can unleash boundless creativity by removing budgetary limits, framing Norwood as a creative work rather than a replacement for human talent.
Norwood’s presence extends beyond the festival stage. An Instagram profile launched in May presents the AI as a London-based aspiring actress. The account, which has accumulated more than 36,000 followers, features posts of Norwood sitting in a café, writing essays, and acting out other everyday scenes that simulate human activity. Later posts show acting tests and an action-movie performance, accompanied by captions such as: “I may be AI, but I’m feeling very real emotions right now. I am so excited for what’s coming next!”
Who Has Responded, And What They’re Saying
The reaction to Norwood has been swift and polarized. The SAG-AFTRA statement framed the AI project as a broader threat to actors’ livelihoods, arguing that synthetic talent cannot genuinely emote or connect with audiences in the way real performers can. The union’s stance reflects a long-running concern about consent, residuals, and the use of living actors’ performances to train AI models.
High-profile industry voices quickly joined the chorus of concern. Whoopi Goldberg, appearing on The View, warned audiences against embracing AI-generated performers: “You can always tell them from us. We move differently, our faces move differently, our bodies move differently.” The remark underscores the perceived gap between synthetic imagery and the nuanced expressiveness developed through real-life experience.
Emily Blunt, an Oscar-nominated actor, told Variety she had not heard of Norwood before the interview and reacted with alarm when the concept was described to her. “I don’t know how to quite answer it, other than to say how terrifying this is,” Blunt said, adding, “Come on, agencies, don’t do that. Please stop. Please stop taking away our human connection.”
Other performers have weighed in as well. Melissa Barrera urged actors whose agents sign on to represent Norwood to cut ties: “drop their a$$. How gross, read the room.” Mara Wilson drew attention to the ethical dimensions, asking what happens to the “hundreds of living young women whose faces were composited together to make her.” The underlying concern from these voices is clear: AI should not substitute for real human experience and consent in performance careers.
Why This Debate Matters for the Industry
At the heart of the debate is a tension between innovation and human-centric storytelling. Proponents of AI in film argue that synthetic talent can expand creative possibilities, reduce production costs, and shield performers from unsafe working conditions. Critics, however, contend that AI-based performers lack the essential spontaneity and emotional depth of living actors and may undermine consent-driven art by repurposing real people’s faces and performances without proper compensation.
Van der Velden has framed Norwood as a “creative work”—equivalent in some ways to animation or puppetry—yet the comparison fuels questions about ownership, authorship, and compensation. If AI is used to generate characters who resemble living actors, should there be a framework that protects performers’ rights and provides appropriate recognition or remuneration?
What Comes Next?
The Zurich Film Festival appearance and subsequent discussions have placed Norwood at the center of a larger industry reckoning about AI’s place in storytelling. Industry observers say we are only at the beginning of conversations about how to govern AI-created characters, how to ensure transparency about how they are trained, and what safeguards should apply to the use of real actors’ images and performances. The immediate task is to define ethical guidelines that balance artistic experimentation with the rights and livelihoods of performers.
Bottom Line
As studios explore AI-driven storytelling, the voice of human creators—captured in the concern voiced by SAG-AFTRA and echoed by actors across the industry—remains a powerful force. The question is not only what AI can do, but what it should do, and who benefits when creativity is shared, consented to, and fairly compensated.