When AI Makes Movies —
and Why That’s Not the Interesting Part

AI Governance · EU AI Act · Insurance · Content Generation

My colleagues in marketing at Versicherungskammer recently produced a fully AI-generated short film for our annual sales partner kick-off. It tells the story of an astronaut floating in space, a golden compass strapped to her wrist, and a flashback to the grandfather who inspired her to reach for the stars. A toy rocket launches into an evening sky. The word “JETZT” — “NOW” — glows among the stars in the final frame.

Watch the film here →

It’s cinematic, emotional, and genuinely impressive. A year ago, this quality would have required a production crew, actors, and a serious budget. Today, it was generated by AI.

And that’s exactly why it’s worth pausing for a moment.

The visible and the invisible

AI-generated content — videos, images, text — is the most visible layer of what AI can do. It’s what shows up on LinkedIn feeds and in keynote demos. It’s impressive, and it’s accessible: anyone can see the output and immediately understand what happened.

But in insurance, the AI applications that will drive real transformation are largely invisible. They sit deep inside the value chain, in places most people never see.

Consider risk assessment and pricing: AI models that evaluate a person’s risk profile and determine what they pay for life or health insurance. These decisions directly affect access to essential financial protection. Or look at claims handling, where AI can influence how fast and how accurately a claim is processed. Or the software development lifecycle itself, where AI is increasingly used to write, test, and review the code that runs our core systems.

None of this makes for a cinematic short film. But these are the places where AI decisions affect whether someone gets coverage, how much they pay, and how quickly they’re helped when it matters most.

Different layers, different risk

The narrator in the film says something that stuck with me: “It was never about the technology for him — it was always about finding the best path.” That line captures something essential about where our industry stands with AI right now.

An AI-generated video is a creative tool with a manageable risk profile. If a frame looks slightly off or the narrative doesn’t land, it’s only a marketing asset: you adjust, you regenerate, you move on.

An AI model that determines what someone pays for health insurance is a fundamentally different category. The risk isn’t aesthetic. It’s about fairness, transparency, and accountability. It’s about whether the people affected by these decisions can understand and challenge them.

This is where governance becomes essential — not as a compliance checkpoint at the end, but as a design principle from the start. The question isn’t “can we deploy this model?” It’s “have we built it in a way that we can explain, monitor, and correct?”

Where the EU AI Act draws the line

The EU AI Act provides a useful lens here. Content generation — the kind of AI that produced the kick-off film — falls into the limited-risk category. It triggers transparency obligations under Article 50 (audiences should know the content is AI-generated), but not the regulation’s more demanding requirements.

AI systems used for risk assessment and pricing in life and health insurance are a different matter. Annex III of the EU AI Act explicitly classifies these as high-risk — with requirements around data quality, human oversight, documentation, and ongoing monitoring. Not because the regulator wants to slow innovation down, but because these systems affect people’s access to essential services in ways that demand accountability.

The gap between limited-risk content generation and high-risk decision-making is where the real work of AI governance happens. And it’s where the insurance industry needs to invest — not just in technology, but in the organizational capability to deploy AI responsibly at scale.

The compass points inward

The film ends with the astronaut looking outward — toward the stars, toward possibility. It’s a beautiful image.

But the compass that insurance needs for AI doesn’t point outward. It points inward: into our processes, our data, our decision-making infrastructure. Into the less glamorous question of how we build trust at the intersection of automation and human impact.

The film is a strong beginning. The real journey is just getting started. Now — JETZT — is the right time.

Oliver Duschka works in AI and Data Governance at Versicherungskammer, Germany’s largest public insurer, and holds a PhD in Computer Science from Stanford University. The views expressed here are his own.