Is This Hollywood’s New Weapon Against AI?

Another day, another artificial intelligence issue in Hollywood. We’ve been covering how AI impacts our industry (and how the industry has been fighting back) since the beginning.

And now, there’s a new fighter entering the ring.

A startup called LightBar has just been announced, and it says it will help studios catch AI models that generate content trained on copyrighted material. How? According to Deadline, it's crowdsourcing users to act as internet sleuths.

The company is enlisting everyday internet users to test AI platforms and find instances where copyrighted characters or content slip through.

Users can submit examples of AI-generated content that appears to violate copyright, and, if verified, earn around $2 per submission, depending on what they uncover.

The company’s founder remains anonymous. Deadline says he has a fintech background and previously raised $50 million for another venture. (We suppose staying under the radar makes sense when you’re poking tech giants like this.)

LightBar has already run proof-of-concept tests using Paramount and Warner Bros. Discovery content.

Deadline says the company’s researchers were among the first to spot that Google started blocking Disney character prompts after Disney sent a cease-and-desist letter in December.

In that letter, Disney wrote (per Variety):

“Google operates as a virtual vending machine, capable of reproducing, rendering, and distributing copies of Disney’s valuable library of copyrighted characters and other works on a mass scale. And compounding Google’s blatant infringement, many of the infringing images generated by Google’s AI Services are branded with Google’s Gemini logo, falsely implying that Google’s exploitation of Disney’s intellectual property is authorized and endorsed by Disney.”

LightBar wants to turn these findings into revenue by helping studios build evidence for lawsuits, settlements, or licensing negotiations.

The business model is similar to bug bounty programs in cybersecurity, in which companies pay researchers to identify vulnerabilities in their systems before bad actors exploit them.

LightBar is doing something similar, except the vulnerabilities are AI models that generate copyrighted content without permission. Instead of studios having to monitor every AI platform themselves, the researchers are actively testing prompts and documenting what slips through.

With Disney’s billion-dollar OpenAI deal allowing Sora users to generate videos featuring Disney characters, LightBar envisions using its technology to monitor usage in real time and ensure studios are compensated.

Whether studios will actually hire LightBar remains an open question. They may already be building similar tools in-house.

Hollywood has been ramping up legal action against AI companies, and the stakes keep climbing. As copyright questions around AI continue to evolve in courtrooms and before regulators, having concrete evidence of how AI models reproduce protected material could become valuable.

The legal framework governing AI and creative work is still being written in real time. Studios are fighting back more aggressively than they did in earlier phases of this conflict, and startups like LightBar are hoping there’s money to be made in that fight.

Let us know what you think.

Author: NFS Staff
This article comes from No Film School and can be read on the original site.