Date: October 4, 2023

The State of AI in Media and Entertainment

Insights from IBC2023

It's hard to believe nearly a year has passed since Generative AI's coming-out party. The WGA and SAG-AFTRA strikes had AI at the center of their negotiations. Industry conferences, including the recently concluded IBC 2023, spent many hours discussing, publicizing, and revealing Generative and Predictive AI products. The investor community is intensely focused on it, investing a reported $91.9B in the industry last year. Academics and regulators try to understand and predict AI's impact and influence on society, and critics wonder if Gen AI might be this year's NFT. AI certainly has people's attention.

Attention is one thing, but utility is another. The questions people need to ask are: how useful is it, and what are the risks? We've written before on AI in M&E here, here, and here, discussing many of these issues. AI was a significant topic of discussion at IBC, and here are a few observations on the state of AI in M&E in 2023.

The Promise of AI for M&E

John Footen, Managing Director for Media & Entertainment at Deloitte, was one of many IBC 2023 speakers to express the industry's hopes for AI. "AI has possibilities in production/post-production and distribution. An AI that creates or adjusts content based upon who is in the room, the story, the culture, and the language has a future in the industry." He said Generative AI could, for example, blur or remove blood from horror movies to allow them to be shown in markets where such content is prohibited. Thus, AI could simplify creating optimized releases for any market or platform and be built into smart TV operating systems.

Other AI applications include targeting advertising based on who's watching a show, analyzing data to make better content recommendations to consumers, translating languages in real time, and generating new audio, video, and effects content on demand.

Some of these technologies are already coming to market. The new Ryoo Seung-wan film "Smugglers" uses Flawless' Generative AI tools, referred to as "visual translation," to graphically modify mouth movements so the English dub syncs precisely with the Korean actors' faces; it is the first full-length foreign-language film to use the technology.

AI Limitations

Clint Eastwood's iconic line from the film "Magnum Force" was, "A man's got to know his limitations." This admonition certainly applies to the adoption of AI technology, especially now. While products that use AI are being released, experts warn that relying too heavily on new technologies presents risks that must be thoroughly understood before making a significant commitment.

Garrett Goodman from Papercup put it this way. "The barriers of entry (for AI) into localization are low," he said, "because there are plenty of models available for simple language translation. But it's more complicated than that. The source data used in training is an important factor because, with incomplete models, it's easy for a machine to miss something a human would catch instantly." He went on to say that even with their investment in developing smaller, specialized models using voice actors and experienced translators, when it comes to localization, "the human touch" remains a critical component of their product delivery.

The new WGA agreement establishes guidelines for the use of AI in the writing process and recognizes both its challenges and risks:

  • AI can't write or rewrite literary material, and AI-generated material will not be considered source material.
  • A writer can use AI when performing writing services if the company consents, provided that the writer follows applicable company policies. Still, the company can't require the writer to use AI software (e.g., ChatGPT) when performing writing services.
  • The company must disclose to the writer if any materials given to the writer have been generated by AI or incorporate AI-generated material.
  • The WGA reserves the right to assert that exploitation of writers' material to train AI is prohibited by the MBA or other law.

The Early Results Are In

AI has begun to transform our lives and our businesses. The extent of its impact on M&E has yet to be determined, but, as we have seen, content localization is one sector already realizing the benefits. Having provided manual content analysis and age-rating services to the industry for years, Spherex recognized the promise AI/ML held for improving efficiency, expanding territories, and enhancing workflows. After three years of development, Spherexgreenlight™ was released in October 2021 and revolutionized how content is prepared for international distribution. Since then, the Spherex AI platform has won numerous industry awards and has been approved by regulators worldwide as the only public tool available to any content company seeking regulatory approval for a film or TV title.

If you want to learn how AI can help you distribute your content globally, please click here or email sales@spherex.com.

Related Insights

Spherex Wins MarTech Breakthrough Award for Best AI-Powered Ad Targeting Solution

The annual MarTech Breakthrough Awards are conducted by MarTech Breakthrough, a leading market intelligence organization that recognizes the world’s most innovative marketing, sales, and advertising technology companies. 

This year’s program attracted over 4,000 nominations from across the globe, with winners representing the most innovative solutions in the industry. This year’s roster includes Adobe, HubSpot, Sprout Social, Cision, ZoomInfo, Optimizely, Sitecore, and other top technology leaders, alongside in-house martech innovations from companies such as Verizon and Capital One.

At the heart of this win is SpherexAI, our multimodal platform that powers contextual ad targeting at the scene level. By analyzing video content across visual, audio, dialogue, and emotional signals, SpherexAI enables advertisers to deliver messages at the most impactful moments. Combined with our Cultural Knowledge Graph, the platform ensures campaigns resonate authentically across more than 200 countries and territories while maintaining cultural sensitivity and brand safety.

“Spherex is leveraging its expertise in video compliance to help advertisers navigate the complexities of brand safety and monetization,” Teresa Phillips, CEO of Spherex, said in a statement. “SpherexAI is the only solution that blends scene-level intelligence with deep cultural and emotional insights, giving advertisers a powerful tool to ensure strategic ad placement and engagement.”

This recognition underscores Spherex’s commitment to building the next generation of AI solutions where cultural intelligence, relevance, and brand safety define success. The award also highlights the growing importance of cultural intelligence in global advertising. As audiences consume more content across borders and devices, brands need solutions that go beyond surface-level targeting to connect meaningfully with viewers. SpherexAI provides that bridge, empowering advertisers to scale campaigns that are not only effective but also contextually relevant and culturally respectful.

Read Now

YouTube Thumbnails Can Get You in Trouble

Here’s Why Creators Should Pay Attention

When we talk about content compliance on YouTube, most people think of the video content itself — what’s said, what’s shown, and how it’s edited. But there’s another part of the video that carries serious consequences if it violates YouTube policy: the thumbnail.

Thumbnails aren’t just visual hooks — they’re promos and they’re subject to the same content policies as videos. According to YouTube’s official guidelines, thumbnails that contain nudity, sexual content, violent imagery, misleading visuals, or vulgar language can be removed, age-restricted, or lead to a strike on your channel. Repeat offenses can even result in demonetization or channel termination. That’s a steep price to pay for what some may think of as a simple promotional image.

The Hidden Risk in a Single Frame

The challenge? The thumbnail is often selected from the video itself — either manually or auto-generated from a frame. Creators under tight deadlines or managing high-volume channels may not take the time to double-check every frame. They may let the platform choose it automatically. This is where things get risky.

A few seconds of unblurred nudity, a fleeting violent scene, or a misleading expression of shock might seem harmless in motion. But when captured as a still image, those same moments can trigger YouTube’s moderation systems — or worse, violate the platform’s Community Guidelines.

Let’s say your video includes a horror scene with simulated gore. It might pass YouTube’s rules with an age restriction. But if the thumbnail zooms in on a blood-splattered face, that thumbnail could be removed, and your channel could be penalized. Even thumbnails that are simply “too suggestive” or “misleading” can get flagged.

Misleading Thumbnails: Not Just Clickbait — a Violation

Another common mistake is using a thumbnail that implies something the video doesn’t deliver — for example, suggesting nudity, shocking violence, or sexually explicit content that never appears in the video. These aren’t just bad for audience trust; they’re a clear violation of YouTube’s thumbnail policy.

Even if your content is compliant, the wrong thumbnail can cause very real problems.

The Reality for Content Creators

It’s essential to recognize that YouTube’s thumbnail policy doesn’t exist in isolation. It intersects with other rules around child safety, nudity, vulgar language, violence, and more. A thumbnail with vulgar text, even if the video is educational or satirical, may still result in age restrictions or removal. A still frame with a suggestive pose, even if brief and unintended in the video itself, can be enough to get flagged.

And for creators monetizing their work, especially across multiple markets, the risk goes beyond visibility. A flagged thumbnail can reduce ad eligibility, limit reach, or cut off monetization entirely. Worse, a pattern of violations can threaten a channel’s long-term viability.

What’s a Creator to Do?

First, you need to know how to spot a problem and what to do about it. Second, you need to know whether the changes you make might affect the video's acceptance in other markets or countries. Armed with that knowledge, you can manually scrub through your video looking for risky frames, review policies, and try to stay up to date on the nuances of what YouTube considers "gratifying" versus "educational" or "documentary." But doing this at scale, especially for a growing content library, is overwhelming.

That’s where a tool like SpherexAI can help.

A Smarter Way to Stay Compliant

SpherexAI uses frame-level and scene-level analysis to flag potential compliance issues — not just in your video, but in any frame that could be selected as a thumbnail. Using its patented knowledge graph, which includes every published regulatory and platform rule, it prepares detailed, accurate edit decision lists that tell you not only what the problem is but also what to do about it for each of your target audiences. Whether you're publishing to a single audience or distributing globally, SpherexAI checks your content against YouTube's policies and localized cultural standards.
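As a rough illustration of what a per-audience edit decision list might contain, here is a minimal sketch; the fields, timecodes, markets, and actions below are our own assumptions for illustration, not Spherex's actual output format.

```python
from dataclasses import dataclass

@dataclass
class EDLEntry:
    start_tc: str   # timecode where the flagged scene begins
    end_tc: str     # timecode where it ends
    issue: str      # e.g. "graphic violence", "vulgar language"
    audience: str   # target market the rule applies to
    action: str     # suggested remedy: "cut", "mute", "age-restrict"

def entries_for_audience(edl, audience):
    """Filter a combined edit decision list down to one target audience."""
    return [e for e in edl if e.audience == audience]

# Invented example: one scene flagged differently per market.
edl = [
    EDLEntry("00:12:04:10", "00:12:09:02", "graphic violence", "DE", "cut"),
    EDLEntry("00:12:04:10", "00:12:09:02", "graphic violence", "US", "age-restrict"),
    EDLEntry("00:31:40:00", "00:31:44:12", "vulgar language", "IN", "mute"),
]

print(len(entries_for_audience(edl, "DE")))  # 1
```

The same scene can carry different remedies for different markets, which is why the audience field belongs on each entry rather than on the list as a whole.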

For creators trying to grow their brand, monetize their work, and stay in good standing with platforms, that kind of precision can mean the difference between success and a takedown notice.

Want to know if your content is at risk? Learn how SpherexAI can help you protect your channel and optimize every frame — including the thumbnail. Contact us to learn more.

Read Now

Automating Peace of Mind: Navigating YouTube's Global Guidelines with SpherexAI

For media companies distributing content across YouTube, compliance is no longer just a legal requirement—it’s a prerequisite for discoverability, monetization, and channel survival. YouTube enforces strict policies governing child safety, vulgarity, graphic content, and cultural sensitivity. For content owners, ensuring compliance across multiple categories and geographies is a complex and labor-intensive process. To address this issue, SpherexAI provides a scalable solution tailored for any content creator or owner.

YouTube’s Expanding Compliance Landscape

YouTube’s Community Guidelines cover a wide array of regulated categories. Content can be removed or age-restricted—and creators may face penalties—if videos violate policies on:

  • Nudity and sexual content: Content that includes sexually gratifying imagery or non-consensual sexualization is prohibited.
  • Violence and graphic imagery: Footage showing serious injury, bodily fluids, or torture intended to shock viewers can be flagged or removed.
  • Child safety: Content that exploits minors, includes inappropriate family content, or features children in dangerous stunts is not allowed.
  • Illegal or regulated goods: YouTube restricts promotion of firearms, narcotics, and gambling services, among others.

Managing compliance with each of these categories—especially when content is global and multilingual—is a logistical challenge for distributors.

Enter SpherexAI: Precision Compliance Automation at Scale

SpherexAI applies multimodal AI to analyze video content across dialogue, visuals, audio, and metadata. It detects compliance issues not only by scanning for policy violations but also by identifying subtle cultural or regional sensitivities that could result in content removal or limited distribution.

For example, the platform flags:

  • Dialogue with excessive profanity or sexual references, aligned with YouTube’s vulgar language policy.
  • Visuals showing partial nudity, firearm use, or dangerous stunts, which may trigger strikes or age restrictions.
  • Culturally sensitive depictions—such as religious imagery or portrayals of death—that may violate local norms and platform rules.

SpherexAI outputs include timestamped alerts and severity levels, allowing content owners to make targeted edits rather than performing full manual reviews.
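To show how timestamped, severity-ranked alerts support targeted edits rather than full manual reviews, here is a minimal sketch; the record layout and the 1-5 severity scale are assumptions for illustration, not Spherex's actual schema.

```python
# Hypothetical alert records: (timestamp in seconds, category, severity 1-5).
alerts = [
    (72.5, "vulgar language", 2),
    (431.0, "partial nudity", 4),
    (1105.3, "dangerous stunt", 5),
]

def needs_edit(alerts, threshold=4):
    """Keep only the alerts severe enough to require a targeted edit."""
    return [(t, cat) for (t, cat, sev) in alerts if sev >= threshold]

# An editor jumps straight to the flagged moments instead of rewatching.
for t, cat in needs_edit(alerts):
    mins, secs = divmod(int(t), 60)
    print(f"{mins:02d}:{secs:02d}  {cat}")
```

Filtering by severity is what turns a long scan report into a short worklist: low-severity items can wait, high-severity items get fixed before publishing.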

Equal Rules for All Creators

Whether you’re a major studio releasing film clips or a digital-first creator uploading your first series, YouTube holds all content publishers to the same standards. Community Guidelines are enforced platform-wide, regardless of a channel’s size, history, or market familiarity.

This presents a significant challenge for new entrants. Many first-time creators or distributors may be unaware that a thumbnail featuring misleading imagery, a prank involving minors, or a scene with unedited drug references can lead to demonetization or a channel strike. But YouTube’s enforcement is uniform: content that violates policy is subject to the same sanctions across the board.

SpherexAI helps level the playing field by equipping every content team—regardless of experience—with access to the same tools used by top studios. Its patented knowledge graph, built on over a decade of regulatory insight and expert human annotation, powers its AI models with unmatched precision. The result: faster reviews, greater accuracy, and fewer costly mistakes.

Cross-Platform, Region-Aware, and Regulation-Ready

Unlike tools focused on metadata or age ratings alone, SpherexAI delivers:

  • Granular analysis: Scene-by-scene breakdowns for violence, vulgarity, sexual content, and self-harm risks.
  • Cultural intelligence: Predictive models assess content suitability across 240+ territories using Spherex’s proprietary “cultural distance” framework.
  • Workflow integration: The platform’s API allows integration into existing supply chains and CMS platforms for automated review at scale.
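To make the workflow-integration point concrete, here is a sketch of what an automated-review request to such an API might contain; the field names and values are hypothetical, not Spherex's published interface.

```python
import json

# Hypothetical request body for submitting a title for automated review.
# Every field name here is illustrative, not Spherex's actual API.
def build_review_request(video_id, markets, platform="youtube"):
    return {
        "video_id": video_id,
        "platform": platform,          # platform whose policies to check
        "markets": markets,            # territories whose norms to check
        "outputs": ["alerts", "edl"],  # timestamped alerts plus edit lists
    }

payload = build_review_request("ep-0042", ["US", "DE", "IN"])
print(json.dumps(payload, indent=2))
```

A CMS or media supply chain would build one such request per title and ingest the returned alerts and edit lists back into its own review queue.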

Reducing Risk, Unlocking Revenue

YouTube’s monetization eligibility hinges on content safety. Channels can be demonetized or de-prioritized in search and recommendation if flagged for repeated violations. Well-known creators Logan Paul, ScreenCulture, and LH Studios have all been sanctioned for violations. By proactively identifying and resolving compliance issues before publishing, SpherexAI empowers content owners to:

  • Avoid strikes or takedowns
  • Retain monetization rights
  • Accelerate time-to-market
  • Protect brand reputation

Conclusion

YouTube is a dynamic platform for global content distribution, and it requires rigorous adherence to evolving content standards. For studios, broadcasters, and new creators alike, SpherexAI offers an AI-powered safety net, automating policy compliance while preserving creative integrity. When SpherexAI is integrated into your production workflow, you can publish confidently at scale, in full compliance, and with minimal brand risk.

Ready to streamline compliance and expand your YouTube strategy globally?

Book a demo or visit spherex.com to learn how SpherexAI can support your team.

Read Now