IBC 2025 Preview: AI transforms content production and creative workflows

By NCS Staff September 4, 2025

Subscribe to NCS for the latest news, project case studies and product announcements in broadcast technology, creative design and engineering delivered to your inbox.

As broadcast professionals converge on Amsterdam for IBC 2025 this September, tools with artificial intelligence capabilities are reshaping creative workflows across the content production pipeline.

Industry executives report AI tools have moved beyond experimental phases into production processes, automating time-intensive tasks while expanding creative possibilities.

From scripting to final delivery

AI systems now assist content creators throughout the entire production lifecycle, from initial story development through final platform delivery. These applications address labor-intensive processes that traditionally required extensive human resources and extended timelines.

“AI is transforming content production end-to-end: automating scripting, enriching live production with real-time tagging, and accelerating post with instant highlights, edits, and localization,” said Ross Tanner, senior vice president for EMEA at Magnifi. “For sports and media, it turns days of work into minutes, enabling personalized, platform-ready content at scale while giving creators more time to focus on storytelling.”

The technology extends beyond simple automation to what industry professionals describe as creative partnership.

AI-driven systems also provide real-time assistance during live broadcasts, handling technical elements so production teams can stay focused on the program itself.

“Beyond a tool to automate tasks, AI is increasingly a creative partner for live production teams,” said Roberto Musso, technical director at NDI. “The use of AI in workflows has allowed teams to simplify the content production process by enlisting tools that offer functions such as automatic camera tracking, optimizing audio levels, and generating graphics in real-time.”

A significant development in AI-assisted production involves agentic workflows, in which AI systems execute complex, multi-step tasks with minimal human oversight while maintaining editorial quality standards.

“AI, especially in the form of agentic workflows, is now accelerating every stage of the content pipeline — from automated story discovery and script generation to multi-platform clipping and post-production,” said Jonas Michaelis, CEO of Qibb. “These systems can quickly tailor content for different audiences, apply real-time metadata, and format it for a range of channels—tasks that typically require large, specialized teams.”

However, industry executives emphasize that human oversight remains essential, particularly in news and compliance-sensitive content, so that AI acceleration does not compromise editorial quality, compliance or creative intent.

“Even with AI doing more heavy lifting, having a ‘human in the loop’ remains essential to ensuring editorial quality, compliance and creative intent while speeding up time-to-air,” Michaelis said.

News production and post-production acceleration

AI applications show particular impact in news operations, where speed and accuracy requirements create specific challenges for traditional production methods. Automated systems handle routine tasks like transcription, translation and metadata enrichment while journalists focus on reporting and analysis.

The technology addresses increasing demand for multi-platform content delivery, where single stories must be formatted for television, digital platforms and social media channels simultaneously.

“AI is moving beyond experimentation into core production processes, with the greatest impact seen in accelerating timelines and reducing hours spent on manually intensive tasks,” said Craig Wilson, product evangelist at Avid. “In news, this includes automating transcription, translation, and metadata enrichment to support faster story creation and multi-platform delivery.”
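For illustration, a news pipeline of the kind described, chaining transcription, translation and metadata enrichment ahead of multi-platform delivery, can be sketched as follows. This is a minimal, hypothetical outline: the transcribe and translate steps are stand-in stubs, not any vendor's actual API, and the keyword tagging is a toy placeholder for a real enrichment service.

```python
def transcribe(audio_text: str) -> str:
    # Stand-in for a speech-to-text model call.
    return audio_text

def translate(text: str, target_lang: str) -> str:
    # Stand-in for a machine-translation call.
    return f"[{target_lang}] {text}"

def enrich_metadata(text: str) -> dict:
    # Toy keyword extraction standing in for automated tagging.
    keywords = [w.strip(".,").lower() for w in text.split() if len(w) > 6]
    return {"keywords": sorted(set(keywords)), "word_count": len(text.split())}

def prepare_story(audio_text: str, target_langs: list[str]) -> dict:
    # One source story fans out to transcript, translations and metadata,
    # ready for formatting across TV, digital and social channels.
    transcript = transcribe(audio_text)
    return {
        "transcript": transcript,
        "translations": {lang: translate(transcript, lang) for lang in target_langs},
        "metadata": enrich_metadata(transcript),
    }

story = prepare_story("Broadcast professionals converge on Amsterdam for IBC.", ["de", "nl"])
print(story["metadata"]["word_count"])  # 7
```

The point of the structure is the fan-out: once the transcript exists as data, every downstream deliverable is derived from it automatically rather than re-edited by hand.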

In post-production environments, AI applications focus on content discovery and editing acceleration. Systems can identify specific clips within extensive video archives, eliminating time-consuming manual review processes that traditionally slow editorial workflows. The impact extends to performance refinement and localization tasks, where AI systems adapt content for different markets and technical requirements.

“AI enables faster post-production workflows by automating tasks like video indexing and content discovery,” said Frederic Petitpont, CTO and co-founder of Moments Lab. “AI and AI agents significantly reduce the time editors spend, for example, scrubbing through footage to find exact clips. The result is creative teams significantly increasing their video output.”

Wilson notes that post-production AI applications enable more efficient editing, localization, and performance refinement, giving creative teams more time to focus on storytelling rather than technical tasks.
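The content-discovery idea reduces to querying an indexed archive instead of scrubbing footage. A minimal sketch, with an illustrative schema rather than any product's actual index format, might look like this:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    # Illustrative fields: asset, timecode range, and AI-generated tags.
    asset_id: str
    start_s: float
    end_s: float
    tags: set[str]

def find_clips(index: list[Clip], query_tags: set[str]) -> list[Clip]:
    # Return clips whose tags contain every queried tag,
    # tightest (shortest) matches first.
    hits = [c for c in index if query_tags <= c.tags]
    return sorted(hits, key=lambda c: c.end_s - c.start_s)

index = [
    Clip("match_01", 120.0, 150.0, {"goal", "replay", "home-team"}),
    Clip("match_01", 300.0, 310.0, {"goal", "celebration"}),
    Clip("match_02", 45.0, 60.0, {"interview"}),
]
print([c.start_s for c in find_clips(index, {"goal"})])  # [300.0, 120.0]
```

In a real archive the tags would come from AI indexing of the footage; the query itself stays this simple, which is where the editing time is saved.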

Infrastructure requirements and technical processing

The effectiveness of AI applications depends significantly on the underlying technical infrastructure and data workflows. Industry executives emphasize that successful AI implementation requires centralized, well-organized content repositories rather than fragmented storage systems.

“There’s a lot of snake oil out there right now when it comes to AI. Buyer beware,” said Derek Barrilleaux, CEO of Projective. “If content is strewn all over the organization, it will be next to impossible to get real usable value from AI. But if you have everything centralized and coherent, now AI tools can truly provide value.”

Technical processing architecture also affects AI system performance and cost-effectiveness. Some companies report significant efficiency gains by consolidating video compression and AI processing into unified pipelines using GPU-based processing rather than separate CPU-based systems.

“Many media companies are using slow, complex, and costly CPU-based processing, where one pipeline handles compression and another handles AI processing,” said Sharon Carmel, CEO of Beamr. “By using GPUs exclusively, video compression and AI enhancements can run together in the same real-time pipeline, with faster, more efficient, and cost-effective video and data processing.”

Data quality represents another critical factor in AI system effectiveness. Organizations must invest in comprehensive content cataloging and proper indexing to maximize AI tool value, as systems are only as effective as the data they process.

“AI agents are only as good as the quality of the data they’re fed, and large-scale video indexing projects are essential to unlocking the full value of AI workflows,” Petitpont said.
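The data-quality point can be made concrete with a simple gate that holds back incomplete records instead of feeding them to downstream AI agents. The required fields here are illustrative, not a standard catalog schema:

```python
REQUIRED_FIELDS = {"asset_id", "duration_s", "language", "tags"}

def partition_records(records: list[dict]) -> tuple[list[dict], list[dict]]:
    # Split records into those complete enough for AI processing
    # and those rejected for remediation.
    ok, rejected = [], []
    for rec in records:
        (ok if REQUIRED_FIELDS <= rec.keys() else rejected).append(rec)
    return ok, rejected

records = [
    {"asset_id": "a1", "duration_s": 90, "language": "en", "tags": ["news"]},
    {"asset_id": "a2", "duration_s": 30},  # incomplete: no language or tags
]
ok, rejected = partition_records(records)
print(len(ok), len(rejected))  # 1 1
```

Gating at ingest like this is cheaper than discovering mid-workflow that an AI agent produced unusable output from a half-cataloged asset.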

Some companies focus on eliminating data preparation bottlenecks through streaming-based approaches that allow AI systems to access content without time-consuming file transfers and format conversions.

“AI is the hook, automating repetitive tasks while surfacing insights from a single source of truth,” said Peter Thompson, CEO and co-founder of LucidLink. “Stream-don’t-sync workflows eliminate data-prep bottlenecks in Gen-AI pipelines, enabling teams to spend less time wrangling data and more time delivering insights and impact.”

Quality control and advanced content analysis

Traditional quality control processes relied on rule-based systems and manual review procedures. AI applications now identify technical issues that previously required human inspection, including complex audio-visual synchronization problems and graphical interference. AI also enables workflows to be built from natural language instructions, reducing the technical expertise required to run media operations.

“Quality control is also evolving; rather than relying solely on rule-based checks, AI can now identify issues like lip-sync mismatches or graphical interference that traditionally required human review,” said Charlie Dunn, executive vice president of products at Telestream. “Perhaps the most profound shift is the rise of natural language-driven workflow creation, which lowers technical barriers and democratizes media operations.”
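One way automated lip-sync checking can work is to cross-correlate an audio loudness envelope against a mouth-motion signal extracted from the video and report the lag with the strongest match. The sketch below uses synthetic signals; a real system would derive them with speech and vision models, and its detection logic is not necessarily this simple:

```python
import numpy as np

def estimate_offset(audio_env: np.ndarray, mouth_env: np.ndarray) -> int:
    # Cross-correlate mean-removed signals; a positive result means
    # the audio lags the video by that many frames.
    corr = np.correlate(audio_env - audio_env.mean(),
                        mouth_env - mouth_env.mean(), mode="full")
    return int(np.argmax(corr)) - (len(mouth_env) - 1)

rng = np.random.default_rng(0)
mouth = rng.random(200)          # synthetic mouth-motion signal
audio = np.roll(mouth, 5)        # simulate audio arriving 5 frames late
print(estimate_offset(audio, mouth))  # 5
```

A QC system would flag any clip where the estimated offset exceeds a frame-accuracy threshold, turning a subjective eyeball check into a measurable pass/fail.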

Audio processing is another area where AI capabilities have expanded beyond basic loudness control into automated language and speech management for complex multilingual content.

“AI functions are increasingly capable of managing language and speech clarity at scale, going far beyond basic loudness control,” said Costa Nikols, executive-team strategy advisor for media and entertainment at Telos Alliance. “Machine learning can identify speakers, languages, flag inconsistencies, adapt mixes for intelligibility across devices, and detect profanity in multiple languages and dialects.”
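For context on the baseline the quote says AI now goes beyond, a basic loudness check can be sketched as computing a clip's RMS level in dBFS and flagging anything outside a target window. The numbers below are illustrative; an actual compliance check would measure LUFS per EBU R128 rather than plain RMS:

```python
import math

def rms_dbfs(samples: list[float]) -> float:
    # RMS level of normalized samples, expressed in dBFS.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-9))  # floor avoids log(0) on silence

def out_of_spec(samples: list[float], target_db: float = -23.0,
                tolerance_db: float = 2.0) -> bool:
    # Flag clips whose level strays from the target window.
    return abs(rms_dbfs(samples) - target_db) > tolerance_db

quiet = [0.001] * 1000       # roughly -60 dBFS, far below target
print(out_of_spec(quiet))    # True
```

Speech-aware AI functions layer on top of this kind of measurement: the level check says a clip is too quiet, while the models described above can say which speaker, in which language, and why.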

Advanced AI systems now analyze multiple content elements simultaneously to generate comprehensive metadata and enable sophisticated content manipulation through multimodal analysis. These applications examine visual, audio and narrative components to understand content context at granular levels, enabling automated highlight generation, trailer creation and replay sequences.

“The technology examines visual, audio, and narrative elements frame by frame to capture the full context of each scene and automatically generate detailed metadata,” said Adam Massaro, senior product marketing manager at Bitmovin, noting it can help deliver hyper-personalized viewing experiences and more effective ad targeting.

This scene-level analysis extends content value and creates new monetization opportunities by enabling automated content derivatives and more effective advertising placement.
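The step from scene-level metadata to automated derivatives is short: once every scene carries scores and timecodes, a highlight reel is a selection-and-ordering problem. The schema and scoring below are purely illustrative stand-ins for what a multimodal model would produce:

```python
def build_highlights(scenes: list[dict], max_scenes: int = 2) -> list[str]:
    # Pick the highest-scoring scenes, then restore narrative order
    # so the reel still tells the story chronologically.
    top = sorted(scenes, key=lambda s: s["score"], reverse=True)[:max_scenes]
    ordered = sorted(top, key=lambda s: s["start_s"])
    return [s["scene_id"] for s in ordered]

scenes = [
    {"scene_id": "s1", "start_s": 0.0, "score": 0.2},
    {"scene_id": "s2", "start_s": 40.0, "score": 0.9},
    {"scene_id": "s3", "start_s": 90.0, "score": 0.7},
    {"scene_id": "s4", "start_s": 130.0, "score": 0.4},
]
print(build_highlights(scenes))  # ['s2', 's3']
```

The same selection logic, pointed at different metadata fields, yields trailers, replay packages or ad-break candidates from one indexed master.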

As the broadcast industry prepares for IBC 2025, creative professionals continue evaluating AI applications that provide genuine workflow improvements versus implementations driven primarily by marketing considerations.

The focus appears to be shifting toward practical tools that integrate seamlessly into existing creative processes while maintaining the editorial standards essential to professional broadcasting.

IBC 2025 will provide industry professionals with opportunities to examine these AI applications firsthand and evaluate their potential impact on content creation workflows.