SMPTE report examines artificial intelligence’s growing role in media workflows
SMPTE has released an updated engineering report examining artificial intelligence and machine learning in media production, developed in collaboration with the European Broadcasting Union and the Entertainment Technology Center.
The document, designated SMPTE ER 1011:2025, represents a revision of the organization’s 2023 report and emerges from an AI standards task force established in 2020. The 54-page report provides technical background on AI and ML systems while examining their applications in media workflows.
According to the organizations, the update includes new material on several topics that have gained prominence since the previous version. These additions cover the Model Context Protocol, a framework for connecting AI applications to external systems; enhanced security considerations for AI implementations; and the role of open source software in AI development.
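For readers unfamiliar with the concept, the general pattern that a protocol like MCP standardizes can be sketched in a few lines of Python. The sketch below is purely illustrative and is not the MCP specification or SDK; the class, function names and data are hypothetical stand-ins for an AI application discovering and calling externally hosted "tools."

```python
# Hypothetical sketch of the tool-calling pattern that protocols such as MCP
# standardize: an AI application discovers externally hosted "tools" and
# invokes them through a uniform request/response contract.
# Names and schema below are illustrative only, not the actual MCP spec.
from typing import Any, Callable, Dict, List


class ToolServer:
    """Toy stand-in for a server exposing external capabilities to an AI app."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def list_tools(self) -> List[str]:
        # A real protocol would also return typed input/output schemas.
        return sorted(self._tools)

    def call(self, name: str, **kwargs: Any) -> Any:
        return self._tools[name](**kwargs)


# Example external system: an imaginary media asset lookup.
def lookup_asset(asset_id: str) -> dict:
    return {"asset_id": asset_id, "title": "Evening bulletin", "duration_s": 1800}


server = ToolServer()
server.register("lookup_asset", lookup_asset)

# An AI application would first discover the available tools, then call one
# when the model decides that external data is needed.
print(server.list_tools())
print(server.call("lookup_asset", asset_id="A123"))
```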
The report also addresses ISO/IEC 42001:2023, a management system standard for organizations developing or deploying AI systems. This framework, which became available in late 2023, establishes requirements for AI governance, risk assessment and ethical considerations.
“The AI in Media landscape is evolving at an unprecedented pace, and the SMPTE AI Taskforce remains committed to keeping up with this transformation,” said Thomas Bause Mason, SMPTE standards director. “The revision to the Engineering Report reflects new developments such as the Model Context Protocol, enhanced security considerations, open source in AI, and emerging frameworks like ISO/IEC 42001 among others.”
The document covers technical aspects of different AI approaches, including supervised learning, unsupervised learning and reinforcement learning. It examines generative AI systems, including large language models and diffusion models used for content creation. The report includes discussion of variational auto-encoders and generative adversarial networks, two architectures used in media applications.
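The distinction between these approaches can be shown with a short, self-contained example. The snippet below uses synthetic data and scikit-learn purely for illustration; it is not taken from the report, which contains no code.

```python
# Minimal sketch contrasting supervised and unsupervised learning on
# synthetic data (illustrative only; the feature values and labels are invented).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Two synthetic "content classes," e.g. studio versus field footage features.
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(3.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Supervised learning: labels are available at training time.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: the same data, but no labels; structure is inferred.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```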
A section on AI ethics examines considerations for organizations implementing these systems. The report outlines principles including transparency, inclusivity and accountability. It describes an “AI ethics pipeline” covering organizational structure, product design, data collection and model development.
The document addresses security concerns specific to AI systems, including data poisoning and jailbreaking. It examines how these threats differ from traditional information security challenges and discusses approaches to securing AI implementations in media workflows.
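A rough, synthetic illustration of one such threat is data poisoning through label flipping, where deliberately mislabeled training examples push a model's decision boundary away from the truth. The experiment below is not prescribed by the report; the data, fractions and model are invented to show why provenance checks on training data matter.

```python
# Toy illustration of label-flipping data poisoning: deliberately mislabeled
# training examples degrade a classifier evaluated on clean test data.
# Entirely synthetic; not drawn from the SMPTE report.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (500, 2)), rng.normal(2.5, 1.0, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

X_test = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(2.5, 1.0, (200, 2))])
y_test = np.array([0] * 200 + [1] * 200)

for poison_fraction in (0.0, 0.2, 0.4):
    y_poisoned = y.copy()
    class0_idx = np.where(y == 0)[0]
    n_flip = int(poison_fraction * len(class0_idx))
    flip_idx = rng.choice(class0_idx, size=n_flip, replace=False)
    y_poisoned[flip_idx] = 1  # targeted flips: class-0 examples relabeled as class 1
    acc = LogisticRegression().fit(X, y_poisoned).score(X_test, y_test)
    print(f"{poison_fraction:.0%} of class-0 labels flipped -> test accuracy {acc:.2f}")
```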
According to the report, the media industry faces unique considerations when implementing AI systems.
These include intellectual property protection, compliance with licensing agreements and labor contracts, and maintaining standards for content appropriateness. The document notes that media organizations process large amounts of consumer data, requiring adherence to privacy regulations.
The report surveys the current standards landscape for AI, including work by ISO/IEC Joint Technical Committee 1 Subcommittee 42, which oversees AI standardization efforts. It describes the European Union’s AI Act and the U.S. National Institute of Standards and Technology’s AI Risk Management Framework as two prominent governmental approaches to managing AI risk.
The organizations identify several areas where standards development might prove useful. These include benchmarking methodologies for media-specific AI tasks, metadata schemas for AI models and datasets, and best practices for using data in model training. The report notes that formal standardization of protocols like MCP and agent-to-agent communication frameworks could improve interoperability.
The document examines current applications of AI in media production, distribution and consumption. These include automated content production, metadata generation, audience analytics and content recommendation systems. The report describes how AI systems are being used for tasks such as sports video production, where algorithms can recognize game situations and generate highlights.
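As a rough sketch of the highlight-generation idea (not the report's method), one can imagine an event-recognition model producing per-second "interest" scores, with contiguous high-scoring spans collected as candidate highlights. The scores, threshold and function below are hypothetical.

```python
# Toy sketch of highlight extraction: given per-second interest scores (which
# in practice would come from an event-recognition model), collect contiguous
# spans above a threshold as candidate highlights. Values are invented.
from typing import List, Tuple


def extract_highlights(scores: List[float], threshold: float = 0.7,
                       min_len: int = 3) -> List[Tuple[int, int]]:
    """Return (start_second, end_second) spans where scores stay above threshold."""
    spans: List[Tuple[int, int]] = []
    start = None
    for t, s in enumerate(scores):
        if s >= threshold and start is None:
            start = t                      # span opens
        elif s < threshold and start is not None:
            if t - start >= min_len:
                spans.append((start, t))   # span closes; keep if long enough
            start = None
    if start is not None and len(scores) - start >= min_len:
        spans.append((start, len(scores)))
    return spans


# Example: a simulated minute of play with one sustained burst of activity.
scores = [0.1] * 20 + [0.9] * 8 + [0.2] * 32
print(extract_highlights(scores))  # [(20, 28)]
```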
A section on datasets discusses the importance of training data quality and availability. The report notes that development of large, annotated datasets has enabled advances in AI performance but that licensing issues can limit access to media content for training purposes. It suggests that standardized datasets with clear licensing terms could benefit the industry.
The task force that produced the report includes participants from broadcast organizations, technology companies, academic institutions and standards bodies. Contributors include representatives from the BBC, ITV, France Télévisions, RAI, Netflix, Adobe and AWS, alongside university researchers.
The report acknowledges that AI technology continues to develop rapidly. The organizations indicate that ongoing monitoring and periodic updates will be necessary as new capabilities emerge and as regulatory frameworks evolve.
The organization’s AI task force continues to meet regularly to assess developments in the field and consider areas for potential standards work.