Column: The five pillars of media production – considerations in creating your media supply chain

By Ben Davenport, Pixotope

In the previous article, we drew parallels between media production and distribution and vehicle manufacturing, and looked at how principles from lean manufacturing might be applied to the media supply chain. In this article, we’re going to consider the five pillars of media production, and therefore the five key components in the media supply chain.

Infrastructure

Everyone loves a good analogy, so we’ll stick with manufacturing. At the center of your operations are your plant or factory and your warehouses. The location, size and layout of your warehouses are going to depend on what you store there and how quickly your plant needs access to it – media and metadata storage is similar.

In some manufacturing, the location of the plant is also very important: keeping it as close as possible to the distribution network, or even the consumer, reduces time to market, while keeping it close to the raw materials minimizes transportation or acquisition costs.

Historically, media has been similar, with media production processes often co-located with studios and/or distribution headends. However, over the last few years, the amount of proprietary or use-case-specific hardware has dropped almost to zero, and the vast majority of the media “plant” now runs on commodity CPUs in standard IT data centers. This, together with remote connectivity, means we can be far more flexible in locating the infrastructure for the media supply chain, including, of course, private or public cloud as well as hybrid scenarios.

Cloud or hybrid infrastructure offers incredible flexibility to scale our warehouse and plant up and down as the business demands, massively reducing waste. Equally, cloud and hybrid afford us the opportunity to re-tool our plant for different purposes without significant change or startup costs. For example, if we wanted to change the transcoder in a media workflow, with a cloud-based media supply chain this can be as simple as updating a configuration file.
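To make that concrete, here is a minimal sketch, assuming a hypothetical JSON workflow configuration and service names: the transcoder is described entirely in configuration, so re-tooling the plant means editing the config rather than changing workflow code or hardware.

```python
# Illustrative only: a hypothetical workflow configuration in which the
# transcoder is just another swappable service, not hard-wired plant.
import json

WORKFLOW_CONFIG = """
{
    "workflow": "promo-delivery",
    "storage": {"tier": "warm", "bucket": "mezzanine-archive"},
    "transcoder": {
        "service": "transcoder-b",
        "endpoint": "https://transcode.example.internal/v2/jobs",
        "profile": "house-mezzanine"
    }
}
"""

def build_transcode_request(source_uri: str) -> dict:
    """Build a transcode request from whatever the config currently points at."""
    config = json.loads(WORKFLOW_CONFIG)
    transcoder = config["transcoder"]
    # Re-tooling the "plant" is a config edit: change "service", "endpoint" or
    # "profile" and the same workflow code drives a different transcoder.
    return {
        "submit_to": transcoder["endpoint"],
        "profile": transcoder["profile"],
        "source": source_uri,
    }

if __name__ == "__main__":
    print(build_transcode_request("s3://ingest/incoming/clip_0001.mxf"))
```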

Transformation

Transformation isn’t a term we commonly use, but it works well to describe all the processes we might use to take media and metadata from one form and/or place to another – of which transcoding is, of course, a significant subset. What transformations we include (or exclude) in our workflows has a significant bearing on the overall design and operation of our supply chain, and vice versa.

For example, in the previous article, we looked briefly at standardization and how, by using a standardized, componentized mezzanine format, we could more easily apply just-in-time principles on delivery. However, to achieve that mezzanine format, it’s highly likely that the first step in the workflow after (or during) acquisition will be an initial transformation. In instances where we want to minimize the number of transformations – for reasons of speed or preservation – this additional step may be undesirable. In most scenarios, though, minimizing the number of formats passing through a workflow between acquisition and the final transformation for distribution significantly reduces the variables in the workflow, making the supply chain more maintainable and minimizing errors.
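As an illustration of that principle, the sketch below (with a hypothetical house mezzanine profile and asset description) shows an acquisition-time check that routes only non-conforming assets through the initial transformation, so a single format travels through the rest of the workflow.

```python
# Illustrative sketch with a hypothetical house mezzanine profile: assets that
# already match the profile skip the initial transformation, so only one
# format travels through the rest of the workflow.
HOUSE_MEZZANINE = {"codec": "prores_422", "wrapper": "mov", "audio_layout": "stereo_pairs"}

def needs_initial_transform(asset: dict) -> bool:
    """True if the asset deviates from the house mezzanine profile in any property."""
    return any(asset.get(key) != value for key, value in HOUSE_MEZZANINE.items())

incoming = {"codec": "xavc_intra", "wrapper": "mxf", "audio_layout": "stereo_pairs"}
if needs_initial_transform(incoming):
    print("Route through the initial mezzanine transform")
else:
    print("Pass straight into the main workflow")
```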

Another consideration in transformations is how they are carried out. When it comes to metadata, most transformations are quick, and the time they take has little bearing on the end-to-end time of the workflow. However, transcoding and other media processing tasks can take a significant amount of time, especially with long-form material, and therefore how the process is run and how the output is delivered can have important downstream implications.

One of the ways that Henry Ford dramatically decreased end-to-end production time of his vehicles was through the application of the moving production line. This meant a second task could begin before the first was completed, rather than waiting for each task to complete fully before moving the vehicle between stations or bringing parts and tools for the next task to the vehicle.

Similarly, in media, we can concatenate tasks by using media formats in the production stages of the supply chain that allow for growing-file workflows. This means we can start editing a file, or begin a media analysis process (such as QC or AI processes like speech-to-text), while the file is still being transcoded. For long-form content, this can significantly reduce the overall time it takes media to pass through the supply chain, and so reduce time to market. For short clips and media, such as advertisements, there may be limited benefit to growing-file workflows. It should also be noted that implementing these workflows has an impact on network and storage specifications.
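The sketch below illustrates the growing-file idea in simplified form, assuming a hypothetical file path and a container format that tolerates being read while it is still being written: a consumer tails the file and hands completed chunks to a downstream process instead of waiting for the transcode to finish.

```python
# A minimal growing-file consumer (hypothetical path and chunk size): analysis
# starts on bytes already written while the transcoder is still appending.
import time

def consume_growing_file(path: str, chunk_size: int = 4 * 1024 * 1024):
    """Yield chunks as they land on disk instead of waiting for the complete file."""
    offset = 0
    idle_polls = 0
    with open(path, "rb") as f:
        while idle_polls < 10:          # give up after ~10 seconds with no growth
            f.seek(offset)
            chunk = f.read(chunk_size)
            if chunk:
                offset += len(chunk)
                idle_polls = 0
                yield chunk             # e.g. feed QC or speech-to-text incrementally
            else:
                idle_polls += 1
                time.sleep(1)           # wait for the writer to append more data

for chunk in consume_growing_file("/mnt/ingest/longform_growing.mxf"):
    pass  # hand each chunk to the downstream analysis process
```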

Manipulation

Manipulation covers anything we might do to change or add to the media or metadata – most obviously media editing, but also things such as adding captions, graphical overlays and different languages, as well as creating different versions (through segmentation) and other creative processes that might be applied to the media. What manipulation we need to do, and to what type of media, will be a significant factor in decisions around infrastructure; a workflow that requires significant manipulation of high-bitrate, high-resolution media may be better suited to a hybrid infrastructure.

Analysis

In many ways, media and metadata analysis isn’t dissimilar to media and metadata transformation, except that instead of equivalent media or metadata, the output is new, enriched or augmented metadata. As with transformation, what analysis we do, and at what stage in the workflow, will depend mostly on the end destination of the media. For example, we may run media through an automated QC tool to check for technical compliance with a delivery format, or even through a cognitive service that checks for content compliance if the media is destined to be distributed before a watershed or into a region that restricts certain imagery.
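As a simplified illustration (the delivery specification and probed values below are hypothetical), an automated technical-compliance check might compare a handful of probed media properties against the target delivery format and report any deviations:

```python
# Illustrative only (hypothetical delivery spec and probe results): a basic
# technical-compliance check ahead of the final transformation for delivery.
DELIVERY_SPEC = {"codec": "h264", "width": 1920, "height": 1080, "loudness_lufs": -23.0}

def qc_report(probed: dict, spec: dict, loudness_tolerance: float = 1.0) -> list:
    """Return a list of human-readable failures; an empty list means compliant."""
    failures = []
    for key in ("codec", "width", "height"):
        if probed.get(key) != spec[key]:
            failures.append(f"{key}: expected {spec[key]}, got {probed.get(key)}")
    if abs(probed.get("loudness_lufs", 0.0) - spec["loudness_lufs"]) > loudness_tolerance:
        failures.append(f"loudness out of tolerance: {probed.get('loudness_lufs')} LUFS")
    return failures

print(qc_report(
    {"codec": "h264", "width": 1920, "height": 1080, "loudness_lufs": -22.6},
    DELIVERY_SPEC,
))
```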

We may also perform analysis on both the metadata and the media to aid or instruct manipulation – for example, using simple scene detection to find potential ad insertion points, speech-to-text analysis to complement a captioning workflow, or facial/object recognition to add metadata that enables editors to more quickly find clips to use in an edit.
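A minimal sketch of that last idea, using made-up per-frame difference scores in place of a real scene detection tool, might turn detected cuts into timecoded candidate ad insertion points that an editor or automation layer can pick up:

```python
# Illustrative sketch: per-frame difference scores here are made up and would
# come from whatever scene detection tool the workflow actually uses.
FRAME_RATE = 25.0
frame_diffs = [0.02, 0.01, 0.93, 0.03, 0.02, 0.88, 0.04]  # hypothetical scores

def candidate_cut_points(diffs, threshold=0.8):
    """Frames whose difference score crosses the threshold are likely scene cuts."""
    return [i for i, score in enumerate(diffs) if score >= threshold]

# Publish the cuts as timecoded metadata for ad insertion or editorial search.
insertion_points = [
    {"frame": f, "seconds": round(f / FRAME_RATE, 3), "type": "ad_candidate"}
    for f in candidate_cut_points(frame_diffs)
]
print(insertion_points)
```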

Acquisition

Media and metadata acquisition is often the first step in a media supply chain, but it is listed as the last of the five pillars here because in all cases it will be some combination of transformation, manipulation and analysis. The process of acquiring media and metadata, or ingesting it into our workflow, is fundamentally about taking that media and metadata from an untrusted to a trusted state. By “trusted” we mean a state in which we know that we have all the required metadata, in the required form, to initiate workflows, and that the media will pass through those workflows and down the supply chain without error.

Depending on the workflows, we may have different levels of trust, and, depending on the source and how and when we receive the media and metadata, we may also consider separate infrastructure to facilitate our acquisition process. For example, if we frequently but irregularly receive media from other facilities with differing production standards, we could consider creating a cloud-based quarantine where we can receive, analyze (QC), manipulate (add metadata) and transform (transcode to a mezzanine format) incoming media and metadata on an on-demand basis.
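A highly simplified sketch of that quarantine idea follows; every step function is a hypothetical stand-in for a real QC, metadata or transcode service, but it shows the essential logic: an asset only moves from untrusted to trusted once analysis, manipulation and transformation have all completed without error.

```python
# Highly simplified quarantine sketch; every step function is a hypothetical
# stand-in for a real QC, metadata or transcode service.
def run_qc(asset):             # analyze: automated technical QC
    return "error" not in asset.get("qc_flags", [])

def enrich_metadata(asset):    # manipulate: add or normalize required metadata
    asset.setdefault("metadata", {}).setdefault("house_id", "PENDING")
    return all(asset["metadata"].get(key) for key in ("title", "house_id"))

def to_mezzanine(asset):       # transform: transcode to the house mezzanine format
    asset["format"] = "house_mezzanine"
    return True

def promote_from_quarantine(asset):
    """Run each acquisition step; the asset is only trusted if all of them pass."""
    asset["state"] = "untrusted"
    for step in (run_qc, enrich_metadata, to_mezzanine):
        if not step(asset):
            asset["state"] = "quarantined"   # kept outside the trusted workflow
            return asset
    asset["state"] = "trusted"               # safe to release into the supply chain
    return asset

print(promote_from_quarantine({"qc_flags": [], "metadata": {"title": "Incoming package"}}))
```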

In the next article, we’ll look at how the considerations above can be put together into a specification for a media supply chain that meets our specific needs.

Ben Davenport is a B2B marketer with a keen interest in, and understanding of, the technologies that underpin the media and entertainment industry. During the past two decades, Ben has played a key role in some of the most complex and progressive file-based media solutions and projects in the industry, while enabling leading media & entertainment technology vendors to differentiate their brands and products. Having previously headed up Portfolio & Marketing Strategy for Vidispine - an Arvato Systems brand, Ben has recently joined Pixotope as VP Global Marketing. Ben holds a Bachelor's degree in Music & Sound Recording (Tonmeister) from the University of Surrey.
