Five Considerations for Large Scale Media Workflow Projects

One of the biggest barriers to cloud adoption in media workflows has been the massive scale of changing broadcast infrastructure. When a broadcaster, especially a large one, wants to change its media workflow, there are so many moving parts that it can take a huge amount of time and resources to see it through. If a system is already working, it can be challenging to get the resources allocated for such a project. The cloud is slowly changing that, promising better efficiencies and cost savings once the first (somewhat overwhelming) hurdle is negotiated.

How can broadcasters ensure that those infrastructure changes stay on track and that the resulting workflow delivers improvements that make it worthwhile? There are a few key considerations to take into account:

1. Requirements change

As simple as this sounds, this is often the hardest part. Normally, when the RFI is issued, the broadcaster will not yet have fully fleshed-out requirements. While these will be better shaped by the RFP, it is still unlikely that the broadcaster will fully know the scale of the requirements, even once vendors are selected. It is important, however, that before any work begins on designing the workflow, you fully understand what you need to achieve. The only way to do that is by conducting research with the users. Even with extensive research, though, requirements can shift as the project gets underway, normally as new options are uncovered. This invariably changes the budget too, so the more research you can do at the start, the more you reduce the likelihood of constant evolutions throughout.

2. Updating workflows can’t happen overnight

Changing an entire media workflow is a massive undertaking. If you try to do it in one go, you are unlikely to end up with something workable, and you will be waiting a long time for it to be delivered. We always break it up into smaller tasks at the outset and make sure the team understands what is required for each of those tasks and who will take responsibility for them. The team needs to be assigned based on the effort needed for each task and the expected timeframe to deliver it.

It is also sensible to have continuous updates, with small sprints designed to deliver immediate value that can be iterated on as the project continues to roll out. The first deliverable in this approach should be a minimum viable product, providing the basic functionality to test and pilot. This should ideally run alongside existing equipment so you can compare and improve.

3. Cloud makes testing more attainable

Testing is of course a crucial part of any media workflow update. However, if you are moving to cloud workflows, you have much more flexibility in how much testing you can do. In an ideal scenario you should have:

  1. Test environment – this is where the developers test the initial code and iron out any issues.
  2. Staging environment – once the developers are happy, it can be transitioned to a staging environment, giving super users access to begin testing actual workflows and highlighting any issues.
  3. Production environment – this is where the environment is then rolled out to users in the production setting and enables final testing in situ. At this stage, it will also be important to connect different services, such as CDN, authentication, transcode, etc.

Enabling three such environments alongside the live environment would be complex and expensive on-premises, which often means broadcasters simply cannot maintain multiple environments, so testing is not as thorough as it could be. The cloud, by contrast, lets you stand environments up and tear them down as needed, and roll out continuous updates as those testing environments identify changes you want to make.
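The test/staging/production promotion chain above can be sketched as a simple configuration model. This is an illustrative sketch only: the environment names follow the article, but the bucket names and service flags are hypothetical placeholders, not any particular vendor's setup.

```python
# Minimal sketch of per-environment configuration for the three-stage
# rollout described above. Bucket names and flags are hypothetical.

ENVIRONMENTS = {
    "test": {
        "media_bucket": "media-test",     # developers iterate and fix issues here
        "connect_external_services": False,
    },
    "staging": {
        "media_bucket": "media-staging",  # super users exercise real workflows
        "connect_external_services": False,
    },
    "production": {
        "media_bucket": "media-prod",     # live users; CDN, auth, transcode wired in
        "connect_external_services": True,
    },
}

def promote(current: str) -> str:
    """Return the next environment in the test -> staging -> production chain."""
    order = ["test", "staging", "production"]
    idx = order.index(current)
    if idx == len(order) - 1:
        raise ValueError("production is the final environment")
    return order[idx + 1]
```

In practice each entry would map to an infrastructure-as-code stack, so promoting a build is a matter of applying the same template with a different environment's parameters.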

4. Understanding costs can be a challenge

Many broadcasters using legacy hardware simply cannot tell you the cost to produce and deliver media content. This is because so many different elements are involved that tracking them all is almost impossible. The move to the cloud is changing that: you pay for what you consume, typically per gigabyte, and have a continuous overview of every element of cost.
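To make the point concrete, here is a rough sketch of the per-asset cost roll-up that metered, per-gigabyte billing makes possible. All rates below are made-up placeholders for illustration, not real pricing from any provider.

```python
# Illustrative per-asset cost tracking from metered cloud usage.
# All rates are hypothetical placeholders.

STORAGE_RATE_PER_GB = 0.023     # $/GB-month stored (assumed)
EGRESS_RATE_PER_GB = 0.09       # $/GB delivered to viewers (assumed)
TRANSCODE_RATE_PER_MIN = 0.015  # $/minute of transcoded output (assumed)

def asset_cost(stored_gb: float, delivered_gb: float, transcode_minutes: float) -> float:
    """Roll up one asset's monthly cost from its metered usage figures."""
    return round(
        stored_gb * STORAGE_RATE_PER_GB
        + delivered_gb * EGRESS_RATE_PER_GB
        + transcode_minutes * TRANSCODE_RATE_PER_MIN,
        2,
    )
```

With legacy hardware the equivalent figures are buried in capital expenditure and shared facilities; in the cloud each line item arrives on the bill already metered, so a roll-up like this is straightforward.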

5. Not all serverless architectures are the same

There is a growing trend towards serverless architectures, giving broadcasters much more flexibility, less need for physical infrastructure, and better control over cost. Amazon Web Services has been a fundamental element in this transition, with its widely available media-grade solutions all running totally serverless. This trend is putting pressure on vendors to deliver tools via a serverless architecture. However, some so-called serverless architectures are actually just legacy server software hosted in the cloud. In my mind this is akin to saying "here is an image of our software." It is simply not the same, and broadcasters need to be mindful of the difference.

Transitioning media workflows is a monumental task, especially if there is a fundamental shift from on-premises to cloud. Often these projects take years and involve constant evolution as processes evolve and new innovation changes what is possible. As broadcasters move media workflows to the cloud, however, the process will become much simpler in the future, making it easy to swap elements in and out in real time without having to redesign the workflow from scratch.
