NAB17 UHDF Demo blogs 3/5 - Dynamic Metadata

Ian Nock – Author

The Ultra HD Forum booth at NAB17 demonstrated Ultra HD technologies, particularly HDR, that are deployable today. For the Forum, HDR has now matured to the point where live services can be deployed with today's production technology. The Forum also demonstrated several developing technologies that will mature in the next 12 to 18 months, building on today's technology to further enhance the customer video experience.

The Forum has been working on its UHD guidelines since 2015. The current version (1.4) focuses on what we term "Phase A" technologies, those available to deploy in 2016/2017, which correspond to what some standards organisations refer to as "Phase 1". Phase A covers 4K resolution at 50 or 60fps, HDR based on static metadata or HLG with Wide Color Gamut, and multichannel sound solutions. Future Phase B technologies include further developments in HDR such as Dynamic Metadata, High Frame Rate (>100fps), and object-based audio solutions such as AC-4, DTS:X and MPEG-H.

The booth had five thematic Pods and this is the third of five blogs about those Pods and their demos. I’ll describe the lessons we learned and how the demos were received at NAB17.

This third Pod focused on a near-future technology: Dynamic Metadata.

Dynamic Metadata

Current PQ-based HDR video delivery may use static metadata as defined in SMPTE ST.2086, which describes the display that the content was mastered on, usually accompanied by content light level values known as MaxCLL and MaxFALL. These loosely translate to the maximum pixel luminance and the maximum frame-average luminance of the content. Displays can use these values when rendering the content, to best reproduce the original artistic intent of the producer on the specific display the consumer has, which may have widely varying maximum luminance capabilities and color characteristics.
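To make those two values concrete, here is a minimal sketch (my own illustration, not taken from the guidelines) of how MaxCLL and MaxFALL could be derived from per-pixel luminance values expressed in nits. The frame data and the function name are hypothetical; a real encoder works on decoded pixel values and follows the formal definitions exactly.

import numpy as np

def content_light_levels(frames):
    # MaxCLL: brightest single pixel anywhere in the content.
    # MaxFALL: highest frame-average luminance across the content.
    # `frames` is a sequence of 2-D arrays of luminance in cd/m^2 (nits).
    max_cll = 0.0
    max_fall = 0.0
    for frame in frames:
        max_cll = max(max_cll, float(frame.max()))
        max_fall = max(max_fall, float(frame.mean()))
    return max_cll, max_fall

# A dark scene followed by a bright scene with one specular highlight.
dark = np.full((1080, 1920), 5.0)
bright = np.full((1080, 1920), 200.0)
bright[0, 0] = 1800.0
print(content_light_levels([dark, bright]))   # -> (1800.0, ~200.0)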

This static metadata approach, however, may not provide enough information to cover all scene types in the content, because the metadata values are defined to apply for the entire duration of the program, even when sunny beach scenes and dark murder scenes are intermixed throughout. A single set of static values cannot optimize every scene type, including the more average-lit scenes such as the witness interviews.

This is one of the reasons dynamic metadata can play a key role. It is specified in the SMPTE ST.2094 family of standards, which documents a number of vendor implementations of dynamic metadata that allow the metadata to change per scene, or even per individual frame, of the video content. Because the parameters can track the content's rendering needs so closely, the result is the best possible reproduction of the content across widely varying HDR displays.
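As a toy illustration of the difference (not any of the ST.2094 variants, and using hypothetical scene data), static metadata carries one set of values for the whole programme while dynamic metadata carries a set per scene:

import numpy as np

def static_metadata(scenes):
    # Static approach: one value describes the whole programme.
    return {"max_luminance": max(float(s.max()) for s in scenes)}

def dynamic_metadata(scenes):
    # Dynamic approach: one value per scene, so a display can adapt
    # its tone mapping scene by scene.
    return [{"scene": i, "max_luminance": float(s.max())}
            for i, s in enumerate(scenes)]

# A programme mixing a bright beach scene with a dark interior scene.
beach = np.full((1080, 1920), 900.0)
interior = np.full((1080, 1920), 40.0)

print(static_metadata([beach, interior]))
# {'max_luminance': 900.0} -- the dark scene is described by a value
# driven entirely by the beach scene.
print(dynamic_metadata([beach, interior]))
# [{'scene': 0, 'max_luminance': 900.0}, {'scene': 1, 'max_luminance': 40.0}]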

Dolby demonstrated the capability of dynamic metadata using three professional 2,000-nit displays capable of rendering close to the original camera capture. They showed the original capture side by side with versions of the content carrying static and dynamic (ST.2094-10) metadata, each set to reproduce the content as it would appear on a 400-nit display. The contrast of the original content was shown to degrade with static metadata but to improve significantly with dynamic metadata. Visitors were clearly interested to see this improvement for themselves.
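The effect can be reasoned about with a deliberately crude example. The toy linear mapping below is not the ST.2094-10 processing shown in the demo; it only illustrates why a 400-nit display that knows only the programme-wide peak must compress dark scenes far more than one that knows each scene's own peak.

def tone_map(pixel_nits, content_max, display_max=400.0):
    # Toy linear compression toward the display's peak luminance.
    # Real mappings (including ST.2094-10) use far more sophisticated
    # curves; this only shows why the reference value matters.
    if content_max <= display_max:
        return pixel_nits
    return pixel_nits * display_max / content_max

# A 100-nit face in a dark scene of a programme whose overall peak is 2000 nits:
print(tone_map(100.0, content_max=2000.0))   # 20.0 nits -- crushed
# The same face when per-scene metadata reports that scene's own 300-nit peak:
print(tone_map(100.0, content_max=300.0))    # 100.0 nits -- preserved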

Our next blog will cover the fourth demonstration Pod: a look into the near future at HDR and SDR improvements that are just around the corner.
