How Does NextGen TV Deliver Both Quality and Reach?

With the television landscape rapidly evolving toward cinema-grade home experiences, the underlying broadcast technology must also make a generational leap. We sat down with Christopher Hailstone, a leading expert in next-generation digital television systems, to explore a transformative architecture that promises to redefine how content is delivered. Our conversation delves into the powerful synergy between Versatile Video Coding (VVC) and the flexible physical layer of ATSC 3.0. We’ll explore how this pairing moves beyond the old “one-size-fits-all” model, offering broadcasters unprecedented efficiency and viewers a superior, more reliable experience, whether they are watching on a 100-inch screen in a perfect reception zone or on a mobile device deep inside a building.

Legacy broadcast models often forced a single technical compromise, leaving either fringe viewers with poor reception or owners of premium displays underserved. How does the synthesis of layered video coding and Physical Layer Pipes change this “one-size-fits-all” approach, and what new service possibilities does it unlock?

That’s really the core of this entire architectural shift. For decades, we were stuck in a paradigm where we had to pick one transmission configuration. You’d engineer for a single operating point, which was inevitably a compromise. If you made it robust enough for the weakest signal areas, you were wasting precious bandwidth and failing to deliver the premium quality that high-end displays are capable of. If you optimized for peak quality, viewers on the fringe of your coverage area would get nothing. What we’re doing now is breaking free from that monolithic approach. By pairing a layered codec like VVC with ATSC 3.0’s Physical Layer Pipes, or PLPs, we can serve both ends of the spectrum simultaneously from a single RF channel. We can put a universal, highly reliable base video layer in a very robust PLP, ensuring everyone gets a picture. Then, we can stack enhancement layers for UHD, HDR, and high frame rates into less robust, higher-capacity PLPs. A viewer’s receiver simply decodes as many layers as its signal quality allows. This unlocks the ability to offer graceful degradation of service—no more digital cliff—while also delivering a true, premium, cinema-quality experience to those who can receive it.
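To make the layering idea concrete, here is a minimal Python sketch of how a receiver might pick its quality tier from the PLPs it can decode. The class names, PLP IDs, and SNR thresholds are illustrative assumptions, not values from the ATSC 3.0 standard.

```python
# Minimal sketch (not a real ATSC 3.0 API): a receiver stacks as many
# layers as its signal quality allows. All thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class PlpLayer:
    name: str            # e.g. "base", "enhancement-1"
    plp_id: int          # Physical Layer Pipe carrying this layer
    min_snr_db: float    # assumed SNR needed to decode this PLP reliably
    provides: str        # service tier unlocked once this layer is decoded

SERVICE = [
    PlpLayer("base",          plp_id=0, min_snr_db=2.0,  provides="1080p SDR"),
    PlpLayer("enhancement-1", plp_id=1, min_snr_db=16.0, provides="1080p HDR"),
    PlpLayer("enhancement-2", plp_id=2, min_snr_db=22.0, provides="2160p UHD HDR"),
]

def decodable_tier(measured_snr_db: float) -> str:
    """Stack layers in order; stop at the first PLP the signal cannot support."""
    tier = "no service"
    for layer in SERVICE:
        if measured_snr_db < layer.min_snr_db:
            break
        tier = layer.provides
    return tier

print(decodable_tier(5.0))    # deep indoor / mobile -> "1080p SDR"
print(decodable_tier(25.0))   # rooftop antenna      -> "2160p UHD HDR"
```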

A key strategy involves mapping a base video layer to a highly robust PLP and enhancement layers to less robust ones. Could you detail the engineering process for this mapping and discuss the critical trade-offs when configuring each PLP’s modulation and code rate?

The engineering process is a fascinating exercise in balancing robustness against payload capacity. It all starts with the service goals. We first define our baseline service—say, a 1080p SDR signal that absolutely must get through to everyone. We map this Base Layer to our most robust PLP. To achieve that robustness, we’d use a very conservative modulation scheme like QPSK and a strong forward error correction code rate. The trade-off is that this PLP won’t carry a lot of data, but its signal can survive a journey through dense urban canyons or deep into a concrete building. Next, we design our enhancement layers. The first EL might upgrade the picture to 1080p with HDR, and we’d map that to a PLP with a higher-order modulation like 256-QAM and a less aggressive code rate. This pipe carries more data but requires a better signal. A final EL could carry the delta for a full 2160p UHD experience, mapped to an even higher-capacity, less-robust PLP. The critical decision-making involves analyzing your market’s geography and reception patterns to define the right number of layers and the precise configuration of each PLP, ensuring a smooth transition between quality tiers for the viewer.
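As a rough illustration of that trade-off, the sketch below compares the approximate spectral efficiency of three hypothetical PLP configurations. The modulation orders and LDPC code rates are plausible ATSC 3.0-style choices, but the specific pairings and roles are assumptions for illustration only.

```python
# Rough sketch of the capacity-versus-robustness trade-off when configuring
# PLPs. Figures are illustrative assumptions, not broadcast plan values.

PLP_CONFIGS = {
    # plp: (bits per symbol, code rate, intended role)
    "PLP-0": (2, 4 / 15,  "base layer, 1080p SDR"),     # QPSK, very robust
    "PLP-1": (8, 10 / 15, "enhancement, 1080p HDR"),    # 256-QAM
    "PLP-2": (8, 12 / 15, "enhancement, 2160p UHD"),    # 256-QAM, high rate
}

def spectral_efficiency(bits_per_symbol: int, code_rate: float) -> float:
    """Useful payload bits per transmitted symbol (ignoring pilots and overhead)."""
    return bits_per_symbol * code_rate

for plp, (bps, rate, role) in PLP_CONFIGS.items():
    print(f"{plp}: ~{spectral_efficiency(bps, rate):.2f} bit/symbol  ({role})")

# PLP-0 carries only ~0.53 bit/symbol but survives harsh reception,
# while PLP-2 carries ~6.4 bit/symbol yet needs a clean signal.
```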

Broadcasters have traditionally used simulcasting to serve different quality tiers, which can be spectrum-inefficient. How does a layered model achieve greater efficiency, and could you share an example of the payload savings when upgrading a 1080p service to UHD with this architecture?

Simulcasting is incredibly wasteful because you’re sending huge amounts of redundant information over the air. If you broadcast a 1080p stream and a separate 2160p stream of the same program, the majority of the bits in both streams are describing the exact same foundational image. The layered model is far more intelligent and efficient because it embraces data-level sharing. Instead of sending two full, independent streams, we send a single, complete 1080p base layer. Then, for the UHD tier, we only need to transmit an enhancement layer that contains the delta—just the additional data required to upgrade that 1080p picture to 2160p. You can immediately see the savings. The enhancement layer is a fraction of the size of a full, self-contained 2160p stream. This reduction in redundant payload is the direct source of our spectrum efficiency. It allows a broadcaster to offer multiple service tiers without the massive overhead of simulcasting, freeing up bandwidth for more channels or more robust services.
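A back-of-the-envelope calculation shows the shape of the savings. The bitrates below are assumed, illustrative figures rather than measured values.

```python
# Illustrative comparison of simulcast versus layered delivery.
# All bitrates are assumptions chosen for the sake of the arithmetic.

base_1080p_mbps = 6.0      # assumed VVC 1080p base layer
full_2160p_mbps = 16.0     # assumed stand-alone VVC 2160p stream
enhancement_mbps = 7.0     # assumed delta that lifts 1080p to 2160p

simulcast_total = base_1080p_mbps + full_2160p_mbps   # two independent streams
layered_total = base_1080p_mbps + enhancement_mbps    # base + enhancement only

savings = simulcast_total - layered_total
print(f"Simulcast: {simulcast_total:.1f} Mb/s, layered: {layered_total:.1f} Mb/s")
print(f"Reclaimed: {savings:.1f} Mb/s ({savings / simulcast_total:.0%} of the simulcast payload)")
```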

Layered Division Multiplexing (LDM) can provide a significant SNR gain for a core service, ensuring robust reception in challenging environments. Please explain the mechanism behind LDM and describe specific scenarios, like mobile or deep indoor viewing, where its impact is most transformative.

LDM is one of the most powerful tools in the ATSC 3.0 physical layer, and it perfectly complements the layered video strategy. While PLPs can be multiplexed in time or frequency, LDM allows us to transmit two layers—a robust Core and a less robust Enhanced layer—on the exact same frequencies at the exact same time. We achieve this by transmitting the Core layer at a higher power level and the Enhanced layer at a lower power level, essentially superimposed on top. A receiver first decodes the high-power Core layer. Then, using sophisticated signal cancellation, it subtracts that Core signal from what it received, revealing the lower-power Enhanced layer underneath. The magic here is that this process provides a huge boost to the Core layer’s robustness, often giving it a 3 to 9 dB SNR advantage. This is transformative for those really tough reception scenarios. Think about someone watching on a tablet deep inside an office building or a passenger viewing on a phone in a moving vehicle. In these situations, a standard signal would be lost, but the LDM-boosted Core PLP gets through, delivering a stable, uninterrupted base picture.
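The superposition-and-cancellation mechanism can be sketched in a few lines of numpy. The injection level, noise power, and QPSK mapping here are illustrative assumptions, and a real receiver relies on FEC decoding rather than simple symbol slicing.

```python
# Toy numpy sketch of the LDM idea: Core and Enhanced share the same
# time/frequency resources, separated only by power. Figures are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
injection_db = -4.0                          # Enhanced injected 4 dB below Core
enh_scale = 10 ** (injection_db / 20)

def qpsk(num):
    """Random unit-power QPSK symbols."""
    bits = rng.integers(0, 2, (2, num))
    return ((1 - 2 * bits[0]) + 1j * (1 - 2 * bits[1])) / np.sqrt(2)

core, enh = qpsk(n), qpsk(n)
tx = core + enh_scale * enh                  # both layers superimposed
rx = tx + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Step 1: slice the high-power Core layer (the Enhanced layer acts as noise here).
core_hat = (np.sign(rx.real) + 1j * np.sign(rx.imag)) / np.sqrt(2)

# Step 2: subtract the reconstructed Core, then slice the Enhanced layer.
residual = rx - core_hat
enh_hat = (np.sign(residual.real) + 1j * np.sign(residual.imag)) / np.sqrt(2)

core_ser = np.mean((np.sign(core_hat.real) != np.sign(core.real)) |
                   (np.sign(core_hat.imag) != np.sign(core.imag)))
enh_ser = np.mean((np.sign(enh_hat.real) != np.sign(enh.real)) |
                  (np.sign(enh_hat.imag) != np.sign(enh.imag)))
print(f"Core symbol error rate: {core_ser:.4f}")
print(f"Enhanced symbol error rate (after cancellation): {enh_ser:.4f}")
```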

Successfully deploying this architecture requires precise alignment of codec packetization, transport signaling, and receiver buffering. What are the biggest technical hurdles broadcasters face in this area, and what steps ensure a seamless viewer experience without playback stalls or decoding errors?

This is where the real engineering artistry comes in. Getting these three domains—the codec, the IP transport, and the receiver—to dance in perfect synchronization is the biggest challenge. First, at the codec level, the VVC encoder has to be configured to slice the video into Network Abstraction Layer units that align perfectly with the payload capacity of their assigned PLPs. You can’t have a crucial piece of a frame split awkwardly across packets. Second, the transport signaling must be flawless. The receiver needs to be told explicitly, via SLS/LLS signaling, that the data in PLP-1 is an enhancement for the service in PLP-0, not a separate channel. Without that dependency map, the receiver can’t reconstruct the full-quality picture. Finally, and perhaps most critically, is timing and buffering. The base layer has to arrive and be ready for presentation on time, every time. This means the receiver must have a smart buffering strategy to hold onto the base layer frames just long enough to see if the corresponding enhancement layer data arrives from a less reliable PLP, then merge them without ever causing a stutter or stall on screen. It’s a delicate, end-to-end system design challenge that requires meticulous planning to make the viewer’s experience completely seamless.
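Here is a simplified sketch of that buffering idea: hold each base-layer frame for a short window in case its enhancement data arrives from the less reliable PLP, then present whatever is available so playback never stalls. The class, method names, and the 200 ms window are hypothetical, not drawn from any receiver implementation.

```python
# Illustrative sketch of a layer-merge buffer. Names and timings are assumptions.

import time
from collections import deque

MERGE_WINDOW_S = 0.200           # assumed wait budget for enhancement data

class LayerMerger:
    def __init__(self):
        self.pending = deque()   # (arrival_time, frame_id, base access unit)
        self.enh = {}            # frame_id -> enhancement access unit

    def on_base(self, frame_id, base_au):
        self.pending.append((time.monotonic(), frame_id, base_au))

    def on_enhancement(self, frame_id, enh_au):
        self.enh[frame_id] = enh_au

    def pop_ready(self):
        """Return frames that are either merged or whose wait window expired."""
        out, now = [], time.monotonic()
        while self.pending:
            arrived, fid, base = self.pending[0]
            enh = self.enh.pop(fid, None)
            if enh is None and now - arrived < MERGE_WINDOW_S:
                break            # keep waiting, but never block presentation forever
            self.pending.popleft()
            out.append((fid, [base] + ([enh] if enh else [])))  # base-only if late
        return out
```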

What is your forecast for the adoption of layered VVC and PLP architectures in broadcasting over the next five years?

I am incredibly optimistic. I foresee a steady and accelerating adoption over the next five years, moving from early-adopter trials to mainstream deployment. The standards are now mature and formalized, with VVC defined in A/345 for use in ATSC 3.0, which provides a stable foundation for manufacturers and broadcasters. The driving force won’t just be the desire to offer UHD and HDR, but the powerful operational and economic benefits. The spectrum efficiency gained by replacing simulcast is a massive incentive. It allows broadcasters to do more with the same bandwidth—offer more services, reach more viewers reliably, and provide a fundamentally better product. As more broadcasters deploy ATSC 3.0, they will quickly realize that this layered architecture is not just an optional feature but the key to unlocking the standard’s full potential. It solves the core, decades-old problem of serving a diverse audience with varying reception conditions, and that’s a compelling proposition that is simply too powerful to ignore.
