For some time now the consumer electronics industry has been grappling with how to improve the performance and efficiency of display interfaces, especially in light of recent increases in display resolution. Through the eras of DVI, LVDS/LDI, HDMI, and DisplayPort, video has been transmitted from source to sink as raw, uncompressed data, a conceptually simple setup that ensures high quality and low latency but requires an enormous amount of bandwidth. The introduction of newer interface standards such as HDMI and DisplayPort has in turn allowed manufacturers to meet those bandwidth requirements so far. But display development is reaching a point where both PC and mobile device manufacturers are concerned about their ability to keep up with the bandwidth requirements of these displays at a reasonable cost in power and silicon.

In order to address these concerns, the PC and mobile device industries – through their respective VESA and MIPI associations – have been working together to create new technologies and standards to handle the expected bandwidth requirements. The focus of that work has been the VESA's Display Stream Compression (DSC) standard, a descriptively named standard for image compression that has been in development at the VESA since late 2012. With that in mind, the VESA and MIPI have announced today that DSC development has been completed and version 1.0 of the DSC standard has been ratified, with both organizations adopting it for future display interface standards.

As alluded to by the name, DSC is an image compression standard designed to reduce the amount of data that needs to be transmitted. With DisplayPort 1.2 already pushing 20Gbps and DisplayPort 1.3 set to increase that to over 30Gbps, display interfaces are already the highest bandwidth interfaces in a modern computer, creating practical limits on how much further they can be scaled. With limited headroom for increasing interface bandwidth, DSC tackles the issue from the other end, reducing the amount of bandwidth required in the first place through compression.

Since DSC is meant to be used at the final transmission stage, DSC itself is designed to be “visually lossless”. That is to say that it’s intended to be very high quality, with any compression artifacts unnoticeable to users across a wide variety of content, including photos/video, subpixel text, and potentially problematic patterns. But with that said, visually lossless is not the same as mathematically lossless, so while DSC is a high quality codec it is still, strictly speaking, a lossy codec.

In terms of design and implementation DSC is a fixed rate codec, an obvious choice to ensure that the bandwidth requirements for a display stream are equally fixed and a link is never faced with the possibility of running out of bandwidth. Hand-in-hand with the fixed rate requirement, the VESA’s standard calls for visually lossless compression with as little as 8 bits/pixel, which would represent a 66% bandwidth savings over today’s uncompressed 24 bits/pixel display streams. And while 24bit color is the most common format for consumer devices, DSC is also intended to work with higher color depths, including 30bit and 36bit (presumably at higher DSC bitrates), allowing it to be used even with deep color displays.
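
To put concrete numbers on that claim, here is a quick back-of-the-envelope sketch. This is our own illustration rather than anything from the spec; it counts only active pixels and ignores blanking and protocol overhead:

```python
# Back-of-the-envelope math behind DSC's headline numbers. Active pixels
# only; blanking and protocol overhead are deliberately ignored here.

def stream_bandwidth_gbps(width, height, refresh_hz, bits_per_pixel):
    """Raw bandwidth of the active pixels in a display stream."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

uncompressed = stream_bandwidth_gbps(3840, 2160, 60, 24)  # 24bpp RGB
compressed = stream_bandwidth_gbps(3840, 2160, 60, 8)     # DSC at 8bpp

print(f"4K@60Hz uncompressed: {uncompressed:.1f} Gbps")   # ~11.9 Gbps
print(f"4K@60Hz DSC (8bpp):   {compressed:.1f} Gbps")     # ~4.0 Gbps
print(f"Bandwidth savings:    {1 - compressed/uncompressed:.0%}")  # ~67%
```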

We won’t get too much into the workings of the DSC algorithm itself – the VESA has a brief but insightful whitepaper on the subject – but it’s interesting to point out the unusual requirements the VESA needed to meet with DSC. Image and video compression is a well-researched field, but most codecs (like JPEG and H.264) are designed around offline encoding for distribution, rather than real-time encoding as part of a display standard. DSC on the other hand needed to be computationally cheap (to keep implementation costs down) and low latency, all while still offering significant compression ratios with minimal image quality losses. The end result is an interesting algorithm that uses a combination of delta pulse code modulation (DPCM) and an indexed color history (ICH) to achieve the fast compression and decompression required.
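
The real DSC algorithm is considerably more sophisticated (adaptive prediction, rate control buffering, and so on), but the two core ideas are easy to sketch. The toy encoder below is purely illustrative, with names and details of our own invention; it is not the actual DSC prediction scheme, entropy coding, or bitstream format:

```python
# Toy sketch of DSC's two core ideas: (1) DPCM - predict each pixel from
# its neighbor and code the usually-small delta, and (2) an indexed color
# history - point back into a short list of recently seen colors.
# Purely illustrative; not the real DSC algorithm.

def toy_encode(scanline, history_size=32):
    history = []   # recently seen colors, most recent first
    prev = 0       # trivial predictor: the previous pixel on the line
    symbols = []
    for pixel in scanline:
        if pixel in history:
            # A short history index is much cheaper than a full color value.
            symbols.append(("ICH", history.index(pixel)))
        else:
            # Deltas from the prediction cluster near zero, so they can be
            # coded with far fewer bits than raw pixel values.
            symbols.append(("DPCM", pixel - prev))
            history.insert(0, pixel)
            del history[history_size:]
        prev = pixel
    return symbols

# Flat regions and repeated colors compress to small deltas and indices:
print(toy_encode([100, 101, 103, 103, 100, 200]))
# [('DPCM', 100), ('DPCM', 1), ('DPCM', 2), ('ICH', 0), ('ICH', 2), ('DPCM', 100)]
```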

Moving on, with the ratification of the DSC 1.0 standard, both the VESA and MIPI will be adopting it for some of their respective standards. On the VESA side, eDP 1.4 will be the first VESA standard to include it, while we also expect DSC’s inclusion in the forthcoming DisplayPort 1.3. MIPI in turn will be including DSC in their Display Serial Interface (DSI) 1.2 specification for mobile devices.

With the above in mind, it’s interesting how both groups ended up at the same standard despite their significant differences in goals. The VESA is primarily concerned with driving ultra high resolutions such as 8K@60Hz, which would require over 50Gbps of uncompressed video, more than even DisplayPort 1.3 will be able to deliver. MIPI on the other hand is not concerned about resolution so much as power and cost; a DisplayPort-like interface could supply mobile devices with plenty of bandwidth, but high bitrate interfaces are expensive to implement and are typically very power hungry, both on an absolute basis and a per-bit basis.

Display Bandwidth Requirements, 24bpp (Uncompressed)

Resolution             Bandwidth   Minimum DisplayPort Version
1920x1080@60Hz         3.5Gbps     1.1
2560x1440@60Hz         6.3Gbps     1.1
3840x2160@60Hz (4K)    14Gbps      1.2
7680x4320@60Hz (8K)    >50Gbps     1.3 + DSC
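
For reference, the table's figures can be roughly reproduced from first principles. The sketch below is our own estimate: the active pixel rate at 24bpp, scaled by an assumed ~18% blanking/timing overhead (the overhead factor is our assumption, not a figure from any spec, and real CVT/CEA timings vary per mode):

```python
# Rough reconstruction of the table above: active pixel rate at 24bpp,
# scaled by an assumed blanking/timing overhead factor.

BLANKING_OVERHEAD = 1.18  # our own assumption, not a figure from any spec

def link_bandwidth_gbps(width, height, refresh_hz=60, bpp=24):
    return width * height * refresh_hz * bpp * BLANKING_OVERHEAD / 1e9

for name, w, h in [("1920x1080", 1920, 1080), ("2560x1440", 2560, 1440),
                   ("3840x2160 (4K)", 3840, 2160), ("7680x4320 (8K)", 7680, 4320)]:
    print(f"{name}: {link_bandwidth_gbps(w, h):.1f} Gbps")
# 1920x1080: 3.5 Gbps, 2560x1440: 6.3 Gbps, 4K: 14.1 Gbps, 8K: 56.4 Gbps
```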

DSC in turn solves both of their problems, allowing the VESA to drive ultra high resolutions over DisplayPort while allowing MIPI to drive high resolution mobile displays over low cost, low power interfaces. In fact it’s surprising (and almost paradoxical) that even with the additional manufacturing costs and encode/decode overhead of DSC, in the end DSC is both cheaper to implement and lower power than a higher bandwidth interface.

Wrapping things up, while DSC-enabled devices are still some time off – the fact that the standard was just ratified means new display controllers still need to be designed and built – DSC is something we’re going to have to watch closely. Display compression is not something to be taken lightly given the potential compromises to both image quality and latency, and while it’s unlikely the average consumer will notice, it’s definitely going to catch the eye of enthusiasts. The VESA and MIPI are going in the right direction by targeting visually lossless compression rather than accepting a significant image quality tradeoff for better bandwidth savings, but it remains to be seen just how close to lossless DSC really is in practice. At a fundamental level DSC can never beat the quality of an uncompressed display stream, but that doesn’t rule out the overall tradeoff being worth the cost.

Source: VESA

Comments

  • dabotsonline - Wednesday, April 23, 2014

    "... while DSC enabled devices are still some time off – the fact that the standard was just ratified means new display controllers still need to be designed and built... "

    Why will this be a concern given that DisplayPort 1.3 hasn't yet been announced?

    "... we also expect DSC’s inclusion in the forthcoming DisplayPort 1.3."

    This suggests that DSC will be included from the start. Unless DisplayPort 1.3 controllers are being worked on already?

    What's interesting is that the recently-leaked Thunderbolt 3, scheduled for release alongside Skylake in autumn/fall 2015, only has support for DP 1.2. Therefore, we will probably have to wait for Thunderbolt 4, which will presumably be released in autumn/fall 2017, for DP 1.3 and DSC support.
  • willis936 - Wednesday, April 23, 2014

    What it means is that your brand new Samsung Galaxy S6 will likely implement DSC.
  • piroroadkill - Wednesday, April 23, 2014

    The diagram is totally awful.

    It's a diagram with vectors, rendered as a JPEG.

    Looks horrible.

    Not to mention the final output image looks like it has been run through a JPEG compressor on low settings about fifty times.

    I don't want compression from someone who thinks this is a reasonable slide in a presentation.
  • TristanSDX - Wednesday, April 23, 2014

    This compression is total nonsense. High res displays (4K, 8K) show their biggest advantage while displaying edges (fonts, lines, sharp borders of regions, etc.), and these elements are ALWAYS blurred by ANY non-lossless compression algorithms. This way 4/8K will be visually downgraded to FHD (or less) with blurred edges.
  • savagemike - Wednesday, April 23, 2014

    So we'll have a really clear high res look at crappy compressed information. I just hope it has a bypass with zero compression if using a display below a given factor. Then that factor will be my upward bounding limit for displays I buy.
  • watersb - Wednesday, April 23, 2014

    Are gamers really all that different from audiophiles? I hope so.

    There are obvious things that can be done to result in dramatic compression of the frame buffer. You don't need 32 bits of color information per pixel. Worst case, each pixel is a different color. That's 8192 colors on a line. 13 bits...

    (Reminds me of Amiga HAM mode.)

    Is there anything at all in the spec that addresses panel self-refresh?
  • NikAwesome - Saturday, April 26, 2014

    Excuse me, and how many bits do you employ to code the dictionary for addressing 8192 colors? Bear in mind that each line can have a different dictionary, so you are missing lots of bits in your math.
  • watersb - Saturday, April 26, 2014

    Hmm... Yes, you are right, I need a look-up table for each line. But I won't need 32 bits for each pixel on the line. I still think I can get at least one order of magnitude reduction of data (factor of 10) with a per-line LUT... I will try it. If I'm wrong, I will post a reply!
  • Communism - Thursday, April 24, 2014

    Sounds like the 1440p @ 120hz ASUS ROG Swift will be the last monitor upgrade I will get before the world devolves into full retard mode.
  • androticus - Friday, April 25, 2014

    Don't use "the" with acronyms, it is clunky and makes the text difficult to read. If the reader is going to just pronounce the acronym and not try to unpack it into its components, then "the" is incorrect.
