Color Space

A color space is a subset of the entire visible light spectrum, and the reason it’s a subset is that reproducing the entire visible spectrum is technologically impossible at this time. Different color spaces target different subsets of colors, although they may overlap with each other. A color space is also known as a color gamut.

The most common color space on a PC today is sRGB, and almost all laptops and monitors target this color range. But in the past several years, we’ve seen more and more devices able to target a gamut with a wider range of colors, such as Adobe RGB, and there are advantages and disadvantages to using these wider gamuts on a PC.


The sRGB Color Space - Source: SpectraCal

sRGB, or standard Red Green Blue, is the RGB color space that serves as the standard on all PCs and the Internet. If you look at an image on the Internet, it’s more than likely an image within the sRGB color space. Windows defaults to using the sRGB color space for everything.

Other common gamuts used in the PC industry are the Adobe RGB gamut, developed by Adobe in 1998, and the P3 color space, both of which are wider than sRGB. Wider means that they cover a larger range of colors than the sRGB gamut does.


The Adobe RGB Color Space - Source: SpectraCal


The P3 D65 Color Space - Source: SpectraCal


The BT.2020 HDR Color Space - Source: SpectraCal

But being wider isn’t always better, especially when using Windows as your operating system. Windows defaults to sRGB for pretty much everything, and it doesn’t have a robust color management system (CMS) that can transform colors targeting one space into another. Applications can build and use their own CMS, but that works at the application level only, and requires the developer to do much of the legwork. Consider an sRGB color on an 8-bit display: to display 100% red, you’d set that pixel to 255,0,0, putting the red channel at its maximum (255, or 1.0f in floating point). But if Windows opens a photo that was created as sRGB on a display that covers the P3 gamut, it will still tell the display to use 100% red, zero green, zero blue. Yet 100% red in sRGB is closer to 80% red on a P3 display, so the colors will be oversaturated, and the image won’t look correct.
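To make that concrete, below is a minimal sketch of the conversion Windows would need to do: mapping an sRGB color into Display P3 coordinates by way of CIE XYZ. The matrices are the commonly published sRGB-to-XYZ (D65) and XYZ-to-Display-P3 values; the helper names are our own. In linear light, pure sRGB red lands at roughly 82% of the P3 red primary, which lines up with the rough 80% figure above.

```python
# A minimal sketch: convert an sRGB color to Display P3 via CIE XYZ.
# Matrices are the commonly published sRGB-to-XYZ (D65) and
# XYZ-to-Display-P3 values; helper names are our own illustration.

def srgb_to_linear(c):
    """Undo the sRGB gamma curve (per channel, 0.0-1.0)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_gamma(c):
    """Re-apply the gamma curve (Display P3 uses the same curve as sRGB)."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

SRGB_TO_XYZ = [(0.4124564, 0.3575761, 0.1804375),
               (0.2126729, 0.7151522, 0.0721750),
               (0.0193339, 0.1191920, 0.9503041)]

XYZ_TO_P3 = [( 2.4934969, -0.9313836, -0.4027108),
             (-0.8294889,  1.7626640,  0.0236247),
             ( 0.0358458, -0.0761724,  0.9568845)]

def mat_mul(matrix, vec):
    return [sum(a * b for a, b in zip(row, vec)) for row in matrix]

def srgb_to_display_p3(rgb):
    linear = [srgb_to_linear(c) for c in rgb]
    xyz = mat_mul(SRGB_TO_XYZ, linear)
    p3_linear = mat_mul(XYZ_TO_P3, xyz)
    return [linear_to_gamma(c) for c in p3_linear]

# Pure sRGB red is not pure red on a P3 panel:
print(srgb_to_display_p3([1.0, 0.0, 0.0]))  # ~[0.917, 0.200, 0.139]
```

Sending 255,0,0 straight to a P3 panel skips this transform entirely, which is exactly the oversaturation described above.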

Incorrect color on the left (Photos) vs. correct color on the right (Adobe Photoshop Elements)

To Microsoft's credit, Windows is getting better. But progress here is slow due to legacy constraints and the need to avoid breaking existing software, so using wider color spaces on the desktop is still a dicey proposition.

The end result is that on a laptop, offering a wider-gamut display can cause problems for most workloads. For that reason, unless you do a lot of work in a color-managed application like something from Adobe, or your laptop has the ability to switch the color gamut the display uses through software, it’s generally not ideal.

Finally, there are even wider gamuts available, like the Rec. 2020 color space. But at the moment there’s no technology available in a laptop-sized implementation that can create all of the colors in that space.

White Point

The white point on a PC display is generally going to be D65, which corresponds roughly to sunlight at midday in western or northern Europe. In terms of color temperature in Kelvin, D65 is close to 6504 K. On a DCI-P3 display, the target white point is D63, which is greener than D65. In the PC space you’ll generally be dealing with the P3 D65 gamut, rather than the DCI-P3 variant used in cinema, though there are a few PCs, such as the Surface Studio, that can target either DCI-P3 or P3 D65.
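As a quick illustration, here’s a minimal sketch comparing the two white points by converting their CIE xy chromaticity coordinates to XYZ tristimulus values. The chromaticity numbers are the published CIE values; the helper name is our own.

```python
# A minimal sketch: convert CIE xy chromaticity coordinates to XYZ
# tristimulus values (luminance Y normalized to 1.0). The chromaticities
# are the published values for D65 and the DCI cinema white point.

def xy_to_xyz(x, y, Y=1.0):
    """Convert CIE xy chromaticity plus luminance Y to XYZ."""
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y
    return (X, Y, Z)

D65_WHITE = (0.3127, 0.3290)  # sRGB / P3 D65 white, ~6504 K
DCI_WHITE = (0.3140, 0.3510)  # DCI cinema white, ~6300 K

print("D65:", xy_to_xyz(*D65_WHITE))  # ~(0.9505, 1.0, 1.0891)
print("DCI:", xy_to_xyz(*DCI_WHITE))  # ~(0.8946, 1.0, 0.9544)
```

The DCI white point’s higher y coordinate means relatively more green in the white, which is why it looks greener than D65.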

How many bits?

Most displays are 8 bits per channel, meaning each red, green, and blue channel can have 256 (2^8) steps, which provides 16,777,216 (256 * 256 * 256) different color combinations. That sounds like a lot of colors, and it is, but on each channel there’s still a fairly large jump between steps, since red can only be 0-255, for example.

Less expensive displays may reduce this even further, to 6-bit with Frame Rate Control (FRC), which uses temporal dithering: the panel rapidly alternates a pixel between two adjacent colors to simulate the shade in between, approximating the full 8-bit levels. TN displays are often only 6-bit with FRC, and lower-priced IPS panels such as e-IPS can also be 6-bit.
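Here’s a minimal sketch of the idea, assuming a 6-bit panel approximating an 8-bit target level; the function name and frame count are our own illustration, not any panel vendor’s actual algorithm.

```python
# A minimal sketch of temporal dithering (FRC): a 6-bit panel alternates a
# pixel between the two nearest 6-bit levels so the time-averaged output
# lands on the intended 8-bit value.

def frc_frames(target_8bit, num_frames=64):
    """Return the 6-bit level shown on each frame for an 8-bit target."""
    lo = target_8bit // 4           # nearest 6-bit step at or below (256/64 = 4)
    hi = min(lo + 1, 63)            # next 6-bit step up
    frac = (target_8bit % 4) / 4    # how far the target sits between the two
    frames, err = [], 0.0
    for _ in range(num_frames):
        err += frac
        if err >= 1.0:              # accumulate error, emit the brighter level
            frames.append(hi)
            err -= 1.0
        else:
            frames.append(lo)
    return frames

frames = frc_frames(130)              # 8-bit 130 sits halfway between 6-bit 32 and 33
print(sum(frames) / len(frames) * 4)  # ~130.0: the flicker averages to the target
```

On panels with slow response times the alternation blends together well; on faster panels, such as TN, the switching pattern can become visible as flicker.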

Meanwhile, with the wider color gamuts available now, the jump in color for each step from 0-255 can be even wider. So lower-bit panels are more susceptible to color banding, which, as the name implies, is a situation where distinct bands of color appear where the intention was a smooth gradient. This occurs when two adjacent colors are supposed to be slightly different, but end up looking exactly the same or more different than intended, since the display can’t create the color in between. To counteract this, more bits are needed to represent smaller gradations from one step to the next, so a wider-gamut display will often be 10-bit, or 8-bit with FRC. A true 10-bit display offers 1024 levels for each channel, which creates 1,073,741,824 different color options.
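The arithmetic behind banding is easy to demonstrate: quantize the same smooth ramp at different bit depths and count how many distinct levels survive. This is our own toy illustration, not a measurement of any particular panel.

```python
# A minimal sketch of why more bits mean less banding: quantize a smooth
# 0.0-1.0 gradient at several bit depths and count the distinct levels.
# Fewer levels across the same range means bigger visible steps (bands).

def quantize(value, bits):
    """Snap a 0.0-1.0 value to the nearest code at the given bit depth."""
    levels = (1 << bits) - 1
    return round(value * levels) / levels

gradient = [i / 999 for i in range(1000)]  # a smooth 1000-sample ramp

for bits in (6, 8, 10):
    codes = {quantize(v, bits) for v in gradient}
    step = 1 / ((1 << bits) - 1)
    print(f"{bits}-bit: {len(codes)} levels, step size {step:.5f}")
```

Stretch those same steps across a wider gamut and each one covers a bigger perceptual distance, which is why wide-gamut panels push toward 10-bit.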

Although all modern GPUs support 10-bit output, NVIDIA currently restricts 10-bit over OpenGL to its Quadro lineup, so applications that use OpenGL, like Adobe’s suite, require a professional GPU.

High Dynamic Range

High dynamic range, or HDR, is one of the best new features to come to displays in quite some time. HDR brings a wider range of brightness levels, and it requires a higher bit depth as well, to preserve detail in darker scenes. On top of that, displays certified for HDR have to achieve higher brightness levels than traditional displays, so it can be a large benefit.

Source: Samsung

VESA has announced the DisplayHDR specification, which has three levels: DisplayHDR 400, 600, and 1000, with each level corresponding to the peak brightness of the display in nits.

There are two main competing HDR transport standards right now, although only one is available on the PC. HDR10 is a 10-bit format with static metadata for the HDR information, while Dolby Laboratories’ Dolby Vision uses a 12-bit color depth with dynamic metadata, which allows the mapping of pixel data to luminance to be adjusted frame by frame. PCs only support HDR10, and not only do you need an HDR monitor, you also need at least a Kaby Lake iGPU or newer, an NVIDIA GeForce 900 series or newer GPU, or an AMD Radeon RX 400 series or higher, as well as DisplayPort 1.4 or HDMI 2.0.
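Both formats map pixel code values to absolute luminance using the SMPTE ST 2084 "PQ" transfer function. Here’s a minimal sketch of the published PQ EOTF; the function name is our own.

```python
# A minimal sketch of the SMPTE ST 2084 "PQ" EOTF used by HDR10 (and Dolby
# Vision), mapping a normalized code value to absolute luminance in nits.
# The constants are the published ST 2084 values.

def pq_eotf(signal):
    """Map a normalized PQ signal (0.0-1.0) to luminance in nits."""
    m1 = 2610 / 16384        # 0.1593017578125
    m2 = 2523 / 4096 * 128   # 78.84375
    c1 = 3424 / 4096         # 0.8359375
    c2 = 2413 / 4096 * 32    # 18.8515625
    c3 = 2392 / 4096 * 32    # 18.6875
    e = signal ** (1 / m2)
    return 10000 * (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1 / m1)

print(pq_eotf(0.5))  # ~92 nits: half the code range is still fairly dim
print(pq_eotf(1.0))  # 10000 nits: the top code value
```

The curve spends most of its code values on dark and mid tones, which is why HDR needs the extra bit depth mentioned above to keep shadows free of banding.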

There are only a couple of laptops on the market that offer an HDR display at the moment, but with the eDP connector and complete control over the product stack, HDR in a laptop is likely to be easier for the consumer to use. As of Windows 10 version 1803, the built-in display needs to have a resolution of 1920x1080 or higher, with a recommended brightness of 300 nits or more. The device has to have integrated graphics with PlayReady hardware DRM for protected HDR, and of course the correct codecs for video, which is generally going to be HEVC; Kaby Lake or newer supports this. The nice thing is that consumers won't have to worry about any of it, since it will be up to the OEM to ensure everything is in order, rather than a desktop system where the user has to make sure all of the i's are dotted and the t's are crossed to enable HDR.

There are a lot of new HDR products coming to market soon, and when HDR becomes mainstream it’ll be a welcome benefit.

Comments

  • linuxgeex - Thursday, July 12, 2018 - link

    "Driving the extra pixels with the GPU and other components is a tiny difference. That's a common misconception you've stumbled upon."

    Going from 1920x1080 to 3840x2160 is 4x the rendering cost, minimum (recognise that given more than 2 layers to composite you can easily exceed the CPU's L3 cache size with a 4k display), and that is 4x the amount of time that the CPU and all related subsystems can't drop to C7 sleep.

It's not a tiny difference at all. If it were negligible, then why is the OS trying to use PSR (Panel Self Refresh) and FBC (FrameBuffer Compression) to reduce the IO channel and RAM access overheads, when those costs are negligible compared to keeping the CPU and GPU spinning with rasterizing and compositing?

    What's keeping your OS and apps compositing constantly? Your browser which now does full-page 60hz updates of every pixel, changed or not, so the OS can't send only the damaged pixels to the display device as in earlier versions. Why? Because modern machines are fast enough and it's a "small difference" but keeps the render pathways hot in the caches so less frames are dropped. Welcome to 2018, when your battery life got slaughtered and people haven't quite clued in yet.
  • erple2 - Sunday, July 22, 2018 - link

PSR and FBC tasks are tackling the 20% case, though, namely the parts at idle where 80% of the power consumed is just directly from keeping the backlight bright enough that the LCD can be seen. Note also that PSR and FBC don't make that much of a difference in battery life overall. I've seen up to about 10% in some cases. And that's consistent with doubling the GPU rendering pipeline efficiency _at idle_ for the entire display pipeline. Doubling the efficiency of 20% of your overall budget decreases power consumption by around 10%.

    Note that much of the compositing engine is offloaded (in modern GPUs) from the heavyweight parts of the 3D rendering pipeline, so those costs aren't that high in comparison. It's not like you're keeping all 2048 stream processors (or however many equivalent GPU processors) active 60 times a second. That was the first "revolution" in GPU efficiency gains a while back - you didn't need to keep your entire GPU rendering silicon active all the time if they weren't being used.
  • linuxgeex - Wednesday, July 11, 2018 - link

    "Less expensive displays may even reduce this more to 6-bit with Frame Rate Control (FRC) which uses the dithering of adjacent pixels to simulate the full 8-bit levels."

    No. FRC uses Temporal dithering. It shows the pixel brighter or darker across multiple frames which average out to the intended intensity. On displays with poor response times this actually works out quite nicely. On TN displays, you can actually see the patterns flickering when you are close to a large display and cast your gaze around the display. Particularly in your peripheral vision which is more responsive to high-speed motion changes.

VA - You mentioned MVA, which is one type of PVA arrangement. PVA is Patterned Vertical Alignment, where not all of the VA pixels/subpixels are aligned in the same plane. Almost all VA displays are PVA. PVA allows you to directly trade display brightness for wider viewing angles, and to choose in which direction those tradeoffs will be made. For example a PVA television will trade off mostly in the horizontal direction because that allows people to sit in various places around the room and still see the display well. They don't need to increase the vertical viewing angle so that the roof has a good view of the tv. ;-) But for a laptop just the opposite is true. You want to still see the display well when you stop slouching or stand up, but you don't really care if the people to your sides can see your display well. In fact, people purchase privacy guard overlays that reduce the side viewing angles intentionally.
  • Brett Howse - Wednesday, July 11, 2018 - link

    Excellent info thanks!
  • linuxgeex - Thursday, July 12, 2018 - link

    The author was obviously in a hurry, saw the word "dithering", and jumped to the conclusion that it was spatial error distribution dithering as is commonly used in static images to create an appearance of a larger palette. ie GIFs, printers. But for video there's a 3rd dimension to perform dithering in which doesn't trade off resolution or cause edge flickering artefacts, so of course they're going to use FRC (Frame Rate Control which is basically a form of PWM) instead of spatial dithering.
  • linuxgeex - Thursday, July 12, 2018 - link

    Oh Brett, lol that's you. ;-)
  • UtilityMax - Friday, July 13, 2018 - link

WTF, you guys still test laptop displays at the time when more than half of personal computing has already moved onto mobile devices, like phones or tablets, which you no longer review? Mmmokay.
  • linuxgeex - Friday, July 13, 2018 - link

    Actually they have reviewed new phones within the last 30 days... Mmmokay.
  • Zan Lynx - Saturday, July 14, 2018 - link

    A tablet is just a gimped laptop without a keyboard.
  • madskills42001 - Tuesday, July 17, 2018 - link

    Given that contrast is the most important factor in subjective image quality tests, why is more discussion not given to it in this article?
