Display

As always, the display of any mobile device is a critical part of the overall user experience. A display that falls short in any way tends to sour the entire experience.

On a personal note, there are a number of mobile devices I've used over the past year that frankly just weren't good enough to serve as a daily driver because of their displays. My laptop is closely calibrated to sRGB and is used to edit all of my device photos, so I've really come to appreciate a device with color accurate enough that I can use a phone or tablet as a reference monitor of sorts to verify that images look the way I want them to.

In order to test this critical portion of the user experience, we turn to our standard test suite, which uses SpectraCal's CalMAN 5 with a custom workflow to test basic metrics like brightness, contrast, and calibration accuracy, along with X-Rite's i1Pro 2 and i1Display Pro.

Starting off with a microscope's view of the Galaxy S7's display, it looks like Samsung has elected to keep most aspects of the panel constant between the Galaxy S6 and S7. At a high level, the display retains the 5.1” size that we've seen for a few generations now, and the 1440p resolution is shared with previous devices. Samsung continues to use its diamond PenTile layout, but it's hard for me to say whether the size of the emitters has changed, as the microscope I have on hand isn't quite sufficient for making such measurements. It's likely that there are changes to the display driver IC under the hood to enable features like Always-On Display, but as we'll soon see, it's rather unlikely that there are any generational changes in things like the emitter material or TFT backplane.

[Chart: Display - Max Brightness]

One of our first tests is a fairly standard measurement of maximum luminance. Here, the Galaxy S7 and S7 edge are both in the same general ballpark as the Galaxy Note5, which suggests that all three devices use the same generation of AMOLED panel. This brightness was achieved using the auto-brightness mode, so it's important to note that the maximum luminance in manual mode will be much lower. Of course, this figure was determined with a full white display, so reducing the APL (average picture level) will result in a higher maximum luminance: the power budget can be spent on fewer pixels, which means that a higher duty cycle can be achieved in each pixel.
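To make the APL relationship concrete, the sketch below models a fixed panel power budget being spread over however many pixels are lit, with a per-pixel ceiling on top. This is only an illustrative model, not Samsung's actual brightness-management logic, and all of the numbers in it are invented.

```python
# Toy model: with a roughly fixed panel power budget, lighting fewer
# pixels (lower APL) lets each lit pixel be driven harder, so peak
# luminance rises as APL falls. All values here are illustrative.

def est_peak_luminance(apl: float,
                       full_white_nits: float = 450.0,
                       cap_nits: float = 800.0) -> float:
    """Estimate peak luminance (nits) at a given average picture level.

    apl: fraction of the screen lit at full white, in (0, 1].
    full_white_nits: measured luminance with a 100% APL full-white field.
    cap_nits: per-pixel ceiling imposed by the driver IC and emitters.
    """
    if not 0.0 < apl <= 1.0:
        raise ValueError("APL must be in (0, 1]")
    # Constant power budget spread over fewer lit pixels, then clamped:
    return min(full_white_nits / apl, cap_nits)

for apl in (1.0, 0.8, 0.6, 0.5):
    print(f"APL {apl:>4.0%}: ~{est_peak_luminance(apl):.0f} nits")
```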


[Gallery: Galaxy S7 / Galaxy S7 edge]

[Chart: Display - Grayscale Accuracy]

[Chart: Display - White Point]

The next part of our testing is grayscale. As always, we target the industry standard of a 2.2 power gamma with a 6504K white point. Relative to the Galaxy S6 and Note5, we see a pretty significant improvement in white point accuracy, as the white point is now consistently close to a neutral white rather than a warmer color balance. Unfortunately, in both review units I received, the display has a noticeable green tint for many shades of grey, which seems to be somewhat of a perpetual problem with Samsung AMOLED displays. This really does affect quite a bit of the UI, as Material Design greys take on this noticeable green tint that really makes things look off.
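For reference, the 6504K target is the correlated color temperature (CCT) of the D65 white point. A quick way to sanity-check a measured white point is McCamy's approximation, which maps a CIE 1931 xy chromaticity to an approximate CCT; a full CalMAN workflow reports far more than this, so treat it only as a rough check.

```python
# McCamy's approximation: CIE 1931 xy chromaticity -> approximate CCT.
# Useful as a rough sanity check on a measured white point.

def mccamy_cct(x: float, y: float) -> float:
    """Approximate correlated color temperature (kelvin) from xy."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n ** 3 + 3525.0 * n ** 2 + 6823.3 * n + 5520.33

# The D65 chromaticity should land very close to 6504 K:
print(f"{mccamy_cct(0.3127, 0.3290):.0f} K")
```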

The same issue doesn't seem to be present on the Galaxy S7 edge, which leads to a significant improvement in overall calibration quality for this portion of the testing. However, both devices have a noticeably lower gamma than expected; this hurts accuracy to some extent, but it can also serve as a compensation mechanism for reflectance when dealing with ambient light. The green tint issue may well vary on a device-to-device basis, but seeing that such issues haven't been resolved for years is somewhat concerning given that phones costing hundreds of dollars less don't seem to have the same problems.
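The gamma figure itself comes from fitting the measured grayscale sweep. A minimal version of that fit is sketched below: assuming luminance follows L = L_max * signal^gamma, the exponent is the slope of a log-log regression. The sample readings are invented, chosen to come out a bit under the 2.2 target as described above.

```python
# Estimate a display's gamma exponent from a grayscale sweep by fitting
# luminance = L_max * signal**gamma, i.e. a straight line in log-log space.
import math

def fit_gamma(signal_levels, luminances):
    """Least-squares slope of log(luminance) vs. log(signal)."""
    xs = [math.log(s) for s in signal_levels]
    ys = [math.log(l) for l in luminances]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den  # the slope is the gamma exponent

# Hypothetical 20%-100% grayscale steps (normalized signal, nits):
signals = [0.2, 0.4, 0.6, 0.8, 1.0]
nits = [14.0, 60.0, 140.0, 260.0, 430.0]

print(f"fitted gamma = {fit_gamma(signals, nits):.2f}")  # ~2.1, below 2.2
```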


[Gallery: Galaxy S7 / Galaxy S7 edge]

[Chart: Display - Saturation Accuracy]

The next portion of our testing is the standard saturation sweep test. Here, the Galaxy S7 and S7 edge are basically perfect. It's great to see that Samsung continues to provide its Basic color mode with a real focus on accurate color calibration for those that care about these things. Getting to the right calibration is also about as painless as it can be; on some other devices, saturation curves, white balance, and other parts of a display calibration can only be adjusted using unitless sliders that basically require a spectrophotometer to actually use.
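As a rough illustration of what a saturation sweep measures, the sketch below lays out targets by stepping the chromaticity from the white point toward each sRGB primary in equal increments in the CIE 1931 xy plane. Whether CalMAN spaces its steps exactly this way is an assumption on my part; the primary and white point coordinates are the standard sRGB/D65 values.

```python
# Sketch: saturation-sweep targets as equal steps in CIE 1931 xy from the
# D65 white point toward each sRGB primary. Step spacing is an assumption.

SRGB_PRIMARIES = {
    "red": (0.640, 0.330),
    "green": (0.300, 0.600),
    "blue": (0.150, 0.060),
}
D65 = (0.3127, 0.3290)

def saturation_targets(primary: str, steps: int = 5):
    """xy targets at 20%, 40%, ..., 100% saturation toward a primary."""
    px, py = SRGB_PRIMARIES[primary]
    wx, wy = D65
    return [(wx + (i / steps) * (px - wx), wy + (i / steps) * (py - wy))
            for i in range(1, steps + 1)]

for x, y in saturation_targets("red"):
    print(f"x = {x:.4f}, y = {y:.4f}")
```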


[Gallery: Galaxy S7 / Galaxy S7 edge]

[Chart: Display - GMB Accuracy]

In our GretagMacbeth ColorChecker test, we see that there are some issues with grayscale accuracy, but overall color accuracy remains quite good. In terms of overall display quality, I don't think there's any meaningful improvement over the Galaxy S6, but that's mostly because the Galaxy S6 set a ridiculously high bar for display quality.
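The score behind a test like this boils down to averaging a color-difference metric over the measured patches. The sketch below uses the simple CIE76 Delta E for clarity; CalMAN workflows typically report more perceptually weighted metrics such as dE2000, and the Lab values here are invented for illustration.

```python
# Average color error over ColorChecker-style patches using CIE76
# Delta E*ab (Euclidean distance in L*a*b*). Values below are invented.
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference between two L*a*b* triples."""
    return math.dist(lab1, lab2)

# (measured, reference) L*a*b* pairs for a few hypothetical patches:
pairs = [
    ((38.2, 13.0, 14.9), (37.5, 13.6, 14.1)),    # "dark skin"
    ((66.4, 17.2, 17.0), (64.7, 18.1, 17.8)),    # "light skin"
    ((49.1, -4.2, -21.9), (49.3, -3.8, -22.5)),  # "blue sky"
]

avg = sum(delta_e_76(m, r) for m, r in pairs) / len(pairs)
print(f"average dE76 over {len(pairs)} patches: {avg:.2f}")
```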

However, I don't believe that Samsung has run out of things to improve for future AMOLED displays. In addition to the grayscale problems mentioned earlier, Samsung clearly has not resolved the color shifting that occurs as the viewing angle changes. LCDs definitely show more luminance degradation as you move away from the normal of the display plane, but at almost every angle change I can see whites get noticeably colder, along with interference patterns and a general color shift that is noticeably worse than most LCDs used in high-end smartphones and tablets. This is obviously a hard problem to solve due to uneven subpixel aging, but for things like tablets, laptops, and desktops, color shifting is going to be a much more significant issue.

Comments

  • jjj - Tuesday, March 8, 2016

    As for battery tests, as long as you don't simulate a bunch of social, IM, and news apps in the background, you can't get close to good results when you've got so many different core configs.
  • retrospooty - Tuesday, March 8, 2016

    "as long as you don't simulate a bunch of social, IM, and news apps in the background, you can't get close to good results when you've got so many different core configs"
    You have to have a consistent methodology to test against other phones. The point is not to show what you might get with your particular social/newsfeed apps running; the point is to test against other phones to see how they compare under the same set of circumstances, so you know which one gets better life.
  • jjj - Tuesday, March 8, 2016

    Sorry, but you failed to understand my points.
    It would be consistent if it is simulated.
    The point is to have relatively relevant data, and only by including at least that can you get such data. These apps have a major impact on battery life - and it's not just FB, even if they were the only ones that got a lot of bad press recently.
    The different core configs - 2 big cores, 4 big cores in 2+2, then 4+4, 2+4+4, or just 8 - will result in very different power usage for these background apps as a % of total battery life, sometimes changing the rankings. Here, for example, chances are the Exynos would get a significant boost vs. the SD820 if such tasks were factored in.

    How many such simulated tasks should be included in the test is debatable and also depends on the audience; chances are that the average AT reader has a few.
  • retrospooty - Tuesday, March 8, 2016

    And you are missing mine as well... If you have a million users, you will have 10,000 different sets of apps. You can't just randomly pick one and call it a benchmark. The methodology and the point is to measure a simple test against other phones without adding too many variables. I get what you want, but it's a bit like saying "I wish your test suite tested my exact configuration", and that just isn't logical from a test perspective.
  • jjj - Tuesday, March 8, 2016

    What I want is results with some relevance.
    The results have no relevance as they are, since actual usage is significantly different. In actual usage the rankings change because the core configs are so different. The difference is like what VW had in testing vs. road conditions - a huge difference.
    To obtain relevant results you need somewhat realistic scenarios, with a methodology that doesn't ignore big things that can turn the rankings upside down. Remember that the entire point of big.LITTLE is power, and these background tasks are just the right thing for the little cores.
  • retrospooty - Tuesday, March 8, 2016

    Relevant to whom? I use zero social media apps and have no news feeds running at all until I launch Feedly. Relevant to you is not relevant to everyone, or even to most people. This site is a fairly high-traffic site (for tech, anyhow) and they have to test for the many, not the few. The methodology is sound. I see what you want and why you want it, but it doesn't work for "the many".
  • jjj - Tuesday, March 8, 2016

    Relevant to the average user. I don't use social media at all, but that has no relevance, as the tests should be relevant to the bulk of the users. And the bulk of the users do use those apps (that's a verifiable fact) and more. This methodology just favors fewer cores and penalizes big.LITTLE.
  • retrospooty - Tuesday, March 8, 2016

    It's still too unpredictable. One person's Facebook feed may go nuts all day while another's is relatively calm. This is also why specific battery ratings are never given by manufacturers... because usage varies too much. This is why sites test a (mostly) controllable methodology against other phones to see which fares the best. I find it highly useful, and when you get into the nuts and bolts, it's necessary. If you had a bunch of phones and actually started trying to test as you mentioned, you would find a can of worms and inconsistent results at the end of your work...
  • jjj - Wednesday, March 9, 2016

    "It's still too unpredictable" - that's the case with browsing usage too, even more so, but you can try to select a load that is representative. You might have forgotten that I said to simulate such apps; there wouldn't be any difference between runs.
    Yes, testing battery life is complex, and there are a bunch of other things that could be done for better results, but these apps are pretty universal and a rather big drain on resources. They could be ignored if we didn't have so many core configs, but we do, and that matters. Complexity for the sake of it is not a good idea, but complexity that yields much better results is not something we should be afraid of.
    10 years later, smartphone benchmarking is just terrible. It's all synthetic, and many of those apps are not even half decent. Even worse, after last year's mess, 99% of reviews made no attempt to look at throttling. That wouldn't have happened in the PC space even 10 years ago.
  • retrospooty - Wednesday, March 9, 2016

    I think you are a little too hung up on benchmarks. A benchmark is just a sampling, an attempt at measuring with the goal of comparing to other devices to help people decide; what really matters is what it does and how well it does it. I find it highly useful even if it's not exact to my own usage. If unit A lasts 20 hours and unit B lasts 16 hours in the standard tests, but I know my own usage gets 80% of what the standard is, I can estimate my usage will be 16 hours on unit A and 12.8 hours on unit B (give or take). It really doesn't need to be more complicated than that, since testing batteries is not an exact science, not even on PCs/laptops, where usage varies just the same. That is why there are no exact guarantees. "Up to X hours" is a standard explanation. It is what it is.
