LCD Testing: A Feast for Your Eyes

Let’s start the testing by going straight to the biggest draw of the Galaxy Pro tablets: the WQXGA displays. Even without testing, I could see at a glance that the colors on the Pro 10.1 looked a bit better and more natural than on the Pro 8.4, but I was curious whether the colors were truly accurate or merely not as oversaturated. Depending on your display setting, it’s a little of both.

I tested the Pro 10.1 in four modes (“Auto”, Dynamic, Standard, and Movie), and contrary to what I’ve seen reported elsewhere, the Movie mode resulted in the most accurate colors. Most tablets and laptops use white points that are far too hot (blue), and that applies to the 10.1 in the Dynamic and Standard modes, though Standard is perhaps a bit better; it also applies to the Pro 8.4 display. The Movie mode, on the other hand, clearly reduces the saturation levels and ends up being very good overall. Here are five sets of galleries showing the testing results for the various display modes on the 10.1 as well as the sole mode on the 8.4.
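
As an aside on what “too hot” means in numbers: a display’s white point can be summarized as a correlated color temperature (CCT), with the sRGB target (D65) sitting near 6504K and bluer whites landing higher. Below is a minimal C sketch using McCamy’s standard CCT approximation; the chromaticity values in it are illustrative placeholders, not measurements from this review.

    /* Minimal sketch: estimate correlated color temperature (CCT) from a
       white point's CIE xy chromaticity using McCamy's approximation.
       The sample values are hypothetical, not measured from these tablets. */
    #include <stdio.h>

    static double mccamy_cct(double x, double y)
    {
        double n = (x - 0.3320) / (0.1858 - y);
        return 449.0 * n * n * n + 3525.0 * n * n + 6823.3 * n + 5520.33;
    }

    int main(void)
    {
        /* D65 reference white (the sRGB target) vs. a hypothetical blue-ish
           tablet white point. Higher CCT reads as a "hotter" (bluer) white. */
        printf("D65 target    : %.0f K\n", mccamy_cct(0.3127, 0.3290)); /* ~6504K */
        printf("Blue-ish white: %.0f K\n", mccamy_cct(0.2970, 0.3100)); /* ~7700K */
        return 0;
    }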

As for brightness, contrast, and DeltaE results, both models do reasonably well, again with the color accuracy advantage going to the 10.1. Keep in mind that the only other tablets in these charts happen to have some of the best displays on the market, with the iPad Air being factory calibrated and the Nexus 7 being one of the best non-Apple devices in terms of color accuracy.

[Charts: CalMAN Display Performance - Grayscale, Gretag Macbeth, Gamut, Saturations, White Point Average; Display Contrast Ratio; Display Brightness - White Level, Black Level]

While none of the results are necessarily standouts (other than the grayscale dE 2000 on the Pro 10.1), we again have to keep in mind that these are 2560x1600 panels in 10.1- and 8.4-inch devices. Factory calibration would push them over the top, but even without it they’re going to provide a wow factor to anyone used to lower resolution, lower quality displays.
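
For readers wondering what a dE number actually measures: it’s the distance between a measured color and its target in the perceptual L*a*b* color space. CalMAN’s charts use the dE 2000 formula, which layers perceptual weighting terms on top of that idea; the minimal sketch below uses the older, simpler CIE76 definition with made-up sample values, just to show the shape of the computation.

    /* Minimal sketch of a color-error metric. CalMAN reports CIEDE2000, which
       adds perceptual weighting; the CIE76 delta-E below is plain Euclidean
       distance in L*a*b*, enough to show the idea. Roughly, dE near 1 is at
       the threshold of visibility and 3+ is clearly visible error. */
    #include <stdio.h>
    #include <math.h>

    typedef struct { double L, a, b; } Lab;

    static double delta_e_76(Lab measured, Lab target)
    {
        double dL = measured.L - target.L;
        double da = measured.a - target.a;
        double db = measured.b - target.b;
        return sqrt(dL * dL + da * da + db * db);
    }

    int main(void)
    {
        /* Hypothetical measured vs. target values for a 100% red patch. */
        Lab target   = { 53.2, 80.1, 67.2 };
        Lab measured = { 54.0, 84.5, 65.0 };
        printf("dE76 = %.2f\n", delta_e_76(measured, target)); /* ~4.98 */
        return 0;
    }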

Comments

  • Wilco1 - Monday, March 24, 2014 - link

    What’s being claimed is CPU performance at maximum frequency, not a latency test of bursty workloads. It would be interesting to see Anand’s browsing test report both power and performance/latency results, as it seems a reasonable test of actual use. SunSpider, however, is not like a real mobile workload.

    The datasets for most of the benchmarks in Geekbench are actually quite large, in the 20-30MB range. That certainly does not fit into the L2 on any SoC I know of, let alone the L1. So I’d suggest that Geekbench gives a far better idea of mobile performance than a benchmark that only measures the set of JIT optimization tricks used to get a good SunSpider score. [See the pointer-chase sketch after the comments for how working-set size translates into access latency.]

    Intel doesn't have magic that makes frequency scaling 10-100 times faster - PLLs and voltage regulators all obey the same physics (until recently Intel was using the same industry-standard voltage regulators as everybody else). The issue is one of software: the default governor does not recognize repeated patterns of bursty behaviour and keep clocks high for longer when necessary. Intel avoids the Linux governor issues by using a separate microcontroller, and I have no doubt that it has been well tuned to the kind of bursty behaviour that SunSpider exhibits.
  • virtual void - Monday, March 24, 2014 - link

    So you are suggesting that the performance counters in Sandy Bridge are reporting the wrong thing when they report a 97% L1D$ hit rate in Geekbench? They seem to work quite well on "real" programs.

    The performance counters also suggest that Geekbench contains trivially predictable branches, while programs developed in dynamic and/or OOP languages usually contain a lot of indirect and even conditional indirect calls that are quite hard to predict. Only the most advanced CPU designs keep history on conditional indirect calls, so a varying branch target on an indirect call will always result in a branch-prediction miss on mobile CPUs. [See the indirect-call sketch after the comments for the fixed vs. varying target difference.]

    The sampling frequency of CPU load, and how aggressively the Linux kernel switches P-states, are based on the reported P-state switch latency. All modern Intel CPUs report a switching latency of 10µs, while I haven't seen any ARM SoC report anything lower than 0.1ms. The _real_ effect of this is that Intel platforms will react about ten times as fast to a sudden burst in CPU load when running the Linux kernel. [See the cpufreq sysfs sketch after the comments for where these numbers live.]
  • Wilco1 - Monday, March 24, 2014 - link

    SPEC2006 has a ~96% average L1D hit rate, so do you also claim SPEC has a small working set and runs almost entirely out of L1? The issue is not the correctness of the performance counters but your interpretation of them. The fact that modern CPUs can run at multiple GHz despite DRAM still running internally at ~50MHz is precisely because caches and branch predictors work pretty well.

    C++ and GUI code typically has only a limited number of distinct targets, which are easy to predict on modern mobile CPUs (pretty much every ARM CPU since the Cortex-A8 has had an indirect predictor, and since the A15 they support multiple targets). I've never seen conditional indirect calls emitted by compilers, so I can imagine some CPUs ignore this case, but it's not in any way hard to predict. The conditional indirect branches you do get in real code are conditional returns (trivial to predict) and switch statements on some ARM compilers.

    Well, if there is such a large difference then there must be a bug - I did once glance over the Samsung cpufreq drivers and they seemed quite a mess. It is essential to sample activity at high resolution: if you sample at an N-times slower rate then you react N times slower to a burst of activity, irrespective of how fast the actual frequency/voltage scaling is done.
  • Egg - Monday, March 24, 2014 - link

    Alright, I'll admit I didn't actually read the article. It just seemed like you were unaware of what Brian had said previously.
  • UltraWide - Saturday, March 22, 2014 - link

    The Galaxy Note 10.1 2014 has 3GB of RAM.
  • JarredWalton - Sunday, March 23, 2014 - link

    It's not clear whether all Note 10.1 2014 models come with 3GB or just the 32GB models, but I'm going to go with 3GB (and hopefully that's correct, considering the cost increase for the Note). I had the Samsung spec pages open when putting together that table, and unfortunately they didn't list RAM for the 16GB 10.1 I was looking at. Weird.
  • Reflex - Saturday, March 22, 2014 - link

    " If you want another option, the Kindle Fire HDX 7” ($200) and Kindle Fire HDX 8.9” ($379) pack similar performance with their Snapdragon 800 SoCs, but the lack of Google Play Services is a pretty massive drawback in my book."

    For many of us, that's actually the Kindle line's largest advantage: Android and a good chunk of its app ecosystem, without compromising our privacy or exposing ourselves to all the malware. Plus we got these specs six months ago with the HDX line, at a lower price and in a better package.
  • A5 - Saturday, March 22, 2014 - link

    Yeah, because the best way to avoid malware is to bypass the Play Store and install an APK from a random website to get YouTube to work.

    And you're only fooling yourself if you think Amazon is any better for your privacy than Google.
  • Reflex - Saturday, March 22, 2014 - link

    Have you actually read their privacy policies and compared them? Or taken a look at their profit models? There is a significant difference between the two in their approaches to privacy.

    And no, if I really want an app like that I can get it from a third-party market if I must - there are some that mirror the Play store. That said, there are very few needs that aren't met by apps already available in the Amazon store.
  • R0H1T - Sunday, March 23, 2014 - link

    So you're saying that Amazon has no record of you in their database whatsoever, OR that they don't track your browsing history through their Silk browser (using Amazon's own servers) and never target you with ads/promos based on your buying/browsing history?

    I'd say you're deluding yourself if you think that Yahoo, Twitter, FB, Bing, or even Amazon are any different from Google when it comes to tracking their users or targeting them with specific ads/promos based on their (recorded) history :(
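
To ground the working-set back-and-forth above in something runnable, here is a minimal pointer-chase sketch in C: it walks a random cycle through buffers of growing size, and average access latency steps up as the working set spills out of L1 and then L2. Sizes and step points are machine-dependent; this is an illustration of the technique, not anything either commenter ran.

    /* Pointer-chase microbenchmark: average access latency vs. working-set
       size. Build with: cc -O2 -o chase chase.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ITERS (1 << 24)

    /* Sattolo's algorithm: shuffle indices into one big random cycle so the
       chase defeats hardware prefetching and each hop pays real latency. */
    static void build_cycle(size_t *next, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            next[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }
    }

    static double chase_ns(size_t bytes)
    {
        size_t n = bytes / sizeof(size_t);
        size_t *next = malloc(n * sizeof(size_t));
        build_cycle(next, n);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        size_t p = 0;
        for (long i = 0; i < ITERS; i++)
            p = next[p];                      /* serially dependent loads */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        volatile size_t sink = p;             /* keep the chase live */
        (void)sink;
        free(next);
        return ((t1.tv_sec - t0.tv_sec) * 1e9 +
                (t1.tv_nsec - t0.tv_nsec)) / ITERS;
    }

    int main(void)
    {
        for (size_t kb = 16; kb <= 32 * 1024; kb *= 2)
            printf("%6zu KB working set: %.2f ns/access\n",
                   kb, chase_ns(kb * 1024));
        return 0;
    }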
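On the indirect-call question in the same exchange: a call site that always jumps to one target is easy for any indirect predictor, while a target that varies pseudo-randomly defeats predictors without multi-target history. The sketch below times both cases through a single function-pointer call site; the four functions and the tiny LCG are arbitrary scaffolding, and the miss counts can be confirmed with a profiler such as perf.

    /* One indirect call site, fixed vs. randomized target.
       Build with: cc -O2 -o indirect indirect.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define CALLS (1 << 24)

    static long f0(long x) { return x + 1; }
    static long f1(long x) { return x * 3; }
    static long f2(long x) { return x ^ 0x55; }
    static long f3(long x) { return x - 7; }

    typedef long (*fn)(long);
    static fn table[4] = { f0, f1, f2, f3 };

    static double run_ns(int randomize)
    {
        /* Precompute target indices so selection cost is identical in both
           runs; only the predictability of the call target differs. */
        unsigned char *idx = malloc(CALLS);
        unsigned seed = 12345u;
        for (long i = 0; i < CALLS; i++) {
            seed = seed * 1103515245u + 12345u;   /* tiny LCG */
            idx[i] = randomize ? (seed >> 16) & 3 : 0;
        }

        struct timespec t0, t1;
        volatile long acc = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < CALLS; i++)
            acc = table[idx[i]](acc);             /* the indirect call */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        free(idx);

        return ((t1.tv_sec - t0.tv_sec) * 1e9 +
                (t1.tv_nsec - t0.tv_nsec)) / CALLS;
    }

    int main(void)
    {
        printf("fixed target : %.2f ns/call\n", run_ns(0));
        printf("random target: %.2f ns/call\n", run_ns(1));
        return 0;
    }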
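And on the P-state switch latency numbers: on Linux these come from the cpufreq sysfs interface, where the driver reports cpuinfo_transition_latency in nanoseconds and the sampling governors derive their polling interval from it, as described above. The sketch below simply reads the common sysfs paths; exact files vary by kernel and driver (intel_pstate, for example, bypasses the generic governors), so treat it as illustrative.

    /* Read the cpufreq values the commenters are arguing about.
       Paths are the usual sysfs locations; availability varies by driver. */
    #include <stdio.h>

    static void show(const char *path, const char *label)
    {
        char buf[64];
        FILE *f = fopen(path, "r");
        if (f && fgets(buf, sizeof buf, f))
            printf("%-24s %s", label, buf);   /* sysfs values end in '\n' */
        else
            printf("%-24s (not available)\n", label);
        if (f)
            fclose(f);
    }

    int main(void)
    {
        const char *base = "/sys/devices/system/cpu/cpu0/cpufreq/";
        char path[128];

        snprintf(path, sizeof path, "%scpuinfo_transition_latency", base);
        show(path, "transition latency (ns):");
        snprintf(path, sizeof path, "%sscaling_governor", base);
        show(path, "governor:");
        snprintf(path, sizeof path, "%sscaling_cur_freq", base);
        show(path, "current freq (kHz):");
        return 0;
    }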
