While Qualcomm already announced the Snapdragon 821, the details accompanying that announcement were rather sparse. Fortunately, today Qualcomm followed up with more details. Those who followed the announcement might recall that the only information disclosed at the time was that the CPU big cluster was now at 2.4 GHz. Today we also get a GPU clock disclosure, with full details as seen below.

                   | Snapdragon 820        | Snapdragon 821
CPU Perf Cluster   | 2x Kryo @ 2.15 GHz    | 2x Kryo @ 2.34 GHz
CPU Power Cluster  | 2x Kryo @ 1.59 GHz    | 2x Kryo @ 2.19 GHz
GPU                | Adreno 530 @ 624 MHz  | Adreno 530 @ 653 MHz
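For reference, the relative clock increases work out as follows; this is a quick back-of-the-envelope calculation from the table above, not a figure from Qualcomm's materials:

```python
# Relative clock bumps from Snapdragon 820 to Snapdragon 821,
# computed from the published frequencies in the table above.
clocks = {
    "CPU perf cluster (GHz)":  (2.15, 2.34),
    "CPU power cluster (GHz)": (1.59, 2.19),
    "Adreno 530 GPU (MHz)":    (624, 653),
}

for name, (sd820, sd821) in clocks.items():
    bump = (sd821 / sd820 - 1) * 100
    print(f"{name}: +{bump:.1f}%")
```

The standout is the power cluster, which jumps almost 38%, while the headline big-cluster bump is under 9%.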

Interestingly enough, Qualcomm is also claiming a 5% bump in power efficiency, which sounds like it's actually referring to platform power but could just be overall SoC efficiency. Other marketing bullet points include support for the Snapdragon VR SDK, which enables Daydream support, as well as dual phase detection. I'm not sure what this refers to unless it means support for two separate phase-detect autofocus systems, similar to the Sony Alpha SLT-A99, but Qualcomm claims this will improve autofocus speed compared to a traditional PDAF solution. The ISP also now supports extended ranges for laser AF, so systems like those seen in the LG G5 and HTC 10 will be able to better guide contrast AF on devices where PDAF isn't available or can't be used.

Qualcomm is also citing some interesting statistics for user experience with the Snapdragon 821, such as 10% faster boot times, 10% faster app loads, and some BSP changes combined with faster processing to enable smoother scrolling and improved web browsing performance. The Snapdragon 821 is already shipping in devices like the ASUS ZenFone 3, so we shouldn't be far off from seeing major launches using this SoC. It's worth noting that last year we had details of the Snapdragon 820 by September, but we have yet to see what Qualcomm plans to launch for next year's flagships. It'll be interesting to see whether they stay with a custom CPU core or elect to go with an ARM Cortex big.LITTLE configuration similar to the Kirin 950.

32 Comments

  • jjj - Wednesday, August 31, 2016 - link

    The claim of 5% increased efficiency is problematic as it indicates that they upped clocks without a sufficient decline in power.

    My bet for next year is a semi-custom A73, under the "Built on ARM Cortex Technology" license.
    With the MediaTek X30 rushing to arrive, it will be interesting to see who is first to market, whether the X30 uses the A73 or A72 (they need the A73), and whether Qualcomm uses FO-WLP (the X30 doesn't).

    But before all that, hopefully we see Kirin very very soon with A73 at 2.6GHz and 2x their current GPU perf, on 16ff ofc. https://gfxbench.com/resultdetails.jsp?resultid=4x...
    Reply
  • MrSpadge - Wednesday, August 31, 2016 - link

    The big cores are clocked 9% higher, so that's a *problematic* 4% increase in peak power consumption. The small cores are obviously going to need noticeably more power under full load. But if temperature becomes an issue under sustained load (gaming on all cores?), the chip can be clocked down to ~5% higher speed than the SD820 and nothing is lost, while peak performance is gained. Rest assured they would have liked to improve the power efficiency more, but they're not magicians. The SD821 looks like simply a more strictly binned SD820, and hence didn't cost much to create.
    Reply
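A rough sketch of that arithmetic, assuming "efficiency" means performance per watt and that performance scales linearly with clock (both assumptions, not anything Qualcomm has stated):

```python
# If perf/W improves ~5% while the big-cluster clock rises ~9%,
# then peak power = perf / efficiency rises by only a few percent.
clock_gain = 2.34 / 2.15        # ~1.088: the big-cluster clock bump
efficiency_gain = 1.05          # Qualcomm's claimed ~5% perf/W improvement
power_gain = clock_gain / efficiency_gain

print(f"peak power increase: {(power_gain - 1) * 100:.1f}%")
```

This lands at just under 4%, consistent with the figure in the comment above, though real silicon rarely scales power linearly with frequency.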
  • jjj - Wednesday, August 31, 2016 - link

    The 820 was already clocked too high, and power does not scale linearly with clock speed.

    They push clocks too far to keep up with the competition in burst perf as A72 is substantially better. That's a problem as you don't get what you think you are getting and they provide no info on this Turbo mode and sustained perf.
    Reply
  • MrSpadge - Wednesday, August 31, 2016 - link

    How do you judge that SD820 is clocked "too high"? Reply
  • jjj - Wednesday, August 31, 2016 - link

    Fair question.
    Based on burst perf vs sustained perf and the competition.
    I guess anything that is not sustainable long term and not fully detailed is too high.
    SoC makers and device makers should be forced to disclose "base clocks" and "boost clocks" and how it all works.
    Review sites could easily fully load 1, 2,3, 4 cores and provide a much clearer picture about sustained perf for each device.
    AT has been promising a proper SD820 review for six months now, and in their regular reviews they don't include any kind of CPU tests (PCMark and Discomark are very software-dependent, and Java perf is not representative of a CPU's perf) and no data on sustained CPU perf.
    The SD820 can't even fully load the two high-clocked cores for long. Ofc there are substantial differences between devices. The worst I've seen might be the Zuk; the thin body and glass back are a problem.
    Reply
  • MrSpadge - Wednesday, August 31, 2016 - link

    If the peak performance were identical to the sustained performance, I'd say the device is badly designed for general usage, i.e. it leaves responsiveness on the table by not boosting higher. But that's not what you're talking about. If device vendors switched from advertising max clock speeds to "base clock and typical boost clock", as nVidia does, that would be a lot more useful and less misleading. There'd be debate over what "typical usage" is, but that's pretty much inevitable.
    Reply
  • jjj - Wednesday, August 31, 2016 - link

    @ MrSpadge
    That's why base clocks should be what can be sustained long term.
    As for turbo or boost or w/e term one uses, there should be sufficient details.
    At least Intel provides some details on Turbo; when it comes to phones, nobody provides info, and review sites don't try much either.
    In phones there is more than TDP to factor in too. Peak perf is limited by that but you also care about battery life. One needs to find a balance between perf and efficiency. The Exynos does better in that area if you compare the SD820 and Exynos 8890 versions of the S7.

    Using real-world apps to test whether a SoC is pushed too far and whether power management does its job well would be good too, as synthetics might or might not fit a real-world scenario.
    Reply
  • name99 - Wednesday, August 31, 2016 - link

    What Intel says is becoming ever less useful every iteration. Kaby Lake 4.5W has a frequency range from around 1 to 3GHz, and who knows what you'll get in that range? It depends on skin temperature, whether the device is vertical or horizontal, how active the radios are, etc.

    The point is --- when Intel was selling something that ran at, say 2GHz, turboing up to 2.2, that was an informative speed range. When the range is 1 to 3GHz, don't tell me that's more informative than QC's 2.34 GHz. Both are not especially helpful, but QC's seems mildly more helpful. And in both cases, you can only get a real feel for performance once you see the benchmarks in REAL systems.
    Reply
  • MrSpadge - Wednesday, August 31, 2016 - link

    Intel's statement is more useful. QC's "2.34 GHz" would be equivalent to Intel specifying the Kaby 4.5 W i7 as "3.6 GHz". But the fact that its base clock is much lower than for the 15 W models, by about a factor of 2, gives you a feel for the difference in sustained performance between those chips.
    Reply
  • jjj - Wednesday, August 31, 2016 - link

    @ name99
    Yes, Intel overcomplicated it and it has become difficult to figure out, but at least they provide some info; they don't pretend it's not there. I never said that Intel does it well, just that they at least provide some info.
    In phones there is no info at all; SoC makers just list peak clocks and that's it.
    In PCs the testing is also far more relevant; in phones it's all synthetic, and the user gets almost no valid data points besides burst perf rankings.
    Reply
