Designing Denver

Diving into its depths, Denver is in a lot of ways exactly the kind of CPU you’d expect a GPU company to build. NVIDIA’s traditional engineering specialty is building wide arrays of simple in-order processors, a scheme that maps well to the embarrassingly parallel nature of graphics. Whether this was an intentional move to tap their existing expertise or simply a consequence of their plan to diverge from “traditional” CPU designs, Denver’s execution workflow makes you stop and ponder GPUs for a moment.

The result of NVIDIA’s labors in designing Denver is a wide but in-order processor. With the potential to retire up to 7 operations per cycle, Denver measured front-to-back is wider than the A15/A57 and wider than Apple’s Cyclone. Officially NVIDIA calls this a “7+” IPC architecture, alluding to Denver’s binary translation and code optimization step and its potential to merge operations along the way.

Meanwhile, the existence of this code optimizer is the first sign that Denver is not a traditional CPU by the standards of ARM/Apple or Intel/AMD. To understand why, we must first discuss Out of Order Execution (OoOE): why it exists, and why Denver doesn’t have it.

In traditional CPU designs, we make a distinction between in-order designs and out-of-order designs. As the names imply, in-order designs execute instructions in the order they receive them, while out-of-order designs can rearrange instructions within a limited window, so long as the altered order doesn’t change the results. For the kinds of tasks CPUs work with, OoOE improves throughput, but it comes at a cost.

Overall, OoOE is considered the next logical step once in-order execution has reached its natural limits. Superscalar in-order execution can potentially issue a few instructions at once, but actually sustaining that is rare, even with the help of good compilers. At some point a constraint such as a pending memory access prevents an instruction from executing, holding up the entire program. In practice, once you need performance exceeding what a traditional in-order design can deliver, you switch to out-of-order. With OoOE it then becomes possible to scale performance further, using the reordering process to fill wider processors and to keep from losing performance to stalls.
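To make the stall problem concrete, below is a minimal single-issue scheduling sketch in Python. The five-instruction program, its latencies, and the one-instruction-per-cycle model are all invented for illustration; the point is simply that the in-order run sits behind the long-latency load while the out-of-order run pulls the independent work forward.

```python
# Toy single-issue model: the same program scheduled in-order vs out-of-order.
# Instruction latencies and the program itself are invented for illustration.

# Each instruction: (mnemonic, latency in cycles, indices it depends on)
PROGRAM = [
    ("load r1, [mem]",  5, set()),   # long-latency load (e.g. a cache miss)
    ("add  r2, r1, 1",  1, {0}),     # depends on the load
    ("mul  r3, r2, 4",  1, {1}),     # depends on the add
    ("sub  r4, r5, 1",  1, set()),   # independent work
    ("xor  r6, r7, r8", 1, set()),   # independent work
]

def run(out_of_order: bool) -> int:
    done_at = {}                     # instruction index -> cycle result is ready
    issued = set()
    cycle = 0
    while len(issued) < len(PROGRAM):
        # An instruction is ready once all of its inputs have completed.
        ready = [i for i, (_, _, deps) in enumerate(PROGRAM)
                 if i not in issued
                 and all(done_at.get(d, float("inf")) <= cycle for d in deps)]
        if out_of_order:
            pick = ready[0] if ready else None          # any ready instruction
        else:
            oldest = min(set(range(len(PROGRAM))) - issued)
            pick = oldest if oldest in ready else None  # must stay in order
        if pick is not None:
            issued.add(pick)
            done_at[pick] = cycle + PROGRAM[pick][1]
        cycle += 1
    return cycle

print("in-order:    ", run(out_of_order=False), "cycles")  # stalls behind the load
print("out-of-order:", run(out_of_order=True), "cycles")   # fills the stall cycles
```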

K1-64 Die Shot Mock-up (NVIDIA)

The cost of OoOE is complexity, die size, and power consumption. The engines to enable OoOE can be quite large, being tasked with queuing instructions, identifying which instructions can be reordered, and ensuring instructions are safe to execute out-of-order. Similarly, there is a power cost to these engines, and that means adding OoOE to a processor can make it much larger and more power hungry, even without actually adding further units for the OoOE engines to fill. Make no mistake, the benefits of OoOE are quite large, but then so is the cost of implementing it.

As such, while OoOE has been treated as the next step beyond in-order processors, it is not the only solution being pursued. The fundamental problems in-order processors face are a combination of hardware and software: hardware issues such as memory stalls, and software issues such as poor instruction ordering. It stands to reason, then, that if the performance scaling problem can be solved in hardware with OoOE, it might be solved in software as well. It’s this school of thought that NVIDIA is pursuing in Denver.

Perhaps the critical point in understanding Denver then is that it is non-traditional for a high-performance CPU due to its lack of OoOE hardware, and for that reason it’s a CPU unlike any of its contemporaries. We’ll get back to the software aspects of Denver in a bit, but for now it’s enough to understand why NVIDIA has not pursued an OoOE design and what they have pursued instead.

Denver’s Deep Details

Due to NVIDIA’s choice to forgo OoOE on Denver while simultaneously pursuing a large, high-performance core, Denver is by consumer standards a very wide CPU. Without OoOE hardware taking up space, NVIDIA has been able to fill out Denver with execution units: 7 slots’ worth of execution units backed by a native decoder wide enough to feed all of those units at once. The native decoder in particular is notable, as most other CPU designs pair their execution units with narrower decoders that put a lower ceiling on theoretical IPC. The Cortex-A15 cores in Tegra K1-32, for example, feature only 3-wide decoders despite having many more slots’ worth of execution units. A large decoder therefore not only opens up the ability to increase IPC, but it signals that the CPU designer believes the core can keep that many execution units busy often enough to justify the cost of the wider decoder.

NVIDIA CPU Core Comparison

|                           | K1-32                 | K1-64                           |
|---------------------------|-----------------------|---------------------------------|
| CPU                       | Cortex-A15            | NVIDIA Denver                   |
| ARM ISA                   | ARMv7 (32-bit)        | ARMv8 (32/64-bit)               |
| Issue Width               | 3 micro-ops           | 2 (ARM) or 7 (Native) micro-ops |
| Pipeline Length           | 18 stages             | 15 stages                       |
| Branch Mispredict Penalty | 15 cycles             | 13 cycles                       |
| Integer ALUs              | 2                     | 4                               |
| Load/Store Units          | 1 + 1 (Dedicated L/S) | 2 (Shared L/S)                  |
| Branch Units              | 1                     | 1                               |
| FP/NEON ALUs              | 2x64-bit              | 2x128-bit                       |
| L1 Cache                  | 32KB I$ + 32KB D$     | 128KB I$ + 64KB D$              |
| L2 Cache                  | 2MB                   | 2MB                             |
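One way to see the significance of decoder width: over any long stretch of code, sustained IPC can’t exceed the narrowest stage of the pipeline. Below is a trivial back-of-the-envelope check using the widths from the table above; the A15’s execution-slot count is our own assumption for illustration.

```python
# Sustained IPC is bounded by the tightest bottleneck in the pipeline.
# Decode widths come from the comparison table above; the A15
# execution-slot count is an assumption for illustration.

def ipc_ceiling(decode_width: int, exec_slots: int) -> int:
    return min(decode_width, exec_slots)    # the narrowest stage wins

print("Cortex-A15:            ", ipc_ceiling(3, 8))  # decode-bound at 3
print("Denver, ARM decoder:   ", ipc_ceiling(2, 7))  # decode-bound at 2
print("Denver, native decoder:", ipc_ceiling(7, 7))  # balanced at 7
```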

These execution units themselves are fairly unremarkable, but nonetheless are very much at the heart of Denver. Compared again to the Cortex-A15, there are twice as many load/store units, and the NEON units have been widened from 64 bits to 128 bits, allowing them to retire up to twice as much work per cycle if they can be kept filled.

Internally, Denver executes instructions using a Very Long Instruction Word (VLIW) format, an instruction format that these days is more common in GPUs than CPUs, making it another vaguely GPU-like aspect of Denver. In VLIW, multiple operations are packed into a single instruction word and sent through the pipeline together, rather than handing each execution slot its own instruction. Each VLIW instruction is variable in length, as are the operations within it, whose size depends in part on factors such as the number of registers a given operation touches. With a maximum VLIW instruction size of 32 bytes, the number of operations a single instruction can contain depends on the operations themselves, and it’s possible for large operations to fill out the instruction word early.
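As a rough illustration of that packing process, the sketch below greedily fills 32-byte bundles with variable-length operations. The individual operation sizes and the 7-operation cap are our own assumptions for illustration; Denver’s real encodings have never been published.

```python
# Greedy packing of variable-length operations into VLIW bundles.
# The 32-byte limit comes from NVIDIA; the op sizes here are invented.

MAX_BUNDLE_BYTES = 32
MAX_BUNDLE_OPS = 7       # assumed cap, to match the 7-wide backend

def pack(ops):
    """ops: list of (mnemonic, encoded size in bytes) in program order."""
    bundles, current, used = [], [], 0
    for name, size in ops:
        if len(current) == MAX_BUNDLE_OPS or used + size > MAX_BUNDLE_BYTES:
            bundles.append(current)      # bundle is full, start a new one
            current, used = [], 0
        current.append(name)
        used += size
    if current:
        bundles.append(current)
    return bundles

# Small ALU ops pack densely; a large op can fill out the word early.
ops = [("add", 4), ("add", 4), ("ld", 8), ("st", 8), ("fma.q", 12), ("br", 4)]
for i, bundle in enumerate(pack(ops)):
    print(f"bundle {i}: {bundle}")      # -> 4 ops, then the remaining 2
```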

Another of Denver’s unusual aspects is its internal instruction format, which is very different from ARMv7 or ARMv8. Though the specific format is beyond the scope of this article, it has long been rumored that Denver was originally meant to be an x86 design, with Denver’s underlying design and binary translation pairing intended to allow for an x86 implementation without infringing on any x86 hardware patents. Whether or not that is true, the end result is the same: by choosing to solve their needs in software, NVIDIA was able to create an architecture whose design is decoupled from the instruction set it executes.

Yet in spite of this architectural choice, Denver still needs to be able to execute ARM code as well as native code from binary translation, which leads to one more interesting wrinkle in Denver’s design: Denver has not one but two decoders, the native decoder and a proper ARM decoder. Designed for situations where Denver’s software optimizer is not worth running or can’t translate in time (such as with brand-new code segments), the ARM decoder allows Denver to decode ARM instructions directly.

The ARM decoder is not quite a backup, but neither is it intended to be Denver’s main source of operations over the long run. Rather, the bulk of Denver’s work should come from its binary translator, with only a small fraction of infrequently used code hitting the ARM decoder. At only 2 instructions wide, this decoder is narrower than even the A15’s, and it feeds an entirely in-order pipeline that misses out on the instruction rescheduling and other benefits of the software code optimizer. Nevertheless, it serves an important role in situations where Denver can’t use native code, giving the processor a means to immediately begin executing ARM code. This makes Denver a kind of hybrid design, capable of executing either ARM instructions or NVIDIA’s own internal microcode.
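Tying the two decoders together, here is a toy sketch of how such a hybrid front end might dispatch work. The translation cache, the hotness threshold, and the notion of a “code region” are all our own assumptions for illustration, not NVIDIA’s actual policy.

```python
# Hypothetical dispatch between Denver's two front ends: cold code is
# decoded directly as ARM (2-wide), hot code runs as translated native
# bundles (up to 7-wide). All thresholds and structures are assumptions.

ARM_DECODE_WIDTH = 2    # slow path: direct ARM decode
NATIVE_WIDTH = 7        # fast path: pre-optimized native bundles
HOT_THRESHOLD = 10      # hypothetical: translate after N executions

translation_cache = {}  # code region -> optimized native bundles
exec_counts = {}

def optimize(region):
    # Stand-in for the software optimizer; the real one reschedules
    # and merges operations before caching the result.
    return f"optimized-bundles-for-{region}"

def fetch(region):
    if region in translation_cache:
        # Hot path: the region was already translated, run it wide.
        return ("native", NATIVE_WIDTH, translation_cache[region])
    # Cold path: decode the ARM instructions directly, two at a time.
    exec_counts[region] = exec_counts.get(region, 0) + 1
    if exec_counts[region] >= HOT_THRESHOLD:
        translation_cache[region] = optimize(region)   # region went hot
    return ("arm", ARM_DECODE_WIDTH, region)

for _ in range(12):
    print(fetch("loop@0x4000")[0])   # "arm" ten times, then "native"
```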

Meanwhile, Denver’s overall pipeline stands 15 stages deep. Despite Denver’s width, this actually makes the pipeline a few stages shorter than the 18-stage A15, and similarly the penalty for branch mispredictions drops from 15 cycles on A15 to 13 cycles on Denver.

Last but not least, at the logical level NVIDIA has also been working to further reduce power consumption through a new sleep mode called CC4. CC4 is essentially a deeper sleep state that stops short of power-gating the entire CPU, but nonetheless shuts most of it off. What’s retained in CC4 is the cache and what NVIDIA dubs the “architectural state” of the processor: a minimal set of hardware that allows the core voltage to drop below the traditional Vmin and hold at just enough voltage to retain the contents of the cache and state, since no work needs to be done in this mode. It’s worth noting that we’ve seen similar power-collapse states as far back as the A15, so the idea isn’t necessarily new.

CC4 is consequently intended to be a relatively fast sleep state given its depth, with Denver able to enter and exit it more quickly than a full power-gate, which in turn means it can be used more frequently. That said, since it is deeper than other sleep states it is also slower to enter and exit than they are, meaning the CPUIdle governor needs to take this into account and select CC4 only when there’s enough time to take advantage of it. Otherwise, if Denver enters CC4 and has to come out of it too soon, the processor can waste more power setting up CC4 than a short CC4 residency would save.
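The governor’s choice boils down to a break-even calculation, sketched below with invented power and latency figures (NVIDIA has not published CC4’s actual numbers). A deep state only wins once the predicted idle period is long enough to amortize its entry/exit cost.

```python
# Toy cpuidle-style selection: pick the idle state that minimizes energy
# over the predicted idle period, subject to a wakeup-latency limit.
# All power/energy/latency figures below are invented for illustration.

STATES = [
    # (name, idle power in mW, entry+exit energy in uJ, exit latency in us)
    ("clock-gate", 120.0,   1.0,   5.0),
    ("CC4",         15.0, 400.0, 150.0),
]

def best_state(predicted_idle_us, latency_limit_us):
    best, best_energy = None, float("inf")
    for name, power_mw, overhead_uj, exit_us in STATES:
        if exit_us > latency_limit_us:
            continue                    # this state would wake up too slowly
        # Energy over the idle window: residency power plus transition cost.
        energy_uj = power_mw * predicted_idle_us / 1000.0 + overhead_uj
        if energy_uj < best_energy:
            best, best_energy = name, energy_uj
    return best

for idle_us in (500, 2000, 10000):
    print(f"{idle_us:>6} us predicted idle -> {best_state(idle_us, 1000.0)}")
# Short naps pick the shallow state; only long idle periods justify CC4.
```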

Of course, CC4 is just one of many factors in Denver’s power consumption. Hardware and software alike play a role, from the silicon itself and the leakage characteristics of the physical transistors to the binary translation layer necessary for Denver to operate at its peak. And that brings us to the final and most crucial piece of the Denver puzzle: the binary translation layer.

