CPU & GPU Hardware Analyzed

Although Microsoft did its best to minimize AMD’s role in all of this, the Xbox One features a semi-custom 28nm APU designed with AMD. If this sounds familiar it’s because the strategy is very similar to what Sony employed for the PS4’s silicon.

The phrase semi-custom comes from the fact that AMD is leveraging much of its already-developed IP for the SoC. On the CPU front we have two Jaguar compute units, each with four independent processor cores and a shared 2MB L2 cache. The combination of the two gives the Xbox One its 8-core CPU. This is the same basic layout as the PS4's SoC.

If you’re not familiar with it, Jaguar is the follow-on to AMD’s Bobcat core - think of it as AMD’s answer to the Intel Atom. Jaguar is a 2-issue, out-of-order (OoO) architecture with roughly 20% higher IPC than Bobcat thanks to a number of tweaks. In ARM terms we’re talking about something faster than a Cortex A15. I expect Jaguar to be close to, but likely fall behind, Intel’s Silvermont, at least at the highest shipping frequencies. Jaguar is the foundation of AMD’s Kabini and Temash APUs, where it will ship first. I’ll have a deeper architectural look at Jaguar later this week. Update: It's live!

[Image: Inside the Xbox One, courtesy Wired]

There’s no word on clock speed, but Jaguar at 28nm is good for up to 2GHz depending on thermal headroom. Current rumors point to both the PS4 and Xbox One running their Jaguar cores at 1.6GHz, which sounds about right. In terms of TDP, on the CPU side you’re likely looking at 30W with all cores fully loaded.

The move away from PowerPC to 64-bit x86 cores means the One breaks backwards compatibility with all Xbox 360 titles. Microsoft won’t be pursuing any sort of backwards compatibility strategy, although if a game developer wanted to, it could port an older title to the new console. Interestingly enough, the first Xbox was also an x86 design - from a hardware/ISA standpoint the new Xbox One is backwards compatible with its grandfather, although Microsoft would have to enable that as a feature in software - something that’s quite unlikely.

Microsoft Xbox One vs. Sony PlayStation 4 Spec Comparison

|                           | Xbox 360            | Xbox One         | PlayStation 4     |
|---------------------------|---------------------|------------------|-------------------|
| CPU Cores/Threads         | 3/6                 | 8/8              | 8/8               |
| CPU Frequency             | 3.2GHz              | 1.6GHz (est)     | 1.6GHz (est)      |
| CPU µArch                 | IBM PowerPC         | AMD Jaguar       | AMD Jaguar        |
| Shared L2 Cache           | 1MB                 | 2 x 2MB          | 2 x 2MB           |
| GPU Cores                 | -                   | 768              | 1152              |
| Peak Shader Throughput    | 0.24 TFLOPS         | 1.23 TFLOPS      | 1.84 TFLOPS       |
| Embedded Memory           | 10MB eDRAM          | 32MB eSRAM       | -                 |
| Embedded Memory Bandwidth | 32GB/s              | 102GB/s          | -                 |
| System Memory             | 512MB 1400MHz GDDR3 | 8GB 2133MHz DDR3 | 8GB 5500MHz GDDR5 |
| System Memory Bus         | 128-bit             | 256-bit          | 256-bit           |
| System Memory Bandwidth   | 22.4 GB/s           | 68.3 GB/s        | 176.0 GB/s        |
| Manufacturing Process     | -                   | 28nm             | 28nm              |
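The system-memory bandwidth figures in the table fall directly out of a simple formula: peak bandwidth = effective transfer rate × bus width in bytes. A minimal sketch (using the transfer rates and bus widths quoted in the table; the helper name is mine):

```python
def bandwidth_gb_s(mega_transfers_per_sec, bus_width_bits):
    """Peak bandwidth = transfer rate x bytes moved per transfer."""
    bytes_per_transfer = bus_width_bits / 8
    return mega_transfers_per_sec * 1e6 * bytes_per_transfer / 1e9

# Xbox One: DDR3-2133 on a 256-bit bus
print(round(bandwidth_gb_s(2133, 256), 1))  # 68.3 GB/s
# PS4: GDDR5 at 5500 MT/s on a 256-bit bus
print(round(bandwidth_gb_s(5500, 256), 1))  # 176.0 GB/s
# Xbox 360: GDDR3 at 1400 MT/s on a 128-bit bus
print(round(bandwidth_gb_s(1400, 128), 1))  # 22.4 GB/s
```

Note these are peak theoretical numbers; sustained bandwidth is always lower in practice.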

On the graphics side it’s once again obvious that Microsoft and Sony are shopping at the same store, as the Xbox One’s SoC integrates an AMD GCN based GPU. Here’s where things start to get a bit controversial. Sony opted for an 18 Compute Unit GCN configuration, totaling 1152 shader processors/cores/ALUs. Microsoft went for a far smaller configuration: 12 CUs (768 shader processors).

Microsoft can’t make up the difference in clock speed alone (AMD’s GCN seems to top out around 1GHz on 28nm), and based on current leaks it looks like both MS and Sony are running their GPUs at the same 800MHz clock. The result is a 33% reduction in compute power, from 1.84 TFLOPS in the PS4 to 1.23 TFLOPS in the Xbox One. We’re still talking about over 5x the peak theoretical shader performance of the Xbox 360, likely even more given increases in efficiency thanks to AMD’s scalar GCN architecture (MS quotes up to 8x better GPU performance) - but there’s no escaping the fact that Microsoft has given the Xbox One less GPU hardware than Sony gave the PlayStation 4. Note that unlike the Xbox 360 vs. PS3 era, Sony's hardware advantage here won't need any clever developer work to extract - the architectures are near identical, Sony just has more resources available to use.
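The peak shader throughput numbers come straight from the ALU counts and clocks: each GCN ALU can retire one fused multiply-add (two FLOPs) per cycle, and each GCN CU contains 64 ALUs. A quick sketch, assuming the rumored 800MHz clock for both consoles:

```python
def peak_tflops(alus, clock_ghz, flops_per_cycle=2):
    """Peak throughput = ALUs x clock x FLOPs/cycle (2 for an FMA)."""
    return alus * clock_ghz * flops_per_cycle / 1000

xbox_one = peak_tflops(12 * 64, 0.8)  # 12 CUs x 64 ALUs per CU
ps4 = peak_tflops(18 * 64, 0.8)       # 18 CUs x 64 ALUs per CU
print(round(xbox_one, 2), round(ps4, 2))  # 1.23 1.84
print(round(1 - xbox_one / ps4, 2))       # 0.33 -> the 33% deficit
```

With identical architectures and (apparently) identical clocks, the CU count is the whole story here.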

Remember all of my talk earlier about a slight pivot in strategy? Microsoft seems to believe that throwing as much power as possible at the next Xbox wasn’t the key to success and its silicon choices reflect that.

245 Comments

  • Shawn74 - Tuesday, September 10, 2013 - link

    Mmmmm....
    Custom CPU (6 operations per clock compared to the PS4's 4), and now overclocked.
    GPU (now overclocked).
    eSRAM (ultra-fast memory with extremely low access times - we will see its real function soon).
    DDR3 (memory with extremely fast access times).
    Maybe this combination will become a nightmare for PS4 owners?? xD
    Yes, I really think YES.

    And please don't forget the new impulse triggers (apparently fantastic, and a must-have for a completely new experience).

    YES, my final decision is for the ONE
  • Shad0w59 - Wednesday, September 11, 2013 - link

    I don't really trust Microsoft with all that overclocking after Xbox 360's high failure rate.
  • Shawn74 - Wednesday, September 11, 2013 - link

    Shadow, have you seen the cooling system? It's giant..
    Have you seen the case? It's giant.. (a lot of fresh air inside ;-)
    Have you seen that the Xbox One will detect heat and power down to avoid a meltdown? http://www.vg247.com/2013/08/13/xbox-one-will-dete...
    And the very hot power supply is external.....

    A perfect system for overclocking.... obviously, for me....

    Ah, for my first message here a reply to PS4 team made directly by Albert Penello (Microsoft Director of Product Planning):

    "*******************************************************************************************
    I see my statements the other day caused more of a stir than I had intended. I saw threads locking down as fast as they pop up, so I apologize for the delayed response.

    I was hoping my comments would lead the discussion to be more about the games (and the fact that games on both systems look great) as a sign of my point about performance, but unfortunately I saw more discussion of my credibility.

    So I thought I would add more detail to what I said the other day, that perhaps people can debate those individual merits instead of making personal attacks. This should hopefully dismiss the notion I'm simply creating FUD or spin.

    I do want to be super clear: I'm not disparaging Sony. I'm not trying to diminish them, or their launch or what they have said. But I do need to draw comparisons since I am trying to explain that the way people are calculating the differences between the two machines isn't completely accurate. I think I've been upfront I have nothing but respect for those guys, but I'm not a fan of the mis-information about our performance.

    So, here are couple of points about some of the individual parts for people to consider:

    • 18 CU's vs. 12 CU's =/= 50% more performance. Multi-core processors have inherent inefficiency with more CU's, so it's simply incorrect to say 50% more GPU.
    • Adding to that, each of our CU's is running 6% faster. It's not simply a 6% clock speed increase overall.
    • We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.
    • We have at least 10% more CPU. Not only a faster processor, but a better audio chip also offloading CPU cycles.
    • We understand GPGPU and its importance very well. Microsoft invented Direct Compute, and have been using GPGPU in a shipping product since 2010 - it's called Kinect.
    • Speaking of GPGPU - we have 3X the coherent bandwidth for GPGPU at 30gb/sec which significantly improves our ability for the CPU to efficiently read data generated by the GPU.

    Hopefully with some of those more specific points people will understand where we have reduced bottlenecks in the system. I'm sure this will get debated endlessly but at least you can see I'm backing up my points.

    I still I believe that we get little credit for the fact that, as a SW company, the people designing our system are some of the smartest graphics engineers around – they understand how to architect and balance a system for graphics performance. Each company has their strengths, and I feel that our strength is overlooked when evaluating both boxes.

    Given this continued belief of a significant gap, we're working with our most senior graphics and silicon engineers to get into more depth on this topic. They will be more credible then I am, and can talk in detail about some of the benchmarking we've done and how we balanced our system.

    Thanks again for letting my participate. Hope this gives people more background on my claims.
    "*****************************************************************************

    Once again I would like to warn PS4 fans........ Every time Sony has announced a new console, Sony has publicized it as the most powerful.... and every time the Xbox does the job better....

    In my opinion.

    P.S. Sorry for my bad English, I'm Italian
  • Shawn74 - Wednesday, September 11, 2013 - link

    Penello's post is here:
    http://67.227.255.239/forum/showthread.php?p=80951...
  • tipoo - Saturday, September 21, 2013 - link

    Regarding the eSRAM, it's now known not to be an automatically managed cache, based on developer comments about having to code specifically to use it.
