CPU & GPU Hardware Analyzed

Although Microsoft did its best to minimize AMD’s role in all of this, the Xbox One features a semi-custom 28nm APU designed with AMD. If this sounds familiar, it’s because the strategy is very similar to what Sony employed for the PS4’s silicon.

The phrase semi-custom comes from the fact that AMD is leveraging much of its already developed IP for the SoC. On the CPU front we have two Jaguar compute units, each one with four independent processor cores and a shared 2MB L2 cache. The combination of the two gives the Xbox One its 8-core CPU. This is the same basic layout as the PS4’s SoC.

If you’re not familiar with it, Jaguar is the follow-on to AMD’s Bobcat core - think of it as AMD’s answer to the Intel Atom. Jaguar is still a 2-issue, out-of-order (OoO) architecture, but with roughly 20% higher IPC than Bobcat thanks to a number of tweaks. In ARM terms we’re talking about something that’s faster than a Cortex A15. I expect Jaguar to come close to, but likely fall behind, Intel’s Silvermont, at least at the highest shipping frequencies. Jaguar is the foundation of AMD’s Kabini and Temash APUs, where it will ship first. I’ll have a deeper architectural look at Jaguar later this week. Update: It's live!
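
To put the IPC claim in perspective, per-core throughput scales roughly with instructions-per-cycle times clock speed, so a ~20% IPC uplift at the same clock is a ~20% per-core speedup. Here's a minimal sketch of that relationship; the clocks used are illustrative assumptions, not confirmed figures:

```python
# Rough per-core performance ~ relative IPC x clock speed (GHz).
def relative_core_perf(relative_ipc, clock_ghz, baseline_ipc=1.0, baseline_clock_ghz=1.6):
    return (relative_ipc * clock_ghz) / (baseline_ipc * baseline_clock_ghz)

# Jaguar with ~20% higher IPC than Bobcat, both assumed at 1.6GHz for illustration:
print(relative_core_perf(1.2, 1.6))  # ~1.2x a Bobcat core, clock for clock
```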

Inside the Xbox One, courtesy Wired

There’s no word on clock speed, but Jaguar at 28nm is good for up to 2GHz depending on thermal headroom. Current rumors point to both the PS4 and Xbox One running their Jaguar cores at 1.6GHz, which sounds about right. In terms of TDP, on the CPU side you’re likely looking at 30W with all cores fully loaded.

The move away from PowerPC to 64-bit x86 cores means the One breaks backwards compatibility with all Xbox 360 titles. Microsoft won’t be pursuing any sort of backwards compatibility strategy, although a game developer could port an older title to the new console if it wanted to. Interestingly enough, the first Xbox was also an x86 design - from a hardware/ISA standpoint the new Xbox One is backwards compatible with its grandfather, although Microsoft would have to enable that as a feature in software - something that’s quite unlikely.

Microsoft Xbox One vs. Sony PlayStation 4 Spec Comparison

                           Xbox 360             Xbox One          PlayStation 4
CPU Cores/Threads          3/6                  8/8               8/8
CPU Frequency              3.2GHz               1.6GHz (est)      1.6GHz (est)
CPU µArch                  IBM PowerPC          AMD Jaguar        AMD Jaguar
Shared L2 Cache            1MB                  2 x 2MB           2 x 2MB
GPU Cores (ALUs)           -                    768               1152
Peak Shader Throughput     0.24 TFLOPS          1.23 TFLOPS       1.84 TFLOPS
Embedded Memory            10MB eDRAM           32MB eSRAM        -
Embedded Memory Bandwidth  32GB/s               102GB/s           -
System Memory              512MB 1400MHz GDDR3  8GB 2133MHz DDR3  8GB 5500MHz GDDR5
System Memory Bus          128-bit              256-bit           256-bit
System Memory Bandwidth    22.4 GB/s            68.3 GB/s         176.0 GB/s
Manufacturing Process      -                    28nm              28nm
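
The system memory bandwidth figures in that table fall straight out of bus width and effective data rate: bytes per transfer times transfers per second. A minimal sketch of that arithmetic, using the data rates listed above:

```python
# Peak system memory bandwidth = bus width (in bytes) x effective data rate (MT/s).
def peak_bandwidth_gbps(bus_width_bits, data_rate_mtps):
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * data_rate_mtps / 1000.0  # MB/s -> GB/s

print(peak_bandwidth_gbps(128, 1400))  # Xbox 360: 22.4 GB/s
print(peak_bandwidth_gbps(256, 2133))  # Xbox One: ~68.3 GB/s
print(peak_bandwidth_gbps(256, 5500))  # PS4:      176.0 GB/s
```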

On the graphics side it’s once again obvious that Microsoft and Sony are shopping at the same store, as the Xbox One’s SoC integrates an AMD GCN-based GPU. Here’s where things start to get a bit controversial. Sony opted for an 18 Compute Unit (CU) GCN configuration, totaling 1152 shader processors/cores/ALUs. Microsoft went for a far smaller configuration: 768 shader processors (12 CUs).

Microsoft can’t make up the difference in clock speed alone (AMD’s GCN seems to top out around 1GHz on 28nm), and based on current leaks it looks like both MS and Sony are running their GPUs at the same 800MHz clock. The result is a 33% reduction in compute power, from 1.84 TFLOPS in the PS4 to 1.23 TFLOPS in the Xbox One. We’re still talking about over 5x the peak theoretical shader performance of the Xbox 360, likely even more given increases in efficiency thanks to AMD’s scalar GCN architecture (MS quotes up to 8x better GPU performance) - but there’s no escaping the fact that Microsoft has given the Xbox One less GPU hardware than Sony gave the PlayStation 4. Note that unlike the Xbox 360 vs. PS3 era, Sony's hardware advantage here won't need any clever developer work to extract - the architectures are near identical; Sony just has more resources available to use.
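
For those keeping score, the TFLOPS figures quoted throughout come from a simple formula: each GCN compute unit packs 64 ALUs, and each ALU can retire one fused multiply-add (two floating point operations) per cycle. A minimal sketch of that math, assuming the rumored 800MHz GPU clock for both consoles:

```python
# Peak shader throughput for a GCN GPU:
# ALUs x 2 FLOPs per cycle (fused multiply-add) x clock speed.
def peak_tflops(compute_units, clock_ghz, alus_per_cu=64):
    alus = compute_units * alus_per_cu
    return alus * 2 * clock_ghz / 1000.0  # GFLOPS -> TFLOPS

xbox_one = peak_tflops(12, 0.8)  # 12 CUs x 64 ALUs = 768 ALUs
ps4      = peak_tflops(18, 0.8)  # 18 CUs x 64 ALUs = 1152 ALUs

print(f"Xbox One: {xbox_one:.2f} TFLOPS")               # ~1.23 TFLOPS
print(f"PS4:      {ps4:.2f} TFLOPS")                    # ~1.84 TFLOPS
print(f"PS4 advantage: {ps4 / xbox_one:.2f}x")          # ~1.5x (Xbox One is ~33% lower)
print(f"Xbox One vs Xbox 360: {xbox_one / 0.24:.1f}x")  # >5x the 360's 0.24 TFLOPS
```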

Remember all of my talk earlier about a slight pivot in strategy? Microsoft seems to believe that throwing as much power as possible at the next Xbox wasn’t the key to success, and its silicon choices reflect that.

Comments

  • Thermalzeal - Wednesday, May 29, 2013 - link

    Anand, any information on whether the Xbox One will utilize HMA (Hybrid Memory Access) in comparison to the PS4?
  • tipoo - Wednesday, May 29, 2013 - link

    Do you mean HUMA by any chance? Yes, both would have that.
  • Buccomatic - Friday, May 31, 2013 - link

    xbox one - everything we don't want in a video game console, except the controller.
    ps4 - everything we do want in a video game console, except the controller.

    that's how i see it.
  • Buccomatic - Friday, May 31, 2013 - link

    can anyone tell me if the following statement is correct or incorrect?

    pc games will be ports from the games made for consoles. both consoles (xbox one and ps4) will have 5gb vram in their gpu. so that means the system requirements for pc games, as early as december when they start porting games over from the consoles to pc, will require a pc gamer to have a video card of at least 5gb vram, or more, just to run the game.

    ?

    yes or no and why?
  • fteoath64 - Monday, June 10, 2013 - link

    Before the hardware is released and analysed, we have no idea how much of the PS4 GDDR5 ram is going to be shared and/or dedicated for gpu use and how much of those are going to be available to user data. It is anyone's guess at this stage. But the improvements in hUMA design with dual ported frame buffer for gpu and cpu makes it a rather quick gpu by PC standards. Since only one game is loaded at a time, there can be shared memory reconfiguration going on just before the game loading so it can depend on the game and how much ram it can grab. The cpu counts very little in the process and it is why it can be clocked at 1.6Ghz rather than storming at 3.6Ghz as in Trinity chips. Still with faster gpu and globs of ram now, there is certainly greater leeway in the development process and optimizing process for game developers. One can assume at 3X the Trinity gpu core counts, the PS4 must be at least 2.5X the speed of Trinity gpus since those ran at 900Mhz. With good cooling, the PS4 could well clock their gpu cores at 1.2Ghz since Intel is going 1.3Ghz on the GT3 core.
  • SnOOziie - Sunday, June 2, 2013 - link

    Looking at the motherboard, they've used solder balls to BGA the CPU - it's going to RROD.
  • Wolfpup - Monday, June 3, 2013 - link

    This has never been an easier choice - Microsoft doesn't let you buy games, Sony does, and their system is 50% more powerful and more focused on games, while Microsoft's off doing yet more Kinect.
  • SirGCal - Thursday, June 13, 2013 - link

    YUP, and as a cripple, what good is flailing my arms about and hopping around like a retard going to do me? Kinect is about the dumbest thing I've seen people use. Except for workout stuff and kids' stuff, sure, it makes sense. But now they give those in dial-up and cellular internet locations the finger and say 'stick with the 360' when they know damn well developers won't make games for it within a year... Morons. I'm done with M$. If I do get a new console, it will be the PS4. Besides, I've always loved the Kingdom Hearts series more than any others...
  • NoKidding - Monday, June 24, 2013 - link

    i am glad that these consoles have finally seen the light of day. though a bit underpowered compared to an average mid range rig, at least game developers will be forced to utilize each and every available core at such low clock rates on these consoles. heavily threaded games will finally be the norm and not just a mere exception. if the playing field no longer relies heavily on ipc advantages, will amd's "more cores but cheaper" strategy finally catch up to intel's superior ipc? will amd finally reclaim the low to mid range market? no, not likely. but one can hope so. i yearn for the good old c2d days when intel was forced to pull out all the stops.
  • kondor999 - Tuesday, July 16, 2013 - link

    Who gives a shit about heat and power consumption in a console? Both machines are miserly, and they're not notebooks for God's sake. Looks like MS simply cheaped out to me. Letting them off the hook by pointing out the tiny heat/power savings as a "benefit" is a real reach. By this logic, why not just cut the compute power even more?

    No thanks.
