Memory Subsystem

With the same underlying CPU and GPU architectures, porting games between the two should be much easier than ever before. Making the situation even better is the fact that both systems ship with 8GB of total system memory and Blu-ray disc support. Game developers can look forward to the same amount of storage per disc, and relatively similar amounts of storage in main memory. That’s the good news.

The bad news is that the two consoles take wildly different approaches to their memory subsystems. Sony’s approach with the PS4 SoC was to use a 256-bit wide GDDR5 memory interface running somewhere around a 5.5GHz data rate, delivering peak memory bandwidth of 176GB/s. That’s roughly the amount of memory bandwidth we’ve come to expect from a $300 GPU, and great news for the console.

Xbox One Motherboard, courtesy Wired

Die size dictates memory interface width, so the 256-bit interface remains, but Microsoft chose to go with DDR3 memory instead. A look at Wired’s excellent high-res teardown photo of the motherboard reveals Micron DDR3-2133 DRAM on board (16 x 16-bit DDR3 devices, to be exact). A little math gives us 68.3GB/s of bandwidth to system memory.
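
A quick back-of-the-envelope sketch (mine, not from either spec sheet, just the standard peak-bandwidth formula) reproduces both figures:

```python
# Peak-bandwidth sanity check: bus width (bits) x per-pin data rate (MT/s) / 8 bits per byte.
# Illustrative only; effective bandwidth is always lower than these theoretical peaks.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_mt_s: float) -> float:
    return bus_width_bits * data_rate_mt_s * 1e6 / 8 / 1e9

xbox_one = peak_bandwidth_gb_s(256, 2133)  # DDR3-2133 on a 256-bit bus
ps4      = peak_bandwidth_gb_s(256, 5500)  # 5.5GT/s GDDR5 on a 256-bit bus

print(f"Xbox One: {xbox_one:.1f} GB/s")  # ~68.3 GB/s
print(f"PS4:      {ps4:.1f} GB/s")       # ~176.0 GB/s
```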

To make up for the gap, Microsoft added embedded SRAM on die (not eDRAM; SRAM is less area efficient, but offers lower latency and doesn’t need refreshing). All information points to 32MB of 6T-SRAM, or roughly 1.6 billion transistors for this memory alone. It’s not immediately clear whether this is a true cache or software-managed memory. I’d hope for the former, but it’s quite possible that it isn’t. At 32MB the eSRAM is more than enough for frame buffer storage, indicating that Microsoft expects developers to use it to offload requests from the system memory bus. Game console makers (Microsoft included) have often used large high-speed memories to get around memory bandwidth limitations, so this is no different. Although 32MB doesn’t sound like much, if it is indeed used as a cache (with the frame buffer kept in main memory) it’s actually enough to have a substantial hit rate in current workloads (although there’s not much room for growth).
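
For scale, here’s a rough sketch of where the 1.6 billion transistor figure comes from and how a single 1080p render target compares to 32MB; plain arithmetic, not insider data:

```python
# Rough sizing math for the eSRAM: a 6T-SRAM cell uses six transistors per bit.

esram_bytes = 32 * 1024 * 1024
transistors = esram_bytes * 8 * 6
print(f"6T-SRAM transistors: {transistors / 1e9:.2f} billion")  # ~1.61 billion

# For scale: one 1920x1080 render target at 32 bits per pixel
frame_buffer_bytes = 1920 * 1080 * 4
print(f"1080p 32bpp frame buffer: {frame_buffer_bytes / 2**20:.1f} MB")  # ~7.9 MB
```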

Vgleaks has a wealth of info, likely supplied by game developers with direct access to Xbox One specs, that looks to be very accurate at this point. According to their data, there’s roughly 50GB/s of bandwidth in each direction to the SoC’s embedded SRAM (102GB/s total bandwidth). The combination of the two, plus the 30GB/s CPU-GPU connection, is how Microsoft arrives at its 200GB/s bandwidth figure, although in reality that’s not how any of this works. If it’s used as a cache, the embedded SRAM should significantly cut down on GPU requests to main memory, which would give the GPU much more effective bandwidth than the 256-bit DDR3-2133 memory interface would otherwise imply. Depending on how the eSRAM is managed, it’s very possible that the Xbox One could have comparable effective memory bandwidth to the PlayStation 4. If the eSRAM isn’t managed as a cache, however, this all gets much more complicated.
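
As a minimal sketch of how that 200GB/s headline number appears to be assembled (using the figures above), note that summing separate links says nothing about what any single client on the SoC can actually sustain:

```python
# How the 200GB/s marketing figure appears to add up: a sum of separate links,
# not bandwidth any one client can consume at once.

ddr3_system  = 68.3   # GB/s, 256-bit DDR3-2133
esram_total  = 102.0  # GB/s, ~50GB/s each way per the Vgleaks figures
cpu_gpu_link = 30.0   # GB/s, coherent CPU-GPU connection

print(f"Sum of links: {ddr3_system + esram_total + cpu_gpu_link:.1f} GB/s")  # ~200 GB/s
```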

Microsoft Xbox One vs. Sony PlayStation 4 Memory Subsystem Comparison
                            Xbox 360              Xbox One           PlayStation 4
Embedded Memory             10MB eDRAM            32MB eSRAM         -
Embedded Memory Bandwidth   32GB/s                102GB/s            -
System Memory               512MB 1400MHz GDDR3   8GB 2133MHz DDR3   8GB 5500MHz GDDR5
System Memory Bus           128-bit               256-bit            256-bit
System Memory Bandwidth     22.4GB/s              68.3GB/s           176.0GB/s

There are merits to both approaches. Sony has the most present-day-GPU-centric approach to its memory subsystem: give the GPU a wide and fast GDDR5 interface and call it a day. It’s well understood and simple to manage. The downsides? High speed GDDR5 isn’t the most power efficient, and Sony is now married to a more costly memory technology for the life of the PlayStation 4.

Microsoft’s approach leaves some questions about implementation, and is potentially more complex to deal with depending on that implementation. Microsoft specifically called out its 8GB of memory as being “power friendly”, a nod to the lower power operation of DDR3-2133 compared to the 5.5GHz GDDR5 used in the PS4. There are also cost benefits. DDR3 is presently cheaper than GDDR5, and that gap should remain over time (although 2133MHz DDR3 is by no means the cheapest DDR3 available). The 32MB of embedded SRAM is costly today, but SRAM scales well with smaller manufacturing processes. Microsoft probably figures it can significantly cut down the die area of the eSRAM at 20nm, and by 14/16nm it shouldn’t be a problem at all.

Even if Microsoft can’t deliver the same effective memory bandwidth as Sony, the Xbox One also has fewer GPU execution resources, so it’s entirely possible that its memory bandwidth demands will be inherently lower to begin with.

Comments

  • elitewolverine - Thursday, May 23, 2013

    It's the same GPU at heart; sure, the shader count is lower, but that's because of the eSRAM. You might want to rethink how the internals work. The advantage will be very minimal.
  • alex@1234 - Friday, May 24, 2013

    Everywhere it's mentioned that the PS4 has 32% higher GPU power; I don't think a GTX 660 Ti and a GTX 680 are equal. For sure the PS4 holds the advantage: fewer shaders and lower specs in just about everything compared to the PS4, and DDR3 in the Xbox One versus GDDR5 in the PS4. As for eSRAM, I'll tell you something: you can have an SSD and 32GB of RAM, and it still can't make up for a weaker GPU.
  • cjb110 - Thursday, May 23, 2013

    In some ways this is the opposite of the previous generation. The 360 screamed games (at least its original dashboard), whereas the PS3 had all the potential media support (though the XMB interface let it down) as well as being an excellent Blu-ray player (which is the whole reason I got mine).

    This time around MS has gone all-out entertainment that can also do games, whereas Sony seems to have gone games first. I'm imagining that physically the PS4 will be flashier too, like the PS3 and 360 were... game devices, not family entertainment boxes.

    Personally I'm keeping the 360 for my games library, and the One will likely replace the PS3.
  • Tuvok86 - Thursday, May 23, 2013

    Xbox One ~ 7770 GHz Edition
    PS4 ~ 7850
  • jnemesh - Thursday, May 23, 2013

    One of my biggest concerns with the new system is the Kinect requirement. I have my Xbox and other electronics in a rack in the closet. I would need to extend the USB 3.0 connection (and I am assuming this time around the Kinect uses a standard USB connector on all models) over 40 feet to get the wire from my closet to the spot beneath or above my wall-mounted TV. With the existing Kinect for the 360, I never bothered with it, but you COULD buy a fairly expensive USB-over-Cat5 extender (Gefen makes one of the more reliable models, but it's $499!). I know of no such adapter for USB 3.0, and since Kinect HAS to be used for the console to operate, this means I won't be buying an Xbox One! Does anyone know of a product that will extend USB 3.0 over a Cat5 or Cat6 cable? Or any solution?
  • epobirs - Saturday, May 25, 2013

    There are USB 3.0 over fiber solutions available but frankly, I doubt anyone at MS is losing sleep over those few homes with such odd arrangements.
  • Panzerknacker - Thursday, May 23, 2013

    Is it just me or are these new gen consoles seriously lacking in CPU performance? According to the benchmarks of the A4-5000, of which you could say the consoles have two, the CPU power is not even going to come close to any i5 or maybe even i3 chip.

    Considering the fact that they are running the x86 platform this time, which is probably not the most efficient for running games (probably the reason consoles in the past never used x86), and the fact that they run lots of secondary applications next to the game (which leaves maybe 6 of the 8 cores for the game on average), I think CPU performance is seriously lacking. CPU-intensive games will be a no-no on this next gen of consoles.
  • Th-z - Saturday, May 25, 2013

    The first Xbox used an x86 CPU. Cost was the main reason not many consoles used x86 in the past: unlike IBM Power and ARM, x86 isn't licensed out to whatever company wants to make its own CPU. But this time they probably see the benefit as outweighing the cost (or even lowering it) with an x86 APU design from AMD: good performance per dollar and per watt for both the CPU and GPU. I am not sure Power today can reach this kind of performance per dollar and per watt for a CPU, or whether ARM has the CPU performance to run high-end games. Also bear in mind that consoles use fewer CPU cycles to run games than PCs do.
  • hfm - Thursday, May 23, 2013

    "Differences in the memory subsystems also give us some insight into each approach to the next-gen consoles. Microsoft opted for embedded SRAM + DDR3, while Sony went for a very fast GDDR5 memory interface. Sony’s approach (especially when combined with a beefier GPU) is exactly what you’d build if you wanted to give game developers the fastest hardware. Microsoft’s approach on the other hand looks a little more broad. The Xbox One still gives game developers a significant performance boost over the previous generation, but also attempts to widen the audience for the console."

    I don't quite understand how their choice of memory is going to "widen the audience for the console". Unless it's going to cause the Xbox One to truly be cheaper, which I doubt. Or if you are referring to the entire package with Kinect, though it didn't seem so in the context of the statement.
  • FloppySnake - Friday, May 24, 2013

    It's my understanding (following an AMD statement during a conference call around the 8000M announcement) that ZeroCore has been enhanced for graceful fall-back, powering down individual GPU segments rather than just the entire GPU. If this is employed, we could see the PS4 drawing power as needed (not sure what control they'll have over GDDR5 clocks, if any), so it's potentially not power hungry unless it needs to be. Perhaps this warrants further investigation?

    I agree with the article that, if used appropriately, the 32MB SRAM buffer could compensate for limited bandwidth, but only in a traditional pipeline; it could severely limit GPGPU potential, as there's limited back-and-forth bandwidth between the CPU and GPU, and a buffer won't help there.

    For clarity, the new Kinect uses a time-of-flight depth sensor, completely different technology to the previous Kinect. This offers superior depth resolution and fps but the XY resolution is actually something like 500x500 (or some combination that adds up to 250,000 pixels).
