Memory Subsystem

With the same underlying CPU and GPU architectures, porting games between the two should be much easier than ever before. Making the situation even better is the fact that both systems ship with 8GB of total system memory and Blu-ray disc support. Game developers can look forward to the same amount of storage per disc, and relatively similar amounts of storage in main memory. That’s the good news.

The bad news is that the two take wildly different approaches to their memory subsystems. Sony’s approach with the PS4 SoC was to use a 256-bit wide GDDR5 memory interface running somewhere around a 5.5GHz data rate, delivering peak memory bandwidth of 176GB/s. That’s roughly the amount of memory bandwidth we’ve come to expect from a $300 GPU, and great news for the console.
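
That peak number is just bus width multiplied by data rate; a minimal sketch of the arithmetic in Python (the function name is purely illustrative):

```python
# Peak theoretical bandwidth = bus width in bytes x transfer rate in GT/s
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_gts: float) -> float:
    return (bus_width_bits / 8) * transfer_rate_gts

# PS4: 256-bit GDDR5 interface at a 5.5GT/s effective data rate
print(peak_bandwidth_gbs(256, 5.5))  # prints 176.0 (GB/s)
```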

Xbox One Motherboard, courtesy Wired

Die size dictates memory interface width, so the 256-bit interface remains, but Microsoft chose DDR3 memory instead. A look at Wired’s excellent high-res teardown photo of the motherboard reveals Micron DDR3-2133 DRAM on board (16 x 16-bit DDR3 devices, to be exact). A little math gives us 68.3GB/s of bandwidth to system memory.
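
The same bus-width times data-rate arithmetic, applied to the sixteen x16 DDR3-2133 devices spotted on the board (again, just an illustrative sketch):

```python
# Xbox One: sixteen x16 DDR3 devices form a 256-bit bus
bus_width_bits = 16 * 16        # 256 bits
transfer_rate_gts = 2.133       # DDR3-2133 effective data rate in GT/s
print((bus_width_bits / 8) * transfer_rate_gts)  # prints 68.256, i.e. ~68.3 GB/s
```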

To make up for the gap, Microsoft added embedded SRAM on die (not eDRAM; SRAM is less area efficient, but it offers lower latency and doesn’t need refreshing). All information points to 32MB of 6T-SRAM, or roughly 1.6 billion transistors for this memory. It’s not immediately clear whether this is a true cache or software-managed memory. I’d hope for the former, but it’s quite possible that it isn’t. At 32MB the eSRAM is more than enough for frame buffer storage, indicating that Microsoft expects developers to use it to offload requests from the system memory bus. Game console makers (Microsoft included) have often used large high-speed memories to get around memory bandwidth limitations, so this is no different. Although 32MB doesn’t sound like much, if it is indeed used as a cache (with the frame buffer kept in main memory) it’s actually enough to deliver a substantial hit rate in current workloads (although there’s not much room for growth).
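
Both the transistor estimate and the frame buffer claim are easy to sanity check; a rough sketch, assuming six transistors per stored bit and a 1080p render target at 4 bytes per pixel:

```python
# 6T-SRAM stores one bit in six transistors
esram_bytes = 32 * 1024 * 1024
print(esram_bytes * 8 * 6 / 1e9)         # ~1.61 (billion transistors)

# One 1920x1080 render target at 4 bytes per pixel
print(1920 * 1080 * 4 / (1024 * 1024))   # ~7.9 (MB), so several fit in 32MB
```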

Vgleaks has a wealth of info, likely supplied by game developers with direct access to Xbox One specs, that looks to be very accurate at this point. According to their data, there’s roughly 50GB/s of bandwidth in each direction to the SoC’s embedded SRAM (102GB/s total). The combination of the two, plus the CPU-GPU connection at 30GB/s, is how Microsoft arrives at its 200GB/s bandwidth figure, although in reality that’s not how any of this works. If it’s used as a cache, the embedded SRAM should significantly cut down on the GPU’s requests to main memory, which would give the GPU much more effective bandwidth than the 256-bit DDR3-2133 memory interface would otherwise imply. Depending on how the eSRAM is managed, it’s very possible that the Xbox One could have effective memory bandwidth comparable to the PlayStation 4’s. If the eSRAM isn’t managed as a cache, however, this all gets much more complicated.
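
For reference, the 200GB/s figure is simply the sum of the separate links (system memory, eSRAM, and the CPU-GPU connection), not bandwidth any single client can actually see; a quick sketch using the numbers quoted above:

```python
# Microsoft's 200GB/s: adding up separate links rather than usable bandwidth
ddr3_gbs    = 68.3     # 256-bit DDR3-2133 system memory
esram_gbs   = 102.0    # ~50GB/s each way to the embedded SRAM
cpu_gpu_gbs = 30.0     # coherent CPU-GPU link
print(round(ddr3_gbs + esram_gbs + cpu_gpu_gbs, 1))  # prints 200.3
```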

Microsoft Xbox One vs. Sony PlayStation 4 Memory Subsystem Comparison

|                           | Xbox 360            | Xbox One         | PlayStation 4     |
|---------------------------|---------------------|------------------|-------------------|
| Embedded Memory           | 10MB eDRAM          | 32MB eSRAM       | -                 |
| Embedded Memory Bandwidth | 32GB/s              | 102GB/s          | -                 |
| System Memory             | 512MB 1400MHz GDDR3 | 8GB 2133MHz DDR3 | 8GB 5500MHz GDDR5 |
| System Memory Bus         | 128-bit             | 256-bit          | 256-bit           |
| System Memory Bandwidth   | 22.4GB/s            | 68.3GB/s         | 176.0GB/s         |

There are merits to both approaches. Sony has the most present-day-GPU-centric approach to its memory subsystem: give the GPU a wide and fast GDDR5 interface and call it a day. It’s well understood and simple to manage. The downsides? High speed GDDR5 isn’t the most power efficient, and Sony is now married to a more costly memory technology for the life of the PlayStation 4.

Microsoft’s approach leaves some open questions, and depending on how the eSRAM is exposed to developers it is potentially more complex to deal with. Microsoft specifically called out its 8GB of memory as being “power friendly”, a nod to the lower power operation of DDR3-2133 compared to the 5.5GHz GDDR5 used in the PS4. There are also cost benefits. DDR3 is presently cheaper than GDDR5 and that gap should remain over time (although 2133MHz DDR3 is by no means the cheapest DDR3 available). The 32MB of embedded SRAM is costly in die area, but SRAM scales well with smaller processes. Microsoft probably figures it can significantly cut down the die area of the eSRAM at 20nm, and by 14/16nm it shouldn’t be a problem at all.

Even if Microsoft can’t deliver the same effective memory bandwidth as Sony, the Xbox One also has fewer GPU execution resources - it’s entirely possible that its memory bandwidth demands will be inherently lower to begin with.

Comments

  • tipoo - Wednesday, May 22, 2013

    I wonder how close the DDR3 plus small, fast eSRAM can get to the peak performance of the PS4's GDDR5. The GDDR5 will be better in general for the GPU, no doubt, but how much of that will be offset by the eSRAM? And how much will GDDR5's high latency hurt the CPU in the PS4?
  • Braincruser - Wednesday, May 22, 2013

    The CPU is running at a low frequency, ~1.6GHz, which is half the frequency of most mainstream processors. And GDDR5's latency shouldn't be more than double the DDR3 latency. So in effect the latency stays the same, relatively speaking.
  • MrMilli - Wednesday, May 22, 2013

    GDDR5 actually has around 8-10x worse latency compared to DDR3. So the CPU in the PS4 is going to be hurt. Everybody's talking about bandwidth, but the Xbox One is going to have such a huge latency advantage that maybe in the end it's going to be better off.
  • mczak - Wednesday, May 22, 2013

    GDDR5 having much worse latency is a myth. The underlying memory technology is all the same after all; just the interface is different. Yes, GPU memory controllers are optimized more for bandwidth than for latency, but that's not inherent to GDDR5. The latency may be very slightly higher, but it probably won't be significant enough to be noticeable (no way is it a factor of even 2, let alone 8 as you're claiming).
    I don't know anything about the specific memory controller implementations of the PS4 or Xbox One (well, other than one using DDR3 and the other GDDR5...), but I'd have to guess latency will be similar.
  • shtldr - Thursday, May 23, 2013

    Are you talking latency in cycles (i.e. relative to memory's clock rate) or latency in seconds (absolute)? Latency in cycles is going to be worse, latency in seconds is going to be similar. If I understand it correctly, the absolute (objective) latency expressed in seconds is the deciding factor.
  • MrMilli - Thursday, May 23, 2013

    I got my info from Beyond3D, but I went to dig into whitepapers from Micron and Hynix and it seems that my info was wrong.
    Micron's DDR3-2133 has a CL14 read latency specification, but it could possibly be set as low as CL11 on the Xbox. Hynix's GDDR5 (I don't know which brand of GDDR5 the PS4 will use, but they'll all be more or less the same) is specified at CL18 up to CL20 for GDDR5-5500.
    So even though this doesn't give actual latency numbers, since those depend a lot on the memory controller, it probably won't be worse than 2x.
  • tipoo - Monday, May 27, 2013

    Nowhere near as bad as I thought GDDR5 would be, given what everyone is saying about it to defend DDR3. And since it runs at such a high clock rate, the CL effect will be reduced even more (that's measured in clock cycles, right?).
  • Riseer - Sunday, June 23, 2013

    For game performance, GDDR5 has no equal at the moment. There is a reason why it's used in GPUs. MS is building a media center, while Sony is building a gaming console. Sony won't need to worry so much about latency for a console that puts games first and everything else second. Overall the PS4 will play games better than the Xbone. Also, eSRAM isn't a good thing; the only reason Sony didn't use it is that it would complicate things more than they should be. This is why Sony went with GDDR5: it's a much simpler design that streamlines everything. This time around it will be MS with the more complicated console.
  • Riseer - Sunday, June 23, 2013

    Also, let's not forget you only have 32MB worth of eSRAM. At 1080p devs will push for more demanding effects. On PS4 they have 8 gigs of RAM with around 70GB/s more bandwidth. Since DDR3 isn't good for graphics, that only leaves 32MB of true VRAM. That said, the Xbone can use the DDR3 RAM for graphics, the issue being that DDR3 has low bandwidth. MS had no choice but to use eSRAM to claw back some performance.
  • CyanLite - Sunday, May 26, 2013

    I've been a long-term Xbox fan, but the silly Kinect requirement scares me. It's only a matter of time before somebody hacks that. And I'm a casual sit-down kind of gamer. Who wants to stand up and wave arm motions playing Call of Duty? Or shout multiple voice commands that are never recognized the first time around?

    If the PS4 eliminates the camera requirement, gets rid of the phone-home Internet connections, and lets me buy used games, then I'm willing to reconsider my console loyalty.
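
As a rough, back-of-the-envelope check on the CAS latency figures discussed in the comments above: converting cycles to nanoseconds, assuming command clocks of roughly 1066MHz for DDR3-2133 and roughly 1375MHz for GDDR5-5500 (GDDR5's command clock is a quarter of its data rate), gives very similar absolute latencies, consistent with the "not worse than 2x" conclusion. A minimal sketch under those assumptions:

```python
# CAS latency in nanoseconds = CL cycles / command clock in GHz
def cas_latency_ns(cl_cycles: int, command_clock_ghz: float) -> float:
    return cl_cycles / command_clock_ghz

print(cas_latency_ns(14, 1.066))   # DDR3-2133 at CL14  -> ~13.1 ns
print(cas_latency_ns(18, 1.375))   # GDDR5-5500 at CL18 -> ~13.1 ns
print(cas_latency_ns(20, 1.375))   # GDDR5-5500 at CL20 -> ~14.5 ns
```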
