Memory Subsystem

With the same underlying CPU and GPU architectures, porting games between the two consoles should be easier than ever before. Making the situation even better, both systems ship with 8GB of total system memory and Blu-ray disc support. Game developers can count on the same amount of storage per disc and relatively similar amounts of main memory. That’s the good news.

The bad news is that the two take wildly different approaches to their memory subsystems. Sony’s approach with the PS4 SoC was to use a 256-bit wide GDDR5 memory interface running at around a 5.5GHz data rate, delivering peak memory bandwidth of 176GB/s. That’s roughly the amount of memory bandwidth we’ve come to expect from a $300 GPU, and great news for the console.

Xbox One Motherboard, courtesy Wired

Die size dictates memory interface width, so the 256-bit interface remains, but Microsoft chose to go with DDR3 memory instead. A look at Wired’s excellent high-res teardown photo of the motherboard reveals Micron DDR3-2133 DRAM on board (16 x 16-bit DDR3 devices to be exact). A little math gives us 68.3GB/s of bandwidth to system memory.
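That math is easy to verify: peak theoretical bandwidth is simply the data rate multiplied by the bus width in bytes. A quick sanity-check sketch of the figures quoted here (my own arithmetic, not from any official spec sheet):

```python
# Peak theoretical memory bandwidth: data rate (MT/s) x bus width (bytes).
def peak_bandwidth_gb_s(data_rate_mt_s: float, bus_width_bits: int) -> float:
    return data_rate_mt_s * (bus_width_bits / 8) / 1000  # decimal GB/s

print(peak_bandwidth_gb_s(2133, 256))  # Xbox One DDR3-2133: 68.256 -> ~68.3 GB/s
print(peak_bandwidth_gb_s(5500, 256))  # PS4 GDDR5 at a 5.5GHz data rate: 176.0 GB/s
print(peak_bandwidth_gb_s(1400, 128))  # Xbox 360 GDDR3: 22.4 GB/s
```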

To make up for the gap, Microsoft added embedded SRAM on die (not eDRAM; SRAM is less area efficient, but it offers lower latency and doesn’t need refreshing). All information points to 32MB of 6T-SRAM, or roughly 1.6 billion transistors devoted to this memory. It’s not immediately clear whether this is a true cache or software-managed memory. I’d hope for the former, but it’s quite possible that it isn’t. At 32MB the eSRAM is more than enough for frame buffer storage, indicating that Microsoft expects developers to use it to offload requests from the system memory bus. Game console makers (Microsoft included) have often used large high speed memories to get around memory bandwidth limitations, so this is no different. Although 32MB doesn’t sound like much, if it is indeed used as a cache (with the frame buffer kept in main memory) it’s actually enough to deliver a substantial hit rate in current workloads (although there’s not much room for growth).
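The roughly 1.6 billion figure follows directly from the 6T cell structure. A rough check, counting array cells only and ignoring periphery such as decoders and sense amplifiers:

```python
# 32MB of 6T-SRAM: six transistors per stored bit (array cells only;
# decoders, sense amplifiers and other periphery would add more on top).
bits = 32 * 1024 * 1024 * 8   # 32MB expressed in bits
transistors = bits * 6        # 6 transistors per cell in a 6T design
print(f"{transistors / 1e9:.2f} billion transistors")  # ~1.61 billion
```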

Vgleaks has a wealth of info, likely supplied by game developers with direct access to Xbox One specs, that looks to be very accurate at this point. According to their data, there’s roughly 50GB/s of bandwidth in each direction to the SoC’s embedded SRAM (102GB/s total bandwidth). The combination of the two, plus the 30GB/s CPU-GPU connection, is how Microsoft arrives at its 200GB/s bandwidth figure, although in reality that’s not how any of this works. If it’s used as a cache, the embedded SRAM should significantly cut down on GPU requests to main memory, giving the GPU much more effective bandwidth than the 256-bit DDR3-2133 memory interface would otherwise imply. Depending on how the eSRAM is managed, it’s quite possible that the Xbox One could have effective memory bandwidth comparable to the PlayStation 4’s. If the eSRAM isn’t managed as a cache, however, this all gets much more complicated.
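For illustration only, here is how that 200GB/s headline figure appears to be assembled: a straight sum of three separate links, not bandwidth that any single consumer of memory could actually see.

```python
# Microsoft's ~200GB/s figure looks like a simple sum of separate links,
# which says little about achievable bandwidth in practice.
ddr3    = 68.3   # GB/s, 256-bit DDR3-2133 system memory
esram   = 102.0  # GB/s, total across both directions per the Vgleaks data
cpu_gpu = 30.0   # GB/s, coherent CPU-GPU connection
print(ddr3 + esram + cpu_gpu)  # ~200.3 GB/s
```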

Microsoft Xbox One vs. Sony PlayStation 4 Memory Subsystem Comparison

                             Xbox 360              Xbox One            PlayStation 4
Embedded Memory              10MB eDRAM            32MB eSRAM          -
Embedded Memory Bandwidth    32GB/s                102GB/s             -
System Memory                512MB 1400MHz GDDR3   8GB 2133MHz DDR3    8GB 5500MHz GDDR5
System Memory Bus            128-bit               256-bit             256-bit
System Memory Bandwidth      22.4GB/s              68.3GB/s            176.0GB/s

There are merits to both approaches. Sony has the most present-day-GPU-centric approach to its memory subsystem: give the GPU a wide and fast GDDR5 interface and call it a day. It’s well understood and simple to manage. The downsides? High speed GDDR5 isn’t the most power efficient, and Sony is now married to a more costly memory technology for the life of the PlayStation 4.

Microsoft’s approach leaves some questions about implementation, and is potentially more complex for developers to deal with depending on that implementation. Microsoft specifically called out its 8GB of memory as being “power friendly”, a nod to the lower power operation of DDR3-2133 compared to the 5.5GHz GDDR5 used in the PS4. There are also cost benefits. DDR3 is presently cheaper than GDDR5 and that gap should remain over time (although 2133MHz DDR3 is by no means the cheapest available). The 32MB of embedded SRAM is costly in die area today, but SRAM scales well with smaller processes. Microsoft probably figures it can significantly cut down the die area of the eSRAM at 20nm, and by 14/16nm it shouldn’t be a problem at all.

Even if Microsoft can’t deliver the same effective memory bandwidth as Sony, the Xbox One also has fewer GPU execution resources; it’s entirely possible that its memory bandwidth demands will be inherently lower to begin with.

Comments

  • Shawn74 - Tuesday, September 10, 2013 - link

    mmmmm....
    Custom CPU (6 operations per clock compared to the 4 of PS4) and now overclocked.
    GPU (now overclocked)
    eSRAM (ultra fast memory with extremely low access time, we will see its real function soon)
    DDR3 (extremely fast access time memory)
    Maybe this combination will become a nightmare for PS4 owners?? xD
    Yes, I really think YES.

    And please don't forget the new impulse triggers (apparently fantastic and a must-have for a completely new experience)

    YES, my final decision is for the ONE
  • Shad0w59 - Wednesday, September 11, 2013 - link

    I don't really trust Microsoft with all that overclocking after Xbox 360's high failure rate.
  • Shawn74 - Wednesday, September 11, 2013 - link

    Shadow, have you seen the cooling system? It's giant..
    Have you seen the case? It's giant.. (a lot of fresh air inside ;-)
    Have you seen that the Xbox One will detect heat and power down to avoid meltdown? http://www.vg247.com/2013/08/13/xbox-one-will-dete...
    And the very hot power supply is external.....

    A perfect system for overclocking.... obviously for me....

    Ah, for my first message here, a reply to the PS4 team made directly by Albert Penello (Microsoft Director of Product Planning):

    "*******************************************************************************************
    I see my statements the other day caused more of a stir than I had intended. I saw threads locking down as fast as they pop up, so I apologize for the delayed response.

    I was hoping my comments would lead the discussion to be more about the games (and the fact that games on both systems look great) as a sign of my point about performance, but unfortunately I saw more discussion of my credibility.

    So I thought I would add more detail to what I said the other day, that perhaps people can debate those individual merits instead of making personal attacks. This should hopefully dismiss the notion I'm simply creating FUD or spin.

    I do want to be super clear: I'm not disparaging Sony. I'm not trying to diminish them, or their launch or what they have said. But I do need to draw comparisons since I am trying to explain that the way people are calculating the differences between the two machines isn't completely accurate. I think I've been upfront I have nothing but respect for those guys, but I'm not a fan of the mis-information about our performance.

    So, here are couple of points about some of the individual parts for people to consider:

    • 18 CU's vs. 12 CU's =/= 50% more performance. Multi-core processors have inherent inefficiency with more CU's, so it's simply incorrect to say 50% more GPU.
    • Adding to that, each of our CU's is running 6% faster. It's not simply a 6% clock speed increase overall.
    • We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.
    • We have at least 10% more CPU. Not only a faster processor, but a better audio chip also offloading CPU cycles.
    • We understand GPGPU and its importance very well. Microsoft invented Direct Compute, and have been using GPGPU in a shipping product since 2010 - it's called Kinect.
    • Speaking of GPGPU - we have 3X the coherent bandwidth for GPGPU at 30gb/sec which significantly improves our ability for the CPU to efficiently read data generated by the GPU.

    Hopefully with some of those more specific points people will understand where we have reduced bottlenecks in the system. I'm sure this will get debated endlessly but at least you can see I'm backing up my points.

    I still believe that we get little credit for the fact that, as a SW company, the people designing our system are some of the smartest graphics engineers around – they understand how to architect and balance a system for graphics performance. Each company has their strengths, and I feel that our strength is overlooked when evaluating both boxes.

    Given this continued belief of a significant gap, we're working with our most senior graphics and silicon engineers to get into more depth on this topic. They will be more credible than I am, and can talk in detail about some of the benchmarking we've done and how we balanced our system.

    Thanks again for letting me participate. Hope this gives people more background on my claims.
    "*****************************************************************************

    Once again I would like to warn PS4 fans........ Every time Sony has announced a new console, Sony has publicized it as the most powerful.... and every time the Xbox does the job better....

    In my opinion

    P.S. Sorry for my bad English, I'm Italian
  • Shawn74 - Wednesday, September 11, 2013 - link

    Penello's post is here:
    http://67.227.255.239/forum/showthread.php?p=80951...
  • tipoo - Saturday, September 21, 2013 - link

    Regarding the eSRAM, it's now known not to be an automatically managed cache, based on developer comments about having to code specifically to use it.
