Balancing The System With Other Hardware Features

The biggest technological advantage consoles have over PCs is that consoles are a fully-integrated fixed platform specified by a single manufacturer. In theory, the manufacturer can ensure that the system is properly balanced for the use case, something PC OEMs are notoriously bad at. Consoles generally don't have the problem of wasting a large chunk of the budget on a single high-end component that the rest of the system cannot keep up with, and consoles can more easily incorporate custom hardware when suitable off-the-shelf components aren't available. (This is why the outgoing console generation didn't use desktop-class CPU cores, but dedicated a huge amount of the silicon budget to the GPUs.)

By now, PC gaming has thoroughly demonstrated that increasing SSD speed has little or no impact on gaming performance. NVMe SSDs are several times faster than SATA SSDs on paper, but for almost all PC games that extra performance goes largely unused. In part, this is due to bottlenecks elsewhere in the system that are revealed when storage performance is fast enough to no longer be a serious limitation. The upcoming consoles will include a number of hardware features designed to make it easier for games to take advantage of fast storage, and to alleviate bottlenecks that would be troublesome on a standard PC platform. This is where the console storage tech actually gets interesting, since the SSDs themselves are relatively unremarkable.

Compression: Amplifying SSD Performance

The most important specialized hardware feature the consoles will include to complement storage performance is dedicated data decompression hardware. Game assets must be stored on disk in a compressed form to keep storage requirements somewhat reasonable. Games usually rely on multiple compression methods: some lossy methods specialized for certain types of data (e.g. audio and images), plus a lossless general-purpose algorithm, but almost everything goes through at least one compression method that is fairly computationally complex. GPU architectures have long included hardware to handle decoding video streams and support simple but fast lossy texture compression methods like S3TC and its successors, but that leaves a lot of data to be decompressed by the CPU. Desktop CPUs don't have dedicated decompression engines or instructions, though many instructions in the various SIMD extensions are intended to help with tasks like this. Even so, decompressing a stream of data at several GB/s is not trivial, and special-purpose hardware can do it more efficiently while freeing up CPU time for other tasks. The decompression offload hardware in the upcoming consoles is implemented on the main SoC so that it can unpack data after it traverses the PCIe link from the SSD and resides in the main RAM pool shared by the GPU and CPU cores.
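To put that CPU cost in perspective, here is a minimal sketch of the kind of CPU-side streaming decompression loop the console hardware offloads, using zlib's inflate as a stand-in for the consoles' proprietary codecs (buffer sizes are arbitrary). Sustaining several GB/s through a loop like this ties up a significant fraction of one or more CPU cores:

#include <stdio.h>
#include <string.h>
#include <zlib.h>

/* Decompress a zlib/DEFLATE stream chunk by chunk, the way a purely
 * CPU-driven asset loader would have to. Returns 0 on success. */
static int inflate_stream(FILE *src, FILE *dst)
{
    unsigned char in[64 * 1024], out[64 * 1024];
    z_stream strm;
    memset(&strm, 0, sizeof(strm));          /* zalloc/zfree/opaque = Z_NULL */
    if (inflateInit(&strm) != Z_OK)
        return -1;

    int ret = Z_OK;
    while (ret != Z_STREAM_END) {
        strm.avail_in = (uInt)fread(in, 1, sizeof(in), src);
        if (strm.avail_in == 0)
            break;                           /* input ended early */
        strm.next_in = in;

        do {                                 /* drain this input chunk */
            strm.avail_out = sizeof(out);
            strm.next_out = out;
            ret = inflate(&strm, Z_NO_FLUSH);
            if (ret == Z_NEED_DICT || ret == Z_DATA_ERROR ||
                ret == Z_MEM_ERROR || ret == Z_STREAM_ERROR) {
                inflateEnd(&strm);
                return -1;
            }
            fwrite(out, 1, sizeof(out) - strm.avail_out, dst);
        } while (strm.avail_out == 0);
    }
    inflateEnd(&strm);
    return ret == Z_STREAM_END ? 0 : -1;
}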

Decompression offload hardware like this isn't found on typical desktop PC platforms, but it's hardly a novel idea. Previous consoles have included decompression hardware, though nothing that would be able to keep pace with NVMe SSDs. Server platforms often include compression accelerators, usually paired with cryptography accelerators: Intel has done such accelerators both as discrete peripherals and integrated into some server chipsets, and IBM's POWER9 and later CPUs have similar accelerator units. These server accelerators are more comparable to what the new consoles need, with throughput of several GB/s.

Microsoft and Sony have each tuned their decompression units to fit the performance expected from their chosen SSD designs. They've chosen different proprietary compression algorithms to target: Sony is using RAD's Kraken, a general-purpose algorithm originally designed for the current-generation consoles, with their relatively weak CPUs but vastly lower throughput requirements. Microsoft focused specifically on texture compression, reasoning that textures account for the largest volume of data that games need to read and decompress. They developed a new texture compression algorithm and dubbed it BCPack, a slight departure from their existing DirectX naming conventions for texture compression methods already supported by GPUs.

Compression Offload Hardware

                               Microsoft              Sony
                               Xbox Series X          Playstation 5
Algorithm                      BCPack                 Kraken (and ZLib?)
Maximum Output Rate            6 GB/s                 22 GB/s
Typical Output Rate            4.8 GB/s               8–9 GB/s
Equivalent Zen 2 CPU Cores     5                      9

Sony states that their Kraken-based decompression hardware can unpack the 5.5 GB/s stream from the SSD into a typical 8-9 GB/s of uncompressed data, and that output can theoretically reach up to 22 GB/s if the data is redundant enough to be highly compressible. Microsoft states their BCPack decompressor can output a typical 4.8 GB/s from a 2.4 GB/s input, and potentially up to 6 GB/s. So Microsoft is claiming slightly higher typical compression ratios, but still a slower output stream due to the much slower SSD, and Microsoft's hardware decompression apparently applies only to texture data.
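The implied compression ratios are easy to back out from those vendor figures; the quick arithmetic below (taking 8.5 GB/s as the midpoint of Sony's typical range) shows why Microsoft's claimed typical ratio is higher even though its absolute output rate is lower:

#include <stdio.h>

int main(void)
{
    /* Vendor-quoted input and output rates, in GB/s */
    const double ps5_in = 5.5, ps5_typ = 8.5, ps5_max = 22.0;
    const double xsx_in = 2.4, xsx_typ = 4.8, xsx_max = 6.0;

    printf("PS5 typical ratio %.2f:1, peak %.1f:1\n",
           ps5_typ / ps5_in, ps5_max / ps5_in);   /* ~1.5:1 and 4.0:1 */
    printf("XSX typical ratio %.2f:1, peak %.1f:1\n",
           xsx_typ / xsx_in, xsx_max / xsx_in);   /* 2.0:1 and 2.5:1 */
    return 0;
}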

The CPU time saved by these decompression units sounds astounding: the equivalent of about 9 Zen 2 CPU cores for the PS5, and about 5 for the Xbox Series X. Keep in mind these are peak numbers that assume the SSD bandwidth is being fully utilized—real games won't be able to keep these SSDs 100% busy, so they wouldn't need quite so much CPU power for decompression.

The storage acceleration features on the console SoCs aren't limited to just compression offload, and Sony in particular has described quite a few features, but this is where the information released so far is really vague, unsatisfying and open to interpretation. Most of this functionality seems to be intended to reduce overhead, handling some of the more mundane aspects of moving data around without having to get the CPU involved as often, and making sure the hardware decompression process is invisible to the game software.

DMA Engines

Direct Memory Access (DMA) refers to the ability for a peripheral device to read and write to the CPU's RAM without the CPU being involved. All modern high-speed peripherals use DMA for most of their communication with the CPU, but that's not the only use for DMA. A DMA Engine is a peripheral device that exists solely to move data around; it usually doesn't do anything to that data. The CPU can instruct the DMA engine to perform a copy from one region of RAM to another, and the DMA engine does the rote work of copying potentially gigabytes of data without the CPU having to do a mov (or SIMD equivalent) instruction for every piece, and without polluting CPU caches. DMA engines can also often do more than just offload simple copy operations: they commonly support scatter/gather operations to rearrange data somewhat in the process of moving it around. NVMe already has features like scatter/gather lists that can remove the need for a separate DMA engine to provide that feature, but the NVMe commands in these consoles are acting mostly on compressed data.
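As a rough illustration (hypothetical structures and names, not any real controller's interface), a DMA engine's work queue is usually just a chain of descriptors like the one below; the CPU fills in source, destination, and length, writes the chain's address to a doorbell register, and gets an interrupt when the engine has finished walking it:

#include <stdint.h>

/* Hypothetical scatter/gather DMA descriptor, for illustration only;
 * not the PS5's or any shipping controller's actual format. */
struct dma_descriptor {
    uint64_t src_addr;    /* physical source address                     */
    uint64_t dst_addr;    /* physical destination address                */
    uint32_t length;      /* number of bytes to copy                     */
    uint32_t flags;       /* e.g. interrupt-on-completion                */
    uint64_t next_desc;   /* physical address of next descriptor, 0 = end */
};

/* The CPU's only job: build the descriptor chain, then hand it off.
 * The engine walks the chain on its own and raises an interrupt when done. */
static void submit_copy_chain(volatile uint64_t *doorbell,
                              uint64_t first_desc_phys_addr)
{
    *doorbell = first_desc_phys_addr;
}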

Even though DMA engines are a peripheral device, you usually won't find them as a standalone PCIe card. It makes the most sense for them to be as close to the memory controller as possible, which means on the chipset or on the CPU die itself. The PS5 SoC includes a DMA engine to handle copying around data coming out of the compression unit. As with the compression engines, this isn't a novel invention so much as a feature missing from standard desktop PCs, which means it's something custom that Sony has to add to what would otherwise be a fairly straightforward AMD APU configuration.

IO Coprocessor

The IO complex in the PS5's SoC also includes a dual-core processor with its own pool of SRAM. Sony has said almost nothing about the internals of this: Mark Cerny describes one core as dedicated to SSD IO, allowing games to "bypass traditional file IO", while the other core is described simply as helping with "memory mapping". For more detail, we have to turn to a patent Sony filed years ago, and hope it reflects what's actually in the PS5.

The IO coprocessor described in Sony's patent offloads portions of what would normally be the operating system's storage drivers. One of its most important duties is to translate between various address spaces. When the game requests a certain range of bytes from one of its files, the game is looking for the uncompressed data. The IO coprocessor figures out which chunks of compressed data are needed and sends NVMe read commands to the SSD. Once the SSD has returned the data, the IO coprocessor sets up the decompression unit to process that data, and the DMA engine to deliver it to the requested locations in the game's memory.
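In rough pseudocode terms (every structure and function name below is invented for illustration, not taken from Sony's patent), that translation step might look like this: map the requested uncompressed byte range onto the compressed chunks that cover it, issue NVMe reads for those chunks, then queue decompression and DMA delivery into the game's buffer:

#include <stddef.h>
#include <stdint.h>

struct chunk_entry {          /* one compressed chunk of a game file */
    uint64_t disk_lba;        /* where the chunk starts on the SSD   */
    uint32_t comp_bytes;      /* compressed size                     */
    uint32_t raw_bytes;       /* uncompressed size                   */
};

struct file_map {
    const struct chunk_entry *chunks;
    size_t chunk_count;
};

/* Hypothetical front-ends for the hardware the coprocessor drives. */
extern void nvme_read(uint64_t lba, uint32_t bytes);
extern void queue_decompress_and_dma(const struct chunk_entry *c,
                                     uint32_t skip_bytes, uint8_t *dest);

/* Service a game's request for uncompressed bytes [offset, offset+length). */
void service_read(const struct file_map *map,
                  uint64_t offset, uint64_t length, uint8_t *dest)
{
    uint64_t raw_pos = 0;
    for (size_t i = 0; i < map->chunk_count; i++) {
        const struct chunk_entry *c = &map->chunks[i];
        uint64_t raw_end = raw_pos + c->raw_bytes;
        if (raw_end > offset && raw_pos < offset + length) {
            /* skip_bytes trims the part of the first chunk that precedes
             * the requested offset; later chunks land at their natural
             * position within the destination buffer. */
            uint64_t skip = (offset > raw_pos) ? offset - raw_pos : 0;
            uint64_t dst_off = (raw_pos > offset) ? raw_pos - offset : 0;
            nvme_read(c->disk_lba, c->comp_bytes);
            queue_decompress_and_dma(c, (uint32_t)skip, dest + dst_off);
        }
        raw_pos = raw_end;
    }
}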

Since the IO coprocessor's two cores are each much less powerful than a Zen 2 CPU core, they cannot be in charge of all interaction with the SSD. The coprocessor handles the most common cases of reading data, and the system falls back to the OS running on the Zen 2 cores for the rest. The coprocessor's SRAM isn't used to buffer the vast amounts of game data flowing through the IO complex; instead this memory holds the various lookup tables used by the IO coprocessor. In this respect, it is similar to an SSD controller with a pool of RAM for its mapping tables, but the job of the IO coprocessor is completely different from what an SSD controller does. This is why it will be useful even with aftermarket third-party SSDs.

Cache Coherency

The last somewhat storage-related hardware feature Sony has disclosed is a set of cache coherency engines. The CPU and GPU on the PS5 SoC share the same 16 GB of RAM, which eliminates the step of copying assets from main RAM to VRAM after they're loaded from the SSD and decompressed. But to get the most benefit from the shared pool of memory, the hardware has to ensure cache coherency not just between the several CPU cores, but also with the GPU's various caches. That's all normal for an APU, but what's novel with the PS5 is that the IO complex also participates. When new graphics assets are loaded into memory through the IO complex and overwrite older assets, it sends cache invalidation signals to any relevant caches, discarding only the stale data rather than flushing the GPU's caches in their entirety.
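Conceptually, the difference is between the two routines sketched below (hypothetical function names, with a 64-byte cache line assumed purely for illustration): invalidating only the lines that cover the overwritten range, versus throwing away the entire cache including data the GPU would then have to re-fetch:

#include <stdint.h>

#define CACHE_LINE 64u   /* assumed line size, for illustration only */

/* Hypothetical hooks into a GPU cache controller. */
extern void gpu_cache_invalidate_line(uint64_t addr);
extern void gpu_cache_flush_all(void);

/* Targeted approach: discard only the lines covering the range the IO
 * complex just overwrote. Cost scales with the size of the update. */
void invalidate_range(uint64_t base, uint64_t len)
{
    uint64_t addr = base & ~(uint64_t)(CACHE_LINE - 1);
    for (; addr < base + len; addr += CACHE_LINE)
        gpu_cache_invalidate_line(addr);
}

/* Blunt approach: flush everything, stale and still-valid data alike. */
void invalidate_everything(void)
{
    gpu_cache_flush_all();
}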

What about the Xbox Series X?

There's a lot of information above about the Playstation 5's custom IO complex, and it's natural to wonder whether the Xbox Series X will have similar capabilities or if it's limited to just the decompression hardware. Microsoft has lumped the storage-related technologies in the new Xbox under the heading of "Xbox Velocity Architecture".

Microsoft defines this as having four components: the SSD itself, the compression engine, a new software API for accessing storage (more on this later), and a hardware feature called Sampler Feedback Streaming. That last one is only distantly related to storage; it's a GPU feature that makes partially resident textures more useful by allowing shader programs to keep a record of which portions of a texture are actually being used. This information can be used to decide what data to evict from RAM and what to load next—such as a higher-resolution version of the texture regions that are actually visible at the moment.
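As a rough sketch of how such feedback could be consumed (the structures and functions here are invented; this is not the DirectX sampler feedback API), a streaming system can walk a per-tile record of what shaders actually sampled and decide what to fetch or evict:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative per-tile record; real sampler feedback is written by the
 * GPU into a feedback resource, not a plain array like this. */
struct tile_feedback {
    uint16_t x, y;           /* tile coordinates within the texture */
    uint8_t  min_mip_used;   /* finest mip level sampled this frame */
    bool     sampled;        /* was this tile touched at all?       */
};

extern void stream_in_tile(uint16_t x, uint16_t y, uint8_t mip);  /* hypothetical */
extern void evict_tile(uint16_t x, uint16_t y);                   /* hypothetical */

/* Decide what to stream based on what was actually sampled.
 * resident_mip[i] holds the finest mip currently in memory for tile i
 * (0xFF meaning the tile is not resident at all). */
void update_residency(const struct tile_feedback *fb, size_t count,
                      const uint8_t *resident_mip)
{
    for (size_t i = 0; i < count; i++) {
        if (!fb[i].sampled)
            evict_tile(fb[i].x, fb[i].y);   /* nothing read it: candidate to drop */
        else if (fb[i].min_mip_used < resident_mip[i])
            stream_in_tile(fb[i].x, fb[i].y, fb[i].min_mip_used);  /* needs more detail */
    }
}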

Since Microsoft doesn't mention anything like the other PS5 IO complex features, it's reasonable to assume the Xbox Series X doesn't have those capabilities and its IO is largely managed by the CPU cores. But I wouldn't be too surprised to find out the Series X has a comparable DMA engine, because that kind of feature has historically shown up in many console architectures.


Comments

  • almighty15 - Sunday, June 14, 2020 - link

    It should read "By the time it ships, the PS5 SSD's read performance will be unremarkable – matched by other high-end SSDs ON PAPER"

    In terms of real world performance a 5 GB/s NVMe drive can't beat a 550 MB/s SATA III SSD, and yet AnandTech somehow thinks they'll compete with the consoles?
  • Eliadbu - Saturday, June 13, 2020 - link

    This change might, in a few years, make games require an SSD of a certain speed as a base requirement. I'm for it, since I feel my NVMe SSDs are no more helpful in gaming than my SATA SSD. In many cases you need the consoles to make a move for PCs to benefit, and I see this as definitely such a case.
  • vol.2 - Saturday, June 13, 2020 - link

    So, is the PS5 officially the ugliest console ever created?
  • wrkingclass_hero - Sunday, June 14, 2020 - link

    It's up there.
  • wolfesteinabhi - Saturday, June 13, 2020 - link

    With the main CPU/GPU being nearly identical in both camps, they are running out of ways to differentiate.

    Storage is important, though. But I wish they would stick to some common standard, and that over time we'd get games that can be played on either console, or the consoles could be HTPCs in themselves that do a lot more with their hardware than just games. Given the hardware they have, I feel it's a bit wasted when they are limited only to games (and a very limited amount at that, especially Sony/PS).
  • almighty15 - Sunday, June 14, 2020 - link

    I don't normally comment on articles like this, but I feel I have to due to the tone you have regarding PC 'catching up' to consoles.

    That is a long way away, I'm talking years! As I explained on Twitter, a 5 GB/s NVMe drive loads games no faster than a 550 MB/s SATA III SSD, and while some of that is down to having to cater to machines that still run mechanical drives, most of it is due to Windows just having file and I/O systems that are decades old.

    If we want to send a texture to a GPU on PC this is the 'hardware' path it has to take:

    SSD > system bus > Chipset (South bridge) > system bus > CPU > system bus > main RAM >
    system bus > CPU (North bridge) > PCIEX bus > VRAM

    To get a texture to GPU memory on PlayStation 5 it goes:

    SSD > system bus > I/O controller > system bus > VRAM

    On PC these system buses and chips all run and communicate with each other at different speeds which causes bottlenecks as data is moved through all that hardware.

    On PS5 the path is much more straightforward, and the I/O block runs at the same speed as the CPU clock, so it's all super fast and efficient.

    And then on PC there's the software side of it, which again is a HUGE problem that Microsoft can't fix with a little Windows update. The hardware on a PC cannot directly talk to other hardware, meaning your GPU cannot directly talk to the storage driver and ask it for a texture; it has to ask Windows, which then asks the chipset driver, which then asks the storage driver for a file.......

    It requires a complete RE-WRITE of Windows' storage drivers and kernel, which is a process that takes years, as they have to send any new ideas over to developers and software owners so they can do their own testing and plan patching their existing software.

    When Apple updated their file system for SSDs, it took them 3 years! And they have way less legacy hardware and fewer hardware configurations to worry about than Microsoft.

    There is currently nothing in the developer channels about a new version of Windows or a new file system in the works, meaning that it's at least 3-4 years away.

    This article doesn't even scratch the surface as to why storage and I/O are so slow and bottlenecked on PC, and makes it out to be a simple fix to get PC SSDs performing like the consoles.

    PCs will catch up, they always do, but do not let articles like this one trick you into thinking it's a quick fix, as it most certainly isn't.
  • eddman - Sunday, June 14, 2020 - link

    As mentioned in this very article, they have this new DirectStorage API on XSX and plan to bring it to Windows. They haven't released any specific details, but it might even be some sort of direct GPU/VRAM-to-storage solution.

    Whatever it is, it's surely bound to improve file transfer performance, and since it'd be part of the DirectX suite, developers should have an easy time taking advantage of it.
  • Billy Tallis - Sunday, June 14, 2020 - link

    Your description of how the data paths differ between a standard PC and the PS5 is wildly inaccurate.
  • eddman - Sunday, June 14, 2020 - link

    Yea, the path is wrong. For one, RAM is not connected to the CPU through the system bus. It's something like this for desktops, IINM:

    1. Intel/Ryzen (SSD connected to the chipset):
    SSD > PCIe > Chipset > system bus (DMI/PCIe x4) > CPU > memory channel > RAM > memory channel > CPU(*) > PCIe x16 > VRAM

    2. Ryzen (SSD connected directly to the CPU):
    SSD > PCIe > CPU > memory channel > RAM > memory channel > CPU(*) > PCIe x16 > VRAM

    (*) at this step, perhaps the CPU has already done the I/O calculations, so the data goes directly from the system RAM (through the memory controller and then PCIe x16) to the VRAM (without wasting CPU cycles)?

    (I don't know that much about hardware at such low levels, so please correct me if I'm wrong.)

    With a GPU-to-storage direct access, it should look like these:
    1. SSD > PCIe > Chipset > system bus (DMI/PCIe x4) > CPU(!) > PCIe x16 > VRAM
    2. SSD > PCIe > CPU(!) > PCIe x16 > VRAM

    The second option doesn't look much different from the PS5.

    (!) just passing through the CPU's System Agent/Infinity Fabric with minimal CPU overhead.
  • Billy Tallis - Sunday, June 14, 2020 - link

    It is important to make a distinction between data hitting the CPU die and data actually requiring attention from a CPU core. DMA is important! Data coming in from the SSD can be forwarded to RAM or to the GPU (P2P DMA) by the PCIe root complex without involvement from a CPU core. The CPU just needs to initiate the transaction and handle the completion interrupt (which often involves setting up the next DMA transfer).

    On the PS5, there will also be a DMA round-trip from RAM to the decompression unit back to RAM, with either a CPU core or the IO coprocessor setting up the DMA transfers.
