What To Expect From Next-gen Games

NVMe SSDs can easily be 50 times faster than hard drives for sequential reads and thousands of times faster for random reads. It stands to reason that game developers should be able to do things differently when they no longer need to target slow hard drives and can rely on fast storage. Workarounds for slow hard drive performance can be discarded, and new ideas and features that would never work on hard drives can be tried out. Microsoft and Sony are in pretty close agreement about what this will mean for the upcoming console generation, and they've touted most of the same benefits for end users.

Most of the game design changes enabled by abandoning hard drives will have little impact on the gaming experience from one second to the next; removing workarounds for slow storage won't do much to help frames per second, but it will remove some other pain points in the overall console experience. For starters, solid state drives can tolerate a high degree of fragmentation with no noticeable performance impact, so game files don't need to be defragmented after updates. Defragmentation is something most PC users no longer need to give even a passing thought to, but it's still an occasionally necessary (albeit automatic) maintenance process on current consoles.


Not as obsolete as you might have thought. But soon.

Since game developers no longer need to care so much about maintaining spatial locality of data on disk, it will also no longer be necessary for data that's reused in several parts of a game to be duplicated on several parts of the disk. Commonly re-used sounds, textures and models will only need to be included once in a game's files. This will have at least a tiny effect on slowing the growth of game install sizes, but it probably won't actually reverse that trend except where a studio has been greatly abusing the copy and paste features in their level editors.
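To make that concrete, here's a minimal sketch of the idea in Python, assuming a hypothetical list of asset file paths and an output directory (neither console vendor has published its packaging format): each asset is stored once, keyed by its content hash, and every level that reuses it just records the hash instead of carrying another copy.

```python
import hashlib
from pathlib import Path

def pack_assets(asset_paths, pack_dir):
    """Store each unique asset exactly once, keyed by content hash, and
    return a manifest mapping the original file names to stored blobs."""
    pack_dir = Path(pack_dir)
    pack_dir.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for path in asset_paths:
        data = Path(path).read_bytes()
        digest = hashlib.sha256(data).hexdigest()
        blob = pack_dir / digest
        if not blob.exists():           # first time this content is seen
            blob.write_bytes(data)      # duplicates are never written again
        manifest[str(path)] = digest    # every reference points at one copy
    return manifest
```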


Clone tool abuse

Warnings not to turn off the console while a game is saving first showed up when consoles moved away from cartridges with built-in solid state storage, and those warnings continue to be a hallmark of many console games and half-assed PC ports. The write speeds of SSDs are fast enough that saving a game takes much less time than reaching for a power switch, so ideally those warnings should become rarer, if not disappear for good.


How to spot a console port

But NVMe SSDs have write speeds that go far beyond even that requirement, and that allows changes in how games are saved. Instead of summarizing the player's progress in a file that is mere megabytes, new consoles will have the freedom to dump gigabytes of data to disk. All of the RAM used by a game can be saved to a NVMe SSD in a matter of seconds, like the save state features common to console emulators. If the static assets (textures, sounds, etc.) that are unmodified are excluded from the save file, we're back down to near instant save operations. But now the save file and in-use game assets can be simply read back into RAM to resume the whole game state in no more than a second or two, bypassing all the usual start-up and load work done by games.
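As a very rough sketch of that split (the GameState class, the asset IDs, and the load_asset callback are all invented for illustration, not any console's actual save system), the save file holds only the mutable state, while the read-only assets are simply re-read from the SSD on resume:

```python
import pickle

class GameState:
    """Toy container: mutable world data plus references to read-only assets."""
    def __init__(self, world, asset_ids):
        self.world = world          # changes every frame, so it must be saved
        self.asset_ids = asset_ids  # static on disk, so only the IDs are saved

def quick_save(state, path):
    # Persist the mutable data and the identifiers of the static assets.
    with open(path, "wb") as f:
        pickle.dump({"world": state.world, "assets": state.asset_ids}, f)

def quick_resume(path, load_asset):
    # Read the snapshot back, then pull the static assets straight off the SSD.
    with open(path, "rb") as f:
        snap = pickle.load(f)
    assets = {aid: load_asset(aid) for aid in snap["assets"]}
    return GameState(snap["world"], snap["assets"]), assets
```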


Xbox Series X Quick Resume menu

The deduplication of game assets is a benefit that will trivially carry over to PC ports, and the lack of defragmentation is something PC gamers with SSDs have already been enjoying for years—and neither of those changes requires a cutting edge SSD. The instant save and resume capabilities can work just fine (albeit not quite so "instant") on even a SATA SSD, but implementing this well requires a bit of help from the operating system, so it may be a while before this feature becomes commonplace on PC games. (Desktop operating systems have long supported hibernate and restore of the entire OS image, but doing it per-application is unusual.)

But those are all pretty much convenience features that do not make the core game experience itself any richer. The reduction or elimination of loading screens will be a welcome improvement for many games—but many more games have already gone to great lengths to eliminate loading screens as much as possible. This most often takes the form of level design that obscures what would have been a loading screen. The player's movement and field of view are temporarily constrained, drastically reducing the assets that need to stay in RAM and allowing everything else to be swapped out, while retaining some illusion of player freedom. Narrow hallways and canyons, elevator and train rides, and airlocks—frequently one-way trips—are all standard game design elements to make it less obvious where a game world is divided.


Level design for 64MB of RAM

Open-world games get by with fewer such design elements by making the world less detailed and restricting player movement speed so that no matter where the player chooses to move, the necessary assets can be streamed into RAM on the fly. With a fast SSD, game designers and players will both have more freedom. But some transition sequences will still be required for major scene changes. The consoles cannot reload the entire contents of RAM from one frame to the next; that will still take several seconds.

SSD As RAM?

Finally, we come to what may be the most significant consequence of making SSDs standard and required for games, but also the most overstated benefit: both Microsoft and Sony have made statements to the effect that the SSD can be used almost like RAM. Whatever qualifiers and caveats those statements came with quickly get dropped by fans and even some press. So let's be clear here: the console SSDs are no substitute for RAM. The PS5's SSD can supply data at 5.5 GB/s. The RAM runs at 448 GB/s, *81 times faster*. The consoles have 16 GB of GDDR6 memory. If a game needs to use more than 16 GB to render a scene, framerates will drop down to Myst levels because the SSD is not fast enough. The SSDs are inadequate in both throughput and latency.
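To put rough numbers on that gap, here's a back-of-the-envelope calculation (the 2 GB shortfall is an invented figure purely for illustration): any scene that has to pull even a modest amount of data from the SSD mid-frame blows far past a 60 fps frame budget.

```python
SSD_BW   = 5.5e9    # PS5 SSD throughput in bytes per second (Sony's raw figure)
RAM_BW   = 448e9    # GDDR6 bandwidth in bytes per second
FRAME_60 = 1 / 60   # ~16.7 ms frame budget at 60 fps

print(f"RAM is {RAM_BW / SSD_BW:.0f}x faster than the SSD")      # ~81x

# Hypothetical: a scene needs 2 GB of assets that did not fit in RAM.
shortfall_bytes = 2e9
stall = shortfall_bytes / SSD_BW          # time spent waiting on the SSD
print(f"Reading 2 GB from the SSD takes {stall * 1000:.0f} ms, "
      f"about {stall / FRAME_60:.0f} dropped frames at 60 fps")
```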

It's certainly possible for a level to use more than 16 GB of assets, but not all of them are on screen at once. The technical term here is the working set: how much memory is actively being used at any given moment. What the SSD changes somewhat is the threshold for what can be considered active. With a fast SSD, the assets that need to be kept in DRAM aren't much more than what's currently on screen, and the game doesn't need to prefetch very far ahead. Textures for an object in the same room but currently off-screen may be safe to leave on disk until the camera starts moving in that direction, whereas a hard drive-based system will probably need to keep assets for the entire room and adjacent rooms in RAM to avoid stuttering. This difference means an SSD-based console (especially one with NVMe performance) can free up some VRAM and allow for some higher-resolution assets. It's not a huge change; the SSD doesn't increase effective VRAM size by tens of GB, but it is very plausible that it lets games spend an extra few GB of RAM on on-screen content rather than on prefetched off-screen assets. Mark Cerny has summed this up by saying the game now only needs to prefetch about a second ahead, rather than about 30 seconds ahead.
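Here's a toy model of that prefetch-horizon idea (the Asset class, positions, speeds, and bandwidth numbers are all made up for illustration, not anyone's actual streaming system): the set of assets a game must keep resident or in flight is whatever the player could plausibly reach within the lookahead window, so shrinking that window from roughly 30 seconds to roughly 1 second shrinks the resident set dramatically.

```python
import math
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    pos: tuple        # world-space (x, y) position in metres
    size_bytes: int

def assets_to_prefetch(player_pos, player_speed, assets, lookahead_s, drive_bw):
    """Pick the off-screen assets to stream in: anything the player could reach
    within the lookahead window, capped by what the drive can deliver in time."""
    reach = player_speed * lookahead_s        # metres the player can cover
    budget = drive_bw * lookahead_s           # bytes the drive can supply
    nearby = sorted((a for a in assets if math.dist(player_pos, a.pos) <= reach),
                    key=lambda a: math.dist(player_pos, a.pos))
    chosen, used = [], 0
    for a in nearby:
        if used + a.size_bytes > budget:
            break                             # out of streaming budget
        chosen.append(a)
        used += a.size_bytes
    return chosen

# A 1 s horizon (fast SSD) keeps far less resident than a 30 s horizon (HDD).
world = [Asset("rock", (5, 0), 50_000_000), Asset("castle", (300, 0), 900_000_000)]
print([a.name for a in assets_to_prefetch((0, 0), 10, world, 1, 5.5e9)])   # ['rock']
print([a.name for a in assets_to_prefetch((0, 0), 10, world, 30, 100e6)])  # ['rock', 'castle']
```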


(Not to scale)

There's another layer to this benefit: partially resident textures have been possible on other platforms, but they become more powerful now. What was originally developed for multi-acre ground textures can now be used effectively on much smaller objects. Sampler feedback allows the GPU to provide the application with more detailed and accurate information about which portions of a texture are actually being displayed. The game can use that information to issue SSD read requests only for those portions of the file. This can be done at a granularity of 128 kB blocks of the (uncompressed) texture, which is small enough to allow for meaningful RAM savings by not loading texels that won't be used, while still issuing SSD reads that are large enough to be a good fit for SSD performance characteristics.
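A minimal sketch of that tile-level streaming loop follows, assuming a hypothetical texture file laid out as consecutive 128 kB tiles (real engines also deal with compression, mip levels, and GPU upload, none of which is shown here):

```python
TILE_BYTES = 128 * 1024   # the 128 kB block granularity mentioned above

def tiles_to_load(sampled_tiles, resident_tiles):
    """Sampler feedback reports which tiles were actually sampled this frame;
    only the ones not already resident in RAM need SSD reads."""
    return sorted(set(sampled_tiles) - set(resident_tiles))

def read_tiles(texture_file, tile_indices):
    """Issue one 128 kB read per missing tile of the (uncompressed) texture.
    `texture_file` is an open binary file laid out tile after tile."""
    tiles = {}
    for idx in tile_indices:
        texture_file.seek(idx * TILE_BYTES)
        tiles[idx] = texture_file.read(TILE_BYTES)
    return tiles

# Example: the GPU reports tiles 3, 4 and 9 were sampled; tile 3 is already loaded.
# with open("castle_wall_albedo.bin", "rb") as f:          # hypothetical file
#     new_tiles = read_tiles(f, tiles_to_load([3, 4, 9], [3]))
```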

Microsoft has stated that these capabilities add up to the effect of a 2x or 3x multiplier of RAM capacity and SSD bandwidth. I'm not convinced. Sure, a lot of SSD bandwidth can be saved over short timescales by incrementally loading a scene. But I doubt these features will allow the Series X with its ~10GB of VRAM to handle the kind of detailed scenery you could draw on a PC GPU with 24GB of VRAM. They're welcome to prove me wrong, though.

Comments

  • eddman - Monday, June 15, 2020 - link

    Yes, I added the CPU in the paths simply because the data goes through the CPU complex, but not necessarily through the cores.

    "Data coming in from the SSD can be forwarded .... to the GPU (P2P DMA)"

    You mean the data does not go through system RAM? The CPU still has to process the I/O related operations, right?

    It seems nvidia has tackled this issue with a proprietary solution for their workstation products:
    https://developer.nvidia.com/gpudirect
    https://devblogs.nvidia.com/gpudirect-storage/

    They talk about the data path between GPU and storage.

    "The standard path between GPU memory and NVMe drives uses a bounce buffer in system memory that hangs off of the CPU.

    GPU DMA engines cannot target storage. Storage DMA engines cannot target GPU memory through the file system without GPUDirect Storage.

    DMA engines, however, need to be programmed by a driver on the CPU."

    Maybe MS' DirectStorage is similar to nvidia's solution.
  • Oxford Guy - Monday, June 15, 2020 - link

    "Consoles" are nothing more than artificial walled software gardens that exist because of consumer stupidity.

    They offer absolutely nothing the PC platform can't offer, via Linux + Vulkan + OpenGL.

    Period.
  • Oxford Guy - Monday, June 15, 2020 - link

    "but also going a step beyond the PC market to get the most benefit out of solid state storage."

    In order to justify their existence. Too bad it doesn't justify it.

    It's more console smoke and mirrors. People fall for it, though.
  • Oxford Guy - Monday, June 15, 2020 - link

    Consoles made sense when personal computer hardware was too expensive for just playing games, for most consumers.

    Back in the day, when real consoles existed, even computer expansion modules didn't take off. Why? Cost. Those "consoles" were really personal computers. All they needed was a keyboard, writable storage, etc. But, people didn't upgrade ANY console to a computer in large numbers. Even the NES had an expansion port on the bottom that sat unused. Lots of companies had wishful thinking about turning a console into a PC, and some of them used that in marketing and were sued over vaporware/inadequate-and-delayed-ware (Intellivision).

    Just the cost of adding a real keyboard was more than consumers were willing to pay. Even inexpensive personal computers (PCs!) had chiclet keyboards, like the Atari 400. That thing cost a lot to build because of the stricter EMI emissions standards of its time, but Atari used a chiclet keyboard anyway to save money. Sinclair also used them. Many inexpensive "home" computers that had full-travel keyboards were so mushy they were terrible to use. Early home PCs like the VideoBrain highlight just how far companies would go to cut corners on the keyboard alone.

    Then, there is the writable storage. Cassettes were too slow and were extremely unreliable. Floppy drives were way too expensive for most PC consumers until the Apple II (where Wozniak developed a software controller to reduce cost a great deal vs. a mechanical one). They remained too expensive for gaming boxes, with the small exception of the shoddy Famicom FDS in Japan.

    All of these problems were solved a long time ago. Writable storage is dirt cheap. Keyboards are dirt cheap. Full-quality graphics display hardware is dirt cheap (as opposed to the true console days when a computer with more pixels/characters would cost a bundle and "consoles" would have much less resolution).

    The only thing remaining is the question: "Is the PC software ecosystem good enough?" The answer was a firm no when it was Windows + DirectX. Now that we have Vulkan, though, there is no need for DirectX. Developers can use either the low-latency lower-level Vulkan or the high-level OpenGL, depending upon their needs for specific titles. Consumers and companies don't have to pay the Microsoft tax because Linux is viable.

    There literally is no credible justification for the existence of non-handheld "consoles" anymore. There hasn't been for some time now. The hardware is the same. In the old days a console would have much less RAM memory, due to cost. It would have much lower resolution, typically, due to cost. It wouldn't have high storage capacity, due to cost.

    All of that is moot. There is NOT ONE IOTA of difference between today's "console" and a PC. The walled software garden can evaporate. All it takes is Dorothy to use her bucket of water instead of continuing to drink the Kool-Aid.
  • Oxford Guy - Monday, June 15, 2020 - link

    Back in the day:

    A console had:

    much lower-resolution graphics, designed for TV sets at low cost
    much less RAM
    no floppy drive
    no keyboard
    no hard disk

    A quality personal computer had:

    more RAM, plus expansion (except for Jobs perversities like the original Mac)
    80 column character-based or, later, high-resolution bitmapped monitor graphics
    (there were some home PCs that used televisions but had things like disk drives)
    floppy drive support
    hard disk support (except, again, for the first Mac, which was a bad joke)
    a full-travel full-size non-mushy keyboard
    expansion slots (typically — not the first Mac!)
    an operating system and first-party software (both of which cost)
    thick paperbook manuals
    typically, a more powerful CPU (although not always)

    Today:

    A console has:

    Nothing a PC doesn’t have except for a stupid walled software garden.

    A PC has:

    Everything a console has except for the ludicrous walled software garden, a thing that offers no added value for consumers — quite the opposite.
  • Oxford Guy - Monday, June 15, 2020 - link

    The common claim that "consoles" of today offer more simplicity is a lie, too.

    In the true console days, you'd stick a cartridge in, turn on the power, and press start.

    Today, just as with the "PC" (really the same thing) — you have a complex operating system that needs to be patched relentlessly. You have games that have to be patched relentlessly. You have microtransactions. You have log-ins/accounts and software stores. Literally, you have games on disc that you can't even play until you patch the software to be compatible with the latest OS DRM. Developers also helpfully use that as an opportunity to drastically change gameplay (as with PS3 Topspin) and you have no choice in the matter. Remember, it's always an "upgrade".

    The hardware is identical. Even the controllers, once one of the few advantages of consoles (except for some, like the Atari 5200, which were boneheaded), are the same. They use the same USB ports and such. There is no difference. Even if there were, the rise of Chinese manufacturing and the Internet means you could get a cheap and effective adapter with minimal fuss.

    You want fast storage so badly? You can get it on the PC. You want software that is honed to be fast and efficient? Easily done. It's all x86 stuff.

    Give me justified elaborate custom chips (not frivolous garbage like Apple's T2), truly novel form factors that are needed for special gameplay, and things like that and then, maybe, you might be able to sell to people on the higher end of the Bell curve.

    If I were writing an article on consoles I'd use a headline something like this: "Consoles of 2020: The SSD Speed Gimmick — Betting on the Bell Curve"

    It would be bad enough if there were only one extra stupid walled garden (beyond Windows + DirectX). But to have three is even more irksome.
  • edzieba - Monday, June 15, 2020 - link

    "partially resident textures"

    Megatexturing is back!

    "The most interesting bit about DirectStorage is that Microsoft plans to bring it to Windows, so the new API cannot be relying on any custom hardware and it has to be something that would work on top of a regular NTFS filesystem. "

    The latter does not imply the former. API support just means that the API calls will not fail. It doesn't mean they will be as fast as a system using dedicated hardware to handle those calls. Just like with DXR: you can easily support DXR calls on a GPU without dedicated BVH traversal hardware, they'll just be as slow as unaccelerated raytracing has always been.
    Soft API support for DirectStorage makes sense to aid in Microsoft's quest for 'cross play' between PC and Xbox. If the same API calls can be used for both, developers are more likely to put the work into implementing DirectStorage. As long as DirectStorage doesn't have too large a penalty when used on a PC without dedicated hardware, the reduction in dev overhead is attractive.
  • eddman - Monday, June 15, 2020 - link

    "The latter does not imply the former. API support just means that the API calls will not fail. It doesn't mean they will be as fast as a system using dedicated hardware to handle those calls."

    True, but apparently nvidia's GPUDirect Storage, which enables direct transfer between GPU and storage, is a software only solution and doesn't require specialized hardware.

    If that's the case, then there's a good chance MS' DirectStorage is a software solution too.

    As far as I can tell, the custom I/O chips in the XSX and PS5 are used to compress the assets to increase bandwidth, not to enable direct GPU-to-storage access.

    We'll know soon enough.
  • ichaya - Monday, June 15, 2020 - link

    You have to ask: what is causing low FPS for current-gen games? I think loading textures is by far the largest culprit, and even when it's only a few levels or a few sections of a few levels, it does affect the overall immersion and playability of games, which is where all of this storage tech should help.
  • Oxford Guy - Monday, June 15, 2020 - link

    I love how people forget how there is fast storage available on the "PC" (in quotes because, except for the Switch, these Sony/MS "consoles" are PCs with smoke and mirrors trickery to disguise that fact — the fact that all they are are stupidity taxes).

    Yes, stupidity taxes. That's exactly what "consoles" are, except for the Switch, which has a form factor that differs from PC.
