Image Quality - Xbox One vs. PlayStation 4

This is the big one. We’ve already established that the PS4 has more GPU performance under the hood, but how does that delta manifest in games? My guess is we’re going to see two different situations, the first being what we have here today. For the most part I haven’t noticed huge differences in frame rate between Xbox One and PS4 versions of the same game, but I have noticed appreciable differences in resolution/AA. This could very well be the One’s ROP limitations coming into play. Quality per pixel seems roughly equivalent across consoles; the PS4 just has an easier time delivering more of those pixels.
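
To put some rough numbers behind that theory, here’s a back-of-the-envelope sketch in C. The ROP counts and GPU clocks are the commonly quoted figures for each console (32 ROPs at 800 MHz for PS4, 16 ROPs at 853 MHz for Xbox One), not measurements from this article, and peak fill rate is only a crude proxy for delivered pixels:

    /* Peak pixel fill rate vs. render resolution, using commonly quoted
     * specs. A crude proxy: ignores memory bandwidth and shader load. */
    #include <stdio.h>

    int main(void) {
        const double ps4_fill = 32 * 0.800e9;  /* pixels per second */
        const double xb1_fill = 16 * 0.853e9;

        const double px_1080p = 1920.0 * 1080.0;  /* ~2.07 Mpixels */
        const double px_720p  = 1280.0 *  720.0;  /* ~0.92 Mpixels */

        printf("Fill rate ratio (PS4/XB1):      %.2fx\n", ps4_fill / xb1_fill);
        printf("Pixel count ratio (1080p/720p): %.2fx\n", px_1080p / px_720p);
        return 0;
    }

The resulting ~1.9x fill rate gap is at least consistent with the resolution splits we’re seeing in early cross-platform titles.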

The second situation could be one where an eager developer puts the PS4’s hardware to use and creates a game that scales not just in resolution but in other aspects of image quality as well. My guess is the titles that fall into this second category will end up being PS4 exclusives (e.g. Uncharted 4) rather than cross-platform releases. There’s little motivation for a cross-platform developer to spend a substantial amount of time optimizing for one console.

Call of Duty: Ghosts

Let’s start out with Call of Duty: Ghosts. Here I’m going to focus on two scenes: what we’ve internally been calling Let the Dog Drive, and our aliasing test. Once again, for some reason I wasn’t able to completely normalize black levels across the two consoles in Ghosts.

In motion both consoles look pretty good. You really start to see the PS4’s resolution/AA advantages at the very end of the sequence though (PS4 image sample, Xbox One image sample). The difference between the two obviously isn’t as great as the jump from the Xbox 360 to the Xbox One, but the PS4 has a definite resolution advantage. It’s even more obvious if you look at our aliasing test:

Image quality otherwise looks comparable between the two consoles.

NBA 2K14

NBA 2K14 is one cross-platform title where I swear I could sense slight frame rate differences between the two consoles (during high-quality replays), but it’s not something I managed to capture on video. Once again we find ourselves in a situation where there’s a difference in resolution and/or AA level between the Xbox One and PS4 versions of the game.

Both versions look great. I’m not sure how much of this is down to the next-gen consoles, since the last time I played an NBA 2K game was back in college, but console basketball games have improved significantly in realism over the past decade. On a side note, NBA 2K14 does seem to make good use of the impulse triggers on the Xbox One’s controller.



Battlefield 4

I grabbed a couple of scenes from early on in Battlefield 4. Once again the differences here are almost entirely limited to the amount of aliasing in the scene as far as I can tell, and the Xbox One version’s aliasing is definitely more distracting. In practice I notice the difference in resolution, but it’s never enough to force me to pick one platform over the other. I’m personally more comfortable with the Xbox One’s controller than the PS4’s, which makes for an interesting set of tradeoffs.

Comments

  • Flunk - Wednesday, November 20, 2013

    That's intensely stupid; you're saying that because something is traditional it has to be better. That's a silly argument, and not only that, it's not even true. The consoles you mentioned all have embedded RAM, but all the others from the same generations don't.

    At this point, arguing that the Xbox One is more powerful or even equivalently powerful is just trolling. The Xbox One and PS4 have very similar hardware, the PS4 just has more GPU units and a higher-performing memory subsystem.
  • 4thetimebeen - Saturday, November 23, 2013

    Flunk, if you're saying right now that the PS4 is more powerful, then obviously you're basing that on current spec sheets and not on the architectural design. What you don't understand is that everything underlying that new architectural design, which has to be learned at the same time it's being used, will only improve exponentially in the future. The PS4 is straightforwardly a PC with a small CPU modification to take better advantage of the GPU, but it's pretty much a straightforward old design, or better said a "current architecture" GPU design. That's the reason many say it's easier to program for than the Xbox One. But right now that "weaker system" you so much swear and affirm the Xbox One is has a couple of games designed for it from the ground up that are being claimed to be the most technically advanced-looking games on the market, and you can guess which ones I'm talking about; not even Sony's in-house first-party game "KSF" can compete with them in looks. I'm not saying it isn't awesome looking, it actually is, but even compared to Crysis 3 it falls short. So the PS4 is supposed to be easier to develop for, supposed to be more powerful and called a supercomputer, but when you look for that power gap in first-party games that had the time to invest in that power, the "weaker system" with the harder-to-develop architecture shows a couple of games that trounce what the "superior machine" was able to show. Hmmm, hopefully for you, time will tell and the games will tell the whole story!
  • Owls - Wednesday, November 20, 2013

    Calling people names? Haha. How utterly silly for you to say the two different RAM types can be added for a total of 274 GB/s. Hey guys, it looks like I have 14,400 RPM hard drives now too!
  • smartypnt4 - Wednesday, November 20, 2013

    Traditional cache-based architectures rely on all requests being serviced by the cache. This is slightly different, though. I'd be wary of adding both together, as there's no evidence that the SoC is capable of simultaneously servicing requests to both main memory and the eSRAM in parallel. Microsoft's marketing machine adds them together, but the marketing team doesn't know what the hell it's talking about. I'd wait for someone to reverse engineer exactly how this thing works before saying one way or the other, I suppose.

    It's entirely possible that Microsoft decided to let the eSRAM and main memory be accessed in parallel, but I kind of doubt it. There'd be so little return on the investment required to get that to work properly that it's not really worth the effort. I think it's far more likely that all memory requests get serviced as usual, but if the address is inside a certain range, the access is thrown at the eSRAM instead of the main memory. In this case, it'd be as dumb to add the two together as it would be to add cache bandwidth in a consumer processor like an i5/i7 to the bandwidth from main memory. But I don't know anything for sure, so I guess I can't say you don't get it (since no one currently knows how the memory controller is architected).
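
    To make that concrete, here's a minimal sketch (in C) of the address-window behavior I'm describing. It's entirely hypothetical: the window base address is made up, and only the 32 MB size comes from Microsoft's public materials:

        /* Hypothetical address-range routing: a request whose physical
         * address falls inside a fixed 32 MB window goes to eSRAM,
         * everything else to the DDR3 controller. If requests are
         * serviced one at a time like this, the two pools' peak
         * bandwidths would NOT simply add. */
        #include <stdint.h>
        #include <stdio.h>

        #define ESRAM_BASE 0x80000000ULL          /* made-up base address */
        #define ESRAM_SIZE (32ULL * 1024 * 1024)  /* 32 MB, per Microsoft */

        typedef enum { TO_ESRAM, TO_DDR3 } target_t;

        static target_t route(uint64_t phys_addr) {
            if (phys_addr >= ESRAM_BASE && phys_addr < ESRAM_BASE + ESRAM_SIZE)
                return TO_ESRAM;
            return TO_DDR3;
        }

        int main(void) {
            printf("%s\n", route(0x80000100ULL) == TO_ESRAM ? "eSRAM" : "DDR3");  /* eSRAM */
            printf("%s\n", route(0x00001000ULL) == TO_ESRAM ? "eSRAM" : "DDR3");  /* DDR3  */
            return 0;
        }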
  • hoboville - Thursday, November 21, 2013

    smartypnt4's description of eSRAM is very much how typical cache works in a PC, such as the L1, L2, and L3 caches. It should also be mentioned that L2 cache is almost always SRAM. This architecture behaves just like a typical CPU architecture, because that's what AMD's Jaguar is. Requests to addresses that aren't in the cache's address range get forwarded to the SDRAM controller. There's no way Microsoft redesigned the memory controller; that would require changing the base architecture of the APU.

    Parallel RAM access only exists in systems where there is more than one memory controller, or where the memory controller spans multiple channels. People who start adding bandwidths together don't understand computer architectures. These APUs are based on existing x86 architectures with some improvements (look up AMD Trinity). They're not like the previous-gen consoles, which used IBM POWER cores and are largely different.
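
    To illustrate why channel bandwidths add but separate pools don't: a single controller interleaves one address space across its channels, so ordinary accesses use all of them at once. A quick sketch in C using the commonly quoted Xbox One DDR3 figures (DDR3-2133 on a 256-bit bus, i.e. four 64-bit channels; ideal peak rates assumed):

        /* Aggregate DDR3 bandwidth across interleaved channels. */
        #include <stdio.h>

        int main(void) {
            const double transfers_per_s = 2133e6;  /* DDR3-2133 */
            const double bytes_per_xfer  = 8;       /* 64-bit channel */
            const int    channels        = 4;       /* 256-bit total bus */

            double per_channel = transfers_per_s * bytes_per_xfer / 1e9;
            printf("Per channel: %.1f GB/s\n", per_channel);             /* ~17.1 */
            printf("All four:    %.1f GB/s\n", per_channel * channels);  /* ~68.3 */
            return 0;
        }

    That ~68.3 GB/s is one coherent pool; bolting the eSRAM's separate peak number onto it is a different thing entirely.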
  • rarson - Saturday, November 23, 2013

    But Microsoft's chip isn't an APU, it's an SoC. There's silicon on the chip that isn't at all part of the Jaguar architecture. The 32 MB of eSRAM is not L2; Jaguar only supports up to 2 MB of L2 per four-core cluster. So it's not "just like a typical CPU architecture."

    What the hell does Trinity have to do with any of this? Jaguar has nothing to do with Trinity.
  • 4thetimebeen - Saturday, November 23, 2013

    Actually, and I apologize for butting in, but if you read the Digital Foundry interview with the Microsoft Xbox One architects, they heavily modified that GPU and it is a DUAL PIPELINE GPU! So your theory is not really far from the truth!
    The interview,
    http://www.eurogamer.net/articles/digitalfoundry-t...
  • 4thetimebeen - Saturday, November 23, 2013

    Plus, to add: the idea of adding the DDR3 to the eSRAM is kind of acceptable because, unlike the PS4's simple, straightforward design with its one pool of GDDR5, you have four modules of DDR3 running at 60-65 GB/s, and each can be used for specific simultaneous requests. That makes it a lot more advanced, behaving more like a future DDR4 setup, and it kills the bottleneck that people who don't understand it think it has. It's new tech, people, and it will take some time to learn its advantages, but it's not hard to program. It's a system designed to have fewer errors, be more effective, and perform way better than supposedly higher-FLOPS GPUs, because it can achieve the same performance with fewer resources! Hope you guys can understand a little; I'm not trying to offend anyone!
  • melgross - Wednesday, November 20, 2013

    You really don't understand this at all, do you?
  • fourthletter - Wednesday, November 20, 2013

    All the other consoles you mentioned (apart from the PS2) are based on IBM PowerPC chips; you are comparing their setup to x86 on the new consoles - silly boy.
