Power Consumption

As always I ran the Xbox One through a series of power consumption tests. I’ve described the tests below:

Off - Console is completely off, standby mode is disabled
Standby - Console is asleep, can be woken up by voice commands (if supported). Background updating is allowed in this mode.
Idle - Ethernet connected, no disc in drive, system idling at dashboard.
Load (BF4) - Ethernet connected, Battlefield 4 disc in drive, running Battlefield 4, stationary in test scene.
Load (BD Playback) - Ethernet connected, Blu-ray disc in drive, average power across Inception test scene.
CPU Load - SunSpider - Ethernet connected, no disc in drive, running SunSpider 1.0.2 in the web browser.
CPU Load - Kraken - Ethernet connected, no disc in drive, running Kraken 1.1 in the web browser.

Power Consumption Comparison
Total System Power      | Off   | Standby | Idle  | Load (BF4)  | Load (BD Playback)
Microsoft Xbox 360 Slim | 0.6W  | -       | 70.4W | 90.4W (RDR) | -
Microsoft Xbox One      | 0.22W | 15.3W   | 69.7W | 119.0W      | 79.9W
Sony PlayStation 4      | 0.45W | 8.59W   | 88.9W | 139.8W      | 98.0W

When I first saw the PS4’s idle numbers I was shocked. 80 watts is what our IVB-E GPU testbed idles at, and that’s with a massive 6-core CPU and a Titan GPU. Similarly, my Haswell + Titan CPU testbed has a lower idle power than that. The Xbox One’s numbers are a little better at 69W, but still 50 - 80% higher than I was otherwise expecting.

Standby power is also surprisingly high for the Xbox One. Granted, in this mode you can turn on the entire console by saying "Xbox On," but always-on voice recognition is something Motorola deployed on the Moto X within a far lower power budget.

The only good news on the power front is really what happens when the console is completely off. I’m happy to report that I measured between 0.22 and 0.45W of draw while off, far less than previous Xbox 360s.

Power under load is pretty much as expected. In general the Xbox One appears to draw ~120W under max load, which isn’t much at all. I’m actually surprised by the delta between idle power and loaded GPU power (only ~50W). In this case I’m wondering if Microsoft isn’t doing much power gating of unused CPU cores and/or GPU resources. The same is true for Sony on the PS4. It’s entirely possible that AMD hasn’t offered the same hooks into power management that you’d see on a PC equipped with an APU.

Blu-ray playback power consumption is more reasonable on the Xbox One than on the PS4. In both cases though the numbers are much higher than I’d like them to be.

I threw in some browser-based CPU benchmarks and power numbers as well. Both the Xbox One and PS4 ship with integrated web browsers. Neither experience is particularly well optimized for performance, but the PS4 definitely has the edge, at least in JavaScript performance.

Power Consumption Comparison
(Lower is Better)  | SunSpider 1.0.2 (Performance) | SunSpider 1.0.2 (Power) | Kraken 1.1 (Performance) | Kraken 1.1 (Power)
Microsoft Xbox One | 2360.9 ms                     | 72.4W                   | 111892.5 ms              | 72.9W
Sony PlayStation 4 | 1027.4 ms                     | 114.7W                  | 22768.7 ms               | 114.5W

Power consumption while running these CPU workloads is interesting. The marginal increase in system power consumption while running both tests on the Xbox One (roughly 3W over idle) indicates one of two things: either we’re only taxing 1 - 2 cores here, or Microsoft isn’t power gating unused CPU cores. I suspect it’s the former, since IE on the Xbox technically falls under the Windows kernel’s jurisdiction and I don’t believe it has more than 1 - 2 cores allocated for its needs.

The PS4 on the other hand shows a far bigger increase in power consumption during these workloads, roughly 26W over idle. Part of that comes with the higher performance, but it’s also possible that Sony is allowing apps access to more CPU cores.

There’s definitely room for improvement in driving down power consumption on both next-generation platforms. I don’t know that there’s huge motivation to do so outside of me complaining about it though. I would like to see idle power drop below 50W, standby power shouldn’t be anywhere near this high on either platform, and the same goes for power consumption while playing back a Blu-ray movie.

Comments

  • Flunk - Wednesday, November 20, 2013 - link

    That's intensely stupid; you're saying that because something is traditional it has to be better. That's a silly argument, and not only that, it's not even true. The consoles you mentioned all have embedded RAM, but all the others from the same generations don't.

    At this point, arguing that the Xbox One is more powerful or even equivalently powerful is just trolling. The Xbox One and PS4 have very similar hardware; the PS4 just has more GPU units and a higher-performing memory subsystem.
  • 4thetimebeen - Saturday, November 23, 2013 - link

    Flunk, if you're saying right now that the PS4 is more powerful, then obviously you're basing your info on the current spec sheet and not on the architectural design. What you don't understand is that the new architectural design underlying all of this has to be learned at the same time it's being used, and it will only improve in the future. The PS4 is pretty much a straightforward PC machine with a little mod to the CPU to take better advantage of the GPU, but it's still a current-architecture GPU design, which is the reason many say it's easier to program for than the Xbox One. Yet right now that "weaker system" you keep swearing by has a couple of games designed for it from the ground up that are being claimed as the most technically advanced-looking games on the market (you can guess which ones I'm talking about), games that even Sony's in-house first-party title "KSF" can't compete with in looks. I'm not saying KSF isn't awesome looking, it actually is, but even compared to Crysis 3 it falls short. So the PS4 is supposed to be easier to develop for, supposed to be more powerful, and gets called a supercomputer, yet when you look for that power gap in first-party games that had the time to invest in its power, the "weaker system" with the harder-to-develop architecture shows a couple of games that trounce what the "superior machine" was able to show. Hmmm, hopefully for you, time will tell and the games will tell the whole story!
  • Owls - Wednesday, November 20, 2013 - link

    Calling people names? Haha. How utterly silly for you to say the two different RAM types can be added for a total of 274GB/s. Hey guys, it looks like I have 14400 RPM hard drives now too!
  • smartypnt4 - Wednesday, November 20, 2013 - link

    Traditional cache-based architectures rely on all requests being serviced by the cache. This is slightly different, though. I'd be wary of adding both together, as there's no evidence that the SoC is capable of simultaneously servicing requests to both main memory and the eSRAM in parallel. Microsoft's marketing machine adds them together, but the marketing team doesn't know what the hell it's talking about. I'd wait for someone to reverse engineer exactly how this thing works before saying one way or the other, I suppose.

    It's entirely possible that Microsoft decided to let the eSRAM and main memory be accessed in parallel, but I kind of doubt it. There'd be so little return on the investment required to get that to work properly that it's not really worth the effort. I think it's far more likely that all memory requests get serviced as usual, but if the address is inside a certain range, the access is thrown at the eSRAM instead of the main memory. In this case, it'd be as dumb to add the two together as it would be to add cache bandwidth in a consumer processor like an i5/i7 to the bandwidth from main memory. But I don't know anything for sure, so I guess I can't say you don't get it (since no one currently knows how the memory controller is architected).
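
    Purely as an illustration of that last idea (none of these addresses, sizes, or names come from Microsoft; the real memory map isn't public), the "address window" scheme I'm describing would look something like this in C:

        /* Hypothetical sketch of address-window routing. The base address,
         * window size, and names below are made up for illustration only. */
        #include <stdint.h>
        #include <stdbool.h>

        #define ESRAM_BASE  0x80000000ULL          /* made-up base address */
        #define ESRAM_SIZE  (32ULL * 1024 * 1024)  /* 32 MB of eSRAM       */

        typedef enum { TARGET_DRAM, TARGET_ESRAM } mem_target;

        /* Each request is steered to exactly one backing store. */
        static mem_target route_request(uint64_t phys_addr)
        {
            bool in_esram_window = phys_addr >= ESRAM_BASE &&
                                   phys_addr <  ESRAM_BASE + ESRAM_SIZE;
            return in_esram_window ? TARGET_ESRAM : TARGET_DRAM;
        }

    If it works like that, every individual access hits exactly one pool, so quoting "DDR3 bandwidth + eSRAM bandwidth" as a single number only makes sense if the chip can actually keep both interfaces busy with different requests at the same time.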
  • hoboville - Thursday, November 21, 2013 - link

    smartypnt4's description of eSRAM is very much how typical cache works in a PC, such as L1, L2, L3. It should also be mentioned that L2 cache is almost always SRAM. Ultimately, this architecture is just like typical CPU architecture, because that's what AMD Jaguar is. Requests that fall outside the cached address range get forwarded to the SDRAM controller. There is no way Microsoft redesigned the memory controller; that would require changing the base architecture of the APU.

    Parallel RAM access only exists in systems where there is more than one memory controller or the memory controller is spanned across multiple channels. People who start adding bandwidth together don't understand computer architectures. These APUs are based on existing x86 architectures, with some improvements (look up AMD Trinity). They are not like the previous generation's chips, which used IBM POWER cores and were a largely different design.
  • rarson - Saturday, November 23, 2013 - link

    But Microsoft's chip isn't an APU, it's an SoC. There's silicon on the chip that isn't at all part of the Jaguar architecture. The 32 MB of eSRAM is not L2, Jaguar only supports L2 up to 2 MB per four cores. So it's not "just like a typical CPU architecture."

    What the hell does Trinity have to do with any of this? Jaguar has nothing to do with Trinity.
  • 4thetimebeen - Saturday, November 23, 2013 - link

    Actually, and I apologize for butting in, but if you read the Digital Foundry interview with the Microsoft Xbox One architects, they say they heavily modified that GPU and it is a DUAL PIPELINE GPU! So your theory is not really far away from the truth!
    The interview,
    http://www.eurogamer.net/articles/digitalfoundry-t...
  • 4thetimebeen - Saturday, November 23, 2013 - link

    Plus, to add: the idea of adding that DDR3 bandwidth to the eSRAM is kind of acceptable because, unlike the PS4's simple, straightforward design with its one pool of GDDR5, you have 4 modules of DDR3 running at 60-65GB/s, and each can be used for specific simultaneous requests, which makes it a lot more advanced and more like the way a future DDR4 setup would behave, and it kills the bottleneck that people who don't understand it think it has. It's a new tech, people, and it will take some time to learn its advantages, but it's not hard to program. It's a system designed to have fewer errors, be more effective, and perform way better than supposedly higher-FLOPS GPUs, because it can achieve the same performance with fewer resources! Hope you guys can understand a little, and I'm not trying to offend anyone!
  • melgross - Wednesday, November 20, 2013 - link

    You really don't understand this at all, do you?
  • fourthletter - Wednesday, November 20, 2013 - link

    All the other consoles you mentioned (apart from the PS2) are based on IBM PowerPC chips; you are comparing their setup to x86 on the new consoles - silly boy.
