Power & Thermals

Microsoft made a point to focus on the Xbox One’s new power states during its introduction. Remember that when the Xbox 360 was introduced, power gating wasn’t present in any shipping CPU or GPU architectures. The Xbox One (and likely the PlayStation 4) can power gate unused CPU cores. AMD’s GCN architecture supports power gating, so I’d assume that parts of the GPU can be power gated as well. Dynamic frequency/voltage scaling is also supported. The result is that we should see a good dynamic range of power consumption on the Xbox One, compared to the Xbox 360’s more on/off nature.

AMD’s Jaguar is quite power efficient, capable of low single-digit idle power, so I would expect far lower idle power consumption than even the current slim Xbox 360 (50W would be easy, 20W should be doable for truly idle). Under heavy gaming load I’d expect to see higher power consumption than the current slim Xbox 360, but still less than the original 2005 Xbox 360.
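As a back-of-the-envelope illustration of what those idle estimates could mean over a year, here’s a quick sketch. The 360 slim figure, daily idle hours, and electricity rate are my own assumptions, not measurements:

```python
# Back-of-the-envelope idle-power comparison. All figures are either the
# article's estimates or assumptions of mine, not measured numbers.
XBOX360_SLIM_IDLE_W = 70      # assumed: rough idle draw of a slim Xbox 360
XBOX_ONE_IDLE_W = 20          # the article's "doable" truly-idle estimate
IDLE_HOURS_PER_DAY = 4        # hypothetical usage pattern
KWH_PRICE_USD = 0.12          # assumed electricity rate

def annual_idle_kwh(watts, hours_per_day=IDLE_HOURS_PER_DAY):
    """Energy consumed per year while idling at `watts`."""
    return watts * hours_per_day * 365 / 1000

saved_kwh = annual_idle_kwh(XBOX360_SLIM_IDLE_W) - annual_idle_kwh(XBOX_ONE_IDLE_W)
print(f"~{saved_kwh:.0f} kWh/year saved, ~${saved_kwh * KWH_PRICE_USD:.0f}")
```

Under these assumptions the gap works out to roughly 73 kWh per year — small per console, but meaningful across an installed base of tens of millions.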

Compared to the PlayStation 4, Microsoft should have the cooler-running console under load. Fewer GPU ALUs and lower-power memory don’t help with performance, but they do at least offer one side benefit.

OS

The Xbox One is powered by two independent OSes running on a custom version of Microsoft’s Hyper-V hypervisor. Microsoft made the hypervisor very lightweight, and created hard partitions of system resources for the two OSes that run on top of it: the Xbox OS and the Windows kernel.

The Xbox OS is used to play games, while the Windows kernel effectively handles all apps (as well as things like some of the processing for Kinect inputs). Since both OSes are just VMs on the same hypervisor, they are both running simultaneously all of the time, enabling seamless switching between the two. With much faster hardware and more cores (8 vs 3 in the Xbox 360), Microsoft can likely dedicate Xbox 360-like CPU performance to the Windows kernel while running games without any negative performance impact. Transitioning in/out of a game should be very quick thanks to this architecture. It makes a ton of sense.

Similarly, you can now multitask with apps. Microsoft enabled Windows 8-like multitasking where you can snap an app to one side of the screen while watching a video or playing a game on the other.

I’d like to know more about how the hard partitioning of resources works. The easiest approach would be to dedicate one Jaguar compute module to each OS, but that might end up being overkill for the Windows kernel and insufficient for some gaming workloads. I suspect ~1GB of system memory ends up being carved off for Windows.
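The kind of static split speculated about above could look something like this sketch. Every figure here (core counts, the ~1GB carve-out, 8GB total) is a guess, not a confirmed spec:

```python
# Speculative sketch of a hard partition between the two OSes.
# All allocations below are guesses, not confirmed Xbox One figures.
from dataclasses import dataclass

@dataclass(frozen=True)
class Partition:
    name: str
    cpu_cores: int      # Jaguar cores pinned to this VM
    memory_gb: float    # system memory statically carved off at boot

TOTAL_CORES, TOTAL_MEMORY_GB = 8, 8.0

windows = Partition("Windows kernel", cpu_cores=2, memory_gb=1.0)
xbox_os = Partition("Xbox OS (games)",
                    cpu_cores=TOTAL_CORES - windows.cpu_cores,
                    memory_gb=TOTAL_MEMORY_GB - windows.memory_gb)

# A hard partition means allocations are static and must sum exactly --
# neither VM can borrow the other's resources at runtime.
assert windows.cpu_cores + xbox_os.cpu_cores == TOTAL_CORES
assert windows.memory_gb + xbox_os.memory_gb == TOTAL_MEMORY_GB
print(xbox_os)
```

The appeal of a fixed split is predictability: game developers get a guaranteed resource budget regardless of what apps are doing on the Windows side.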

Kinect & New Controller

All Xbox One consoles will ship with a bundled Kinect sensor. Game console accessories generally don’t do all that well if they’re optional. Kinect seemed to be the exception to the rule, but Microsoft is very focused on Kinect being a part of the Xbox going forward so integration here makes sense.

The One’s introduction was done entirely via Kinect enabled voice and gesture controls. You can even wake the Xbox One from a sleep state using voice (say “Xbox on”), leveraging Kinect and good power gating at the silicon level. You can use large two-hand pinch and stretch gestures to quickly move in and out of the One’s home screen.

The Kinect sensor itself is one of five semi-custom silicon elements in the Xbox One - the other four are: SoC, PCH, Kinect IO chip and Blu-ray DSP (read: the end of optical drive based exploits). In the One’s Kinect implementation Microsoft goes from a 640 x 480 sensor to 1920 x 1080 (I’m assuming 1080p for the depth stream as well). The camera’s field of view was increased by 60%, allowing support for up to 6 recognized skeletons (compared to 2 in the original Kinect). Taller users can now get closer to the camera thanks to the larger FOV; similarly, the sensor can be used in smaller rooms.
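The geometry behind "taller users can get closer" is simple to sketch. This assumes the original Kinect's published ~43° vertical FOV and applies the 60% increase to that axis (an assumption on my part; Microsoft didn't break the figure down per axis):

```python
import math

# How much closer can a tall user stand with a 60% larger field of view?
# Assumes ~43 degrees vertical FOV for the original Kinect (published spec)
# and applies the article's "60% larger" to the vertical axis -- an assumption.
def min_distance_m(user_height_m, vertical_fov_deg):
    """Closest distance at which a user of the given height fits in frame."""
    half_fov = math.radians(vertical_fov_deg / 2)
    return (user_height_m / 2) / math.tan(half_fov)

old = min_distance_m(1.9, 43.0)         # original Kinect
new = min_distance_m(1.9, 43.0 * 1.6)   # assumed new FOV
print(f"old: {old:.2f} m, new: {new:.2f} m")
```

Under those assumptions, a 1.9m-tall user could stand roughly a meter closer to the sensor than before, which is exactly why smaller rooms become viable.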

The Xbox One will also ship with a redesigned wireless controller with vibrating triggers:

Thanks to Kinect's higher resolution and more sensitive camera, the console should be able to identify who is gaming and automatically pair the user to the controller.

TV

The Xbox One features an HDMI input for cable TV passthrough (from a cable box or some other tuner with HDMI out). Content passed through can be viewed with overlays from the Xbox or just as you would if the Xbox weren’t present. Microsoft built its own electronic program guide that allows you to tune channels by name, not just channel number (e.g. say “Watch HBO”). The implementation looks pretty slick, and should hopefully keep you from having to switch inputs on your TV - the Xbox One should drive everything. Microsoft appears to be doing its best to merge legacy TV with the new world of buying/renting content via Xbox Live. It’s a smart move.
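Conceptually, tuning by name is just a lookup from guide data to a channel-change command. A toy sketch, with an entirely hypothetical lineup and command format (the real EPG and HDMI-CEC/IR plumbing are obviously far more involved):

```python
# Toy sketch of tuning by channel name rather than number.
# The lineup and the "TUNE" command format are hypothetical placeholders.
EPG = {  # hypothetical guide data: channel name -> channel number
    "HBO": 501,
    "ESPN": 206,
    "NFL NETWORK": 212,
}

def tune_command(utterance):
    """Map a voice command like 'Watch HBO' to a channel-change command."""
    normalized = utterance.upper()
    if not normalized.startswith("WATCH "):
        return None
    name = normalized[len("WATCH "):].strip()
    channel = EPG.get(name)
    return f"TUNE {channel}" if channel is not None else None

print(tune_command("Watch HBO"))  # -> "TUNE 501"
```

The value-add over a plain remote is entirely in the guide data: once channel names (and eventually show names) map to numbers, voice becomes the natural front end.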

One area where Microsoft is being a bit more aggressive is in its work with the NFL. Microsoft demonstrated fantasy football integration while watching NFL passed through to the Xbox One.

245 Comments

  • xaml - Thursday, May 23, 2013 - link

    If every third Xbox 360 user had to get at least one repaired, and after that one died, bought a new one until finally salvaged by the 'Slim'...
  • Niabureth - Wednesday, May 29, 2013 - link

    And just how do you expect them to do that? Decisions on what hardware to use were made long before Sony's PS4 presentation, meaning that train has already left the station. I'm guessing AMD is mass-producing the hardware by now. Microsoft: "Oh, we saw that Sony is going for a much more powerful architecture and we don't want any of the millions of APUs you've just produced for us!"
  • JDG1980 - Wednesday, May 22, 2013 - link

    If AMD is using Jaguar here, isn't that basically an admission that Bulldozer/Piledriver is junk, at least for gaming/desktop usage? Why don't they use a scaled-up Jaguar in their desktop APUs instead of Piledriver? The only thing Bulldozer/Piledriver seems to be good for is very heavily threaded loads - i.e. servers. Most desktop users are well served by even 4 cores, and it looks like they've already scaled Jaguar to 8. And AMD is getting absolutely killed on the IPC front on the desktop - if Jaguar is a step in the right direction then by all means it should be taken. BD/PD is a sunk cost, it should be written off, or restricted to Opterons only.
  • tipoo - Wednesday, May 22, 2013 - link

    Too big.
  • Slaimus - Wednesday, May 22, 2013 - link

    Bulldozer/Piledriver needs SOI. Steamroller is not ready yet, and it is not portable outside of GlobalFoundries' gate-first 28nm process. Jaguar is bulk 28nm and gate-last, which can be made by TSMC in large quantities at lower cost per wafer.
  • JDG1980 - Wednesday, May 22, 2013 - link

    All the more reason for AMD to switch to Jaguar in their mass-market CPUs and APUs.
    I'd be willing to bet money that a 4-core Jaguar clocked up to 3 GHz would handily beat a 4-module ("8-core") Piledriver clocked to 4 GHz. BD/PD is AMD's Netburst, a total FAIL of an architecture that needs to be dropped before it takes the whole company down with it.
  • Exophase - Wednesday, May 22, 2013 - link

    Jaguar can't be clocked at 3GHz - 2GHz is closer to the hard limit as far as we currently know. It's clock limited by design; just look at the cycle latencies of FPU operations. IPC is at best similar to Piledriver (in practice probably a little worse), so in tasks heavily limited by single-threaded performance Jaguar will do much worse. Consoles can bear limited single-threaded performance to some extent, but PCs can't.
  • Spunjji - Wednesday, May 22, 2013 - link

    It's effectively a low-power optimised Athlon 64 with added bits, so it's not going to scale any higher than Phenom did. That already ran out of steam on the desktop. Bulldozer/Piledriver may not have been the knockout blow AMD needed but they're scaling better than die-shrinking the same architecture yet again would have.
  • JDG1980 - Wednesday, May 22, 2013 - link

    Bobcat/Jaguar is a new architecture specifically designed for low-power usage. It's not the same as the K10 design, though it wouldn't surprise me if they did share some parts.
    And even just keeping K10 with tweaks and die-shrinks would have worked better on the desktop than the Faildozer series. Phenom II X6 1100T was made on an outdated 45nm process, and still beat the top 32nm Bulldozer in most benchmarks. A die-shrink to 28nm would not only be much cheaper to manufacture per chip than Bulldozer/Piledriver, but would perform better as well. It's only pride and the refusal to admit sunk costs that has kept AMD on their trail of fail.
  • kyuu - Wednesday, May 22, 2013 - link

    That's a nice bit of FUD there. K10 had pretty much been pushed as far as it was going to go. Die-shrinking and tweaking it was not going to cut it. AMD needed a new architecture.

    Piledriver already handily surpasses K10 in every metric, including single-threaded performance.
