Earlier this morning Microsoft lifted the embargo on press reviews of the Xbox One. I’ve been playing with final hardware and near final software for a few days now and I wanted to share some thoughts. This is by no means one of our usual thorough review jobs, just a side quest I found myself on over the past few days.

I like the look of the Xbox One. I wish it felt a little more durable, but perhaps that’s the mobile side of me speaking, where materials are a bigger deal. Sitting on a stand, desk or rack, viewed from across the room the One looks clean, simple and, honestly, it looks like an Xbox. The Xbox 360 was a journey into a weird sort of industrial design that was a significant departure from the original. The past couple of revisions of the 360 have moved towards sharper angles and away from the curves of the original 360. The One completes the journey back to its roots. Dare I say it almost looks like a PC, and if you crack open the chassis you’ll be reminded of the same.

I’ll start with IO on the sides and back. There’s a single USB 3.0 port on the left side of the chassis, with two more on the rear of the machine. Support for external storage is apparently on its way, as the Xbox One doesn’t allow end user upgrades of the internal 500GB hard drive. I have to say that I prefer Sony’s stance on this one.

Gigabit Ethernet and dual-band 802.11n WiFi handle Internet connectivity. I’m still shocked that the PS4 shipped with 2.4GHz only WiFi in 2013. On the AV front there’s an optical output and HDMI in/out. Kinect has its own port on the rear of the chassis and there’s an IR port as well. There’s a Kensington security slot to the right of all of the IO on the Xbox One.

The ring of light from the Xbox 360 is gone, replaced with a single, white, backlit Xbox logo on the front of the console. You’ll notice the controller position indicators are gone as well (not only from the One, but from the controllers themselves). A combination of the Kinect camera that comes with every Xbox One and IR transceivers in every controller is all you need to figure out player/controller mapping. Indeed, the Xbox One can actually log you into your appropriate Xbox Live account based on recognizing your face alone. I set the One up at my work area, so I had to awkwardly position my face in front of the Kinect camera to make the auto login work, but if you’ve got a more normal setup I can see this being supremely convenient. If you live in a household with multiple Xbox users, facial login will be one of the standout features of the new console. There’s a quick training process that you have to go through to have the console recognize your face, but after that I never had any issues using my face to log in. As long as I was sitting in front of the Kinect camera, I sort of forgot about needing to log in at all; it always just happened for me.
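
Out of curiosity about how this sort of profile matching can work in general, here’s a minimal sketch of the idea: enroll one face embedding per profile during the training step, then at sign-in compare the live camera embedding against each enrolled one and pick the closest match above a threshold. This is purely illustrative (the names, vectors and threshold are made up), not Microsoft’s implementation.

```python
import numpy as np

# Hypothetical enrolled profiles: profile name -> face embedding captured
# during the one-time training step (the vectors here are made up).
enrolled = {
    "Anand": np.array([0.12, 0.87, 0.33, 0.51]),
    "Guest": np.array([0.90, 0.10, 0.44, 0.05]),
}

def sign_in(live_embedding, threshold=0.9):
    """Return the profile whose enrolled embedding best matches the live
    capture, or None if nothing clears the similarity threshold."""
    best_name, best_score = None, -1.0
    for name, ref in enrolled.items():
        # Cosine similarity between the live capture and the enrolled face.
        score = float(np.dot(live_embedding, ref) /
                      (np.linalg.norm(live_embedding) * np.linalg.norm(ref)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

print(sign_in(np.array([0.11, 0.88, 0.30, 0.50])))  # -> Anand
```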

It’s very obvious to me that proper cooling and quiet operation were top priorities for the Xbox One. Big portions of the One’s top are covered in vents to provide air for the large fan inside. Plastic grilles adorn the sides as well. The One is larger than the PlayStation 4, despite having a lower system TDP, but the chassis size is designed to keep the internals cooler and the system quieter. It’s a tradeoff we’ve seen time and time again. While I do appreciate the PS4’s size and fully expected the Xbox One to seem huge, it absolutely doesn’t in practice.

I won’t talk too much about the Xbox One’s HDMI input. I cut the cord a few years ago, so I’m not really in the best position to comment on cable TV set-top box integration with the Xbox One. What I will say is that the One’s HDMI input can be used for just about anything. In testing the One, I actually had my PS4 plugged into the HDMI input and could quickly switch between consoles simply by saying “Xbox, watch TV”. The HDMI input properly (read: legally) handles HDCP content, so unfortunately you can’t use it by default to circumvent the HDCP protection that’s enabled on the PS4 at launch. The One’s HDMI output only applies HDCP to content that needs it; the dashboard, most apps and games stream through unencrypted. Unlike the PS4, the Xbox One does not support HDMI-CEC, relying instead exclusively on IR blasting to turn on your TV and cable box (if applicable).

Kinect and voice control are big parts of the Xbox One experience. Since every Xbox One comes with a Kinect in the box, developers can count on a 3D camera and always-on mic whenever they sell to an Xbox One customer. The early titles that I’ve played don’t really do a great job leveraging either of these things, but I suspect we’ll see some clever use cases in the future. I’ve never been a big Kinect user, but I did use the Xbox One’s voice control quite a bit. Just to set expectations, voice interaction with the Xbox isn’t as natural as what you’d see on an episode of Star Trek: TNG, but it’s not bad either. I found myself using voice as an augmentative interface rather than something that replaced the controller. In fact, I typically used voice control as another pair of hands to deal with the Xbox One’s UI while I was off doing something else. For me it was always quicker to hit the Xbox button to go home rather than telling the Xbox to go home, but for things like recording a clip of the last 30 seconds of gameplay the voice integration is irreplaceable. More than a few times I’d be particularly proud of something I did in Killer Instinct or Call of Duty, call out “Xbox, record that”, and it would immediately dump the last 30 seconds of gameplay into a temporary buffer. All of this would happen with no impact on game frame rate or performance. I’d just have to remember to go back into the Game DVR and actually save/commit these recordings, otherwise they’d eventually be overwritten if I kept playing. Unfortunately Microsoft’s Upload Studio, the application needed to share these recordings, won’t be available until the official launch of the console, so we’ll have to wait and see how all of that turns out.
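
The always-recording behavior is easiest to picture as a rolling buffer: the system keeps only the most recent stretch of footage, and anything you don’t explicitly save gets overwritten as new frames come in. Here’s a minimal sketch of that idea (the frame rate, buffer length and method names are assumptions for illustration, not Microsoft’s actual implementation):

```python
from collections import deque

FPS = 30
BUFFER_SECONDS = 30  # assumed rolling window, matching the "last 30 seconds"

class GameDVR:
    def __init__(self):
        # Fixed-length deque: once full, appending a new frame silently
        # drops the oldest one, which is why unsaved clips get overwritten.
        self.buffer = deque(maxlen=FPS * BUFFER_SECONDS)
        self.saved_clips = []

    def on_frame(self, frame):
        self.buffer.append(frame)

    def record_that(self):
        # "Xbox, record that": snapshot whatever is currently in the buffer.
        return list(self.buffer)

    def save(self, clip):
        # Committing the clip is what keeps it from being lost later.
        self.saved_clips.append(clip)

dvr = GameDVR()
for i in range(10_000):          # simulate a stream of gameplay frames
    dvr.on_frame(f"frame {i}")
clip = dvr.record_that()
dvr.save(clip)
print(len(clip))                 # 900 frames = the last 30 seconds at 30fps
```

The fixed-length buffer is what produces the save-it-or-lose-it behavior described above.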

I feel like we’re heading in the right direction as far as voice recognition goes, but we’re not quite there yet. The voice integration on the One still feels awkward and doesn’t make good use of natural, conversational language. I don’t want to feel like I’m issuing commands to my console; I want to sort of ask it to do things for me in whatever way I want and have it respond accordingly. “Hey Xbox, start downloading Dead Rising 3, and I want to play Battlefield 4 in the meantime” - get me there and we’re in business.

One feature I absolutely loved using the Kinect camera for was reading QR codes to activate downloads in the Xbox Store. Recognition is extremely quick, and it keeps me from having to type on the miserable on-screen keyboard (which, unlike the PS4’s, doesn’t even offer word suggestions).
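
To give a sense of why pointing a camera at a code beats typing a long token, here’s what decoding a QR code looks like with an off-the-shelf library on a PC (OpenCV in this case; obviously not what the console runs, and the file name is just a placeholder):

```python
import cv2  # pip install opencv-python

# Load a photo of a QR code (placeholder path) and try to decode it.
image = cv2.imread("xbox_code.png")
if image is None:
    raise SystemExit("couldn't read the image")

payload, points, _ = cv2.QRCodeDetector().detectAndDecode(image)
print("Decoded:", payload or "no QR code found")
```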

Those concerned about their privacy will be happy to know that Kinect isn’t required for use. You can boot the console and use it just fine even if you never connect Kinect. You will, however, get a little message on the dashboard telling you that Kinect is unplugged.

The UI and Multitasking

The Xbox UI is much improved over its predecessor. Multitasking is no longer a painful endeavor. You can quickly move between playing a game, changing settings or even messing around with other applications. If anything, the most visible feature of the next generation of game consoles is just how much better multitasking is. On the Xbox 360, if you wanted to mess around with any console settings while in a game, you’d have to quit the game entirely, and in some cases even log out of Xbox Live. With the One it’s all a matter of suspend and resume. It’s quick, unobtrusive and tremendously less frustrating.

Multitasking is also far less of a performance hog than on the 360. For starters, the One has a ton of x86 cores to handle multitasking (8 total, I believe 2 of which are dedicated to OS work), and it’s now running two independent OSes with a hypervisor managing both. The results are quite evident. If you’re coming from a 360, you’ll appreciate better graphics in games today, but you’ll probably love the fact that you no longer have to fight the UI as much.
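
The practical upshot of reserving cores for the system is that game code only ever gets scheduled on the remaining ones. A rough sketch of that partitioning idea on a desktop Linux box (the 6/2 split mirrors the arrangement described above; the API is just the nearest PC equivalent, not anything from the console SDK):

```python
import os

TOTAL_CORES = 8
SYSTEM_RESERVED = {6, 7}                 # assumed: two cores kept for the OS partition
GAME_CORES = set(range(TOTAL_CORES)) - SYSTEM_RESERVED

# Pin this process to the "game" cores only, leaving the reserved pair free
# for system/UI work (Linux-only call; purely to illustrate the partitioning).
os.sched_setaffinity(0, GAME_CORES)
print("Game threads may run on cores:", sorted(os.sched_getaffinity(0)))
```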

It’s not all roses though. I can definitely get the UI to drop frames, particularly when using the One’s new snap feature that lets you display two things on the screen at once. GPU power is time-sliced between the Windows kernel and the Xbox OS (10% is allocated to the Windows kernel). Microsoft hopes to eventually offer that remaining 10% up to game developers as well, but we’re not there yet. The time slice is quite obvious when you’re playing a game and Xbox notifications slowly animate at the bottom of the screen. When things do get sluggish, however, the One mostly seems to drop UI frames rather than increasing response time. I always dreaded doing anything while playing a game on the 360; the same is no longer true on the One.
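
That 10% reservation is easy to translate into per-frame GPU time. A quick worked example (the 10% figure comes from Microsoft; the rest is just arithmetic):

```python
SYSTEM_GPU_SHARE = 0.10   # slice reserved for the Windows/app side of the console

for fps in (30, 60):
    frame_ms = 1000.0 / fps
    reserved_ms = frame_ms * SYSTEM_GPU_SHARE
    game_ms = frame_ms - reserved_ms
    print(f"{fps} fps: {frame_ms:.2f} ms/frame, "
          f"{reserved_ms:.2f} ms reserved, {game_ms:.2f} ms left for the game")
# 30 fps: 33.33 ms/frame, 3.33 ms reserved, 30.00 ms left for the game
# 60 fps: 16.67 ms/frame, 1.67 ms reserved, 15.00 ms left for the game
```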

The Xbox One’s UI isn’t always incredibly straightforward either. It’s good that Microsoft has been able to so quickly embrace and deploy a common UI across all of its platforms, but I’m not sure that it’s necessarily the best UI out there. There’s a large horizontal list of tiles, reminiscent of Windows 8. You can pin individual tiles to the left side of the dashboard, and depending on what screen you’re in, the order of the tiles may change based on what you’ve done most recently. Transitions between tiles are also a bit odd. I expect to go into a tile when I select it, but in most cases the display will scroll up, transitioning to a new “window” when I activate a tile. The other thing I’d say is that the tiled interface isn’t particularly pretty. Go into the movie, music or game stores and you get beautifully integrated photos and artwork, but the bulk of what you stare at in the Xbox One UI is big blocks of color. I would’ve loved to have seen a more dramatic reimagining of what this could be (although it’s still worlds better than the Xbox 360’s UI).

The Controller

The Xbox One supports up to 8 wireless controllers, one of which is included in the box. By default the controllers take two AA batteries, although Microsoft will offer optional first party rechargeable batteries at launch. I didn’t get time to build a good battery life test for the One’s controllers, but Microsoft tells me you should expect somewhere around 30 hours of use on a single set of AAs.

The new controller is a clear evolution of the 360’s controller. There are three features that stood out to me in using it. For starters, the amount of initial resistance in each thumbstick has been dialed back considerably. The new thumbsticks are easier to use and less fatiguing as a result. Secondly, the new d-pad ditches the silly platform that the old d-pad rested on. Instead you get a standard plus arrangement, and each direction is accompanied by a shallow but pronounced click. It’s great for issuing commands, but for fighting games I’d prefer something a little softer than discrete clicks. I feel like Microsoft likely picked the right tradeoff here, as the days of Street Fighter, Mortal Kombat and Killer Instinct are long behind us, despite the latter making another appearance on the One.

The final improvement to the controller is the inclusion of what Microsoft is calling impulse triggers. The One’s controller features a total of four vibrating motors, one in each grip and one in each of the two big triggers (LT/RT). For games that choose to implement it, impulse trigger support is awesome. You get a subtle rumble that’s very focused on the triggers rather than your whole controller vibrating like crazy. I feel like force feedback in controllers is obviously here to stay, but it’s rarely used in a particularly subtle fashion. Instead what you normally run into is short bursts of feedback or long, agonizing vibration. NBA 2K14 appears to use the impulse triggers and the result is something in between, and something I really did appreciate.
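
Conceptually, developers can now drive four rumble motors independently instead of shaking the whole controller as one unit. A hypothetical sketch of what that looks like (the type and field names are invented for illustration and aren’t the actual Xbox One SDK):

```python
from dataclasses import dataclass

@dataclass
class Rumble:
    # Hypothetical per-motor intensities, 0.0 (off) to 1.0 (full strength).
    left_grip: float = 0.0
    right_grip: float = 0.0
    left_trigger: float = 0.0
    right_trigger: float = 0.0

def gunshot_feedback():
    # The subtle, trigger-focused effect: a strong pulse on the firing
    # trigger, barely anything in the grips.
    return Rumble(left_grip=0.05, right_grip=0.05, right_trigger=0.8)

def explosion_feedback():
    # The old-style whole-controller shake, for contrast.
    return Rumble(left_grip=1.0, right_grip=1.0,
                  left_trigger=1.0, right_trigger=1.0)

print(gunshot_feedback())
```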

For the first time since the introduction of the Xbox, Microsoft has done away with the back/start buttons, replacing them with view and menu buttons. They more or less serve the same functions; it’s just interesting to me how, every major generation, we come across an obvious desire to ditch select/start.

 

Comments

  • psychobriggsy - Wednesday, November 20, 2013 - link

    Shame that it can only use that ESRAM bandwidth on a total of 1/256th of the system's memory... so you need to account for that in your sums. I.e., it's useless for most things except small data areas that are accessed a lot (framebuffer, z-buffer, etc).
  • smartypnt4 - Wednesday, November 20, 2013 - link

    Except you just said it... You store what's used the most, and you get to realize a huge benefit from it. It's the same theory as a cache, but it gives programmers finer control over what gets stored there. Giving the developers the ability to choose what they want to put in the super low-latency, high bandwidth eSRAM is really a good idea too.

    Computer architecture is mainly about making the common case fast, or in other words, making the things that are done the most the fastest operations in the system. In this case, accessing the z-buffer, etc. is done constantly, making it a good candidate for optimization via placing it in a lower latency, higher bandwidth storage space.
  • cupholder - Thursday, November 21, 2013 - link

    LOL. No. The majority of things that actually affect quality and frame rate are going to be larger in size than the ESRAM. 192 ENTIRE 8GB vs. 204 for a dinky amount of that... It's painfully obvious what the bottlenecks will be. Oh... Forgot the whole PS4 running a 7850 compared to the XB1's 7770.. Oh, and the 8GB ram vs. 5 true GB of ram(3 OSs take up 3GB).

    With that said, get the console that your friends will play, or has the games you want... Anyone pretending the XB1 is better in raw power is deluding themselves(it's hardly even close).
  • smartypnt4 - Friday, November 22, 2013 - link

    I'm simply describing how the eSRAM should work, given that this should be a traditional PC architecture. Nowhere did I comment on which is the more powerful console. I really don't feel I'm qualified in saying which is faster, but the GPU seems to indicate it's the PS4, as you rightly said.

    Now, it is true that the PS4 has larger bandwidth to main memory. My point was that if the eSRAM has a good hit rate, let's say 80%, you'll see an effective speed of 0.8*204 = 163GB/s. This is a horrible measure, as it's just theoretically what you'll see, not accounting for overhead.

    The other difference is that GDDR5's timings make it higher latency than traditional DDR3, and it will be an order of magnitude higher in latency than the eSRAM in the XB1. Now, that's not to say that it will make a big difference in games because memory access latency can be hidden by computing something else while you wait, but still. My point being that the XB1 likely won't be memory bandwidth bound. That was literally my only point. ROP/memory capacity/shader bound is a whole other topic that I'm not going to touch with a 10-foot pole without more results from actual games.

    But yes, buy the console your friends play, or buy the one with the exclusives you want.
  • rarson - Saturday, November 23, 2013 - link

    It's not even close to a traditional PC architecture. I mean, it totally is, if you completely ignore the eSRAM and custom silicon on the die.

    Test after test after test after test has shown that latency makes practically zero impact on performance, and that the increased speed and bandwidth of GDDR5 is much more important, at least when it comes to graphics (just compare any graphics card that has a DDR3 and GDDR5 variant). Latency isn't that much greater for GDDR5, anyway.

    The eSRAM is only accessible via the GPU, so anything in it that the CPU needs has to be copied to DDR anyway. Further, in order to even use the eSRAM, you still have to put the data in there, which means it's coming from that slow-ass DDR3. The only way you'll get eSRAM bandwidth 80% of the time is if 80% of your RAM access is a static 32 MB of data. Obviously that's not going to be the majority of your graphics data, so you're not going to get anywhere near 80%.

    The most important part here is that in order for anyone to actually use the eSRAM effectively, they're going to have to do the work. Sony's machine is probably going to be more developer-friendly because of this. I can see how the eSRAM could help, but I don't see how it could possibly alleviate the DDR3 bottleneck. All of this is probably a moot point anyway, since the eSRAM seems to be tailored more towards all the multimedia processing stuff (the custom bits on the SoC) and has to be carefully optimized for developers to even use it anyway (nobody is going to bother to do this on cross-platform games).
  • 4thetimebeen - Saturday, November 23, 2013 - link

    I'm sorry to burst your bubble and I'm sorry to butt in but you are wrong about the eSRAM only available to the GPU cause if you look and read the digital foundry interview of the Microsoft Xbox One architectures and creators and the hot chips diagram IT SHOWS AND THEY SAID that the CPU has access to the eSRAM as well.
  • smartypnt4 - Monday, November 25, 2013 - link

    Yes, latency has very little impact on graphics workloads due to the ability to hide the latency by doing other work. Which is exactly what I said in my comment, so I'm confused as to why you're bringing it up...

    As far as the CPU getting access, I was under the impression that the XB1 and PS4 both have unified memory access, so the GPU and CPU share memory. If that's the case, then yes, the CPU does get access to the eSRAM.

    As far as the hit rate on that eSRAM, if the developer optimizes properly, then they should be able to get significant benefits from it. Cross platform games, as you rightly said, likely won't get optimized to use the eSRAM as effectively, so they won't realize much of a benefit.

    And yes, you do incur a set of misses in the eSRAM corresponding to first accesses. That's assuming the XB1's prefetcher doesn't request the data from memory before you need it.

    A nontrivial number of accesses from a GPU are indeed static. Things like the frame buffer and z-buffer are needed by every separate rendering thread, and hence may well be useful. 32MB is also a nontrivial amount when it comes to caching textures as well. Especially if the XB1 compresses the textures in memory and decodes them on the fly. If I recall correctly, that's actually how most textures are stored by GPUs anyway (compressed and then uncompressed on the fly as they're needed). I'm not saying that's definitely the case, because that's not how every GPU works, but still. 32MB is enough for the frame buffers at a minimum, so maybe that will help more than you think; maybe it will help far less than I think. It's incredibly difficult to tell how it will perform given that we know basically nothing about it.

    To actually say if eSRAM sucks, we need to know how often you can hit in the eSRAM. To know that, we need to know lots of things we have no clue about: prefetcher performance, how the game is optimized to make use of the eSRAM, etc.

    In general though, I do agree that the PS4 has more raw GPU horsepower and more raw memory bandwidth exposed to naive developers. My only point that I made was that the XB1 likely won't be that far off in memory bandwidth compared to the PS4 in games that properly optimize for the platform.

    There's a whole other thing about CPUs being very latency sensitive, etc., that I won't go into because I don't know nearly enough about it, but I think there's going to be a gap in CPU performance as well because things that are optimized to work on the XB1's CPU aren't going to perform the same on the PS4's, especially if they're using the CPU to decompress textures (which is something the 360 did).

    And with that, I reiterate: buy the console your friends buy or the one with the exclusives you want to play. Or if you're really into the Kinect or something.
  • Andromeduck - Wednesday, November 27, 2013 - link

    163 GB/s and hogging the main memory bandwidth - that data doesn't just magically appear
  • smartypnt4 - Wednesday, November 20, 2013 - link

    Also, not saying the guy above you isn't an idiot for adding the two together. The effective rate Anand quotes takes into account approximately how often you go to the eSRAM vs. going all the way out to main memory. The dude above you doesn't get it.
  • bill5 - Wednesday, November 20, 2013 - link

    yes i do get it, dork.

    small caches of high speed memory are the norm in console design. ps2, gamecube, wii, x360, wii u, on and on.

    the gpu can read from both pools at once so technically they can be added. even if it's not exactly the same thing.

    peak bw, xone definitely has an advantage on ps4, especially on a per-flop basis due to feeding a weaker gpu to begin with.
