Final Words

It’s nearly impossible for the Xbox One not to be a substantial upgrade over the Xbox 360. The fact that Microsoft could ship a single integrated SoC instead of a multi-chip CPU+GPU solution this generation is telling enough. You don’t need to integrate anywhere near the fastest CPUs and GPUs to outperform the Xbox 360, something closer to the middle of the road works just fine.

Microsoft won’t have any issues delivering many times the performance of the Xbox 360. The Xbox One features far more compute power and memory bandwidth than the Xbox 360. Going to 8GB of RAM is also a welcome upgrade, especially since it’s identical to what Sony will ship on the PlayStation 4. As AMD is supplying relatively similar x86 CPU and GCN GPU IP to both consoles, porting between them (and porting to PCs) should be far easier than ever before. The theoretical performance comparison between the two next-gen consoles is where things get a bit sticky.

Sony gave the PS4 50% more raw shader performance, plain and simple (1152 SPs on the PS4 vs. 768 SPs on the Xbox One, both at 800MHz). Unlike last generation, you don't need to be some sort of Jedi to extract the PS4's potential here. The Xbox One and PS4 architectures are quite similar; Sony just has more hardware under the hood. We’ll have to wait and see how this hardware delta gets exposed in games over time, but the gap is definitely there. The funny thing about game consoles is that it’s usually the lowest common denominator that determines the bulk of the experience across all platforms.
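That 50% figure falls straight out of the shader counts. A minimal back-of-the-envelope sketch, using the SP counts and 800MHz clock quoted above (GCN stream processors execute one fused multiply-add, i.e. 2 FLOPs, per clock):

```python
# Theoretical peak shader throughput: SPs * clock * 2 FLOPs (FMA) per clock.
def peak_gflops(shader_count, clock_mhz, flops_per_clock=2):
    return shader_count * clock_mhz * flops_per_clock / 1000.0

xbox_one = peak_gflops(768, 800)    # ~1.23 TFLOPS
ps4 = peak_gflops(1152, 800)        # ~1.84 TFLOPS
print(ps4 / xbox_one)               # ratio of raw shader throughput: ~1.5x
```

These are peak numbers only; sustained throughput depends on occupancy, bandwidth and the workload itself.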

On the plus side, the Xbox One should enjoy better power/thermal characteristics compared to the PlayStation 4. Even compared to the Xbox 360 we should see improvement in many use cases thanks to modern power management techniques.

Differences in the memory subsystems also give us some insight into each company’s approach to the next-gen consoles. Microsoft opted for embedded SRAM + DDR3, while Sony went for a very fast GDDR5 memory interface. Sony’s approach (especially when combined with a beefier GPU) is exactly what you’d build if you wanted to give game developers the fastest hardware. Microsoft’s approach, on the other hand, looks a little broader. The Xbox One still gives game developers a significant performance boost over the previous generation, but also attempts to widen the audience for the console. It’s a risky strategy for sure, especially given the similarities in the underlying architectures between the Xbox One and PS4. If the market for high-end game consoles has already hit its peak, then Microsoft’s approach is likely the right one from a business standpoint. If the market for dedicated high-end game consoles hasn’t peaked however, Microsoft will have to rely even more on the Kinect experience, TV integration and its exclusive franchises to compete.
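The bandwidth gap between the two memory designs is easy to quantify. A quick sketch using the publicly reported figures (DDR3-2133 and GDDR5 at 5.5GT/s, both on 256-bit buses; the 1024-bit eSRAM path is an assumption consistent with the commonly quoted 102.4 GB/s figure):

```python
# Peak bandwidth from bus width (bits) and data rate (MT/s):
# bytes per transfer (bits / 8) * transfers per second.
def bandwidth_gbs(bus_bits, mt_per_s):
    return bus_bits / 8 * mt_per_s / 1000.0

ddr3 = bandwidth_gbs(256, 2133)     # Xbox One DDR3-2133: ~68.3 GB/s
esram = bandwidth_gbs(1024, 800)    # Xbox One eSRAM: 102.4 GB/s
gddr5 = bandwidth_gbs(256, 5500)    # PS4 GDDR5: 176.0 GB/s
```

Note that the Xbox One's two pools aren't simply additive in practice: the eSRAM is only 32MB, so developers have to manage what lives there, whereas the PS4's 176 GB/s applies to its entire 8GB.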

Arguably the most interesting thing in all of this is the dual-OS + hypervisor software setup behind the Xbox One. With the Windows kernel running alongside the Xbox OS, I wonder how much of a stretch it would be to one day bring the same setup to PCs. Well before the Xbox One hits the end of its life, mainstream PC APUs will likely be capable of delivering similar performance. Imagine a future Surface tablet capable of doing everything your Xbox One can do. That's really the trump card in all of this. The day Microsoft treats Xbox as a platform and not a console is the day that Apple and Google have a much more formidable competitor. The Xbox One at least gets the software architecture in order; next we need PC/mobile hardware to follow suit, and finally for Microsoft to come to this realization and actually make it happen. We already have the Windows kernel running on phones, tablets, PCs and the Xbox; now we just need the Xbox OS across all platforms as well.


245 Comments


  • JDG1980 - Wednesday, May 22, 2013 - link

    In terms of single-threaded performance *per clock*, Thuban > Piledriver. Sure, if you crank up the clock rate *and the heat and power consumption* on Piledriver, you can barely edge out Deneb and Thuban on single-threaded benchmarks. But if you clock them the same, the Thuban uses less power, generates less heat, and performs better. Tom's Hardware once ran a similar test with Netburst vs Pentium M, and their conclusion was quite blunt: the test called into question the P4's "right to exist". The same is true of the Bulldozer/Piledriver line.
    And I don't buy the argument that K10 is too old to be fixable. Remember that Ivy Bridge and Haswell are part of a line stretching all the way back to the original Pentium Pro. The one time Intel tried a clean break with the past (Netburst) it was an utter fail. The same is true of AMD's excavation equipment line and for the same reason - IPC is terrible so the only way to get acceptable performance is to crank up clock rate, power, noise, and thermals.
    Reply
  • silverblue - Wednesday, May 22, 2013 - link

    It's true that K10 is generally more effective per clock, but look at it this way - AMD believed that the third AGU was unnecessary as it was barely used, much like when VLIW4 took over from VLIW5 as the average slot utilisation within a streaming processor was 3.4 at any given time. Put simply, they made trade-offs where it made sense to make them. Additionally, K10 was most likely hampered by its 3-issue front end, but it also lacked a whole load of ISA extensions - SSE4.1 and 4.2 are good examples.

    Thuban compares well with the FX-8150 in most cases and favourably so when we're considering lighter workloads. The work done to rectify some of Bulldozer's ills shows that Piledriver is not only about 7% faster per clock, but can clock higher within the same power envelope. AMD was obviously aiming for more performance within a given TDP. The FX-83xx series is out of reach of Thuban in terms of performance.

    The 6300 compares with the 1100T BE as such:

    http://www.cpu-world.com/Compare/316/AMD_FX-Series...

    Oddly, one of your arguments for having a Thuban in the first place was power consumption. The very reason a Thuban isn't clocked as high as the top X4s is to keep power consumption in check. Those six cores perform very admirably against even a 2600K in some circumstances, and generally with Bulldozer and Piledriver you'd look to the FX-8xxx CPUs if comparing with Thuban, however I expect the FX-6350 will be just enough to edge the 1100T BE in pretty much any area:

    http://www.cpu-world.com/Compare/321/AMD_FX-Series...

    The two main issues with the current "excavation equipment line" as you put it are a lack of single threaded power, plus the inherent inability to switch between threads more than once per clock - clocking Bulldozer high may offset the latter in some way but at the expense of power usage. The very idea that Steamroller fixes the latter with some work done to help the former, and that Excavator improves IPC whilst (supposedly) significantly reducing power consumption should be evidence enough that whilst it started off bad, AMD truly believes it will get better. In any case, how much juice does anybody expect eight cores to use at 4GHz with a shedload of cache? Does anybody remember how hungry Nehalem was, let alone P4?

    I doubt that Jaguar could come anywhere near even a downclocked A10-4600M. The latter has a high-speed dual channel architecture and a 4-issue front end; to be perfectly honest, I think that even with its faults, it would easily beat Jaguar at the same clock speed.

    Tacking bits onto K10 is a lost cause. AMD doesn't have the money, and even if it did, Bulldozer isn't actually a bad idea. Give them a chance - how much faster was Phenom II over the original Phenom once AMD worked on the problem for a year?
    Reply
  • Shadowmaster625 - Wednesday, May 22, 2013 - link

    Yeah but AMD would not have stood still with K10. Look at how much faster Regor is compared to the previous Athlon:

    http://www.anandtech.com/bench/Product/121?vs=27

    The previous Athlon had a higher clock speed and the same amount of cache, but Regor crushes it by almost 30% in Far Cry 2. It is 10% faster across the board despite being lower clocked and consuming far less power. Had they continued with Thuban it is possible they would have continued to squeeze 10% per year out of it as well as reduce power consumption by 15%, which if you do the math leaves us with something relatively competitive today. Not to mention they would have saved a LOT of money. They could have easily added AVX or any other extensions to it.
    Reply
  • Hubb1e - Wednesday, May 22, 2013 - link

    Per clock Thuban > Piledriver, but power consumption favors Piledriver. Compare two chips of similar performance. The PhII 965 is a 125W CPU and the FX4300 is a 95W CPU and they perform similarly with the FX4300 actually beating the PhII by a small margin. Reply
  • kyuu - Wednesday, May 22, 2013 - link

    ... Lol? You can't simply clock a low-power architecture up to 4GHz. Even if you could, a 4GHz Jaguar-based CPU would still be slower than a 4GHz Piledriver-based one.

    Jaguar is a low-power architecture. It's not able (or meant to) compete with full-power CPUs in raw processing power. It's being used in the Xbox One and PS4 for two reasons: power efficiency, and cost. It's not because of its processing power (although it's still a big step up from the CPUs in the 360/PS3).
    Reply
  • plcn - Wednesday, May 22, 2013 - link

    BD/PD have plenty of viability in big power envelope, big/liquid cooler, desktop PC arrangements. consoles aspire to be much quieter, cooler, energy efficient - thus the sensible jaguar selection. even the best ITX gaming builds out there are still quite massive and relatively unsightly vs what seems achievable with jaguar... now for laptops on the other hand, a dual jaguar 'netbook' could be very very interesting. you can probably cook your eggs on it, too, but still interesting.. Reply
  • lmcd - Wednesday, May 22, 2013 - link

    It isn't a step in the right direction in IPC. Piledriver is 40% faster than Jaguar at the same clocks and also clocks higher.

    Stop spreading the FUD about Piledriver -- my A8-4500m is a very solid processor with very strong graphics performance and excellent CPU performance for all but the most taxing tasks.
    Reply
  • lightsout565 - Wednesday, May 22, 2013 - link

    Pardon my ignorance, but what is the "Embedded Memory" used for? Reply
  • tipoo - Wednesday, May 22, 2013 - link

    It's a fast memory pool for the GPU. It could help by holding the framebuffer or caching textures etc. Reply
  • BSMonitor - Wednesday, May 22, 2013 - link

    Embedded memory latency is MUCH closer to L1/L2 cache latency than system memory. System memory is Brian and Stewie taking the airline to Vegas vs the Teleporter to Vegas that would be cache/embedded memory... Reply
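To put rough numbers on the embedded-memory discussion in the comments above: the Xbox One's eSRAM pool is 32MB, and a quick sanity check shows a full 1080p render target fits inside it with room to spare (the buffer formats below are illustrative assumptions, not a statement of what any particular engine does):

```python
# Does a 1080p render target fit in 32 MB of embedded memory?
def buffer_mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 * 1024)

color = buffer_mb(1920, 1080, 4)    # 32-bit color buffer: ~7.9 MB
depth = buffer_mb(1920, 1080, 4)    # 32-bit depth/stencil: ~7.9 MB
print(color + depth)                # ~15.8 MB total, well under 32 MB
```

The fit gets much tighter once engines add multiple render targets for deferred shading or MSAA, which is exactly the kind of budgeting developers will have to do on this hardware.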
