Kabini Windows 8 Laptop Performance

With the SoC and “lighter device” benchmarks out of the way, let’s also look at what Kabini offers for a full laptop experience. Let me preface this section by stating that many of our laptop benchmarks really aren’t a good fit for an APU like Kabini—e.g. doing 3D rendering or x264 HD encoding on such a chip is just asking for poor performance. We’re also looking at different OS configurations (Windows 7 vs. Windows 8, IE8/9 vs. IE10), so the margin for error is slightly higher here.

Our current list of laptops includes AMD's Brazos E-350 (MSI X370), Kabini A4-5000, and Trinity A10-4600M; on the Intel front we have the i7-3517U (Dell XPS 12) and Pentium 2020M (a late addition, as we only managed to get a laptop for short-term testing). Both of the Intel chips are 22nm parts, but note that the Pentium is a 35W part. Sadly, we have not yet been able to get a Pentium 2117U as a comparison. [Note: Some laptops are still being tested on some of the benchmarks; their scores will be added/updated as they complete.]

We do want to see what sort of gains are present relative to Brazos, however, so let’s get started. We’re presenting an abbreviated look at performance here, but we have the full set of benchmark results in Mobile Bench, including some of our older benchmarks that we’ve run against Brazos and other laptops prior to 2013. There are two main questions to consider for each benchmark: how much faster is Kabini than Brazos (and where does it place relative to other options), and does Kabini provide enough performance to handle the task represented by the benchmark?

[Charts: PCMark 7 (2013); Cinebench R11.5 Single-Threaded; Cinebench R11.5 Multi-Threaded; x264 HD 5.x (two charts)]

Starting with PCMark 7, we have both HDD and SSD results. As usual, the presence of an SSD boosts the overall score by more than 50%, so Kabini with an SSD can feel far more responsive than Ivy Bridge with an HDD, depending on the task. Relative to Brazos, with an HDD in both laptops, Kabini is nearly 50% faster. ULV Ivy Bridge, on the other hand, offers twice the performance of Kabini in the overall score, though Quick Sync skews that pretty heavily. Looking at the individual results, ULV IVB is around 30-50% faster on most CPU tasks, and it’s even a bit faster on the GPU side in most areas, as we’ll see in a moment.

Update: We've added the Pentium 2020M to the above charts; it lacks Quick Sync support and runs at 2.4GHz with no Turbo Boost. It's clearly slower than the i7-3517U in the Dell XPS 12, but it's still a healthy step up from Kabini in terms of performance. The 2020M is a full 35W part, like the A10-4600M, and it tends to slightly outperform Trinity on CPU tasks while trailing in GPU performance. Against Kabini, however, even the Pentium 2020M leads in nearly every performance metric.

The x264 HD 5.x and Cinebench results confirm the CPU deficit AMD faces with Kabini. In heavily threaded workloads, ULV Ivy Bridge is 50-100% faster, but the real problem is in single-threaded workloads. A single Jaguar core in Cinebench manages a score of just 0.39 compared to ULV IVB’s 1.24, so worst-case Kabini is roughly one third the speed of Ivy Bridge. Standard voltage Trinity APUs are likewise a big step up from Kabini, offering roughly twice the CPU performance in some cases. Of course, standard Trinity also tends to draw far more power than Kabini.
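If you want the quick math behind that "one third" claim, here's a minimal sketch using only the Cinebench scores quoted above:

```python
# Ratio of the Cinebench R11.5 single-threaded scores quoted above.
kabini_st = 0.39    # A4-5000 (one Jaguar core)
ivb_ulv_st = 1.24   # i7-3517U (ULV Ivy Bridge)

print(f"Kabini per-core speed: {kabini_st / ivb_ulv_st:.0%} of ULV IVB")
# -> Kabini per-core speed: 31% of ULV IVB, i.e. roughly one third
```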

[Charts: Futuremark 3DMark (2013), three charts; Futuremark 3DMark 11; Futuremark 3DMark06]

Quickly looking at the 3DMark results: if you were hoping Kabini would be fast enough to handle modern games at moderate detail settings, the relative standings in 3DMark should help prep you for what's to come. The A10 Trinity can handle many titles at moderate detail, but even it struggles with many of the latest releases. Kabini has about a third of Trinity's total GPU compute performance, and while it fares a bit better than that in some games, for the most part it's best suited to older games that don't require as much from the CPU or GPU.
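That "about a third" figure also lines up with theoretical peak compute. As a back-of-the-envelope sketch (the shader counts and clocks below are published specs for the HD 8330 in the A4-5000 and the HD 7660G in the A10-4600M, not numbers from our testing):

```python
# Peak single-precision throughput: shaders * 2 FLOPs (multiply-add) * clock.
# Assumed specs: HD 8330 = 128 shaders @ 500MHz; HD 7660G = 384 shaders
# @ 497MHz base clock (it can boost higher, which only widens the gap).
def peak_gflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz

kabini_gpu = peak_gflops(128, 0.500)   # 128 GFLOPS
trinity_gpu = peak_gflops(384, 0.497)  # ~382 GFLOPS
print(f"Kabini/Trinity compute ratio: {kabini_gpu / trinity_gpu:.2f}")  # ~0.34
```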

As for Intel's chips, while the Core i7 ULV part ends up faster than Kabini, the same can't be said of the Pentium 2020M: it ties Kabini in some tests but falls behind (sometimes significantly, e.g. in 3DMark 11) in others. Since neither chip is really fit for high-end graphics work, it's not a major concern; if you want decent graphics performance, you're going to want more than either Kabini or Ivy Bridge has to offer.

Comments

  • HisDivineOrder - Thursday, May 23, 2013 - link

    Given AMD's traditional design wins and how those systems end up, I suspect this is not going to matter much. I have more hope of Bay Trail providing a solid deal for once than I do this.

    It's a shame because this really should be AMD's niche to dominate, but I doubt any OEM'll give them a serious try.
  • Desperad@ - Thursday, May 23, 2013 - link

    On competitive positioning, is it even near IB Pentium?
  • brainee - Saturday, May 25, 2013 - link

    I think so, yes. The IB Pentium 2117U (17 Watt TDP) should be around 33% faster in legacy Intel-optimised CPU benchmarks, doing the math and according to, say, Techspot. I would think ULV Pentiums are more expensive for OEMs; notebooks are a different story. Not to mention Kabini should cost a fraction to make for AMD compared with even crippled 2C Ivy Bridges aka Celeron/Pentium. Kabini wins in games and OpenCL, and in AVX-enabled applications it should eat the Pentium alive since the latter doesn't support AVX extensions (should be mentioned at least). I'd prefer AVX extensions to Cinebench but this site seems to suggest I am a minority...
  • yhselp - Saturday, May 25, 2013 - link

    Comparing a 3W SoC (Z2760) to a 15W SoC (A4-5000), and calling the former laughable... not really fair.

    Sure, Kabini is definitely faster than the old Atom architecture and, yes, I understand this is not a definitive comparison; nevertheless - it seems misleading.

    What would happen if we compare a 3W Kabini to a 15W Haswell? Laughable wouldn't even begin to describe the performance difference.
  • silverblue - Saturday, May 25, 2013 - link

    But... an A4-5000 doesn't use anywhere near 15W, as far as I've heard. Still, let's consider the evidence - the Z2760 is a 32-bit, dual core, hyperthreaded CPU at 1.8GHz with a low powered graphics unit and 1MB of L2. The A4-5000 is a 64-bit, quad core CPU at 1.5GHz with a far stronger graphics unit and 2MB of dynamic L2. Temash would be a different proposition I expect as the A4-1200 is only clocked at 1GHz.
  • yhselp - Saturday, May 25, 2013 - link

    Yes, absolutely, I agree - it's just that the direct comparisons and conclusions made are a bit stark.

    There's always another side to an argument; in your case, I could argue that comparing the brand new Jaguar to a terribly old Atom architecture isn't the way to go. Consider the following evidence - Silvermont is 64-bit, quad-core, 2MB L2 cache, OoO, 2GHz+, 22nm, far more energy efficient, supports 1st gen Core instructions and Turbo Boost; it would decimate Jaguar.

    In the article, I also discovered that the 2020M is referred to as a 1.8GHz 35W part, when it's actually 2.4GHz. Are the benchmarks done on an underclocked 2020M or was that simply a typo?

    That's the kind of stuff I'm talking about, not AMD vs. Intel.
  • jcompagner - Sunday, May 26, 2013 - link

    So this is the core that will be in the next 2 big consoles?
    Am I the only one who thinks these are quite weak, even if you have 8 of them?

    That does mean that if one of those 2 consoles is the lead platform in development, games will be forced to be really well multi-threaded. (So I guess the next games for the PC will also be using multiple cores way more.)

    Why did they go for the Jaguar core that's really targeted at very low end or mobile stuff?

    Why didn't they just go for an 8-core Richland system with a very good GPU that is, let's say, a 100W part?

    What's the guess for the TDP of the Xbox One or PS4? A console can easily take 100W, that doesn't matter, so why choose a core that's dedicated to mobile?
  • yhselp - Sunday, May 26, 2013 - link

    Yes, the Jaguar core is 'weak', but what does 'weak' mean? That is such a vague definition. For one usage scenario Jaguar might be unacceptable, for another it might be overkill. Remember, Sony/MS are not building a contemporary PC. Jaguar might seem slow to us, and in a gaming desktop it would be, but that's not the point. Think of consoles, in this case the PS4 and the Xbox One, as non-PC devices such as tablets. Would you say the latest Samsung/Apple running on a Cortex A15 is slow? No, you would say it's super fast. Well, Jaguar is even faster. Yes, a console has to deal with different workloads than a tablet, but that's why it has very different hardware.

    Why did Sony/MS choose Jaguar? Jaguar is easier to integrate, more power efficient, and most importantly cheaper than Richland. It's a far simpler architecture than Richland, and probably easier to work with over a console's life. Also, it's very important to note that Sony/MS wanted an integrated solution - they weren't going to build a system with a dedicated video card like a gaming PC.

    Cost, cost, cost - everything is about the cost. A console cannot be expensive (the way a gaming PC is) - it has to sell very well in order to establish an install base to sell games to. Sony/MS will probably sell their 8th gen consoles at a loss initially - AMD's Jaguar/GCN was their best/only choice. What else could they do at the same price or even at all? Silvermont isn't ready yet and NVIDIA probably wouldn't be willing to integrate a GPU of theirs the way AMD did, and both of those would be more expensive than Jaguar/GCN. Not to mention, MS has had a ton of trouble with NVIDIA in the original Xbox - they are probably not willing to go down that road again.

    It's not really an 8-core solution - it's two quad-core modules and communication between the two might be problematic; so games on the new PS/Xbox would probably run on four Jaguar cores at 1.6 GHz. However, don't forget that neither of the two consoles has a ton of raw graphics power under the hood - the Xbox GPU is roughly equivalent to an HD 7770 (but with better memory bandwidth), and the PS to an HD 7850. Games would be specifically developed for this kind of hardware (unlike PC games) and would most probably be GPU limited so the Jaguar cores would really be sufficient.

    I hope this answers your questions.
  • Kevin G - Monday, May 27, 2013 - link

    A Piledriver module is much larger than a Jaguar core. For die size concerns, going with Jaguar made sense if core counts are the same. Steamroller cores are due out in 2014, and are expected to bring higher IPC and a slight clock speed increase compared to Piledriver.

    Power consumption is also an issue. The bulk of the power consumption of the Xbox One and PS4 SoCs will come from their GPUs. Adding a high-power CPU core like Piledriver would have ballooned power consumption close to 200W, which makes cooling impractical and expensive. Jaguar still adds power, but it is far more manageable in comparison.

    In addition, Steamroller is tied to processes from GlobalFoundries (though IBM could likely manufacture them if need be). TSMC is the preferred foundry for bulk processes due to cost and a slight edge in density, and Jaguar has been prepared for manufacturing at TSMC from the start. AMD could have stuck with GF, but it would have had to port the GCN functional units to that same process. Such efforts are currently underway for Kaveri, which is looking to be a 2014 part, so for any type of 2013 launch, going that route was not an option.
  • aikyucenter - Sunday, June 30, 2013 - link

    Great OpenCL performance ... love it ... just make it launch faster and decrease the TDP too = PERFECT :D
