AMD Kabini AM1 Conclusions

After dealing with mainstream enthusiast CPUs for so many years, wrapping your head around 2 GHz dual and quad core parts again is somewhat mind-boggling; it feels like pulling out one of those first dual core systems from when they hit the enthusiast mainstream segment.  I am glad that, several years down the line, parts like these now occupy the low end of the market: the minimum performance of a desktop has been raised to something more usable, and the quality and grunt of integrated graphics has risen with it, all within a low cost framework.

To cut straight to the chase, our review pitted all four new AMD Kabini AM1 socketed desktop APUs against the two Intel Bay Trail-D SoCs that matched up most closely in specifications.  Both sides of the coin feature two to four cores ranging from 1.3 GHz to 2.4 GHz, as well as integrated graphics solutions sufficient to tackle the regular daily tasks asked of them.  The closest matchup was the AMD Athlon 5350, a quad core 2.05 GHz part with 128 SPs at 600 MHz, against the Intel Celeron J1900, a quad core 2 GHz (2.4 GHz Turbo) part with 6 EUs at 688 MHz.

AMD Athlon 5350 vs. Intel Celeron J1900
                      Athlon 5350      Celeron J1900
CPU Architecture      Jaguar           Silvermont
CPU Cores             4                4
CPU Frequency         2.05 GHz         2.0 GHz / 2.4 GHz Turbo
GPU Cores             128 SPs          6 EUs
GPU Frequency         600 MHz          688 MHz
Memory Channels       Single           Dual
Memory Frequency      1600 MHz         1333 MHz
L2 Cache              2 MB             2 MB
TDP                   25 W             10 W
Price                 $59              $82

If we compare these two directly, we see a range of different characteristics.  The Intel CPU takes the crown in the floating point tests, potentially indicating a better scheduler when dealing with floating point numbers; the 3DPM test shows this, as do some of the more general purpose benchmarks such as the Media and Data segments of SYSmark 2014.  There is also power consumption to consider, as the Bay Trail-D CPUs carry only a 10 W TDP.  The Athlon 5350 takes the majority of the integer based workloads, such as Cinebench and FastStone, as well as the TrueCrypt benchmark thanks to its AES-NI hardware acceleration.  Other items, such as the web benchmarks, showed little difference between AMD and Intel.
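As a side note on that TrueCrypt result: the gap comes down to the Jaguar cores exposing the AES-NI instruction set, which the Bay Trail-D Celerons lack, so the AES workload runs in hardware on the Athlon but in software on the Celeron.  As a rough illustration (not part of our test suite, and assuming a Linux host exposing /proc/cpuinfo), the short Python sketch below simply checks whether the running CPU advertises the 'aes' feature flag:

    # Minimal sketch: report whether the CPU advertises AES-NI support.
    # Assumes a Linux system exposing /proc/cpuinfo.
    def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    # The flags line lists one token per supported CPU feature.
                    return "aes" in line.split(":", 1)[1].split()
        return False

    if __name__ == "__main__":
        print("AES-NI supported:", has_aes_ni())

Based on the published specifications, this should report True on the Athlon 5350 and False on the Celeron J1900, which lines up with the TrueCrypt results.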

However, where the Athlons stand out is in the IGP benchmarks.  In our 1280x1024 low resolution game tests, the top two Athlons (5350, 5150) approached a 30 FPS average, whereas the Bay Trail-D CPUs struggled to reach half that frame rate.  The same holds for the synthetics (3DMark), although some of the more CPU-focused game benchmarks (those limited by draw calls) narrowed the gap.

When it comes to discrete GPU tests, as our Intel samples only had a closed-ended PCIe 2.0 x1 slot, we were unable to compare directly with AMD’s Kabini.  In the grand scheme of things, however, the AMD Kabini platform paired with a high powered GPU (AMD HD 7970, GTX 770) managed 30+ FPS in 9 of our 12 benchmarks at 1080p with maximum detail using the Athlon 5350.  Even in Battlefield 4 single player at these high settings, a 20.7 FPS minimum suggests that dropping the image quality a few notches would make it readily playable.  That said, pairing such a powerful GPU with this platform is perhaps not its best use case.

When using the systems and running the tests, it was clear that the two Semprons felt a little slower than I am used to during basic use, such as web browsing and navigation.  The web tests bear this out, with the Kraken benchmark taking around 50% longer to complete on the quad core Sempron than on the top Athlon.  The delay was not show-stopping, and pairing the system with an SSD almost certainly helped.

Another selling point for AMD Kabini will be integrated systems, such as digital signage, library computers and the like.  From this perspective, as long as the system is not doing heavy rendering on the fly (anything beyond 1280x1024 on low settings with modern engines) but needs more computational power than, say, a Raspberry Pi, the Kabini AM1 platform offers a solid solution at a low cost.  The next step from here would be small form factor devices that are also upgradeable - something that could fit onto a VESA mount, perhaps.

Comments

  • Silver47 - Thursday, May 29, 2014

    They are in the graphs, what are you smoking?
  • Novaguy - Thursday, May 29, 2014

    I think using a weaker card for the dgpu would also have been interesting. Maybe something like an r7 270, 7750 or 7770.

    Also, what about Mantle benchmarks?
  • Novaguy - Thursday, May 29, 2014

    I meant r7 250, not 270... can't seem to edit via mobile.
  • V900 - Friday, May 30, 2014

    No, no Mantle benchmarks... It's useless, and only serves to clutter the article.

    Hardly anybody cared about Mantle when it was announced... And now that we know the next version of Direct X is coming, the only people that care the slightest, are a handful of AMD fanbois.
  • silverblue - Friday, May 30, 2014

    You're entitled to your opinion. Let's look at it this way - DirectX 12 won't be here for another 18 months. Also, Mantle has been shown to perform better than AMD's own implementation of DX11 (and sometimes faster than NV's as well) and also helps with lower performing CPUs.

    The following link only shows the one game, but it should be enough to highlight the potential benefit of Mantle on a comparatively weak architecture such as Jaguar:

    http://www.pcper.com/reviews/Graphics-Cards/AMD-Ma...

    If you didn't need to upgrade your CPU to play the latest and greatest, I think you'd care, too.
  • formulav8 - Friday, May 30, 2014

    If Mantle was from NVidia or Intel, you would be a bigger stooge for them than any AMD fan.
  • Gauner - Thursday, May 29, 2014

    I think it would be far more useful for these weak systems to have a different set of game tests.

    No one will buy one of these to play Tomb Raider or BioShock, but I'm guessing some people would consider buying one if it could play, for example, League of Legends or Dota 2 at minimum settings and 720p/1080p.

    I understand it would mean extra effort, but right now the gaming tests don't really tell us anything useful, nor do they add an extra point of comparison to other reviews.

    Just taking into account Steam games (since they are the easiest to get data about), the top 5 most played games each day always include Dota 2, CS:GO and TF2. Those 3 games used to run (poorly: 800x600, low detail, 20-30 fps) on the old Intel GMA 4500M, so in my opinion there are a lot of people potentially interested in them, and they would run well enough on low power systems to give a useful comparison between chips.
  • res057 - Thursday, May 29, 2014

    Testing processors such as these with high end games and CPU intensive tasks is like Car and Driver putting a Prius through a quarter-mile test and comparing the results with a Corvette. Pointless.
  • Gauner - Friday, May 30, 2014

    I can understand the logic behind the CPU intensive tasks: even if you won't use this kind of CPU to compress video with x264, you can look at the results and get a realistic estimate of the performance difference, because the test is the same for both weak CPUs and high end CPUs.

    The problem with the gaming tests is that the comparison uses settings completely different from the high end tests, so there is no parallel there, and a set of games that no one would think to play on weak systems, so no useful information is given.
    I can't look at the results and say "since Tomb Raider at 1280x1024 with low graphics ran at 20 fps, I guess League of Legends will run at 1080p with medium graphics at 35 fps"; there are too many points of difference to make that comparison (AAA engine vs. an engine designed for low power computers, different resolution, completely different poly count and texture sizes, ...).
  • takeship - Friday, May 30, 2014

    I'm curious why no comparison with the old E-450/E-2000 Jaguar chips. Not so much for the CPU performance, but for the GPU improvements going to GCN.
