Minecraft

Switching gears for the moment, we have Minecraft, our OpenGL title. It's no secret that OpenGL usage on the PC has fallen by the wayside in recent years, and as far as major games go, Minecraft is one of only a few recently released major titles using OpenGL. Minecraft is incredibly simple (it doesn't even use pixel shaders, let alone more advanced hardware features), but that doesn't mean it's easy to render. Its massive number of blocks (and the overdraw they create) means you need solid hardware and an efficient OpenGL implementation if you want to hit playable framerates with a far render distance. Consequently, as the most successful OpenGL game in quite some number of years (at over 7.5 million copies sold), it's a good reminder to GPU manufacturers that OpenGL is not to be ignored.

Minecraft does incredibly well on Trinity. While the improvement over Llano is only 15%, the advantage over Ivy Bridge is tremendous.

Civilization V

Our final game, Civilization V, gives us an interesting look at a workload that other RTSes cannot match: it places much less emphasis on shading the game world and much more on generating the geometry needed to bring that world to life. In doing so it uses a slew of DirectX 11 technologies, including tessellation for said geometry, driver command lists for reducing CPU overhead, and compute shaders for on-the-fly texture decompression. There are games that are more stressful overall, but this is likely the game that most stresses DX11 performance in particular.

Civilization V shows some of the mildest gains vs. Llano in all of our tests. The 5800K/7660D manages to outperform Llano by only 8-11% depending on the test. The advantage over Intel is, of course, huge.

139 Comments

  • kyuu - Friday, September 28, 2012 - link

    "What I'm most looking forward to is a tablet of Surface quality with a low-voltage Trinity powering it."

    I should have said Trinity or, even better, one of its successors.
  • calzahe - Friday, September 28, 2012 - link

    Memory is quite cheap now; you can find good DDR3-2133 4GB (2x2GB) kits for around $40 for the current dual-channel APUs, so you'd only need to add another $40 worth of 4GB (2x2GB) modules for a quad-channel APU, which, done properly, could then use 8GB of memory. That means for an extra $40 the new APU could address 8GB of memory, far more than the 3-4GB on current monster video cards that cost $500-600. For around $100-150 you can even get DDR3-2133 16GB (4x4GB).

    Can you imagine the level of next-gen graphics if APUs could fully utilise 8GB, 16GB, or even 32GB of quad-channel system memory!
  • Marburg U - Thursday, September 27, 2012 - link

    So, Anand, you've just called this a "Review".

    Yes, you named it "part 1", but the fact is that at the moment you are publishing a review containing only what AMD HAS TOLD YOU you are allowed to publish, and only what they are pleased to read.

    How the hell can I trust this site's reviews anymore?
  • silverblue - Thursday, September 27, 2012 - link

    You could always go to TechReport and join in the AMD bashing if you prefer. Whilst I don't completely agree with the idea of partially lifting the NDA in this fashion, it's clear that AMD wants to highlight Trinity's strengths without muddying the waters with middling x86 performance.

    Piledriver is not AMD's answer to Intel: even Vishera won't be an i7 competitor in most things and might struggle to keep up with the i5s at times, and Zambezi was definitely underwhelming as a whole, so I can understand why they wouldn't want to focus on CPU performance. Additionally, if Vishera is due out at the same time as Trinity and people get an early idea of Trinity's CPU performance, then even though Vishera will generally be faster than Trinity, it may be classed at the same performance level.
  • cmdrdredd - Thursday, September 27, 2012 - link

    What's clear is that AMD cannot compete in the benchmarks that matter to most people who read these sites (how fast does it transcode my video vs. an i5?). So they try to hide that behind GPU performance charts.

    It's like Apple misleading people about the performance of their CPUs back in the day.
  • silverblue - Thursday, September 27, 2012 - link

    Amusingly, you'd think it would easily beat an i5 at transcoding... :P
  • Taft12 - Sunday, September 30, 2012 - link

    Uhh, the benchmarks the readers of this site care about are the ones that ARE here - the gaming benchmarks. AT readers are intelligent enough to know CPUmark, Sandra, etc. mean less than nothing.
  • torp - Thursday, September 27, 2012 - link

    The A10 65W looks like it has the same GPU and about a 10% lower CPU clock. Now THAT part could be really interesting for a low-cost PC...
  • rarson - Thursday, September 27, 2012 - link

    Crossfire? Pairing one of these with a mid-range card in a hybrid Crossfire setup would be pretty awesome in an HTPC setup. Almost like a next-gen console, but much better.
  • RU482 - Thursday, September 27, 2012 - link

    Looking to upgrade a couple of lower-power SFF systems with one of those 65W CPUs. Wonder how much an ITX mobo will run.
