The Integrated GPU

For all but one of the processors, integrated graphics is the name of the game. AMD configures the integrated graphics in terms of Compute Units (CUs), with each CU having 64 streaming processors (SPs) using GCN 1.3 (aka GCN 3.0) architecture, the same architecture as found in AMD’s R9 Fury line of GPUs. The lowest processor in the stack, the A6-9500E, will have four CUs for 256 SPs, and the A12 APUs will have eight CUs, for 512 SPs. The other processors will have six CUs for 384 SPs, and in each circumstance the higher TDP processor typically has the higher base and turbo frequency.
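As a back-of-the-envelope check, the SP counts in the table below follow directly from the 64 streaming processors per GCN compute unit. A minimal sketch:

```python
# Each GCN Compute Unit (CU) contains 64 streaming processors (SPs),
# so the SP count is simply the CU count times 64.
SP_PER_CU = 64

def sp_count(compute_units: int) -> int:
    return compute_units * SP_PER_CU

print(sp_count(4))  # A6-9500E      -> 256
print(sp_count(6))  # A10/A8 parts  -> 384
print(sp_count(8))  # A12 parts     -> 512
```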

AMD 7th Generation Bristol Ridge Processors
Processor      GPU        GPU SPs   GPU Base (MHz)   GPU Turbo (MHz)   TDP
A12-9800       Radeon R7      512              800             1108    65W
A12-9800E      Radeon R7      512              655              900    35W
A10-9700       Radeon R7      384              720             1029    65W
A10-9700E      Radeon R7      384              600              847    35W
A8-9600        Radeon R7      384              655              900    65W
A6-9500        Radeon R5      384              720             1029    65W
A6-9500E       Radeon R5      256              576              800    35W
Athlon X4 950  -                -                -                -    65W

The new top frequency of 1108 MHz for the A12-9800 is an interesting element in the discussion. Compared to the previous A10-7890K, we have a +28% increase in raw GPU frequency with the same number of streaming processors, but at a lower TDP. This means one of two things: either the 1108 MHz mode is a rare turbo state, as the TDP has to be shared between the CPU and GPU, or the silicon is capable of sustaining a 28% higher frequency with ease. Based on the overclocking results seen previously, it would be interesting to see how far the GPU frequency could go without a TDP barrier and with sufficient cooling. For comparison, when we tested the A10-7890K in Grand Theft Auto at a 1280x720 resolution and low-quality settings, we saw an average of 55.20 FPS.
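As a rough sanity check on the quoted +28% figure (assuming the A10-7890K's 866 MHz GPU clock as the baseline):

```python
# Sanity check on the quoted +28% GPU frequency uplift.
# Baseline assumption: the A10-7890K's GCN GPU runs at 866 MHz;
# the A12-9800 turbos to 1108 MHz.
old_mhz = 866   # A10-7890K GPU clock
new_mhz = 1108  # A12-9800 GPU turbo clock

uplift = (new_mhz - old_mhz) / old_mhz
print(f"{uplift:.1%}")  # -> 27.9%, i.e. roughly the +28% quoted
```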

Grand Theft Auto V on Integrated Graphics

Bearing in mind the change in cache configuration moving to Bristol Ridge (from a 4 MB L2 down to a 2 MB L2, but with DRAM support increasing from DDR3-2133 to DDR4-2400), that figure should improve, potentially making this distinctly the most cost-effective part for entry-level gaming.
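For context, the theoretical peak bandwidth of a dual-channel (128-bit) memory interface scales linearly with the transfer rate, so the DDR3-2133 to DDR4-2400 step can be sketched as:

```python
# Theoretical peak bandwidth of a dual-channel (128-bit) memory interface:
# transfers/s * 8 bytes per 64-bit channel * 2 channels.
def dual_channel_gbs(mt_per_s: int) -> float:
    return mt_per_s * 8 * 2 / 1000  # GB/s (decimal)

ddr3_2133 = dual_channel_gbs(2133)  # ~34.1 GB/s
ddr4_2400 = dual_channel_gbs(2400)  # ~38.4 GB/s
print(f"{ddr4_2400 / ddr3_2133 - 1:.1%} more bandwidth")  # -> 12.5%
```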

Each of these processors supports the following display modes:

- DVI, 1920x1200 at 60 Hz
- DisplayPort 1.2a, 4096x2160 at 60 Hz (FreeSync supported)
- HDMI 2.0, 4096x2160 at 60 Hz
- eDP, 2560x1600 at 60 Hz

Technically the processor will support three displays, with any mix of the above. Analog video via VGA can be supported by a DP-to-VGA converter chip on the motherboard or via an external dongle.

For codec support, Bristol Ridge can do the following (natively unless specified):

- MPEG2 Main Profile at High Level (IDCT/VLD)
- MPEG4 Part 2 Advanced Simple Profile at Level 5
- MJPEG 1080p at 60 FPS
- VC1 Simple and Main Profile at High Level (VLD), Advanced Profile at Level 3 (VLD)
- H.264 Constrained Baseline/Main/High/Stereo High Profile at Level 5.2
- HEVC 8-bit Main Profile Decode Only at Level 5.2
- VP9 decode is a hybrid solution via the driver, using CPU and GPU

AMD continues to support HSA, and the arrangement between the Excavator v2 modules in Bristol Ridge and the GCN graphics inside is no different: we still get full HSA 1.0 specification support. With the added performance, AMD is claiming scores for the A12-9800 on PCMark 8 Home with OpenCL acceleration equal to those of a Core i5-6500 ($192 tray price), and the A12-9800E is listed as a 17% increase in performance over the i5-6500T. In synthetic gaming benchmarks, AMD is claiming 90-100% better performance for the A12 over the i5 competition.

Comments

  • msroadkill612 - Wednesday, April 26, 2017 - link

    Good post. Ta.

    Yep, for well over a decade we've heard from RISC fans how they are the future, yet I seem to live in a world where further miniaturisation is the key to progress, and what better way than CISC on a single wafer, commonly using 14nm nodes, soon to be 7nm from GF.

    Intuitively, spread-out discrete chips can't compete with "warts and all" CISC SoCs.

    As it looks now, the new Zen/Vega AMD APU seems a new plateau of SoC, and may even be favoured in server GPU/CPU processes.

    We know AMD can make Ryzen, which is 2x4 CPU core units on one AM4 socket.

    It's a safe bet Vega will be huge.

    We know AMD can glue a 4+ core unit to a Vega GPU core on one AM4 socket (from Raven Ridge APU specs) - i.e. they can mix and match CPU/GPU on one AM4 socket.

    We know the biggest barrier to GPUs, memory bandwidth, has been removed by Vega's HBM2 memory, placed practically on the chip.

    We know it doesn't stop there. Naples will offer 2x Ryzen on one socket soonish, and there is talk of 64 cores, or 8 Ryzen dies, on one socket.

    So why not 8x APUs, or a mix of Ryzen CPUs and APUs, for GPU/CPU compute apps?
  • pattycake0147 - Friday, September 23, 2016 - link

    Pretty sure it was mainly a joke playing on the names...
  • Ratman6161 - Tuesday, October 4, 2016 - link

    I'm coming in late and trying to understand what appears to me to be a ridiculous argument. Apple A10 Vs AMD A10??? What??? Totally unrelated. Might as well add an Air Force A10 to the list since we seem to be wanting to compare everything with A10 in the name.
  • paffinity - Friday, September 23, 2016 - link

    Lol, Apple A10 would actually win.
  • Shadowmaster625 - Friday, September 23, 2016 - link

    Apple A10 is actually faster than any AMD chip at Jetstream, Kraken, Octane, and pretty much every other benchmark that measures real world web browsing performance. Such is the sad state of AMD.
  • ddriver - Friday, September 23, 2016 - link

    JS benchmarking is a sad joke. You are comparing apples to oranges, as the engine implementations are fundamentally different. No respectable source would even consider such benchmarks a measure of actual chip performance.
  • xype - Saturday, September 24, 2016 - link

    I’m as "happily locked in" into Apple’s platforms as anyone, but the whole "lol A10 kicks x86 ass" thing is getting retarded. It’s a fine CPU, sure, but how people can’t comprehend that it’s designed for a whole different set of usage scenarios is beyond me.

    Now, that’s not to say Apple isn’t working on a desktop class ARM CPU/GPU combo, but _that_ would be a real surprise.
  • Meteor2 - Saturday, September 24, 2016 - link

    It's a measure of end-user experience, however.
  • Alexvrb - Sunday, September 25, 2016 - link

    Not necessarily. Those benches Shadow mentioned are more of a measure of a particular browser's optimizations for those benches, than anything.
  • silverblue - Saturday, September 24, 2016 - link

    Yet HSA would yield far bigger performance gains. The only issue is that, unlike iOS-specific optimisations which you run into all the time, HSA won't be helping anybody unless you're running specifically optimised software.

    If HSA was some intelligent force that automatically optimised workloads, I don't think anybody would dare suggest an Apple mobile CPU beating a desktop one.
