Sandy Bridge Celerons

Intel released Sandy Bridge-based Celeron CPUs in early September, and these started appearing in retail channels by the middle of that month; we provided a brief overview of these parts. The Celeron that stands out is the G530, a dual-core CPU clocked at 2.4GHz with 2MB L3 cache and on-die Intel HD Graphics. This processor lacks Hyper-Threading and Quick Sync support, and it has a TDP of 65W (though it will generally use far less power than that). While Intel's suggested pricing is a meager $42, retail prices have held steady at $55-60 since its release. It is the least powerful Intel dual-core CPU, with only the single-core G440 available for less money.

If you've been building and using computers for years, you know there is a stigma attached to the Celeron name. For a long time, Celerons were crippled to the point of near-unusability for even the most basic tasks. That has changed: as our basic benchmarks indicate, the G530 is far from a garbage CPU. The Celeron stigma is dead.

Athlon II X2s

AMD's Athlon II X2 Regor-based 45nm dual-cores have been a mainstay of budget computing since their introduction in 2009. The Athlon II X2 250, clocked at 3.0GHz with 2MB L2 cache, is essentially as capable today as it was two years ago for basic usage. For example, 1080p videos on YouTube are no more difficult to decode and Microsoft Office 2010 isn't much more CPU-hungry than Office 2007 was. Given that most computers I assemble are budget systems, I've now used the Athlon II X2 250 for more builds than any other CPU. Is that about to change?

Llano APUs

AMD's most recent APUs (accelerated processing units) have also expanded into the budget processor range. These Fusion APUs combine both the CPU and Radeon "cores" on a single die. Anand reviewed the most capable APU back in June, and compared the A6 model to Intel's Sandy Bridge Pentium in late August. The more recently released 32nm A4-3300 chip (overviewed by Anand in September) is a dual-core part clocked at 2.5GHz with 1MB total L2 cache and featuring AMD's Radeon HD 6410 graphics—160 GPU cores clocked at 443MHz. Its nominal TDP is 65W. Priced around $70, the A4-3300 is only about $10 more than the Celeron G530 and Athlon II X2 250. It promises better graphics performance—but how does the least expensive A-series APU compare to inexpensive discrete video cards, and do you sacrifice processor performance for better graphics?

Battle of the Budget Processors: Benchmarks

While we didn't put the Celeron G530 and A4-3300 through our extensive Bench suite, here are a few benchmarks that show how they stack up against the venerable Athlon II X2 250. All benchmarks were performed using an Antec Neo Eco 400W power supply, a Western Digital Blue 500GB WD5000AAKX hard drive, and a 2x2GB kit of DDR3-1333, running a clean installation of Windows 7 Enterprise 64-bit with only the manufacturer-supplied drivers installed.

Conversion of a PowerPoint Presentation to a PDF

For this benchmark, I converted a 100-slide, 25MB PowerPoint file to a PDF using Microsoft Office 2010's integrated "Save as PDF" option. As you can see, the Athlon II CPU performs this task slightly faster than the Celeron, though in practice you'll only notice a difference when converting extremely large PowerPoint files. The Fusion APU is substantially slower—this is a difference you will notice in real-world usage scenarios.
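For anyone who wants to time this conversion on their own files, the "Save as PDF" step can be scripted rather than clicked through. Below is a minimal sketch using PowerPoint's COM automation interface; it assumes Windows with Office and the pywin32 package installed, and the file paths are placeholders rather than the actual benchmark file.

import time
import win32com.client

PP_SAVE_AS_PDF = 32  # PpSaveAsFileType.ppSaveAsPDF

# Launch PowerPoint and open the source deck (path is illustrative).
powerpoint = win32com.client.Dispatch("PowerPoint.Application")
presentation = powerpoint.Presentations.Open(r"C:\bench\slides.pptx")

# Time only the save-to-PDF step, which is what this benchmark measures.
start = time.perf_counter()
presentation.SaveAs(r"C:\bench\slides.pdf", PP_SAVE_AS_PDF)
print(f"Conversion took {time.perf_counter() - start:.1f} seconds")

presentation.Close()
powerpoint.Quit()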

7-Zip performance

These values were obtained using 7-Zip's built-in benchmark function with a 32MB dictionary. AMD's Athlon II CPU has a more noticeable advantage over the Celeron here—you will notice a difference when compressing or decompressing many files or large ones. The A4-3300 again performs palpably worse, which is no surprise given its 2.5GHz clock against the Athlon's 3.0GHz.
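For reference, the same benchmark can also be launched from the command line; the sketch below simply drives it from Python, assuming the 7z executable is on the PATH. The -md25 switch requests a 2^25-byte (32MB) dictionary; if your build's benchmark command ignores that switch, plain "7z b" runs the test with its defaults.

import subprocess

# Run 7-Zip's built-in benchmark with a 32MB dictionary and print its report,
# which ends with compression/decompression speeds and a total MIPS rating.
result = subprocess.run(["7z", "b", "-md25"], capture_output=True, text=True, check=True)
print(result.stdout)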

FastStone image resizing

For this test, I resized fifty 4200p pictures down to 1080p resolution using FastStone's batch image conversion function. Again, the two CPUs perform similarly, though this time Intel takes the lead. The AMD APU once again lags significantly behind the two CPUs.
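FastStone is a GUI tool, so there's no command line to show, but for illustration the same kind of batch downscale can be done in a few lines with the Pillow library. This is not what FastStone does internally; the folder names and JPEG quality setting are assumptions.

from pathlib import Path
from PIL import Image

src_dir, dst_dir = Path("originals"), Path("resized")
dst_dir.mkdir(exist_ok=True)

for src in sorted(src_dir.glob("*.jpg")):
    with Image.open(src) as img:
        # Scale every image to 1080 pixels tall, preserving aspect ratio.
        width = round(img.width * 1080 / img.height)
        img.resize((width, 1080), Image.LANCZOS).save(dst_dir / src.name, quality=90)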

x264 HD encode test

Graysky's x264 HD test (v. 3.03) uses x264 to encode a 4Mbps 720p MPEG-2 source. The focus here is on quality rather than speed, thus the benchmark uses a 2-pass encode and reports the average frame rate in each pass. The difference between the Athlon II and Celeron CPUs is essentially nil; both offer better performance than the AMD APU.
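For those unfamiliar with the test, the two-pass structure looks roughly like the sketch below: the first pass gathers statistics, the second uses them for the final encode, and x264 reports the average frame rate for each pass. This is only an approximation of Graysky's script; the exact flags, the 4000kbps target, and the source file name here are assumptions, and the real benchmark feeds the MPEG-2 source through its own front end.

import subprocess

source, bitrate = "720p_source.mpg", "4000"
for encode_pass in ("1", "2"):
    subprocess.run(
        ["x264", "--pass", encode_pass, "--bitrate", bitrate,
         "--stats", "x264_2pass.log", "--output", f"pass{encode_pass}.mkv", source],
        check=True,
    )  # x264 prints the average encoded frames per second for this pass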

Power consumption

As with the benchmarks above, all components were held constant for power consumption testing except the CPU and motherboard. For the Athlon II platform I used the ASRock 880GM-LE motherboard; for the Intel platform, the ASRock H61M-VS; and the APU was tested on an ASRock A55M-HVS. This is where the efficiency of the newer architectures truly outshines the older Athlon II design. Measurements were taken using a P3 International P4400 Kill A Watt monitor and reflect the entire system, not just the CPU.

Intel's Celeron still leads for low power use, but Llano is at least within striking distance. The older Athlon II X2 uses around 50% more power than Llano in these two tests, or around 17 to 30W more. Taking the lower number and assuming a system that's only powered on eight hours per day, we end up with a difference of around 50kWh per year, or $4 to $15 depending on how much you pay for electricity. If you're in a market where power costs more, obviously there's a lot to be said for going with the more efficient architectures.
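The annual cost estimate is straightforward arithmetic; spelled out with the lower 17W figure and eight hours of use per day, it looks like this (the $0.08 and $0.30 per kWh rates are simply illustrative endpoints for the $4 to $15 range above).

watts_saved = 17          # the smaller of the measured power deltas
hours_per_day = 8
kwh_per_year = watts_saved * hours_per_day * 365 / 1000   # about 49.6 kWh

for price_per_kwh in (0.08, 0.30):                         # cheap vs. expensive electricity, $/kWh
    print(f"${kwh_per_year * price_per_kwh:.2f} per year at ${price_per_kwh:.2f}/kWh")
# prints roughly $3.97 and $14.89, matching the $4 to $15 range quoted above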

Gaming benchmarks

Next we test how the AMD A4-3300 APU's graphics prowess stacks up against a budget GPU. The AMD Athlon II and Intel Celeron CPUs were paired with an AMD Radeon HD 5670 512MB GDDR5 discrete GPU, as neither chip's integrated graphics is capable of producing a tolerable gaming experience. The A4-3300 was not paired with a discrete GPU.

Left 4 Dead 2

For the Left 4 Dead 2 benchmark, we used a 1024x768 resolution with all settings at maximum (but without antialiasing). The AMD APU delivers almost 40 frames per second by itself, so no discrete graphics card is required. Subjectively, gameplay was smooth and fluid on the APU. However, bumping up the resolution to even 720p could be an issue, even with less demanding games.

DiRT 3

For the DiRT 3 benchmark, we used DirectX 11 at 1024x768 resolution, but this time graphics options were set to the low preset. Even then, the AMD APU struggled to breach the 30 frames per second threshold, and DiRT 3 clearly did not run as smoothly as Left 4 Dead 2. That said, it remained playable, and if you're tolerant of lower resolutions, it performs fine in windowed mode.

Keep in mind that we're using the bottom-rung Llano APU for these tests, and it's a pretty major cut from the A6 models: half the shader cores (albeit at a slightly higher clock) and only a dual-core CPU. Where the A6 and A8 can legitimately replace budget discrete GPUs, the same cannot be said for the A4 APUs. The lowest priced A6-3500 will set you back around $100, but it drops the CPU clock to 2.1GHz and only adds a third core. Meanwhile the quad-core A6-3650 will run $120 ($110 with the current promo code), but it sports a 2.6GHz clock with HD 6530D graphics (and a higher 100W TDP). At that point, you might also be tempted to go for the A8-3850, with the full HD 6550D graphics and a 2.9GHz clock, which brings the total for the APU to $135. All of these APUs will work in the same base setup as our Llano build, but obviously the price goes up quite a bit. If you'd like added processing and graphics power, though, the quad-core parts make sense.

Summary

As you can see, the Athlon II and Celeron CPUs are very evenly matched across a range of basic productivity tests, while the Fusion APU typically lags behind, at least for office productivity and encoding tasks. That said, the A4-3300 is capable of delivering an acceptable gameplay experience for casual gamers without necessitating a discrete GPU. Additionally, Intel's newer Sandy Bridge architecture and AMD's newer Llano architecture result in dramatically lower total system power consumption at both idle and load compared to the aging AMD Regor architecture.

So which CPU should you buy for your budget build? In terms of upgradeability, socket AM3 is still viable. In the short term, Phenom II quad-cores are already inexpensive, starting at just over $100—so they will be even cheaper in another year or two. Of course, Bulldozer CPUs are compatible with many AM3+ motherboards and could be a wise upgrade in a few years as well. Intel's LGA 1155 socket is also very upgrade-friendly—the Celeron G530 is, after all, the least powerful Sandy Bridge CPU (aside from the sole single-core SKU). The Core i3-2100 will likely sell for less than $100 in another year or so (at least on the secondhand market), and more powerful Core i5 and i7 chips could keep today's Intel budget build alive and well for perhaps as much as five more years. Like LGA 1155 with the Celeron G530, AMD's socket FM1 has nowhere to go but up from the A4-3300 APU. That said, LGA 1155 currently offers far more powerful CPUs than the high-end A8-3850.

I think in this case, given how evenly the CPUs perform (aside from power consumption) and how much upgrade potential both platforms offer, the decision will come down to overall platform cost and features. The A4-3300 APU does offer an acceptable basic computing experience—its real strength is its ability to play less resource-intensive games without the extra cost of a discrete GPU. We cover a few budget AMD and Intel platform motherboards on the next page.
Comments

  • buildingblock - Tuesday, November 8, 2011 - link

I don't see how the X4 2.6GHz 631 can ever even begin to be a winner against the Intel opposition. It is no more than an A6-3650 with the graphics unit disabled. My local hardware dealer is listing it at around 10% more than the Intel 2.8GHz G840, which easily outperforms it and has a GPU. The budget end of the market is dominated by cheap Intel H61 motherboards and the socket 1155 Pentium G series, and the X4 631 brings nothing whatsoever to the table that's going to change that.
  • Taft12 - Wednesday, November 9, 2011 - link

What analysis do you need to see? It is an A6-3650 with the graphics lopped off. If you want to know how it would do in CPU benchmarks, just look at an A6-3650 review.

    Despite the GPU sacrifice, you get no TDP savings. It's a piece of shit through and through not worthy of discussion.
  • mhahnheuser - Friday, November 11, 2011 - link

    ...you are spot on. But I can tell you why. Because if he ran the right discrete card in the Llano it would crossfire with it and then there would be absolutely no point to the comparison.
  • mhahnheuser - Friday, November 11, 2011 - link

...so the conclusion should have been....don't buy the Celeron over Llano unless you add a fast discrete DX compatible gpu. He only tested the X2 so that he could validate the comparison to Llano by building a higher performing, higher cost Celeron system. The X2 is an old-gen processor whereas the Celeron tested is in Intel's SB family...how is that a basis for fair comparison?
  • mhahnheuser - Friday, November 11, 2011 - link

    Good observation. I second your opinion.
  • Wierdo - Tuesday, November 8, 2011 - link

Maybe I'm missing something, but how come the performance difference between the X2 3.0GHz and A4 2.5GHz is so big?

They're pretty much the same core give or take a few tweaks and an added GPU block, right? I don't understand how a 500MHz drop can lead to 30->19 sec in PPT to PDF conversion for example.

    What am I overlooking here?
  • slayernine - Tuesday, November 8, 2011 - link

The A4 is a different processor; it is not from the same line as the X2 and thus performs quite differently. Also, 500MHz can make a huge difference in single threaded applications. For example if you tried to play back a 1080p video without GPU acceleration (relying entirely on the CPU) the A4 would stutter at 2.5GHz but the X2 should be ok at 3GHz. However in reality the A4 is a much more well-rounded processor that allows light graphical capabilities for gaming and video performance.

Also some might point out that a significant portion of the A4 chip is dedicated to Radeon cores, thus limiting the ability of the CPU portion through purpose-built design.
  • Taft12 - Tuesday, November 8, 2011 - link

    Wow, this is really coming out of your ass.

    The CPU part of Llano *IS* derived from good ol' K10 - Llano is/was referred to as K10.5

    It DOESN'T perform "quite differently", Anand found in the A8-3850 review that its performance was quite close to the Athlon II X4:

"Although AMD has tweaked the A8's cores, the 2.9GHz 3850 performs a lot like a 3.1GHz Athlon II X4. You are getting more performance at a lower clock frequency, but not a lot more."
  • slayernine - Tuesday, November 8, 2011 - link

It offers different features and is thus quite a different type of processor. One of those differences is the amount of CPU die dedicated to actual CPU functionality. I didn't say the CPU portion is built differently. I am fully aware it is based on the same architecture. Perhaps I confused you with my choice of words.

FYI Taft12, we are talking about the X2 3.0GHz vs the A4, not the Athlon II X4 vs the A8. The reason this matters is that the X2 3.0GHz offers better CPU performance than the A4.

Summary: Clock for clock the X2 isn't much different from the A4, but the A4 runs at a lower clock speed and is thus slower at CPU intensive tasks because half the damn thing is a GPU! APUs at the same price point will generally be slower than the competing CPU. So perhaps the simple answer to Wierdo's question is simply: "Clock speed matters."
  • Wierdo - Tuesday, November 8, 2011 - link

Hmm...I don't see how a 500MHz difference causes super-linear scaling in performance; 2.5GHz vs. 3GHz is less than a 20% difference, so it shouldn't be more than a 20% performance difference, one would think. With the cores being from the same family (K10) for both products, I'm missing where the balance can affect non-multimedia workloads.

It's quite interesting from an academic perspective. If it were primarily limited to graphics-type workloads I could understand, but I don't see it for stuff like PPT->PDF conversions for example.
