AMD's Barcelona: Why we haven't published benchmarks

A month ago we presented a rare look at AMD's forthcoming roadmap, detailing everything from new plans for the mobile space to the reasoning behind the ATI acquisition. We left that article on a positive and hopeful note for AMD:

"For a while we had lost confidence in AMD, like many of you had as well, and although AMD's position in the market hasn't changed we are more confident now that it can actually bounce back from this. Intel seemed to have the perfect roadmap with Conroe, Penryn and Nehalem all lined up back to back, and we saw little room for AMD to compete. Now, coming away from these meetings, we do believe that AMD may have a fighting chance. Over the coming months you'll begin to see why; it won't be an easy battle, but it will be one that will be fought with more than just price."

A strong roadmap alone does not make for a successful company; we need to see near-term execution as well. For AMD, that means Barcelona has to be competitive. The interesting part of AMD's recent disclosures is that for all the information AMD has given us about its roadmap for 2008, 2009 and beyond, we have little to no detail about when we can expect Barcelona or how fast it will be.

Taiwan is a country of leaked processors and benchmarking opportunities, so when we headed out for Computex we fully expected to return with some Barcelona performance figures. We were hoping we'd come back with the very data that AMD hadn't allowed us to gather ourselves when we visited the company over a month ago. And while some performance results were reported from Taiwan, there was an eerie silence surrounding AMD's updated micro-architecture.

We were determined not to leave the island without running at least one test on Barcelona. We worked long and hard, and we were finally able to spend some time alone with Barcelona in Taiwan. But the story doesn't end there; it's unfortunately not that simple.

Motherboard Problems

We know that Barcelona works and runs benchmarks, as we saw back at AMD in May. But the demos that AMD ran were on its own motherboards, not on motherboards from its partners. AMD's partners just recently received their first "production quality" Barcelona samples, and as expected, the current boards required some heavy BIOS work before the new chips would even work, much less perform up to the expectations set by AMD.

The motherboard we tested on had minimal HyperTransport functionality and wouldn't run at memory speeds faster than DDR2-667; most 3D video cards wouldn't even work in the board. Memory performance was simply atrocious on the system, but the motherboard manufacturers we worked with attributed this to BIOS tuning issues that should be fixed in the very near future.

In the end, performance was absolutely terrible. We're beginning to understand why AMD didn't let us test Barcelona last month. It's not that AMD is waiting to surprise Intel; it's that the platform just isn't ready for production yet.

Comments

  • Roy2001 - Wednesday, June 13, 2007

    Well CobraT1, I think he has NO idea about CPU manufacturing and cost, at all.

    Barcy would be significant for AMD, but not for the PC industry. There is no way it can compete with Penryn. The very first Penryn (1st tapeout) runs up to 3.7GHz in Intel's lab. Even if Barcelona can reach 2.8GHz as AMD simulated, it won't beat Penryn at 3.7GHz.

    AMD was, is, and will be in trouble. They have to wait until their 45nm process is up and running. Before that, they have no chance.
  • strikeback03 - Wednesday, June 13, 2007

    Well, if your car is turbocharged, 50HP is often just an ECU flash away. If you have a supercharger you can put on a smaller pulley. No guarantee on how well the rest of the powertrain will hold up, but it can be done. The VW 1.8T engines supposedly respond well to just changing the intake manifold gasket. The RB26DETT was reported to pick up over 100HP from just camshafts and an ECU flash.

    Back to the subject of processors, I agree that the size of the leap over its predecessor won't matter much if Barcelona can't at least match Penryn. Also, IIRC, Nehalem is scheduled for release next year, which is supposed to be a more significant update. So even if Barcelona outperforms Penryn, AMD can't wait nearly so long to pull their next rabbit out of a hat as they have for this update.

    Also, if I were a shareholder of AMD, I'd be concerned that if they do get delayed to the very end of this year or early next year, then the earliest they are going to return a profit is Q3 08. As mentioned elsewhere in the comments, companies don't generally rush out to upgrade their servers, and the majority of consumer purchases are in Q3 and esp. Q4. So if AMD can't have the new processors on the shelves this year, that will be two Christmas buying cycles in a row where they have played second fiddle to Intel. That won't help their bottom line.
  • CobraT1 - Wednesday, June 13, 2007

    I understand what you're saying: it is possible to boost the power output of some engine designs, especially designs that were poor to begin with. Yet the point was to increase overall power without suffering losses, and to do so easily. As with processors, this is generally not a quick and easy task, and concessions generally need to be made. It takes proper consideration of usage, limitations, and cost, and an understanding of how the components will work in concert.

    With software modifications to engine management systems (ECU), it is just as easy to reduce performance or negatively affect other operational criteria through programming. Testing needs to be done, hence the use of dynos for graphing power profiles and monitoring function. And generally, with no other modifications, the power profile will change: gains will be made at certain speeds/loads while losses will be seen at others. Efficiency also generally drops. For example, modifications can fairly easily be made (like throwing on a larger TB) to produce a higher peak HP, yet throttle response, low-end torque, and efficiency will generally suffer. That would not be an ideal design choice in regards to what we are analogizing.

    Induction velocities, capacities, fuel atomization abilities, fuel mixtures, cam/valve timings, valve lift, duration and I/O overlap, combustion chamber size, CC shape, plug choice, ignition timing, duration and intensity, I/O port shape, size and length, scavenging and/or back pressure; the list could go on and on. The point is, if you change one it affects the others, and as in processor design these components need to be designed in such a way that the whole functions as intended. If an increase in overall performance and efficiency is the intended target, you generally can't just add something to achieve this. Even just adding cache to a processor design has its positive and negative impacts.
  • strikeback03 - Thursday, June 14, 2007

    I suppose I didn't read that correctly. Yes, if a company wants to add more power they need to research it extensively. However, the consumer can just go buy the results of that research, so for them it is relatively easy.
  • Roy2001 - Monday, June 11, 2007

    That's a funny comment. Hector has devoted his life to fighting Intel since his Motorola days with the 68000. But we can foresee he won't make it. The CPU industry is not just design or execution. It's capital/manufacturing.
  • TA152H - Monday, June 11, 2007

    Yet it was design, under Jerry Sanders' leadership, that made AMD what it is today. Or was, except Hector has now given up that leadership in design. They waited too long. He was never known as a visionary, but was supposed to help with execution, which hasn't exactly worked, has it?

    Where is the 68K now anyway? That's a good example of his prowess. A much better processor with a much more elegant instruction set is essentially dead, although Freescale still sells something close to it with a scaled-down instruction set. If the 68K line had had the 68008 in time, would IBM have chosen the 8088? Doubtful, since they used the 68K in their 3270 and 370 emulators, not an Intel product. It's not like the 68008 wasn't possible either; it's somewhat simpler in that it has fewer address lines and data lines. So where does he deserve credit for this?
  • tacoburrito - Monday, June 11, 2007

    Anyone else think that Shuttle's SFF cases are getting bigger? With the G2 and G5 models, they looked great with their compact designs. Now it seems that Shuttle's cases are simply mid-sized towers turned sideways.
  • erwos - Tuesday, June 12, 2007

    Shuttles are a touch larger than they used to be, but let's face it: you just can't cool a modern quad-core CPU and a high-end GPU without a little extra space to work with.

    They are _nowhere near_ the size of a mid-tower. The SS21T is a touch larger than most, but everything else is far smaller.
  • Regs - Monday, June 11, 2007

    That AMD did not spend 3-4 years of R&D on this processor. More like 6-12 months. They started when they first saw the Pentium M, or was it when they first saw Core Duo? I'm thinking when they first saw Core Duo, because AMD has made die shrinks before without updating the architecture. Though it seems like most of the development time for "Bark" has been spent making it work on a platform.

    All I've got to say is: what the hell was AMD thinking? They don't even have their mid-range video cards out yet, and that's if you don't already consider their "flagship" mid-range.
  • ShapeGSX - Monday, June 11, 2007

    quote:

    That AMD did not spend 3-4 years of R&D on this processor. More like..6-12 months. They started when they first saw Pentium M or was it when they first saw a Core Duo?


    That is simply not possible. It takes much longer than that to get a new piece of silicon out the door.
