AMD: Still in the Game

by Anand Lal Shimpi on 7/26/2007 2:00 PM EST

31 Comments


  • Lord Evermore - Sunday, July 29, 2007 - link

    What the heck are RDDR and UDDR? My only guess is the U might stand for the UMA design, but I don't know if that would be preferred for the server or workstation.
  • Anand Lal Shimpi - Tuesday, July 31, 2007 - link

    RDDR = Registered DDR
    UDDR = Unbuffered DDR

    Take care,
    Anand
  • Martimus - Thursday, August 02, 2007 - link

    OK, what is OoO? I couldn't find it with a search on Google.
  • Spartan Niner - Saturday, August 04, 2007 - link

    OoO is "out-of-order", referring to OoOE, "out-of-order execution".

    http://en.wikipedia.org/wiki/Out_of_order_executio...
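    Conceptually, out-of-order execution lets an instruction issue as soon as its operands are ready rather than strictly in program order. Here is a toy Python sketch of just that scheduling idea (my own illustration, not from the article; the instruction names, latencies, and one-issue-per-cycle model are all made up):

    ```python
    # Toy model of out-of-order issue: at most one instruction issues per
    # cycle, as soon as all of its source registers are ready.  This only
    # illustrates the concept -- it is nothing like how a real core works.

    def ooo_issue_order(program):
        """program: list of (name, dest, srcs, latency) tuples.
        A result becomes available `latency` cycles after its instruction
        issues.  Returns the order in which instructions actually issue."""
        ready_at = {}                # register -> cycle its value is ready
        pending = list(program)
        order, cycle = [], 0
        while pending:
            for ins in pending:
                name, dest, srcs, lat = ins
                # issue the first instruction whose operands are all ready
                if all(ready_at.get(s, 0) <= cycle for s in srcs):
                    order.append(name)
                    ready_at[dest] = cycle + lat
                    pending.remove(ins)
                    break
            cycle += 1               # otherwise stall this cycle
        return order

    # A 3-cycle load, a dependent add, and an independent mov: the mov
    # issues ahead of the add even though it comes later in program order.
    prog = [("load r1", "r1", [], 3),
            ("add r2,r1", "r2", ["r1"], 1),
            ("mov r3", "r3", [], 1)]
    print(ooo_issue_order(prog))     # the dependent add issues last
    ```

    An in-order core would have stalled the mov behind the add; hiding that load latency is exactly the win OoOE buys.
    
    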
  • Martimus - Monday, August 06, 2007 - link

    Thanks.
  • xpose - Saturday, July 28, 2007 - link

    This is the best future roadmap article I have ever read. I am actually excited. No really.
  • najames - Friday, July 27, 2007 - link

    I am an AMD fanboy; of the 7 computers I have at home, only the 5-year-old laptop has an Intel chip now. Dual cores are actually likely all I REALLY need. That said, I am sick of a bunch of hype and no new products. It's all blow and no show. I don't care about years down the road, because it could all change between now and then.

    AMD/ATI could be a good thing too if they make good, polished drivers, 100% working as promised. How about throwing people a bone to make them switch, maybe even making some kick-butt Linux drivers too?

    We were all on an AMD bus, and nobody has been driving since the X2 chip. They taunted Intel and handed out huge bonuses, but forgot about any new development. I have to credit Intel: they kicked butt with Core 2, and they seem to be doing more butt-kicking going forward.

    I watched Hector on CNBC last night and he didn't look like he had a clue what was going on. Granted they weren't asking him details of any processors, but he was dodging basic business questions. Why do I have several hundred shares of AMD?
  • Regs - Monday, July 30, 2007 - link

    quote:

    Why do I have several hundred shares of AMD?


    Because those relatively cheap shares, compared to Intel's, might one day be worth hundreds of times more from that stuff you call blow. Blow = prospects, in business terms.

    I would have said the same thing as you at first, though. It's obvious AMD's and ATI's pipelines dried up, unfortunately one right after the other. You can argue that the 2900XT is a good card, performs well, etc., but that doesn't explain why AMD offers crapware for the mainstream (where the real money is). As for AMD's CPU lineup... well... you can only sell old for so long in the technology sector without taking a hit.
  • kilkennycat - Friday, July 27, 2007 - link

    .... dump ATi. The marriage made in hell. New products unable to meet schedule and with inferior performance, thus no way of rapidly recovering development costs by pricing for performance.

    Dave Orton sure did a neat sell-job on AMD, walking away with $$millions when AMD paid a 20% premium for a chronically non-performing company barely managing to eke out some tiny profits during the last couple of years. No wonder Mr. Orton was finally shown the door.
  • kleinwl - Friday, July 27, 2007 - link

    What is the problem with AMD? Did they not receive enough feedback that UVD is a "must-have" on high-end units? I don't want to have to choose between good gaming performance and movie performance... I am already paying a ridiculous premium for hardware... the least they could do is make sure it has all the bells and whistles.
  • kilkennycat - Friday, July 27, 2007 - link

    Highly likely that nVidia will solve this problem at both the high and low end with their next family of GPUs. Stay tuned for the end of 2007. The first part out of the chute is also likely not to be the highest end, but rather the one that replaces the 8800GTS at a price close to $200 with full HD hardware decode... nVidia is very well aware of the cost-performance hole left by both AMD/ATI and themselves in their current GPU lines.
  • strikeback03 - Friday, July 27, 2007 - link

    With that Phenom demo box, I think they have finally found a use for a 1000W+ power supply.
  • Spoelie - Friday, July 27, 2007 - link

    Given the size of the heatsink on the CPU, I'd venture power consumption is in line with other engineering samples, 120W or less max TDP.
  • Spoelie - Friday, July 27, 2007 - link

    Oh, my bad, you're right when taking the three 2900XTs into consideration.

    Where's my edit button :(
  • Spoelie - Friday, July 27, 2007 - link

    At least two times in the article, the text builds up anticipation for a graph that never comes. The most telling example is on page 6, but it also happened a page or two earlier. Both graphs are supposed to be from Intel.
  • Justin Case - Monday, August 13, 2007 - link

    Exactly. They say "Two years ago Intel used the following chart to illustrate the need for multi-core CPUs", and then the image is an AMD slide, not an Intel graph.
  • Omega215D - Thursday, July 26, 2007 - link

    If they plan to integrate an on-die PCIe controller on the CPU, how would this affect overclocking?
  • Regs - Friday, July 27, 2007 - link

    I'd imagine it will be just like when AMD integrated the memory controller: mobo makers will just have to add more BIOS options.
  • yacoub - Thursday, July 26, 2007 - link

    While paging through the article, the thing that stood out most to me was the AMD graphic on page 5 supposedly demonstrating how much more performance Bulldozer is going to offer without a single number on the graph. I guess they want us to measure its performance increases in pixels. hehe :)
  • LTG - Thursday, July 26, 2007 - link

    Anand, you're really good at distilling the bottom line out of massive amounts of marketing talk and slideware.

  • flashbacck - Thursday, July 26, 2007 - link

    Whoever decided those acronyms were necessary should be fired.
  • fzkl - Thursday, July 26, 2007 - link

    As mentioned, the obvious great benefit of having low-power x86 chips in mobile phones is the software aspect: PC applications can now run on phones, reducing developers' need for aspirin. However, what does this mean in terms of security? Will we see mobile phones needing frequent patches, antivirus, and firewalls, as in the case of desktops?

    If that were the case, we would have successfully turned a simple device (in usage terms) like the mobile phone into a high-maintenance product that a layman might have trouble with.
  • sheh - Thursday, July 26, 2007 - link

    x86 doesn't imply any OS or API. Linux, which is commonly used today on all kinds of devices, works just as well on x86. Conversely, nothing prevents virus writers from writing viruses for Linux-running phones.
  • beyoku - Thursday, July 26, 2007 - link

    What happened? Was this article recently pulled or something?
    NDA???
  • Guuts - Thursday, July 26, 2007 - link

    Looks to me like he's trying to get the images working...
  • erwos - Thursday, July 26, 2007 - link

    Fusion looks like it'll be a fantastic chip for UMPCs and laptops. Hopefully they'll manage to squeeze more than one CPU core on there, too. Bobcat looks similarly fun - x86 phones! VIA was also discussing this idea, and I could really go for it.

    That said, AMD is really under-delivering with Barcelona - I suspect the next few years will be pretty rough. Intel has set a low ceiling price for the Barcelonas ($270 - same as the Q6600), and that's not going to be good for AMD's margins.
  • mlau - Thursday, July 26, 2007 - link

    quote:

    x86 Phones!


    Embedded is ruled by ARM, Freescale, MIPS, and SH derivatives; AMD and Intel are going to have a tough time getting a super-ugly system like PC x86 (with its legacy baggage: "BIOS", "ACPI" [you know, the stuff Windows requires to run], ...) into that space.
  • Spoelie - Friday, July 27, 2007 - link

    You do not need those things to run an x86 CPU.

    Look at EFI, for example, in use by Apple on their x86-based Macs = no more BIOS.
  • qpwoei - Sunday, July 29, 2007 - link

    The real problem with x86 is that it's inherently not power efficient. To get good x86 performance requires lots of transistors and lots of power due to complex decoders and schedulers. A much simpler architecture like ARM requires very few transistors to run efficiently (at the expense of slightly less compact executable code), and is much more suited for battery-powered devices.

    Not to say that there won't be x86-powered devices in the future, just that I don't expect them to gain much of a foothold anywhere battery life is important (e.g. phones).
  • ss284 - Thursday, July 26, 2007 - link

    Return of the Jedi
  • Imaginer - Thursday, July 26, 2007 - link

    It better be as good as "The Empire Strikes Back"! :D
