44 Comments


  • bgd73 - Monday, April 28, 2008 - link

    I read a few pages from a 1360-page book about computer repair, in the history section. Process sizes were big back then, with power going wild; it states 1MHz for 1MB of transfer. No wonder I think my 2.8E from 2003, before all your multicore quibbling, is still just as decent as modern parts, with its 2 cores not quite bragged about. They are simply organizing CPUs more than ever and reducing the die size. Keeping the performance of the first dual cores is as far as it may go for years, until the MHz is increased; 2800MHz is as wide as it goes. Organizing it does bring performance, like defragging a drive. Furthermore, if one piece of software knows how to use it, other software running simultaneously loses, just like an old "hack me cuz I am errored forever" single core. Single cores are done; clean up that room with at least a 2-thread CPU. Everything I've run with 2 or more threads is very strong, stable, and secure, and won't blip when a light switches on in the same room on the same circuit. The rest is marketing; they have to say something, don't they... Reply
  • gochichi - Saturday, April 26, 2008 - link

    These processors are all "good," but this level of performance is not the "holy grail"; I'd like to see performance keep improving over time, as I'm used to.

    I recently switched to Intel, and you know, I'm happy with their products. I think AMD needs to get moving; their product's weakness isn't good for the industry.

    Both NVIDIA and Intel have no competition; their job is just to maximize profits on old research and development rather than actually competing under pressure.

    Reply
  • hoelder - Friday, April 25, 2008 - link

    If AMD would create an express team on the processor side, take the best die implementation they currently have, lock the doors, and cast new dies 24/7 until they have a mass-producible quad-core Opteron or Phenom X4 at 4GHz, then they'd be where they should be today. Because in the end it's the CPU clock. And with every die shrink, adding cores and cache is plausible. Intel can afford that of course, but also has a long tail of people involved. With a smaller team you can create miracles, and with good enthusiasm at the exec level, that works. Reply
  • haplo602 - Thursday, April 24, 2008 - link

    What about Linux kernel compilation with -j2/3/4/6/8? I'd like to see that comparison ...

    Reply
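For anyone who wants to run that comparison themselves, a rough timing harness might look like the sketch below. The `make` invocation and the assumption that it runs from an already-configured kernel tree are mine, not from the thread:

```python
import shlex
import subprocess
import time

def time_command(cmd):
    """Run a command, discard its output, and return wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start

def sweep_make_jobs(job_counts, make_cmd="make"):
    """Time `make -jN` for each N, running `make clean` between passes."""
    results = {}
    for n in job_counts:
        subprocess.run(shlex.split(f"{make_cmd} clean"), check=False,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        results[n] = time_command(shlex.split(f"{make_cmd} -j{n}"))
    return results

# Example, run from a configured kernel source tree:
# print(sweep_make_jobs([2, 3, 4, 6, 8]))
```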
  • MrMilli - Thursday, April 24, 2008 - link

    Your power usage chart is a bit misleading.
    In the article you mention that Windows Media Encoder is hardcoded to support only a power-of-two number of cores, yet you use it to measure power on the Phenom X3, so the third core is basically just idling.
    I think that's the reason there is such a big gap between the X3 and X4.
    Reply
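The power-of-two restriction described above can be modeled in a few lines. This is a hypothetical illustration of the behavior, not Windows Media Encoder's actual code:

```python
def usable_cores(total_cores):
    """Largest power of two <= total_cores: the cores an encoder
    hardcoded to powers of two would actually load (hypothetical model)."""
    p = 1
    while p * 2 <= total_cores:
        p *= 2
    return p

# On a triple-core X3 such an encoder loads only two cores, so the
# third core sits idle during the power measurement.
for n in (1, 2, 3, 4):
    print(f"{n} cores -> {usable_cores(n)} used")
```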
  • enjoycoke - Thursday, April 24, 2008 - link

    I think Intel won't be releasing its new platform until the third quarter; it has had such a good run with its current platforms that it can afford a bit of a breather against AMD and other rivals.
    It really needs constant profits to keep its stock price in line, and that's what matters most.

    Reply
  • Archibald - Thursday, April 24, 2008 - link

    It appears that if one ignores the 1-10% performance increases, a dual core is plenty for the casual power user (i.e. non-gamer). After all, the multi-core silicon hardware is here, but the software arena is a chaotic battlefield:
    Justin Rattner, an Intel Senior Fellow recently promoted to take over Intel R&D, has been quoted as saying that the clock wars of the past two decades will be replaced with "core wars" over the next few decades. Intel and Microsoft are working feverishly on developing concurrent programming languages to effectively take advantage of the concurrent processor architectures that represent the future of the industry. Multicore processors require concurrent software: "The Free Lunch Is Over" (for software developers). For more, see http://tinyurl.com/62986h.

    I tend to favor AMD's approach with 780/790 and Brisbane, although marketing of this combo might be a challenge, from an engineering point of view it may be a decent (quite usable) design.

    Comment: Is the UI design of this blog from the Stone Age?
    Reply
  • derek85 - Thursday, April 24, 2008 - link

    I think AMD just released the perfect CPU to go with their 780G platform for a HTPC:

    - Low cost
    - Lower power consumption
    - HT3 to boost graphics memory bandwidth for better performance
    - Multi-core horsepower for better encoding/decoding

    Phenom is much mightier than Athlon X2 when it comes to multimedia. Now there is just no more reason to choose a similarly priced K8 over this.
    Reply
  • ap90033 - Thursday, April 24, 2008 - link

    Wow, "perfect"? Slower in gaming, check. Slower clock-for-clock than Intel, check. Phenom 9850 costs more than a Q6600, check. lol Reply
  • derek85 - Thursday, April 24, 2008 - link

    If it were for a gaming PC I would agree ... but I think I said HTPC. The cheapest X3 is only $150, $50+ cheaper than a Q6600, and will do the job just fine with less heat and power consumption. Reply
  • Locutus465 - Thursday, April 24, 2008 - link

    How do you justify this position when comparing the platforms as a whole, particularly taking into consideration budget platforms with integrated graphics? Reply
  • Roy2001 - Thursday, April 24, 2008 - link

    For an HTPC, a 780G paired with a 5600+ is enough. Actually, even an E2200 can decode any HD/BD movie, so who cares about the chipset?

    For multicore usage, get a Q6600/Q9450; gamers want an E8400/E7200. Period.
    Reply
  • Nihility - Wednesday, April 23, 2008 - link

    Great review!
    The power consumption test seems to indicate the X3 requires less power per core than the X4; a 25% decrease in system power consumption after removing one of four cores points that way.
    I was hoping for an overclocking test. Naturally I assume it would be just as bad as on the X4, but given the lower power requirements I'm curious whether AMD managed to improve things.
    Reply
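The per-core inference above can be made concrete with made-up numbers. None of these wattages come from the review; they only illustrate why a 25% system-wide drop implies more than a 25% drop at the CPU:

```python
# Hypothetical wattages for illustration only.
system_x4 = 160.0                 # quad-core system under load (assumed)
system_x3 = system_x4 * 0.75      # 25% lower, per the observation
platform = 70.0                   # non-CPU overhead, assumed constant

cpu_x4 = system_x4 - platform     # 90.0 W across 4 cores
cpu_x3 = system_x3 - platform     # 50.0 W across 3 cores
per_core_x4 = cpu_x4 / 4          # 22.5 W per core
per_core_x3 = cpu_x3 / 3          # ~16.7 W per core

# Fixed platform power means the CPU itself shed more than 25%, so
# each remaining core draws less than an X4 core did.
print(per_core_x4, round(per_core_x3, 1))
```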
  • Schugy - Wednesday, April 23, 2008 - link

    The Phenom X3 is a great CPU, just like the X4, but there's a lot of outdated software out there. It's obvious that the Phenom likes software that is frequently updated with its capabilities in mind. The Phenom has a lot of horsepower for Nero Recode (imagine 45nm and 3GHz or more), the MainConcept encoder, LDAP, UT3, and AutoMKV, but some software makers are extremely good at wasting it. Reply
  • bpl442 - Wednesday, April 23, 2008 - link

    What about NVIDIA GeForce 7100 / nForce 630i based motherboards like the Gigabyte GA-73PVM-S2?


    "Unfortunately, Intel is in a not-so-great position right now when it comes to its platforms. It can't turn to ATI anymore for integrated graphics solutions, and with a full out war on NVIDIA brewing, it's left alone to provide chipsets for its processors as NVIDIA's latest IGP solutions are not yet available for Intel CPUs."
    Reply
  • nubie - Wednesday, April 23, 2008 - link

    I think they are referring to the 8200/8300 series, and the like with full HD decode, also possibly 65nm/55nm so that the power usage is less.

    nVidia isn't ready yet for Intel. (Personally I don't care for onboard graphics, mainly because of the lack of TV outputs, and it seems those are being skipped over entirely in favor of HDMI/DisplayPort, if anything. I did once own a 6150 with S-Video/Composite/Component output, but that is the exception to the rule. And still no HD decode.)
    Reply
  • FodderMAN - Wednesday, April 23, 2008 - link

    Why does no one test these at faster HT speeds?

    I still stand by the statement I have made time and again that these processors are being castrated by the 200MHz FSB/HT speed. If you run a classic Athlon 64 at 233, 250, or even 266 (when attainable), these processors start to really outshine Intel's procs, and I can only see Phenom being that much stronger. Now, I know this is considered an overclock, as AMD ships zero procs at the higher bus speeds. It would help AMD's revenues to add product codes for procs running various FSB/HT speeds, even though they would shrink the multiplier range, as these cores seem to run into stability problems at about 2.6-2.8GHz.

    PLEASE, someone do a test at higher bus speeds so we can refer AMD to published numbers. I guess for the time being I will have to wait for a good 6-phase-power board before I buy my Phenom and do the testing myself. But I always had great luck with the classic Athlon 64s when upping the FSB/HT speeds and lowering the multipliers. The Athlon 64 always showed great promise at a 250-266 FSB/HT, and I can only imagine how well it would run at 333.

    Time will tell,
    The Goat
    Reply
  • retrospooty - Thursday, April 24, 2008 - link

    "a classic athlon64 at 233, 250, or even 266 ( when attainable ) these processors start to really outshine Intel’s procs."

    There isn't any situation at all these days where AMD outshines an Intel CPU. If you OC the AMD to 266, you can OC the Intel to a 500+ FSB, and AMD isn't outshining anything: not in speed, performance, power, overclockability, or bang for your buck. Intel is winning by every measurement.
    Reply
  • Assimilator87 - Wednesday, April 23, 2008 - link

    According to the guys at XS, raising the HT reference clock doesn't affect Phenom's performance in any way. It's the effective HT clock that matters, which is currently 1800MHz using a multiplier of 9. Reply
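The reference-clock vs. effective-clock distinction is just a multiplication; the stock figures come from the comment above, while the function name is my own:

```python
def effective_ht_mhz(ref_clock_mhz, ht_multiplier):
    """Effective HyperTransport link clock = reference clock x HT multiplier."""
    return ref_clock_mhz * ht_multiplier

# Stock Phenom: 200 MHz reference x 9 = 1800 MHz effective HT clock.
# Raising only the reference clock raises this product too, which is
# why boards drop the multiplier to keep the link near spec.
print(effective_ht_mhz(200, 9))  # 1800
```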
  • JarredWalton - Wednesday, April 23, 2008 - link

    I've seen nothing to suggest a faster HyperTransport bus would help AMD much. You need to compare at the same CPU speed; if you raise the HT bus to 250 MHz that represents a 25% overclock of the CPU as well, so of course it helps performance a lot. Try comparing:

    Athlon X2 4600+ 2.4GHz
    Run at 200 HTT and 12X CPU vs. 240 HTT and 10X CPU

    Athlon X2 4800+ 2.5GHz
    Run at 200 HTT and 12.5X CPU vs. 250 HTT and 10X CPU
    (Note: the 12.5X multiplier vs. 10X may have an impact - half multipliers may not perform optimally.)

    Athlon X2 5000+ 2.6GHz
    Run at 200 HTT and 13X CPU vs. 260 HTT and 10X CPU

    Now, the one thing you'll also have to account for is memory performance. At default settings (i.e. DDR2-800), you get different true memory speeds. The 12X CPU will end up at a true DDR2-800; the 12.5X will end up at DDR2-714 (CPU/7 yields 357MHz base memory speed); the 13X will result in DDR2-742 (again, CPU/7 yields 371 MHz base memory speed). For the "overclocked HT bus" setups, you'll need to select the same memory dividers to get apples-to-apples comparisons, which depending on motherboard may not be possible.

    Unless you can do all of the above, you cannot actually make any claims that HyperTransport bus speeds are the limiting factor. I imagine you may see a small performance boost from a faster HT bus with everything else staying the same, but I doubt it will be more than ~3% (if that). HT bus only communicates with the Northbridge (chipset), and the amount of traffic going through that link is not all that high. Remember, on Intel that link to the chipset also has to handle memory traffic; not so on AMD platforms.
    Reply
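The divider arithmetic above can be reproduced in a few lines: the memory clock is derived from the CPU clock by an integer divider, ceil(cpu_mhz / 400) for a DDR2-800 setting, which is why the 12.5X and 13X parts land below a true DDR2-800. This is a sketch of the scheme as described in the comment, not BIOS code:

```python
from math import ceil

def ddr2_effective(cpu_mhz, target_base_mhz=400):
    """Base memory clock = CPU clock / ceil(CPU / target);
    the DDR2 rating is double the base clock."""
    divider = ceil(cpu_mhz / target_base_mhz)
    base = cpu_mhz // divider          # truncated MHz, as quoted above
    return 2 * base

print(ddr2_effective(2400))  # 800 -> true DDR2-800 (12 x 200 MHz)
print(ddr2_effective(2500))  # 714 -> DDR2-714 (2500/7 = 357 MHz base)
print(ddr2_effective(2600))  # 742 -> DDR2-742 (2600/7 = 371 MHz base)
```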
  • ghitz - Wednesday, April 23, 2008 - link

    The e8400 performance/power usage is outstanding and will be great value once the G45 boards trickle in. I can't wait for those G45s!
    Reply
  • ap90033 - Wednesday, April 23, 2008 - link

    So AMD STILL hasn't caught up. Good to know. Not that I'm surprised... Reply
  • natebsi - Wednesday, April 23, 2008 - link

    The bottom line is: AMD's newest CPUs are bested in nearly every single benchmark by an Intel CPU that's been out for, what, a year?

    I have no love/hate relationship with either Intel or AMD, but that's just sad. I predict many more losing quarters for AMD, though I don't know how many more they can take...
    Reply
  • Griswold - Thursday, April 24, 2008 - link

    Thanks for that null-posting. Reply
  • najames - Wednesday, April 23, 2008 - link

    As a longtime AMD-only user, I just bought an Intel Q6600 on impulse from Frys.com for only $180. I was looking at a 780G solution and thought I'd get the Intel quad and a similar Intel-based board for doing video processing work. Oops: I found out the only current Intel G35 mATX board is from ASUS. ONE BOARD. Huge selection to choose from, huh?

    I'll either sell/return the unopened CPU or buy a P35 board and graphics card. I could deal with a slightly slower AMD 9550 CPU and a better platform instead, tough choice.
    Reply
  • strikeback03 - Wednesday, April 23, 2008 - link

    I needed parts for a new lab system last week and went with non-integrated graphics and an add-on card. Integrated graphics would have been fine for the application, but when the board plus card cost less than the ASUS G35 board (and are full-size ATX as well, which is useful), the decision wasn't too hard. Reply
  • Staples - Wednesday, April 23, 2008 - link

    Intel graphics have always been terrible. AMD definitely has the advantage in integrated graphics, and even though their CPUs cannot compete, I still find myself considering one just for the graphics options. I am glad this review points out that Intel graphics are just not acceptable. Whether Intel will change is a big unknown; probably not.

    I find the added emphasis on power consumption over the last year a great development. With the price of energy these days, it is something I factor into my purchases. SSE4 and lower power consumption are the reasons I am holding out for a Q9450. Hopefully by the time it actually goes into mass production (ideally in the next two months), a decent integrated option will be out for the platform.
    Reply
  • 0roo0roo - Thursday, April 24, 2008 - link

    terrible? i used an intel 950 integrated graphics with some 1080p content, it decoded just fine with an e2200. Reply
  • derek85 - Thursday, April 24, 2008 - link

    Terrible? Yes, terrible. Besides the lame hardware, they can't even write proper drivers; see how many rendering problems they have in games. Reply
  • Locutus465 - Wednesday, April 23, 2008 - link

    I just upgraded my system to the following last night (running Vista Ultimate 64-bit).

    AMD Phenom 9850be (at stock speed for now, with packaged heatsink).
    4GB OCz DDR2 800 memory (at stock speed)
    ASUS M3A32-MVP Deluxe / WiFi-AP AMD 790FX
    DIAMOND Viper Radeon HD 3870
    Soundblaster x-fi Fatal1ty

    The rest of my system stayed the same: primary HDD = WD Caviar SATA 7200RPM, secondary = Seagate SATA 7200RPM, page file running off the secondary disk rather than the system disk, etc.

    I can tell you right now that the all-AMD platform is very strong. My primary display is a 19" LCD running 1280x1024, and I run all games with all graphics options set to max and never drop below 70FPS in any game I know how to pull the FPS for. I'm able to run Crysis very smoothly at the same resolution with medium graphics settings; I have not yet tried cranking things up, though. Additionally, I've had to take some work home that dealt with converting a 3GB pipe-delimited file into several smaller files and then converting those into valid CSV files (using Excel); on my new system this process was very quick. For your reference, below is a list of every game I've tried on my system; they ALL play silky smooth.

    Doom 3
    Quake 4
    Age of Empires III
    F.E.A.R
    Oblivion
    Half Life 2
    Half Life 2 EP1
    WoW
    *Crysis

    *only game I don't have graphics/audio settings fully maxed on.

    Since upgrading I've also taken to gaming on my 720P Toshi DLP using the DVI to HDMI converter packaged with my video card and audio running through my AVR via multi-channel analog inputs. All I have to say is damn is that fun!!!
    Reply
  • Ensoph42 - Wednesday, April 23, 2008 - link

    I don't understand why every Phenom review I've seen uses DDR2-800. I thought one of the perks was that you were supposed to use DDR2-1066 to get max performance. Someone explain this to me. Reply
  • niva - Wednesday, April 23, 2008 - link

    I have a Phenom 9600 with 8GB of RAM. I had major issues getting the RAM to 1066 and remaining stable, so I simply stayed with 800 for stability reasons. Then again, I don't play games much, so I'm not concerned about squeezing out an extra 1-5% performance at the expense of stability.

    Of course, I didn't play with this too much; maybe I was doing something wrong, but I've not found a good guide saying exactly what I need to set for the system to remain stable.
    Reply
  • Ensoph42 - Wednesday, April 23, 2008 - link

    I hear you. Although, is it that your memory is rated at 800 and won't OC to 1066, or rated for 1066 but just isn't stable at that speed, period?

    However, the GIGABYTE GA-MA78GM-S2H used in the review lists a memory standard of DDR2-1066. One of the selling points of the Phenom, I believed, was that the memory controller supports DDR2-1066.

    I found this link that takes a look at the performance differences. I haven't given it a close read since it's late, and it seems limited, but:

    http://www.digit-life.com/articles3/mainboard/ddr2...

    Reply
  • perzy - Wednesday, April 23, 2008 - link

    Well, unless the software is as well written as Unreal Engine 3 (Tim Sweeney is a god), in 95% of the programs the average Joe uses (including Windows itself) there is very little advantage even in going from single core to dual core!
    Which brings me to my question: what's really going on with the "heat wall" or "frequency wall" or whatever you call it that Intel hit so hard in 2004-ish? (Remember the throttling, superhot 3.8GHz P4s?)
    What all users really need is higher frequency! Why aren't we getting it?
    Reply
  • Clauzii - Wednesday, April 23, 2008 - link

    The frequency wall can be thought of as cooking the electrons off the die surface, which is not so good: high frequency = high heat. Now you're thinking, "But IBM got 6GHz?" That's a different design philosophy: simpler pipeline, faster frequency.

    Until ALL programs and OSes support multithreading, we are bound to single-threaded execution, or the pseudo-parallelism of Hyper-Threading, which can do SOME multithreading depending on the code and data used.

    Even if I'd used an 8GHz machine to write this on, I would still only see the cursor blink once per second.

    What I think we need are (even more!) intelligent CPUs (and GPUs). If a CPU or GPU knew approximately what performance and power a program needs, a more sophisticated power scheme could be possible.

    Until then, All I know is that evolution goes forward. Not always fast, but forward :)
    Reply
  • Nehemoth - Wednesday, April 23, 2008 - link

    ...(or give up 200MHz and get a quad-core X3 9550 at the same price)...

    Should be quad-core X4, not X3.
    Reply
  • MrBlastman - Wednesday, April 23, 2008 - link

    I'd like to see the X2 6400 included in the benchmarks as well. I see they might be trying to keep the pricing in line, but for an AMD user looking to upgrade, all I'm really considering right now is a 6400, X3, or X4; nothing else.

    The UT3 benchmarks shed some hope for the Phenoms: they show that with a properly coded game, the X4s can remain competitive. I still wish to this day they'd release 3+ GHz Phenoms :(
    Reply
  • ImmortalZ - Wednesday, April 23, 2008 - link

    Any MPEG-4 AVC video encoded at Profile 4.1 or lower is fully accelerated by today's GPUs. Scene releases from the past few months conform to this, and even then, most older releases are still compatible provided you use the proper filters. Reply
  • ImmortalZ - Wednesday, April 23, 2008 - link

    * By Profile I meant Level. Reply
  • Anand Lal Shimpi - Wednesday, April 23, 2008 - link

    The problem is that without support in 100% of titles, it's not something you can really count on. If you go with too slow a CPU, hoping to rely on GPU acceleration, and then try to play a rip that isn't accelerated, you're just out of luck.

    Regardless, I'm just waiting for the day when all platforms feature GPU acceleration :)

    Take care,
    Anand
    Reply
  • ViRGE - Wednesday, April 23, 2008 - link

    "Now if you pirate your HD movies then none of this matters, as GPU accelerated H.264 decode doesn't work on much pirated content."

    Sure it does; the CyberLink H.264/MPEG-2 decoder is a complete DirectShow-compliant module. Anything H.264 that can be played in a DirectShow application is accelerated by it, both legit and pirated content.
    Reply
  • 0roo0roo - Thursday, April 24, 2008 - link

    It doesn't matter either way; pirated H.264 content tends to be lower-bitrate versions of the full HD rip.
    Even at full bitrate it doesn't matter, as processors have reached the point where even budget dual cores can decode H.264 quite well.
    Reply
  • ChronoReverse - Wednesday, April 23, 2008 - link

    And on the free software side, Media Player Classic Home Cinema (what a mouthful) now has GPU-accelerated decode too (though only for newer video cards).

    While not all pirated content is encoded in a manner that can be accelerated, the functionality is available now.
    Reply
