Audio/Video Encoding

MusicMatch Jukebox 7.10

[Graph: MusicMatch Jukebox 7.10]

DivX 6 with AutoGK

Armed with DivX 6 and the AutoGK front end for Gordian Knot, we put all of the processors to work encoding a chapter from Pirates of the Caribbean. We set AutoGK to target 75% of the quality of the original DVD rip and did not encode audio; all DivX 6 settings were left at their defaults.

[Graph: DivX 6.0 w/ AutoGK 1.60]

Windows Media Encoder 9

To finish up our look at video encoding performance, we have two tests, both involving Windows Media Encoder 9. The first is WorldBench 5's WMV9 encoding test.

[Graph: Microsoft Windows Media Encoder 9.0 (WorldBench 5)]

Once we crank up the requirements a bit and start doing HD-quality encoding under WMV9, single-core performance drops dramatically:

[Graph: Microsoft Windows Media Encoder 9.0 (HD encoding)]

109 Comments

  • archcommus - Tuesday, August 2, 2005 - link

Call me old fashioned, but since when was $350 affordable for a CPU? I prefer not to go over $200. :D
  • ceefka - Tuesday, August 2, 2005 - link

The argument "with Intel you'd need a new motherboard" is invalid if you haven't built anything yet and are starting from scratch. That easily leaves the options open for anyone to choose either. I agree that if your budget can handle it, you should at least consider the X2.

People still complaining about the price of the X2 should realize that this is no ordinary gaming CPU, and the newest tech never came cheap. Since the Pentium D is like two cores slapped together, it shouldn't cost any more than it does.

I wonder whether, if Intel's Pentium D had a slick architecture like the X2's, it would still be as cheap as the current Pentium Ds. Perhaps it's not the core itself that increases the cost so much as the tech connecting the two cores the way the X2 does. Yes, that's included in the price of an X2 ;-)
  • SDA - Tuesday, August 2, 2005 - link

Actually, you are wrong. It is the core itself that increases the cost. A larger core means fewer cores per wafer and (generally) more defective cores per batch: if the probability of a defect in any given square millimeter is one in X, a larger die accumulates more chances of catching one.

The technology connecting the two is an R&D cost, and that is paid back in the A64's price. I suppose in a sense it's paid back in every A64's price, but the DIFFERENCE between the A64 and the A64 X2 has nothing to do with slick technology.
  • coldpower27 - Tuesday, August 2, 2005 - link

There is also something to keep in mind: why shouldn't a processor with the 199mm2 Toledo core cost 75% more than one with the 114mm2 San Diego core? You still want to get as much profit as possible per silicon wafer. It doesn't help your bottom line to sell more silicon area at a lower price.

Neither Intel's nor AMD's processors double in die size when going dual core:

Prescott = 112mm2, Smithfield = 206mm2: 84% increase
    San Diego = 114mm2, Toledo = 199mm2: 75% increase
    Venice = 84mm2, Manchester = 147mm2: 75% increase

Since Intel is basically just slapping two cores together with arbiter logic, if one core on the die is defective, they can salvage a Prescott from it. AMD can't do this, due to their dual-core implementation, though if the defect is in the cache, they can still sell the chip as an Athlon 64 X2 3800+, 4200+, or 4600+.

AMD's pricing structure currently allows for higher margins on dual-core processors, while for Intel it is the opposite: margins are higher on Prescott and Prescott-2M. They won't have to put up with this situation much longer, though, as Intel has economical NetBurst dual cores coming on the 65nm process. There are more interesting dual cores on that process as well.
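
    The percentages above are simple area ratios; a minimal Python sketch, using only the die sizes quoted in this comment, reproduces them:

        # Die sizes in mm^2 as quoted in the comment above.
        die_sizes = {
            "Prescott":  112, "Smithfield": 206,  # Intel: single -> dual core
            "San Diego": 114, "Toledo":     199,  # AMD (1MB cache per core)
            "Venice":     84, "Manchester": 147,  # AMD (512KB cache per core)
        }

        for single, dual in [("Prescott", "Smithfield"),
                             ("San Diego", "Toledo"),
                             ("Venice", "Manchester")]:
            increase = die_sizes[dual] / die_sizes[single] - 1.0
            print(f"{single} -> {dual}: {increase:.0%} increase")

        # Output:
        # Prescott -> Smithfield: 84% increase
        # San Diego -> Toledo: 75% increase
        # Venice -> Manchester: 75% increase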
  • masher - Tuesday, August 2, 2005 - link

    > "...why shouldn't a processor with with a die size of 199mm2 Toledo core cost 75% more then the 114mm2 San Diego...It doesn't really help your bottom line if you sell more silicon area for a lower price to me."

Because there are fixed unit costs in addition to the raw cost of processing a square millimeter of silicon, and they add up to a lot more than the raw cost itself. You have to package the silicon, test it, pack it, and ship it, not to mention R&D it, market it, and sell it. Those costs predominate in most cases, which is why AMD and Intel don't cut their prices in half the moment they move to the next lithography node.

Given a zero defect rate, a 75% larger die should be maybe a third more costly to sell. But that larger die also increases the expected defects per die by 75% (roughly) as well.

Example: assuming a 70% yield (30% defect rate) on a single-core chip, the 75% larger die catches about 75% more defects, so you'd expect around a 48% yield on the dual-core version (1 - 1.75 × 0.30 = 0.475). You also fit only 1/1.75 ≈ 57% as many of the larger dies on each wafer, so every wafer gives you roughly (0.57)(0.48)/(0.70) ≈ 39% as many good dual-core chips as single-core ones.

It gets much worse at low yields. For instance, a 50% single-core yield translates to a dual-core yield of a pitiful 12.5% (1 - 1.75 × 0.50)! So when defects are high, you have to stick with small die sizes.
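
    masher's figures follow from a simple linear approximation in which defects scale with die area; here is a minimal Python sketch of that arithmetic. The 45% silicon share in the last function is a hypothetical cost split, only there to illustrate where "a third more costly" can come from:

        # Linear defect model from the comment above: a die 75% larger
        # catches roughly 75% more defects. (Real fabs use Poisson-style
        # yield models, but the idea is the same.)
        AREA_RATIO = 1.75  # dual-core die area relative to single-core

        def dual_core_yield(single_yield: float) -> float:
            """Yield of the larger die when defects scale with area."""
            return max(0.0, 1.0 - AREA_RATIO * (1.0 - single_yield))

        def relative_good_chips(single_yield: float) -> float:
            """Good dual-core chips per wafer as a fraction of good
            single-core chips: fewer die sites (1/1.75) times the ratio
            of the two yields."""
            return (1.0 / AREA_RATIO) * dual_core_yield(single_yield) / single_yield

        def relative_unit_cost(silicon_share: float) -> float:
            """Unit cost of the larger die relative to the smaller one if
            only the silicon portion of unit cost scales with area
            (silicon_share is a hypothetical cost split)."""
            return 1.0 + silicon_share * (AREA_RATIO - 1.0)

        print(dual_core_yield(0.70))      # 0.475 -> "around a 48% yield"
        print(relative_good_chips(0.70))  # ~0.39 good chips per wafer
        print(dual_core_yield(0.50))      # 0.125 -> the "pitiful 12.5%"
        print(relative_unit_cost(0.45))   # ~1.34 -> "a third more costly"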
  • masher - Tuesday, August 2, 2005 - link

    > "Neither Intel's or AMD's procesor are double for Dual core die size..."

True enough; I spoke loosely. Intel is considerably closer to double, though, which was my point. All else being equal, it should be AMD, not Intel, that can provide the cheaper second core.

    > "Though since Intel is just basically slapping two cores together with arbiter logic, if one core is defective on the silicon wafer, they can salvage a Prescott core from it..."

    An excellent point, and that may indeed be a larger factor in the price differential than the defect rate.
  • coldpower27 - Tuesday, August 2, 2005 - link

I don't really call a 9-point difference between those increases that much, but I guess it's all a matter of perspective. At the end of the day, though, the difference in die size between Intel's Smithfield and AMD's Toledo is only about 4%.

There are also other cost advantages that Intel enjoys. Remember, all of Intel's 90nm production is on 300mm wafers, which allows for less waste, more dies per wafer, and reduced resource use, while AMD won't be there until Q1 2006, when commercial production begins at Fab 36 and their Chartered partner fab comes online.

AMD also uses SOI technology, which we've seen has benefits in curbing leakage, but we don't have a good idea of how much it adds to the cost of a wafer. From what I have seen, since AMD hasn't made any proclamations about how inexpensive it was to implement, cost is not a strong point of this technology.

  • masher - Wednesday, August 3, 2005 - link

    > "I don't really call a difference of increases of 9% that much"

Well, a 12% differential (0.84/0.75 = 1.12) to be technical... but it's not huge. The point was just that it exists, and that it favors AMD, not Intel. Sans all the other factors, of course.

    > "remember all 90nm production is on 300mm wafer processing, which allows for less waste..."

Very true... and the wasted-edge fraction gets worse with increased die size, too.

    > "AMD's also uses SOI technology...we don't have a good idea on how much this technology adds to the cost of the wafer"

A year ago, SOI wafers were triple the cost of bulk wafers. Probably a good bit less now... and the raw wafer cost doesn't include the processing and consumables costs. Finally, Intel's wafers are hardly bulk-grade either.
  • coldpower27 - Tuesday, August 2, 2005 - link

    Addendum: Coming on the 65nm process :D
  • ceefka - Tuesday, August 2, 2005 - link

Sounds logical. What I wanted to stress is: there is still the difference in development costs between the Pentium D and the X2, the D being cheaper to develop than the X2, and then of course there are the volumes in which Intel can sell its double whopper.
