The Dark Knight: Intel's Core i7

by Anand Lal Shimpi & Gary Key on 11/3/2008 12:00 AM EST
74 Comments


  • anand4happy - Sunday, February 08, 2009 - link

    Saw many things, but this is something different.

    sd4us.blogspot.com/2009/01/intel-viivintel-975x-express-955x.html
    Reply
  • nidhoggr - Monday, November 10, 2008 - link

    I can't find that information on the test setup page. Reply
  • nidhoggr - Monday, November 10, 2008 - link

    test not text :) Reply
  • puffpio - Wednesday, November 05, 2008 - link

    Would you guys consider re-benchmarking?
    From the x264 changelog, since the Nehalem-specific optimizations:
    "Overall speed improvement with Nehalem vs Penryn at the same clock speed is around 40%."
    Reply
  • anartik - Wednesday, November 05, 2008 - link

    Good review, and better than Tom's overall. However, Tom's stumbled on something that changed my mind about gaming with Nehalem. While Anand's testing shows minimal performance gains (and came to the "not good for games" conclusion), Tom's approached it with 1-4 GPUs in SLI or Crossfire. All I can say is the performance gains with Nvidia cards in SLI were stunning. Maybe the platform favors SLI, or Nvidia had a driver advantage in licensing SLI to Intel. Either way, Nehalem and SLI smoked ATI and the current 3.2GHz Extreme quad across the board. Reply
  • dani31 - Wednesday, November 05, 2008 - link

    I know it wouldn't change any conclusion, but since we're discussing bleeding-edge Intel hardware, it would have been nice to see the same in the AMD testbed.

    Using a SB600 mobo (instead of the acclaimed SB750) and an old set of drivers makes it look like the AMD numbers were simply pasted from an old article.
    Reply
  • Casper42 - Tuesday, November 04, 2008 - link

    Something I think you guys missed in your article/conclusion is the fact that we're now able to pair a great CPU with a pretty damn good north/south bridge AND SLI.

    I found that the 680/780/790 feature set is plainly lacking, and that the Intel ICH9R/10R always seems to perform better and has more features. If there's any doubt, look at Matrix RAID vs. nVidia's RAID. Night and day difference, especially with RAID5.

    The problem with the X38/X48 was you got a great board but were effectively locked into ATI for high end Gaming.

    Now we have the best of both worlds. You get ICH10R, a very well performing CPU (even the 920 beats most of the Intel Quad Core lineup) AND you can run 1/2/3 nVidia GPUs on the machine. In my opinion, this is a winning combination.


    The only downside I see is board designs seem to suck more and more.

    With Socket 1366 being so massive and 6 DIMM slots on the enthusiast/gamer boards, we're seeing not only 6 expansion slots (down from the standard of 7), but on most boards I have seen pics of, the top slot is an x1 so they can wedge it next to the X58 IOH, which means you're left with only 5 slots for other cards. Using 3 dual-slot cards is out of the question without a massive 10-slot case (of which there are only like 3-5 on the market), and even if you can wedge 2 or 3 dual-slot cards into the machine, you have almost zero expansion slots left should you ever need them.

    Then we get to all the cooling crap surrounding the CPU. ALL these designs rely on a traditional top-down cooler, and if you decide to use a highly effective tower cooling solution, all the little heatsink fins on the northbridge and power regulators around the CPU get very little or no airflow. Now you're in there adding puny little 40/60mm fans that produce more noise than airflow, not to mention that the DIMMs are hardly ever cooled in today's board designs.
    Call me a cooling purist if you will, but I much prefer traditional front-to-back airflow, and all this side-intake/top-exhaust stuff just makes me cringe. I personally run a Tyan Thunder K8WE with 2 Hyper6+ coolers, and the procs and RAM are all cooled front to back. Intake and exhaust are 120mm, and I have a bit of an air channel in which that airflow never goes near the expansion card slots below, which by the way have a 92mm fan up front pushing air in across the drives and another 92mm fan clipped onto the expansion slots in the back pulling it back out.

    I don't know how to resolve these issues, but I think someone surely needs to, because IMHO it's getting out of control.
    Reply
  • lemonadesoda - Tuesday, November 04, 2008 - link

    "Looking at POV-Ray we see a 30% increase in performance for a 12% increase in total system power consumption, that more than exceeds Intel's 2:1 rule for performance improvement vs. increase in power consumption."

    You can't use "total system power"; you must make the best estimate of CPU power draw. Why? Because imagine if you had a system with 6 sticks of RAM, 4 HDDs, etc.: you would have ever-increasing power figures that would make the ratio of increased power consumption (a/b) smaller and smaller!

    If you take your figures and subtract (a guesstimate of) 100W for non-CPU power draw, then you DON'T get the Intel 2:1 ratio at all!

    The figures need revisiting.
    Reply
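The ratio argument in the comment above can be sketched numerically. In this minimal Python sketch, the 250W/280W total-system figures and the 100W non-CPU estimate are illustrative assumptions chosen to match the quoted percentages, not measurements from the article:

```python
# Sketch of the ratio argument: the performance-vs-power ratio depends on
# whether you use total-system power or an estimate of CPU-only power.
# All wattages below are illustrative assumptions, not the article's data.

def perf_per_power_ratio(perf_gain, power_before, power_after):
    """Ratio of fractional performance gain to fractional power increase."""
    power_gain = (power_after - power_before) / power_before
    return perf_gain / power_gain

PERF_GAIN = 0.30       # 30% POV-Ray speedup, from the quoted passage
SYSTEM_BEFORE = 250.0  # assumed total system draw, watts
SYSTEM_AFTER = 280.0   # +12%, matching the quoted figure
NON_CPU = 100.0        # guesstimated non-CPU draw, as the comment suggests

# Using total system power: 0.30 / 0.12 = 2.5, above Intel's 2:1 rule.
print(round(perf_per_power_ratio(PERF_GAIN, SYSTEM_BEFORE, SYSTEM_AFTER), 2))

# Subtracting the fixed 100W: CPU power goes 150W -> 180W, a 20% increase,
# so the ratio falls to 0.30 / 0.20 = 1.5, below the 2:1 rule.
print(round(perf_per_power_ratio(PERF_GAIN,
                                 SYSTEM_BEFORE - NON_CPU,
                                 SYSTEM_AFTER - NON_CPU), 2))
```

Whether the 2:1 rule holds thus depends entirely on how much of the system's draw you attribute to components other than the CPU.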
  • AnnonymousCoward - Thursday, November 06, 2008 - link

    Performance vs. power appears to increase linearly with HT. Using the 100W figure for non-CPU draw means a 25% power increase, which is close to the 30% performance gain.

    Unless we're talking about servers, I think looking at power draw per application is silly. Just do idle power, load power, and maybe some kind of flops/watt benchmark just for fun.
    Reply
  • silversound - Tuesday, November 04, 2008 - link

    Great article. Tom's Hardware reviews are always pro-Intel and Nvidia; not sure if they get paid to support them. AnandTech is always neutral, thanks! Reply
  • npp - Tuesday, November 04, 2008 - link

    Well, the funny thing is THG got it all messed up, again - they posted a large "CRIPPLED OVERCLOCKING" article yesterday, and today I saw a kind of apology from them - they seem to have overlooked a simple BIOS switch that prevents the load through the CPU from rising above 100A. Having a month to prepare the launch article, they didn't even bother to tweak the BIOS a bit. That's why I'm not taking their articles seriously - not because they are biased towards Intel or AMD, but because they are simply not up to the standards (especially those here @anandtech). Reply
  • gvaley - Tuesday, November 04, 2008 - link

    Now give us those 64-bit benchmarks. We already knew that Core i7 would be faster than Core 2; we even knew how much faster.
    Now, it was expected that 64-bit performance would be better on Core i7 than on Core 2. Is that true? Draw a parallel between the following:

    Performance jump from 32- to 64-bit on Core 2
    vs.
    Performance jump from 32- to 64-bit on Core i7
    vs.
    Performance jump from 32- to 64-bit on Phenom
    Reply
  • badboy4dee - Tuesday, November 04, 2008 - link

    And what are those numbers on the charts there? Are they frames per second? Higher is better, then, if that's what they are. Charts need more detail or explanation, dude!

    TSM
    Reply
  • MarchTheMonth - Tuesday, November 04, 2008 - link

    I don't believe I saw this anywhere else, but are the cooler mounting spots on the mobo the same as on LGA 775, i.e. can we use (non-Intel) coolers that exist now with the new socket? Reply
  • marc1000 - Tuesday, November 04, 2008 - link

    No, the new socket is different. The holes are 80mm apart; on Socket 775 they were 72mm apart. Reply
  • Agitated - Tuesday, November 04, 2008 - link

    Any info on whether these parts provide an improvement on virtualized workloads or maybe what the various vm companies have planned for optimizing their current software for nehalem? Reply
  • yyrkoon - Tuesday, November 04, 2008 - link

    Either I am not reading things correctly, or the 130W TDP does not look promising for an end user like me who requires/wants a low-power, high-performance CPU.

    The future in my book is using less power, not more, and Intel does not seem to be going in that direction right now. To top things off, the performance increase does not seem to be enough to justify this power increase.

    Being completely off-grid (100% solar/wind power), there seem to be very few options... I would like to see this change. Right now, as it stands, sticking with the older architecture seems to make more sense.
    Reply
  • 3DoubleD - Tuesday, November 04, 2008 - link

    A 130W TDP isn't much worse than previous generations of quad-core processors, which were ~100W TDP. Also, TDP isn't a measure of power usage, but of the thermal dissipation a system requires to keep the operating temperature below a set value (e.g. Tjmax). So if Tjmax is lower for i7 processors than it was for past quad cores, an i7 may use the same amount of power but have a higher TDP requirement. The article indicates that power draw has increased, but usually with a large increase in performance. Page 9 of the article determined that this chip has a greater performance/watt than its predecessors by a significant margin.

    If you are looking for something that is extremely low power, you shouldn't be looking at a quad core processor. Go buy a laptop (or an EeePC-type laptop with an Atom processor). Intel has kept true to its promise of 2% performance increase for every 1% power increase (eg. a higher performance per watt value).

    Also, you would probably save more power overall if you just hibernate your computer when you aren't using it.
    Reply
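The TDP-versus-efficiency distinction drawn above boils down to performance per watt. A tiny sketch; the scores and wattages are made-up placeholders that mirror the "+30% performance for +12% power" shape of the article's claim, not benchmark results:

```python
# Performance-per-watt sketch: a chip can draw more absolute power (and carry
# a higher TDP) yet still be more efficient. Numbers are invented placeholders.

def perf_per_watt(score, watts):
    """Benchmark score per watt of power drawn."""
    return score / watts

old_chip = perf_per_watt(100.0, 200.0)  # hypothetical baseline
new_chip = perf_per_watt(130.0, 224.0)  # +30% performance for +12% power

# The newer chip wins on efficiency despite the higher absolute draw,
# which is the point the comment makes about TDP vs. power usage.
print(new_chip > old_chip)  # True
```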
  • Comdrpopnfresh - Monday, November 03, 2008 - link

    Do differing cores have access to another's L2? Is it directly, through QPI, or through L3?
    Also, is the L2 inclusive in the L3; does the L3 contain the L2 data?
    Reply
  • xipo - Monday, November 03, 2008 - link

    I know games are not the strong area of Nehalem, but there are 2 games I'd like to see tested: Unreal T. 3 and Half-Life 2 E2... just to know how Nehalem handles those 2 engines ;D Reply
  • Jingato - Monday, November 03, 2008 - link

    If the 920 can easily be overclocked to 3.8GHz on air, what incentive is there to purchase the 965 for more than triple the price? Reply
  • TantrumusMaximus - Monday, November 03, 2008 - link

    I don't understand why the tests were on such low resolutions... most gamers are running higher res than 1280x1024 etc etc....

    What gives?
    Reply
  • daniyarm - Monday, November 03, 2008 - link

    Because if they ran gaming benchmarks at higher res, the difference in FPS would be hardly visible and you wouldn't go out and buy a new CPU.
    If they are going to show differences between Intel and AMD CPUs, show Nehalem at 3.2 GHz vs 9950 OC to 3.2 GHz so we can see clock for clock differences in performance and power.
    Reply
  • npp - Monday, November 03, 2008 - link

    The 9950 consumes about 30W more at idle than the 965XE, and 30W less under load. I guess that OC'ing it to 3.2GHz will need more than 30W... Given that the 965 can process 4 more threads, I think the result should be more or less clear. Reply
  • tim851 - Monday, November 03, 2008 - link

    Higher resolutions stress the GPU more and it becomes a bottleneck. Since the article was focusing on CPU power and not GPU power, they lowered the resolution enough to effectively take the GPU out of the picture. Reply
  • Caveman - Monday, November 03, 2008 - link

    It would be nice to see these CPU reviews use relevant "gaming" benchmarks. It would be good to see the results with something like MS Flight Simulator FSX or DCS Black Shark, etc. The flight simulators these days are both graphically and computationally intensive, and they really stress the CPU. Reply
  • AssBall - Monday, November 03, 2008 - link

    No, they don't, actually. Reply
  • philosofool - Monday, November 03, 2008 - link

    It would have been nice to see a proper comparison of power consumption. Given all of Intel's boast about being able to shut off cores to save power, I'd like to see some figures about exact savings. Reply
  • nowayout99 - Monday, November 03, 2008 - link

    Ditto, I was wondering about power too. Reply
  • Anand Lal Shimpi - Monday, November 03, 2008 - link

    Soon, soon my friend :)

    -A
    Reply
  • Kaleid - Monday, November 03, 2008 - link

    http://www.guru3d.com/news/intel-core-i7-multigpu-... Reply
  • bill3 - Monday, November 03, 2008 - link

    Umm, it seems the guru3d gains are probably explained by them using a dual-core Core 2 Duo versus a quad-core i7... quad cores run multi-GPU quite a bit better, I believe.

    Reply
  • tynopik - Monday, November 03, 2008 - link

    What about those multi-threading tests you used to run, with 20 tabs open in Firefox while running an AV scan, while compressing some files, while converting something else, etc.?

    This might be more important for daily performance than the standard desktop benchmarks.
    Reply
  • D3SI - Monday, November 03, 2008 - link


    So the low-end i7s are OC'able?

    What the hell is Tom's Hardware talking about? lol
    Reply
  • conquerist - Monday, November 03, 2008 - link

    Concerning x264, Nehalem-specific improvements are coming as soon as the developers are free from their NDA.
    See http://x264dev.multimedia.cx/?p=40.
    Reply
  • Spectator - Monday, November 03, 2008 - link

    Can they do some CUDA optimizations? I'm guessing that video hardware has more processors than a quad-core Intel :P

    If all this i7 is new news and does stuff xx faster with 4 cores, how does 100+ core video hardware compare?

    Yes, I'm messing, but giant Intel wants $1k for the best i7 CPU, when the likes of nVidia make bigger-transistor-count silicon on a lesser process, and others manufacture the rest of the vid card, for $400-500?

    Where is the value for money in that? Chuckle.
    Reply
  • gramboh - Monday, November 03, 2008 - link

    The x264 team has specifically said they will not be working on CUDA development, as it is too time-intensive to basically start over from scratch in a more complex development environment. Reply
  • npp - Monday, November 03, 2008 - link

    CUDA optimizations? I bet you don't completely understand what you're talking about. You can't just optimize a piece of software for CUDA; you MUST write it from scratch for CUDA. That's the reason you don't see much software for nVidia GPUs, even though the CUDA concept was introduced at least two years ago. You have the BadaBOOM stuff, but it's far from mature, and the reason is that writing a sensible application for CUDA isn't exactly an easy task. Take your time to look at how it works and you'll understand why.

    You can't compare the 100+ cores of your typical GPU with a quad core directly; they are fundamentally different in nature, with GPU "cores" being rather limited in functionality. GPGPU is nice hype, but you simply can't offload everything onto a GPU.

    As a side note, top-notch hardware always carries a price premium, and Intel has had this tradition with high-end CPUs for quite a while now. There are plenty of people who need absolutely the fastest hardware around and won't hesitate to pay for it.
    Reply
  • Spectator - Monday, November 03, 2008 - link

    Some of us want more info.

    A) How does the integrated thermal sensor work at -50C and below?

    B) Can you circumvent the 130W max load sensor?

    C) What are all those connection points on the top of the processor for?

    lol. Where do I put the 2B pencil to join that sht up, so I don't have to worry about multiplier settings or temp sensors or wattage sensors?

    Hey, don't shoot the messenger, but those top-side chip contacts seem very curious and obviously must serve a purpose :P

    Reply
  • Spectator - Monday, November 03, 2008 - link

    Wait, NO. I have thought about it...

    The contacts on the top side could be for programming the chip's default settings.

    You know it makes sense. Perhaps it's adjustable, SRAM-style, rather than burning connections.

    Yes, some technical peeps can look at that, but I still want the fame for suggesting it first. lmao.

    Have fun, but it does seem logical to build in some scope for alteration. A lot easier to manufacture 1 solid item, then mod your stock to suit the market when you feel it's necessary.

    Spectator.
    Reply
  • Spectator - Monday, November 03, 2008 - link

    That sht is totally logical.

    And I'm properly impressed. I would do that.

    You can re-process your entire stock at a whim to satisfy the current market. That sht deserves some praise, even more so when die shrinks happen. It's an apparently seamless transition, unless the world works it out and learns how to mod existing chips?

    Chuckle. But hey, I'm drunk and I don't care. I just thought that would be a logical step. I'm still waiting for cheap SSDs :P

    Spectator.
    Reply
  • tential - Monday, November 03, 2008 - link

    We already knew Nehalem wasn't going to be that much of a game-changer. The blog posts you guys had up weeks ago said that, because of the cache sizes and such, not to expect huge gains in game performance, if any. However, because of Hyper-Threading, I think there also need to be some tests of how multitasking goes. No doubt those gains will be huge. Virus scanning while playing games and other things should have extremely nice benefits, you would think. Those tests would be most interesting, although by the time I buy my PC, Nehalem will be mainstream. Reply
  • npp - Monday, November 03, 2008 - link

    I'm very curious to see some scientific results from the new CPUs, MATLAB and Mathematica benchmarks, and maybe some more. It's interesting to see if Core i7 can deliver something on these fronts, too. Reply
  • pervisanathema - Monday, November 03, 2008 - link

    I was afraid Nehalem was going to be a game changer. My wallet is grateful that its overall performance gains do not even come close to justifying dumping my entire platform. My x3350 @ 3.6GHz will be just fine for quite some time yet. :)

    Additionally, its relatively high price means that AMD can still be competitive in the budget to low-midrange market, which is good for my wallet as well. Intel needs competition.
    Reply
  • iwodo - Monday, November 03, 2008 - link

    Since there is virtually no performance loss when using dual-channel, hopefully we will see some high-performance, low-latency DDR3 next year?
    And that means, apart from having half the cores, the desktop version doesn't look so bad.

    And since you state that Socket 1366 will be able to fit an eight-core chip, I expect the 11xx socket will be able to fit a quad core as well?

    So why don't we just have the 13xx socket fit it all? Is the cost really that high?
    Reply
  • QChronoD - Monday, November 03, 2008 - link

    How long are they going to utilize this new socket?
    $284 for the i7-920 isn't bad, but will it be worth the extra to buy a top-end board that will appreciate a CPU upgrade 1-2 years later? Or is this going to be useless once Intel "ticks" in '10?
    Reply
  • steveyballme - Monday, November 03, 2008 - link

    We worked side by side with Intel to be sure that Vista was optimised for running on this thing!

    http://fakesteveballmer.blogspot.com
    Reply
  • Strid - Monday, November 03, 2008 - link

    Great article. I enjoyed reading it. One thing I stumbled upon though.

    "The PS/2 keyboard port is a nod to the overclocking crowd as is the clear CMOS switch."

    What makes a PS/2 port good for overclockers? I see the use for the clear CMOS switch, but ...
    Reply
  • 3DoubleD - Monday, November 03, 2008 - link

    In my experience, USB keyboards do not consistently allow input during the POST screen. If you are overclocking and want to enter the BIOS or cancel an overclock, you need a keyboard that works immediately once the POST screen appears. I've been caught with only a USB keyboard: I got stuck with a bad overclock and had to reset the CMOS to regain control because I couldn't cancel the overclock. Reply
  • Clauzii - Monday, November 03, 2008 - link

    I thought the "USB Legacy support" mode was for exactly that? So legacy mode is for when the PC is booted into DOS, but not during POST? Reply
  • sprockkets - Monday, November 03, 2008 - link

    No, USB legacy support is for support during boot-up and for the time you need input before an OS takes control of the system. However, as already mentioned, sometimes USB keyboards just don't work in a BIOS at startup for one reason or another, and in my opinion this means they should NEVER get rid of the old PS/2 port.

    I ran into this problem with a Shuttle XPC with the G33 chipset, which had no PS/2 ports on it. There was a 50/50 chance it would not work.
    Reply
  • Clauzii - Thursday, November 06, 2008 - link

    I still use PS/2. None of the USB keyboards I've borrowed or tried would work at boot. Also, I think a PS/2 keyboard/mouse doesn't lag as much, maybe because it has its own non-shared interrupt line.

    But I can see a problem with PS/2 in the future, with keyboards like the Art Lebedev ones. When that technology gets more pocket-friendly, I'd gladly see upgraded but still dedicated keyboard/mouse connectors.
    Reply
  • The0ne - Monday, November 03, 2008 - link

    Yes. I have a PS/2 keyboard on hand in case my USB keyboard can't get in :) Reply
  • Strid - Monday, November 03, 2008 - link

    Ahh, makes sense. Thanks for clarifying! Reply
  • Genx87 - Monday, November 03, 2008 - link

    After living through the hell that was ATI drivers back in 2003-2004 on a 9600 Pro AIW, I didn't learn, and I plopped money down on a 4850 and have had terrible driver quality since. More BSODs from the ATI driver than I have had in Windows in the past 5 years combined from anything. Back to Nvidia for me when I get a chance.

    That said, this review is pretty much what I expected after reading the preview article in August. They are really trying to recapture market share in the 4-socket space, a place where AMD has been able to do well. This chip is designed for server work. I'll pick one up after my E8400 runs out of steam.
    Reply
  • Griswold - Tuesday, November 04, 2008 - link

    You're just not clever enough to set up your system properly. I have two identical systems sitting here side by side, with the only difference being the video card (HD3870 in one and an 8800GT in the other), and the box with the nVidia card gives me an order of magnitude more headaches due to the crashing driver. While that also happens on the 3870 machine now and then, it's nowhere near as often. But the best part: neither of them produces a BSOD. That is why I know you're most likely the culprit (the alternative is faulty hardware or a pathetic overclock). Reply
  • Lord 666 - Monday, November 03, 2008 - link

    The stock speed of a Q9550 is 2.83GHz, not 2.66GHz.

    Why the handicap?
    Reply
  • Anand Lal Shimpi - Monday, November 03, 2008 - link

    My mistake, it was a Q9450 that was used. The Q9550 label was from an earlier version of the spreadsheet that got canned due to time constraints. I wanted a clock-for-clock comparison with the i7-920 which runs at 2.66GHz.

    Take care,
    Anand
    Reply
  • faxon - Monday, November 03, 2008 - link

    Tom's Hardware published an article claiming there would be a cap on how high you are allowed to clock your part before it downclocks itself back to stock. Since this is an integrated part of the core, you can only turn it off/up/down if they unlock it. The limit was supposedly a 130-watt thermal dissipation mark. What effect did this have in your tests on overclocking the 920? Reply
  • Gary Key - Monday, November 03, 2008 - link

    We have not had any problems clocking our 920 to the 3.6GHz~3.8GHz level with proper cooling. The 920, 940, and 965 will all clock down as core temps increase above the 80C level. We noticed half step decreases above 80C or so and watched our core multipliers throttle down to as low as 5.5 when core temps exceeded 90C and then increase back to normal as temperatures were lowered.

    This occurred with stock voltages or with the VCore set to 1.5V, it was dependent on thermals, not voltages or clock speeds in our tests. That said, I am still running a battery of tests on the 920 right now, but I have not seen an artificial cap yet. That does not mean it might not exist, just that we have not triggered it yet.

    I will try the 920 on the Intel board that Toms used this morning to see if it operates any differently than the ASUS and MSI boards.
    Reply
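The throttling behaviour Gary describes above can be captured in a toy model. In this sketch, only the ~80C onset, the 5.5x multiplier floor, and the i7-920's 20x stock multiplier come from the comments; the linear half-step-per-degree rate is a guess for illustration, not Intel's actual algorithm:

```python
# Toy model of the observed clock throttling: above ~80C the multiplier steps
# down in half-step increments, bottoming out around 5.5x, and recovers as the
# core cools. The per-degree rate is assumed, not measured.

STOCK_MULT = 20.0  # i7-920 stock multiplier (20 x 133MHz = 2.66GHz)
MIN_MULT = 5.5     # lowest multiplier reported in the comment

def throttled_multiplier(core_temp_c):
    """Effective multiplier for a given core temperature in Celsius."""
    if core_temp_c <= 80:
        return STOCK_MULT
    # assume one half-step down per degree over 80C, floored at the minimum
    steps = core_temp_c - 80
    return max(MIN_MULT, STOCK_MULT - 0.5 * steps)

print(throttled_multiplier(70))   # 20.0 (no throttling)
print(throttled_multiplier(85))   # 17.5
print(throttled_multiplier(120))  # 5.5 (floor reached)
```

Since throttling is purely thermal here, the model matches the report that it triggered at stock voltage as well as 1.5V VCore: only temperature enters the function.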
  • Th3Eagle - Monday, November 03, 2008 - link

    I wonder how close you came to those temperatures while overclocking these processors.

    The 920 to 3.6/3.8 is a nice overclock but I wonder what you mean by proper cooling and how close you came to crossing the 80C "boundary"?
    Reply
  • Gary Key - Monday, November 03, 2008 - link

    "The 920 to 3.6/3.8 is a nice overclock but I wonder what you mean by proper cooling and how close you came to crossing the 80C "boundary"?"

    It was actually quite easy to do with the retail cooler, in fact in our multi-task test playing back a BD title while encoding a BD title, the core temps hit 98C. Cinebench multi-core test and OCCT both had the core temps hit 100C at various points. Our tests were in a closed case loaded out with a couple of HD4870 cards, two optical drives, three hard drives, and two case fans.

    Proper cooling (something we will cover shortly) consisted of the Thermalright Xtreme120, Vigor Monsoon II, and Cooler Master V8 along with the Freezone Elite. We were able to keep temps under 70C with a full load on air and around 45C with the Freezone unit.
    Reply
  • Th3Eagle - Tuesday, November 04, 2008 - link

    Wow, thats interesting. Can't wait to see the new article. Always nice to see an article about coolers.

    Thanks for the reply.
    Reply
  • Anand Lal Shimpi - Monday, November 03, 2008 - link

    Gary did the i7-920 tests, so I'll let him chime in there; we're also working on an overclocking guide that should help address some of these concerns.

    -A
    Reply
  • whatthehey - Monday, November 03, 2008 - link

    Tom's? You might as well reference HardOCP....

    Okay, THG sometimes gets things right, but I've seen far too many "exposé" articles where they talk about the end of the world to take them seriously. Ever since the i820 chipset fiasco, they seem to think everything is a big deal that needs a whistleblower.

    AnandTech got 3.8GHz with an i7-920, and I would assume due diligence in performance testing (i.e. it's not just POSTing, but actually running benchmarks and showing a performance improvement). I'm still running an overclocked Q6600, though, and the 3.6GHz I've hit is really far more than I need most of the time. I should probably run at 3.0GHz and shave 50-100W from my power use instead. But it's winter now, and with snow outside it's nice to have a little space heater by my feet!
    Reply
  • The0ne - Monday, November 03, 2008 - link

    Tom's Hardware and AnandTech were the websites I visited 13 years ago during my college years. Tom's has since been pushed far down my list of sites to visit, mainly due to their poor articles and their ad-littered, poorly designed website. If you have any kind of script blocker enabled, there's quite a bit to allow to get the website working. The video commentary is a joke, as they're not professionals who can get the job done professionally... visually, anyhow.

    AnandTech has stayed true to its roots, and although I find some articles a bit confusing, I don't mind them at all. An example of this is the camera reviews :)
    Reply
  • GaryJohnson - Monday, November 03, 2008 - link

    Geez, calling a Core 2 a space heater. How soon we forget Prescott... Reply
  • JarredWalton - Monday, November 03, 2008 - link

    I think overclocked Core 2 Quad is still very capable of rating as a space heater. The chips can easily use upwards of 150W when overclocked, which if memory serves is far more than any of the Prescott chips did. After all, we didn't see 1000W PSUs back in the Prescott era, and in fact I had a 350W PSU running a Pentium D 920 at 3.4 GHz without any trouble. :-) Reply
  • Griswold - Tuesday, November 04, 2008 - link

    Funny comparison. If it was just for the space heater argument's sake (well, 150W is by far not enough to qualify as a real space heater, to be honest), I could follow you, but saying the 150W of a four-core, more-IPC-than-any-P4-can-ever-dream-of processor should or could be compared to the wattage of the infamous thermonuclear furnace AKA Prescott is a bit of a stretch, don't you think? :p Reply
  • Ryan Smith - Monday, November 03, 2008 - link

    Intel can call it supercalifragilisticexpialidocious until they're blue in the face, but take it from a local, it's Neh-Hay-Lem. Just see how it's pronounced in this news segment:

    http://www.katu.com/outdoors/3902731.html?video=YH...
    Reply
  • mjrpes3 - Monday, November 03, 2008 - link

    Any chance we'll see some database/apache benchmarks based on Nehalem soon? Reply
  • fzkl - Monday, November 03, 2008 - link

    "Where Nehalem really succeeds however is in anything involving video encoding or 3D rendering"

    We have a new CPU that does video encoding and 3D rendering really well, while at the same time the GPU manufacturers are offloading these applications to the GPU.

    The CPU Vs GPU debate heats up more.
    Reply
  • Griswold - Tuesday, November 04, 2008 - link

    Where's the product that offloads encoding to GPUs (all of them, from both makers) as a publicly available product? I haven't seen that yet. Of course, we haven't seen Core i7 in the wild yet either, but I bet it will be many moons before there is a single encoding suite that is ready for primetime regardless of the card sitting in your machine. On the other hand, I can encode my stuff right now with my current Intel or AMD products and will just move it over to the upcoming products without having to think about it.

    Huge difference. The debate isn't really a debate yet, if you're doing more than just talking about it.
    Reply
  • haukionkannel - Monday, November 03, 2008 - link

    Well, if both the CPU and GPU get better at video encoding, all the better! Even now, rendering takes forever.
    So there's no problem if the GPU helps an already good 3D-rendering CPU. Everything that gives more speed is just a bonus!
    Reply