The Chipset - Meet Intel's X58

Nehalem moves the North Bridge and memory controller on-die, but just as in the AMD world there's still a need for an off-die chipset; in this case it's Intel's brand new X58.

The Intel X58 chipset is a two-chip solution, although later next year Intel will introduce a single-chip solution alongside the mainstream version of Nehalem (which will use a different socket). Traditionally Intel referred to its North Bridge as the MCH, shorthand for Memory Controller Hub; that definition no longer applies to Nehalem, so X58 is called an I/O Hub (IOH) instead.

The X58 IOH attaches to the same ICH10 (I/O Controller Hub) that is used in Intel's 4-series chipsets.

The biggest feature of X58 is that, with proper "certification" by NVIDIA, motherboard makers can set the right BIOS flags to allow NVIDIA's drivers to enable SLI on the platform. That makes X58 the first Intel chipset to support both CrossFire and SLI multi-GPU configurations without the use of any NVIDIA silicon. NVIDIA charges a per-motherboard fee for each certified X58 board sold, so not all boards will be certified; the most prominent holdout is Intel's own X58 board. Luckily we also had access to ASUS' P6T Deluxe, which is certified, giving us the ability to look at CrossFire and SLI scaling on X58 vs. other platforms.


73 Comments


  • npp - Tuesday, November 4, 2008 - link

    Well, the funny thing is THG got it all messed up, again - they posted a large "CRIPPLED OVERCLOCKING" article yesterday, and today I saw a kind of apology from them - they seem to have overlooked a simple BIOS switch that prevents the load through the CPU from rising above 100A. Having a month to prepare the launch article, they didn't even bother to tweak the BIOS a bit. That's why I'm not taking their articles seriously, not because they are biased towards Intel or AMD - they are simply not up to the standards (especially those here @anandtech).
  • gvaley - Tuesday, November 4, 2008 - link

    Now give us those 64-bit benchmarks. We already knew that Core i7 would be faster than Core 2, we even knew how much faster.
    Now, it was expected that 64-bit performance would be better on Core i7 than on Core 2. Is that true? Draw a parallel between the following:

    Performance jump from 32- to 64-bit on Core 2
    vs.
    Performance jump from 32- to 64-bit on Core i7
    vs.
    Performance jump from 32- to 64-bit on Phenom
  • badboy4dee - Tuesday, November 4, 2008 - link

    and what are those numbers on the charts there? Are they frames per second? If so, higher is better. Charts need more detail or explanation to them dude!

    TSM
  • MarchTheMonth - Tuesday, November 4, 2008 - link

    I don't believe I saw this anywhere else, but are the cooler mounting holes on the mobo the same as on LGA 775? i.e. can we use the (non-Intel) coolers that exist now with the new socket?
  • marc1000 - Tuesday, November 4, 2008 - link

    no, the new socket is different. the holes are 80mm apart; on socket 775 they were 72mm apart.
  • Agitated - Tuesday, November 4, 2008 - link

    Any info on whether these parts provide an improvement on virtualized workloads or maybe what the various vm companies have planned for optimizing their current software for nehalem?
  • yyrkoon - Tuesday, November 4, 2008 - link

    Either I am not reading things correctly, or the 130W TDP does not look promising for an end user such as myself who requires/wants a low-powered, high-performance CPU.

    The future in my book is using less power, not more, and Intel does not seem to be going in that direction right now. To top things off, the performance increase does not seem to be enough to justify this power increase.

    Being completely off grid(100% solar / wind power), there seem to be very few options . . . I would like to see this change. Right now as it stands, sticking with the older architecture seems to make more sense.
  • 3DoubleD - Tuesday, November 4, 2008 - link

    130W TDP isn't much worse than previous generations of quad core processors, which were ~100W TDP. Also, TDP isn't a measure of power usage, but of the thermal dissipation required to keep the operating temperature below a set value (e.g. Tjmax). So if Tjmax is lower for i7 processors than it was for past quad cores, an i7 may draw the same amount of power but carry a higher TDP rating. The article indicates that power draw has increased, but usually alongside a larger increase in performance. Page 9 of the article shows that this chip has significantly greater performance per watt than its predecessors.

    If you are looking for something that is extremely low power, you shouldn't be looking at a quad core processor. Go buy a laptop (or an EeePC-type laptop with an Atom processor). Intel has kept true to its promise of 2% performance increase for every 1% power increase (i.e. a higher performance-per-watt value).

    Also, you would probably save more power overall if you just hibernate your computer when you aren't using it.
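    The arithmetic behind that point is easy to sketch. The numbers below are made up for illustration, not taken from the article's benchmarks; they just show why a chip with a higher TDP can still come out ahead on performance per watt:

    ```python
    # Hypothetical figures (NOT from the article) illustrating why a higher-TDP
    # part can still be the more efficient one.

    def perf_per_watt(score: float, watts: float) -> float:
        """Efficiency metric: benchmark score divided by power draw."""
        return score / watts

    # Assumed numbers for illustration only.
    core2_quad = perf_per_watt(score=100.0, watts=105.0)  # older ~100W quad core
    core_i7    = perf_per_watt(score=140.0, watts=130.0)  # ~40% faster at 130W

    print(core_i7 > core2_quad)  # True: more absolute power, yet more efficient

    # Intel's stated rule of thumb: ~2% more performance per 1% more power.
    # The ratio of new to old perf/watt under that rule is 1.02 / 1.01 > 1,
    # so efficiency improves even as total power consumption rises.
    print(1.02 / 1.01 > 1.0)  # True
    ```

    Under that rule of thumb, every step up in power budget still nudges performance per watt upward, which is consistent with the article's page 9 results.
    
    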
  • Comdrpopnfresh - Monday, November 3, 2008 - link

    Do different cores have access to one another's L2? Is it direct, through QPI, or through L3?
    Also, is the L2 inclusive in the L3, i.e. does the L3 contain the L2 data?
  • xipo - Monday, November 3, 2008 - link

    I know games are not the strong area of Nehalem, but there are 2 games I'd like to see tested: Unreal Tournament 3 and Half-Life 2: Episode Two.. just to know how Nehalem handles those 2 engines ;D
