While Anand and Derek were covering IDF and benchmarking Intel's Conroe core, and Wesley was previewing hardware in Taiwan, the European branch of AnandTech (that is, me) went to CeBIT 2006 in Hannover. The main focus of this article is servers and related IT subjects, but there were a few cool things that I couldn't resist, so they ended up in the same report.

CeBIT may not be the big "launch event" trade show anymore, but it is still by far the biggest ICT trade show in the world. "Die Hannover Messe AG" claims that almost 6,300 exhibitors from 70 countries were present at CeBIT 2006. A total of 300,000 square meters was filled with IT-related stands, which welcomed more than 500,000 visitors.


Intel had a very big stand at CeBIT, and the showpiece of the booth was one of the Formula One cars of the Intel-sponsored BMW Sauber F1 Team. Unless you were the "Bundeskanzlerin" of Germany, you couldn't get into the car.

Most of the Intel news was already reported in our IDF coverage. Woodcrest, the server version of Conroe, will have a TDP of 80 W. Hyper-Threading is not available, which is a bit odd: the Core architecture depends on extracting high levels of instruction-level parallelism, while database applications have a much lower IPC than SPEC CPU2000 integer workloads and games. Hyper-Threading could probably have done more for Woodcrest than it ever did for the Nocona and Irwindale Xeons.

Woodcrest (Core architecture) and Dempsey (NetBurst) are pin-compatible, so it should be possible to replace a Dempsey CPU with a Woodcrest CPU. However, most server manufacturers we spoke to are not so sure. Some said that a new revision of their motherboard PCB will be necessary, while others said it will be possible only on motherboards that have been validated for Woodcrest (not the current Bensley platforms). But who is going to buy Dempsey when Woodcrest is out?


Many IT people are looking into virtualisation as a way to make better use of the available server power. However, our own lab tests show that software virtualisation is not really easy; in our experience, a guest OS crashes more often on virtualised servers than on real hardware. We would definitely advise against running anything mission-critical on the current batch of software virtualisation solutions. And Microsoft's Virtual Server is not even a real virtual layer: it runs on top of a Windows Server 2003 OS. In a nutshell, we are rather sceptical about software virtualisation and have set our hopes on hardware virtualisation instead.

Christian Anderka of Intel confirmed that the current Xeon Paxville and all Xeons (Dempsey, Woodcrest) on the Bensley platform feature hardware virtualisation of the CPU. The CPU gives the "hypervisor" or "virtual layer" a separate mode with higher privileges than the kernel mode in which the guest OSes run. That should greatly increase the stability of virtualised servers.
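
As an aside for readers who want to check their own hardware: on a Linux box, the kernel exports the relevant CPU capability flags ("vmx" for Intel VT-x, "svm" for AMD's equivalent) in /proc/cpuinfo. The small helper below is our own illustrative sketch, not Intel code:

```python
def has_hw_virt(cpuinfo_text: str) -> bool:
    """Return True if the CPU flag list advertises hardware
    virtualisation: 'vmx' (Intel VT-x) or 'svm' (AMD-V)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return bool(flags & {"vmx", "svm"})
    return False

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print("hardware virtualisation flag present:", has_hw_virt(f.read()))
    except OSError:
        print("/proc/cpuinfo not available on this platform")
```

Note that the flag only tells you the CPU supports the extensions; the BIOS can still disable them.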

Intel announced the next generation of Intel Virtualization Technology (Intel VT) for enterprise servers: Intel Virtualization for Directed I/O (Intel VT-d). VT-d adds support for hardware-assisted virtualisation of disks and other I/O devices, but it is very unclear when this technology will really be ready. VT-d includes techniques such as hardware DMA remapping and also works at the interrupt level. We will discuss virtualisation in more detail later, but it is clear that VT-d will also need to be supported by PCIe, the chipset components, and the peripherals.
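
Once VT-d hardware and a supporting kernel ship, the DMA-remapping units show up as IOMMU devices. On a reasonably recent Linux kernel they are exposed under /sys/class/iommu; the check below is a minimal sketch of our own (the helper name and the sysfs-based approach are our assumptions, not part of Intel's specification):

```python
import os

def iommu_active(sysfs_path: str = "/sys/class/iommu") -> bool:
    """True if the kernel has initialised at least one IOMMU
    (e.g. an Intel VT-d DMAR unit), as exposed in sysfs."""
    try:
        # Each active remapping unit appears as an entry in this class dir.
        return len(os.listdir(sysfs_path)) > 0
    except OSError:
        # Directory missing: old kernel, or no IOMMU support at all.
        return False

if __name__ == "__main__":
    print("IOMMU active:", iommu_active())
```

An empty or missing directory does not prove the hardware lacks VT-d; the feature may simply be disabled in firmware or not enabled on the kernel command line.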

The quad-core Clovertown, which is little more than two (dual-core) Woodcrests in the same CPU package, will include this VT-d technology sometime in 2007. Clovertown is socket-compatible with the Bensley (Dempsey, Woodcrest) platform and is slated to ship in early 2007.

It is also clear, however, that the new instructions available in the highly privileged mode for virtual layers will not be used to their full potential at first. We expect that it will take several generations of newer and improved virtual layers ("hypervisors") before that happens.

Itanium ready to take on the RISC competition
Comments

  • AkaiRo - Monday, March 13, 2006 - link

    When you talk about SAS you have to clarify if you are referring to SAS 3.5" or SAS SFF (Small Form Factor). SAS 3.5", which is what the companies you are talking about in the article are using, is only a waypoint on the roadmap. SAS 3.5" and low-end/mid-range SATA enclosures use U320 connectors. High End SATA enclosures can use fibre or RJ-45 connectors as well. However, there are SAS (and SATA) SFF enclosures out on the market already (HP's Modular Storage Array 50 enclosure).

    SAS/SATA SFF is the designated target for the majority of storage subsystems in the next few years because server manufacturers are going to focus increasingly on spindle count affecting overall I/O more than anything else. The SAS SFF drives use the platters from the 15,000rpm drives, which are 2.5" in size, which is why the largest SAS SFF drives for now are 146GB. There is quite an initiative by the biggest players who deal in servers, workstations/desktops, AND notebooks, to move to a common platform for ALL three classes of machines, but it's a chicken-and-egg thing with everyone waiting for someone else to provide the incentive to make the switch.
  • Calin - Tuesday, March 14, 2006 - link

    The 2.5 inch drives are physically too small to reach high capacities, and many buyers don't know anything about their hard drive except its capacity. As a result, a physically smaller, cooler, even supposedly higher-performance drive at a higher price will be at a disadvantage compared to a physically larger, warmer and even lower-performance drive at a lower price. Especially taking into account that you can buy 500GB 3.5inch drives, but only 120GB 2.5inch drives.
  • themelon - Monday, March 13, 2006 - link

    This is nothing new. Granted once you go beyond 4 you have to run them slower....
  • JohanAnandtech - Tuesday, March 14, 2006 - link

    8 DIMMs per CPU was very uncommon and required expensive components and engineering. I have seen it on the HP DL585, but there 8 DIMMs result in DDR266 speed, which is a serious performance penalty. Most DDR boards are still limited to 4 DIMMs per CPU.

    With DDR2, 6-8 DIMMs per CPU is relatively easy to do, at least at DDR2-667 speeds. You'll also see 6-8 DIMMs on affordable solutions, not only on high-end servers. That is new :-)
  • Beenthere - Monday, March 13, 2006 - link

    SAS don't impress me none at this stage. Yes it's more reliable than SATA drives but almost anything is. Drive performance is virtually identical with SAS and SCSI 320. All I see is a lower manufacturing cost that hasn't been passed on yet.
  • ncage - Monday, March 13, 2006 - link

    Improving performance is not the whole point of SAS. SCSI 320 is already fast as it is. Heck, SCSI 160 is fast. Anyways, I digress. It's the ability to use SATA cables in a server, which is a big deal when you're dealing with a little 1U case. It's also the ability to mix and match SATA with SCSI, which for some data centers could dramatically save money. If you mixed SATA/SCSI you could have a combination of performance/redundancy/cost all in one package. Granted, "critical" data centers will probably be all SCSI. I wouldn't advise eBay to put SATA drives in their servers :). You can't expect each revision of storage connection technology to provide better performance... sometimes it's not about performance at all.
  • Calin - Tuesday, March 14, 2006 - link

    There are enough servers that don't need hard drive performance, and will run anything mirrored in RAM. As a result, one could use the same boxes, only with different hard drives for different tasks. Makes everything simpler if you have a single basic box.
  • dougSF30 - Monday, March 13, 2006 - link

    Rev E DC Opteron TDPs have also always been 95W. The SC Rev E parts were 89W.

    You can look up the Rev E Opteron parts at the above link.

  • dougSF30 - Monday, March 13, 2006 - link

    These are likely not the parts you see at 68W with Rev F, so again, power is not rising (it is actually falling with Rev F).

    There has been a 68W "blade TDP" point that Rev E Opterons have been sold at, in addition to the 55W and 30W points.

    So, I suspect you are simply seeing 95W and 68W TDP families for Rev F, just like Rev E. Rev F will allow for higher frequency parts within those families, in part due to a DDR2 controller taking less power than DDR1, in part due to SiGe strain being incorporated into the 90nm process.
