Original Link: http://www.anandtech.com/show/1766

Although this morning was made interesting by the rush of a new microprocessor architecture, the second keynote of the day was far less engaging - albeit concerning a very important topic: mobility.

Intel had a quick slide up about Napa, the platform for Yonah in the first half of next year:

There's not much new here, but Napa did end up occupying a few more minutes of the keynote. Intel was proud to announce that the development and design wins of Napa are proceeding at a better rate than Sonoma, the current Centrino platform.

Obviously, Napa and Yonah are still about half a year away from their release (Intel has indicated that they will ship in the first quarter of next year), so we'll have to wait until next year to see how quickly Intel truly moves to Napa.

One thing that we found very interesting is that Napa also appears to be the platform of choice for Merom. In fact, Intel's literature lists the platform for Merom as either Napa or a Napa Refresh, not a brand new design. We are hoping this means that Merom will work in Yonah motherboards, and likewise that Conroe will work in next-generation Pentium D motherboards. In other words, if you were to buy an Intel motherboard early next year, you'd be able to drop a next-generation CPU into it. While that is far from Intel's style, the indications that such a scenario is possible are looking quite good.

Intel indicated that they would supply design guidelines to motherboard makers early in 2006 so that their early '06 platforms could potentially support the new processors released in the second half of 2006. Maybe Intel has indeed learned from AMD when it comes to backwards compatibility, or then again maybe we're reading far too much into this and hoping for more than reality will deliver.

Intel also demoed a 1.2GHz XScale CPU, although they didn't commit to shipping it at that speed; the demo was more of a proof of concept.

Intel talked about their partnership with Panasonic in developing new notebook batteries, which should help them meet their goal of 8-hour battery life by 2008.

Intel Gets into Benchmarking

Intel's Digital Home Capability Assessment Tool (isn't that a mouthful) is a new Intel benchmark that will be released this Fall. What's particularly interesting about this benchmark is that it runs actual digital home tasks (e.g. playing HD movies, transcoding video, recording video and exporting video) using real world applications.

The benchmarking tool runs these tasks one at a time as well as in parallel, and generates a performance rating based on metrics like the number of frames dropped while watching an HD video stream.
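To illustrate the kind of math involved, here's a minimal sketch of how a rating built on dropped-frame counts might be computed. The task names, numbers and scoring formula below are all our own illustrative assumptions; Intel hasn't published the tool's actual methodology.

```python
# Hypothetical sketch of a "digital home" rating derived from dropped frames.
# All task names, weights, and the scoring formula are illustrative.

def task_score(frames_dropped, frames_total):
    """Score a single playback task from 0-100 by delivered-frame ratio."""
    if frames_total == 0:
        return 0.0
    delivered = (frames_total - frames_dropped) / frames_total
    return round(100.0 * delivered, 1)

def overall_rating(task_results):
    """Combine per-task scores (run serially and in parallel) into one rating.

    task_results: dict mapping task name -> (frames_dropped, frames_total)
    """
    scores = {name: task_score(d, t) for name, (d, t) in task_results.items()}
    # A simple average; a real tool would likely weight tasks differently.
    return sum(scores.values()) / len(scores), scores

rating, breakdown = overall_rating({
    "hd_playback_alone":    (12, 3600),
    "hd_playback_parallel": (180, 3600),  # same clip while transcoding runs
})
```

The per-task breakdown is what distinguishes this approach from a single opaque score: it shows which workloads a system handles well and which it doesn't.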

The idea is to be able to quantify the digital home experience a particular platform would offer, thus allowing end users to better compare systems based on overall experience. Honestly, the whole thing sounds a lot like AMD's True Performance Initiative that resulted in the model numbers we've come to live with for so long; it also sounds a lot like Intel's iComp ratings from years past.

The difference between this tool and previous attempts to characterize performance is that Intel's tool doesn't just output a performance rating, it also gives you a detailed list of what tasks the tested PC can do poorly, relatively well and perfectly:

The overall performance rating

The breakdown of what this system can and cannot do

The rating system is an interesting idea and Intel promises that it will be made extensible to include other usage patterns as they emerge.

Obviously Intel's tool isn't something we'd use here on AnandTech, but it is interesting to note that you may end up seeing the results of this tool in stores like Best Buy and by companies like Dell to help characterize the relative performance of their systems as far as a digital home user is concerned.

The tasks the tool performs use regular, unmodified applications, so in theory they shouldn't deliberately penalize non-Intel processors, but we'd honestly be very surprised if Intel had put together a benchmark that they didn't win.

Intel's other benchmarking program was a similar experience-oriented performance tool for games, very much like FRAPS with a few tweaks. The tool measures instantaneous frame rate over time, then compares those frame rates against user data correlating frame rate with perceived gaming experience, producing an experience rating.
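As a rough sketch of what such an experience rating could look like, the snippet below maps instantaneous frame rates onto a 0-100 score using two thresholds. The 30/60fps thresholds and the piecewise-linear scoring are invented for illustration; Intel's actual tool derives its mapping from collected user data.

```python
# Hypothetical sketch of a frame-rate-based "experience rating".
# The thresholds and scoring curve are illustrative assumptions,
# not Intel's actual methodology.

def experience_rating(fps_samples, playable_fps=30, smooth_fps=60):
    """Rate a gameplay capture from 0-100.

    Unlike a plain average FPS, a rating like this can penalize time spent
    below a "playable" threshold, which users tend to perceive as worse
    than a lower-but-steady frame rate.
    """
    if not fps_samples:
        return 0.0
    scores = []
    for fps in fps_samples:
        if fps >= smooth_fps:
            scores.append(100.0)  # smooth: full credit
        elif fps >= playable_fps:
            # playable but not smooth: scale linearly between 50 and 100
            frac = (fps - playable_fps) / (smooth_fps - playable_fps)
            scores.append(50.0 + 50.0 * frac)
        else:
            # below playable: scale linearly between 0 and 50
            scores.append(50.0 * fps / playable_fps)
    return sum(scores) / len(scores)

# A steady 45fps run rates higher than a run that averages 45fps but stutters.
steady = experience_rating([45] * 8)
stutter = experience_rating([80, 10, 80, 10, 80, 10, 80, 10])
```

The usage example captures the point of the tool: two systems with identical average frame rates can deliver very different experiences.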

The idea of the experience rating sounds interesting, but we can't help but believe that this tool wouldn't have been created had Intel not been so far behind on gaming performance.

Intel is planning on making all of these types of tools available to the public and making them somewhat transparent, hopefully showing consumers what these benchmarks actually do to generate their results. Again, these aren't things we'd use here, but they are things that may end up in a retail store near you.

Virtualization Everywhere

Intel had a relatively strong focus on virtualization technology at this year's Fall IDF:

All of the desktop virtualization demos were done on either 65nm Intel Presler processors (65nm Pentium D) or 65nm Yonah processors, both featuring Intel Virtualization Technology. Below we have a Dell Dimension PC featuring a 65nm Pentium D:

The 65nm Presler system also happened to be a BTX form factor PC.

Alongside the Presler demo systems, Intel had a 65nm Yonah demo platform:

Because it's too early to showcase Yonah notebook designs, all Yonah platforms were demonstrated in this open air fashion.

The virtualization demos were fairly run of the mill; they all used VMware and mostly involved separate virtual partitions for various users or tasks.

The demo below is of a 65nm Presler system acting as a development workstation, developing code on the right screen and testing the compiled program in various OSes each in their own virtual partition on the left screen.

This next demo featured two virtual partitions, one running business applications while the other is running a game:

Although there is a fair amount of hardware complexity in enabling virtualization, the software exposed to the user remains quite simple. Switching between virtual partitions is as easy as can be:

Xbox 360 at IDF?

Given that there is no explicit Intel technology inside the Xbox 360, imagine our surprise when we saw an Xbox 360 on display at IDF:

The system was of course acting as a media center extender for an MCE PC running an Intel processor.

The system gave us our first look at a more final power cable and the Xbox 360's HD AV cables, as well as the remote control nestled in between the two.

The Xbox 360 was of course hooked up to the Samsung DLP RPTV using component cables. The coaxial audio cable was the only one not connected, for obvious reasons.


AMD at IDF

As (almost) always, AMD was around IDF offering briefings to all who wanted them on AMD's products.

Unfortunately there wasn't much to talk to AMD about, mostly because they feel that their current architecture is more than competitive with anything Intel will release next year. In fact, AMD feels so strongly about their technology that, with the exception of some minor updates and the migration to DDR2, the Athlon 64 micro-architecture will remain unchanged throughout 2006. AMD wasn't ready to talk about when the K8 architecture would get a refresh, but it won't be anytime soon, that's for sure.

AMD was showcasing a dual core Turion 64 CPU, to be officially announced sometime in the first half of next year. The CPU will be a 90nm part, with no word on when the transition to 65nm will take place.

The dual core Turion 64 in action

The dual core Turion 64 will be a Socket S1 part, featuring fewer than 754 pins, DDR2 support and a lower profile than AMD's current Socket-754 Turion parts.

Other than that, there wasn't much else to report on from AMD at the show.

ATI's CrossFire on Intel 955X Motherboards

ATI is officially the first to embrace cross-chipset operation of their multi-GPU product, as they were demonstrating their CrossFire technology on an Intel 955X motherboard.

Given that ATI still hasn't shipped CrossFire this isn't too big of a deal, but it is something we would like to see NVIDIA do as well, and preferably to more chipset manufacturers.

Turning Single into Multi-Threaded with Speculative Threading

Intel had a particularly interesting research project being demonstrated called Mitosis, a hardware and compiler solution to implementing speculative threading.

On modern-day Out of Order microprocessors, the CPU will speculatively execute code based on what it thinks will need to be executed in the future, thus improving processor utilization and overall performance. This research project proposes that the same sort of speculation be applied at the thread level, meaning that threads are created on the fly and speculatively executed by idle cores in order to improve performance on future multi-core CPUs.

The Mitosis project relies on both hardware and software (compiler) support to work. First, on the software side, blocks of code that have very few inputs and outputs are detected and considered for use as a separate thread.

The entry and exit points of the current working thread are marked, and the portion of the thread that would be split off is separated. The new thread is then fed the appropriate input data that it needs to begin its execution.

With the single thread now split into two, both threads are sent to the multi-core processor and executed in parallel. At the end of the speculative thread's execution, its result is checked to make sure that the data is still valid; if so, the result is committed and all is well. If the result is invalid, the thread must be thrown away, but since we're talking about a single threaded application to begin with, there is no wasted performance - only wasted power, as the core the speculative thread was running on would have been idle anyway.
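The spawn-execute-validate flow described above can be caricatured in software. This is purely a conceptual sketch: in Mitosis the spawning, input capture and validation are handled by hardware and compiler-generated code, and every name below is our own illustration.

```python
# Conceptual software sketch of Mitosis-style speculative threading.
# A thread pool stands in for an idle core; real Mitosis does the spawn,
# input snapshot, and validation in hardware/compiler support.
from concurrent.futures import ThreadPoolExecutor

def run_speculative(main_part, spec_part, shared_inputs):
    """Run spec_part speculatively in parallel with main_part.

    shared_inputs: dict of values the speculative thread depends on,
    snapshotted at spawn time.
    """
    snapshot = dict(shared_inputs)  # capture the speculative thread's inputs
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(spec_part, snapshot)  # "idle core" starts early
        main_result = main_part(shared_inputs)     # main thread runs as usual
        spec_result = future.result()
    # Validate: if main_part changed any input the speculation depended on,
    # the speculative result must be squashed and recomputed.
    if shared_inputs == snapshot:
        return main_result, spec_result                      # commit
    return main_result, spec_part(dict(shared_inputs))       # squash, re-run

# Usage: main_part leaves the inputs untouched, so the speculation commits.
inputs = {"x": 10}
main, spec = run_speculative(
    main_part=lambda s: s["x"] * 2,
    spec_part=lambda s: s["x"] + 5,
    shared_inputs=inputs,
)
```

The key property the sketch preserves is that a failed speculation costs nothing over the original single-threaded run: the correct result is simply recomputed, just later than hoped.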

On the hardware side, there is one major change needed to implement Mitosis:

The inclusion of a global register file and a register versioning table is necessary to keep track of which cores have the latest and most correct register values. Some additional logic is also needed to help validate the outcome of these threads.

The end result of all of this is pretty promising, especially for single threaded applications that would otherwise get no benefit from being run on a multi-core CPU.

In order to demonstrate the performance potential, the Intel researchers working on the project took a look at performance in the Olden benchmark suite. The Olden suite was chosen because it is a set of code that is extremely difficult to parallelize. The graph below shows performance improvement over a single Out of Order core:

The green bars show the performance improvement going from single to dual core, the red bars show the performance improvement from having the benchmark and its data stored entirely within L1 cache (no cache misses) and finally the yellow bars show the performance improvement due to the use of the Mitosis compiler/hardware modifications with a dual core CPU.

As you can see, the results are nothing short of impressive, with a 2 - 3x performance improvement in many cases. But keep in mind that this project is in its very early stages of research, and as promising as this looks, it may take 5 - 10 years for the work to make its way into the real world.

Intel's BTX, Back at the Show

Intel's BTX form factor did make an appearance at IDF, but a very quiet one. A lot of the demo systems were actually BTX designs, but the only central location for all things BTX was at the very back of the Technology Showcase:

There were some fairly sleek case designs, such as the Dell case we'd seen everywhere (especially in the virtualization demos):

Intel also had a few BTX motherboards on display:

...some of which were even their own:

But overall, the focus on BTX has definitely cooled since its first introduction, and we've heard much of the same from motherboard makers.

Seagate's Momentus 5400 FDE - Real Time Hardware HDD Encryption

Seagate was proudly displaying their Momentus 5400 FDE (Full-Disk Encryption), a 2.5" hard drive with an ASIC on the PCB that performs real time encryption and decryption of data on the drive from the minute you plug it in.

Seagate claims that the encryption/decryption ASIC imposes no performance penalty on the drive itself, even during times of peak sequential transfer rates.

The encryption engine is active from the moment you first turn on the drive and begin using it, although the user never generates an encryption key (the drive comes with a factory-installed key that is inaccessible to the user). A derivative key (not the actual encryption key) can be generated by manually setting the ATA password field; the password is cryptographically combined with the encryption key to create a derivative key that is stored in non-addressable memory, making the encryption key usable only with the password. Wave Systems' Embassy security center was on display as an application that could be used to manage security settings of the FDE drive:
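Seagate hasn't detailed the exact construction, but deriving a secondary key from a password and a hidden device key is a standard idea. Here's a minimal sketch, assuming an HMAC-based derivation; the drive's ASIC may well use a different scheme entirely, and all names are our own.

```python
# Illustrative sketch of password-based derivative-key generation.
# HMAC-SHA-256 is an assumption for illustration, not Seagate's scheme.
import hashlib
import hmac
import os

factory_key = os.urandom(32)  # set at the factory, never user-readable

def derivative_key(ata_password: bytes, device_key: bytes) -> bytes:
    """Derive a secondary key from the ATA password and the device key.

    The device key never leaves the drive; only the derivative key is
    compared, so the actual encryption key is never exposed.
    """
    return hmac.new(device_key, ata_password, hashlib.sha256).digest()

# Only a matching password reproduces the stored derivative key.
stored = derivative_key(b"correct horse", factory_key)
```

The point of the indirection is that changing the password only requires re-deriving this secondary key; the bulk data on the platters never has to be re-encrypted.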

Software like Embassy can make backing up and gaining access to encrypted disks on different computers possible and seamless.

Seagate plans to ship the Momentus 5400 FDE in the first quarter of next year, although no pricing information has been announced. As of now, Seagate is only committing to releasing the drive itself; any third-party security software would have to be purchased separately.
