Setting up the Tests

Setting up the tests and getting everything running properly was no easy task. In fact, the first issue we encountered wasn't one we had even considered until working on our Hyper-Threading Pentium 4 article. In that article we mentioned that Windows 2000 does not properly recognize Hyper-Threading and thus treats every CPU, whether logical or physical, as a physical CPU. The problem is that if you have a 4-way Xeon MP server and enable Hyper-Threading, the OS will think you have 8 CPUs in total, and unless you're running Windows 2000 Advanced Server the OS will only recognize 4 of them. Because of this, we had to switch our test OS from Windows 2000 Server to Windows 2000 Advanced Server. Keep this in mind if you plan on deploying a 4-way Xeon MP setup on a Windows 2000 Server platform.

Microsoft's SQL Server doesn't suffer the same fate; it properly recognizes Hyper-Threading. Because of this, you can use SQL Server 2000 Standard Edition with a 4-way Xeon MP with Hyper-Threading enabled and only be licensed for the four physical processors in the machine. With more than 4 physical CPUs, however, you'll have to switch over to SQL Server 2000 Enterprise Edition, but the switch grants you the ability to use more than 2GB of memory, which is quite useful.

The other issue that creeps up is that, without knowledge of Hyper-Threading, Windows 2000 will not necessarily assign threads to physical CPUs before assigning them to logical CPUs. Remember that a physical CPU will always be faster than a logical processor that shares a physical CPU with another; so if only four concurrent threads are being dispatched, you'd want them sent to all four physical CPUs rather than to two physical and two logical processors.
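To illustrate the scheduling preference described above, here is a small sketch in Python. It is purely an illustrative model, not the Windows 2000 scheduler: the topology representation and the `assign_threads` helper are our own invention, modeling a 4-way Xeon MP that exposes two logical processors per physical package.

```python
def assign_threads(num_threads, topology):
    """Pick logical processors for runnable threads, preferring one per
    physical package before doubling up on Hyper-Threading siblings.

    topology: list of (physical_id, logical_id) pairs. This is an
    illustrative model, not how Windows 2000 actually schedules.
    """
    # Primary contexts (logical_id 0) of every package come first,
    # then the Hyper-Threading siblings (logical_id 1).
    order = sorted(topology, key=lambda cpu: (cpu[1], cpu[0]))
    return order[:num_threads]

# A 4-way Xeon MP with Hyper-Threading: 4 packages x 2 logical CPUs.
topo = [(pkg, log) for pkg in range(4) for log in range(2)]

# Four runnable threads should land on four distinct physical packages,
# not on two packages' worth of physical + logical processor pairs.
placement = assign_threads(4, topo)
assert len({pkg for pkg, _ in placement}) == 4
```

A Hyper-Threading-unaware scheduler sees all eight entries as equivalent and may well pick two siblings from the same package, which is exactly the performance trap the article warns about.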

The OS-level problems will be solved once .NET Server is released, but until then you'll just have to keep them in the back of your mind before deploying hardware. Now let's move on to the tests themselves.

As we've mentioned in previous articles, recording a trace on a database server is much like recording a timedemo in Quake III. Every request sent to the database server is recorded into a file that is nothing more than a list of those requests. You can then load a copy of the database the trace was recorded from onto any machine and play the trace back to simulate the exact same load on that DB. For our load-multiplied tests we simply ran multiple copies of the DB and multiple traces concurrently.
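The load-multiplication idea can be sketched in a few lines of Python. This is a hypothetical stand-in, not the actual trace-replay tooling we used: `replay_trace`, `run_load`, and the counting `execute` stub are all illustrative names.

```python
import threading

def replay_trace(trace, execute):
    """Replay every recorded request, in order, against the database."""
    for request in trace:
        execute(request)

def run_load(trace, execute, multiplier):
    """Simulate an N-x load by replaying N copies of the same trace
    concurrently - much like looping a timedemo on several clients."""
    workers = [threading.Thread(target=replay_trace, args=(trace, execute))
               for _ in range(multiplier)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

# Hypothetical stand-in for the DB server: just count executed requests.
lock = threading.Lock()
executed = []
def execute(request):
    with lock:
        executed.append(request)

# A 3x load run of a five-request trace executes 15 requests in total.
run_load(["SELECT 1"] * 5, execute, multiplier=3)
assert len(executed) == 15
```

The point of the multiplier is that the DB sees the same mix of queries, only arriving from several sources at once, which is how our 3x, 4x, 6x and 12x load levels were generated.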

For the AnandTech Forums DB tests we ran 1x and 3x load runs, while the AnandTech Web and AnandTech Ad DBs were run under 1x, 4x, 6x and 12x load.

Each test was run 5 times; the first run was thrown out to let the DB server begin caching queries, and the remaining runs were averaged to produce the final score in transactions per second. Each score is the sum of all of the concurrent transactions divided by the total running time of the traces. The DB server was rebooted and its drives defragmented before switching tests or load settings.
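The scoring procedure above reduces to a short function. The helper name and the pair-of-numbers representation are our own; the arithmetic (discard the warm-up run, average the rest, score = transactions / running time) is exactly as described.

```python
def transactions_per_second(runs, discard=1):
    """Score a test: drop the first run(s) so the DB can warm its query
    cache, then average what remains. Each run is a pair of
    (total concurrent transactions, total trace running time in seconds).
    Illustrative helper, not the actual benchmark harness."""
    kept = runs[discard:]
    return sum(tx / secs for tx, secs in kept) / len(kept)

# First run is slow while the cache is cold and gets thrown out;
# the final score averages only the warmed-up runs.
assert transactions_per_second([(100, 10), (120, 10), (120, 10)]) == 12.0
```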

The DB servers were configured with 2GB of memory and six 18GB 10,000-RPM Seagate Cheetah drives in RAID 0, running off of an Intel 64-bit PCI SCSI RAID controller. Normally you wouldn't find a DB server with its drives in RAID 0, but we stuck with RAID 0 to avoid having to use twice as many drives. We weren't concerned with data loss in the event of a drive failure since this was only a test server; in a real-world deployment we'd configure the drives as a RAID 10 array to provide both performance and data redundancy.
