Caches
There are currently two types of caches, L1 (level 1) and L2 (guess...). The L1 cache is built into the processor, so there are no bus-lines to go through; the L2 cache is an external piece, although I understand that on the Pentium II it's attached to the processor's plug-in cartridge. The cache is a layer between system RAM and your processor. Think of it like a salesperson in a department store: if there is a model on the shelf, it will be handed to you with little hassle, but if the shelf is empty, they will have to go to the storage area. The cache keeps track of the most recently used memory addresses (locations) and the values they contain. When the processor requests access to memory (trust me, it will), the cache first checks whether it already holds that address. If it does not, the cache loads the value from memory and, if necessary, replaces some previously cached address with the new one.

If a program is not optimized to take advantage of the cache's ability to make repeated accesses to the same addresses, severe performance hits result. For instance, to speed itself up, the L1 cache on the Pentium loads strips of 32 bytes at a time from the L2 cache. If the address the program is looking for is not in the L1 cache, approximately 75 nanoseconds will be wasted on a P200. If the value is not in the L2 cache either, an additional 1.5 to 2.5 microseconds will be wasted in "cache thrashing". All this for one add instruction that uses a seldom-used RAM location! Think about the number of adds, subtracts, and moves that are done in code. Microseconds are certainly an insignificant measure of time for you and me, but now think about whether your 50ns EDO RAM or your 10ns SDRAM is performing up to par! I hope I have proved my point.

Courtesy of Avinash Baliga of Vzzrzzn's Programming Homepage
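
To make the idea of cache-friendly code concrete, here is a minimal sketch in C. The array, its dimensions, and the two traversal orders are made up purely for illustration; the point is simply that the first loop reuses every cache line it loads, while the second keeps forcing new lines to be fetched.

    /* A minimal sketch of cache-friendly vs. cache-unfriendly access.
       Only the access patterns matter; the array size is arbitrary. */
    #include <stdio.h>

    #define ROWS 1024
    #define COLS 1024

    static int grid[ROWS][COLS];

    int main(void)
    {
        long sum = 0;
        int r, c;

        /* Row-major walk: consecutive addresses, so each 32-byte line
           pulled into the cache is used several times before eviction. */
        for (r = 0; r < ROWS; r++)
            for (c = 0; c < COLS; c++)
                sum += grid[r][c];

        /* Column-major walk: each access jumps COLS * sizeof(int) bytes,
           so nearly every access misses and forces a fresh line load --
           the "severe performance hit" described above. */
        for (c = 0; c < COLS; c++)
            for (r = 0; r < ROWS; r++)
                sum += grid[r][c];

        printf("%ld\n", sum);
        return 0;
    }

On a Pentium-class machine the second loop can easily run several times slower than the first, even though both do exactly the same amount of arithmetic.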


Cache Memory & Cacheable Areas
Cache memory, you've heard of it, and you're using it constantly...but why? Cache memory is merely RAM that can be accessed at ultra-fast speeds, much faster than your system RAM. Your cache memory can cover a certain amount of your system RAM at those ultra-fast speeds, making retrieval and storage of commonly used, or cached, programs very fast. So why is it that you experience degraded system performance when using more RAM than you have in your cacheable area? Think of your cacheable area as the number of customers you can serve at once: when you have more customers (more RAM in use) than you can serve at a time (more RAM than you can cache), you experience delays or slowdowns. If you have 128MB of RAM, for example, in a system that can only cache 64MB, there is still 64MB remaining that cannot be accessed as fast as the other 64MB. Therefore you take a small performance hit when using that uncached RAM.


Disks
A disk-drive is something that comes standard with almost every computer these days; it could be a hard-disk (also called a fixed-disk), or it could be a floppy or ZIP drive (a removable-media drive). These are called disks because inside the protective plastic covers are flat magnetic circles. Inside more recent hard-disks there is not just one magnetic disk, but many, stacked one on top of another. 3 1/2" floppy drives are very low capacity, but a 2GB hard-disk has a huge capacity, at least compared to your RAM. Because the sizes of hard-disks are simply so overwhelming, instead of containing just linear addresses (like RAM) they are broken into sectors. The sector is the base unit of data on a hard-drive, just as a byte is the base unit of data on a microprocessor. A sector is 512 bytes. Since a sector is still quite small (only half a kilobyte), disks are broken up into tracks as well. Tracks can be thought of as concentric circles on the disk; when a disk is formatted, it is divided into these tracks so that each track contains the same number of sectors (even though the physically longer outer tracks could hold many more than the inner ones). On larger hard-disks with multiple stacked disks, the set of tracks at the same position on every disk is called a cylinder, so the number of cylinders equals the number of tracks on one of those disks. A cylinder is simply the multi-disk equivalent of a track.

Courtesy of Avinash Baliga of Vzzrzzn's Programming Homepage
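
To make the sector and cylinder arithmetic concrete, here is a small sketch in C that converts a cylinder/head/sector (CHS) address into one linear sector number and a byte offset. The geometry figures are hypothetical (every drive reports its own); only the standard conversion formula is the point.

    /* Hypothetical drive geometry, purely for illustration. */
    #include <stdio.h>

    #define SECTOR_SIZE       512L   /* bytes per sector, as described above */
    #define SECTORS_PER_TRACK 63L
    #define HEADS             16L    /* one head per platter surface */

    /* Sectors are traditionally numbered from 1 within a track;
       cylinders and heads are numbered from 0. */
    static long chs_to_sector(long cylinder, long head, long sector)
    {
        return (cylinder * HEADS + head) * SECTORS_PER_TRACK + (sector - 1);
    }

    int main(void)
    {
        long linear = chs_to_sector(2, 0, 1);   /* first sector of cylinder 2 */
        printf("linear sector %ld starts at byte offset %ld\n",
               linear, linear * SECTOR_SIZE);
        return 0;
    }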


Firmware
I like to think of Firmware as a mix between Hardware and Software. Firmware usually refers to electronic units (hardware) which can be modified by a separate medium (software). For example, your system BIOS lives on a hardware chip, yet it CAN be modified by software, since you can configure the settings contained in it via your BIOS Setup utility.


Hardware
Hardware is basically the physical equipment used with computers, such as motherboards, peripheral cards, microprocessors, etc...


IC - Integrated Circuit
Here's another buzz-word you must've heard at least once when talking about computer hardware (see Capacitors): an IC, or Integrated Circuit. An IC is a short, sometimes fancy, term for an electronic unit composed of a group of transistors (transistors will be explained later) as well as other circuit elements, in most cases on a silicon wafer or chip. An example of an IC would be a microprocessor, like the Intel Pentium™; although a microprocessor is a complex example of an IC, it is an IC nevertheless. Many of the components you find on motherboards and peripheral cards (video cards, sound cards, etc...) are composed of many ICs working cooperatively with each other.


Pipeline
Have you noticed that Intel is always boasting, "Our Pentium series chips have a dual-pipeline that makes your programs run twice as fast!"? Well, I'll explain how a pipeline works, and then I'll explain why that bold statement is untrue. The processor has a set of instructions that it understands, like moving values into registers and adding. There are five steps involved in the execution of each instruction (a little walk on the techie side):

FETCH the instruction's code.
DECODE what instruction it means.
CALCULATE what memory is going to be used.
EXECUTE the specified operation and store the results internally.
WRITEBACK the results to the memory or registers specified.

In older processors, each instruction was laboriously executed one at a time. However, on more modern processors (486 and above) the pipeline was introduced. A pipeline is like a miniature assembly-line: while the first instruction is at the DECODE (2nd) stage, the second instruction is being FETCHED (1st). Then, as the first instruction moves to the CALCULATE (3rd) stage, the second instruction moves to the DECODE (2nd) stage, and a third instruction is FETCHED (1st). This allows for much faster execution, with only minor hitches when slow instructions (like multiply) are coded.

A dual-pipeline is simply two pipelines, but it's not as great as it sounds. First of all, code must be executed in the order it is presented in (i.e. it's linear), so if two pipelines are present, the instruction in the U-pipe (1st pipeline) must come before the instruction in the V-pipe (2nd pipeline). Additionally, the V-pipe can handle a pathetically small number of operations (not the full set, as their advertising might lead you to believe). To compound this, only certain combinations of instructions can be "paired" (instruction A goes in the U-pipe, instruction B goes in the V-pipe). This means that unless a program has been specifically optimized for this particular processor, the difference the V-pipe makes is insignificant. Trust me, you won't get too "lucky" with your pairings; there are way too many rules. So the next time somebody starts talking about a dual-pipeline, you can say "So what...!"

Courtesy of Avinash Baliga of Vzzrzzn's Programming Homepage
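
Here is a back-of-the-envelope sketch in C of why even a single pipeline helps so much. It just counts cycles under the idealized assumption that every instruction spends one cycle in each of the five stages listed above and that nothing ever stalls; the instruction count is arbitrary.

    /* Idealized cycle counts: no stalls, no slow instructions. */
    #include <stdio.h>

    #define STAGES 5

    int main(void)
    {
        long n = 1000;                       /* instructions to execute */
        long serial    = n * STAGES;         /* one instruction at a time */
        long pipelined = STAGES + (n - 1);   /* fill the pipe once, then
                                                finish one per cycle */

        printf("serial:    %ld cycles\n", serial);
        printf("pipelined: %ld cycles\n", pipelined);
        return 0;
    }

With 1000 instructions the serial model needs 5000 cycles while the pipelined one needs 1004, which is where the roughly five-fold gain of one full pipeline comes from. Notice that none of this has anything to do with the second (V) pipe.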


RAM
Memory in your computer is the RAM (random-access memory, for lack of a better term), not your hard-drive space. Memory locations are called addresses and are numbered starting with zero, increasing by one for each byte of RAM until the end of memory is reached. So 4MB of RAM contains over 4 million addresses (that's a lot)! And although the mass media may exclaim that RAM is super-fast, you must realize that they lie! If you've noticed, memory is attached to your motherboard, not your processor. This means that the processor must use bus lines to access memory. But not to fear: there are special pins on the processor for this purpose (it's designed to use memory) and there are special bus lines (think of them as embedded wires) for CPU/memory interaction. The fact that memory must be accessed through your bus means that memory access is only as fast as your bus speed. So if you've got a 200MHz Pentium, odds are that your bus is running at 66MHz. This means that if there were no caches whatsoever, an add instruction that takes one clock cycle would need 15 nanoseconds to access one piece of memory (in optimal conditions) versus 5 nanoseconds to access a register. But all modern systems have caches, and these play a role in determining memory access speed.

Courtesy of Avinash Baliga of Vzzrzzn's Programming Homepage

See the RAM Guide for More Information
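
If you want to check the figures above yourself, the arithmetic is simply one divided by the clock frequency. A tiny sketch in C, using the same 66MHz bus and 200MHz core speeds from the example:

    /* One clock cycle lasts 1/frequency seconds. */
    #include <stdio.h>

    int main(void)
    {
        double bus_hz = 66.0e6;    /* front-side bus */
        double cpu_hz = 200.0e6;   /* Pentium 200 core clock */

        printf("bus cycle: %.1f ns\n", 1.0e9 / bus_hz);   /* about 15.2 ns */
        printf("cpu cycle: %.1f ns\n", 1.0e9 / cpu_hz);   /* exactly 5.0 ns */
        return 0;
    }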


Registers

The microprocessor is a very complex beast, but programming it can be simple. The microprocessors I will discuss are only x86 processors, but all processors work in a similar style. Because RAM is separate from the processor, the processor has a small number of extremely fast memory locations of its own called registers. When a processor is referred to as a 32-bit processor, that denotes that its main registers are 32 bits long (I say "main" because the Pentium has 64-bit internal registers, but those don't count). The main CPU (central processing unit) is the integer unit, which decodes and executes all instructions dealing with simple integer operations (add, subtract, multiply, and divide). When a profiling program like WinBench gives the MIPS of a processor, that's how many millions of instructions the CPU can execute per second! The FPU (floating-point unit) has been a hot topic recently. Floating-point numbers are real numbers (they contain a decimal point). FPUs have a separate set of registers from the CPU. Because of their complexity, FPUs tend to be slower. Additionally, the registers on an x86 FPU are 80 bits long, and more bits mean slower operation. Because accessing registers takes nanoseconds (billionths of a second), while a memory access can take microseconds (millionths of a second) in the worst case, fast code is code that keeps memory access to a real minimum.

Courtesy of Avinash Baliga of Vzzrzzn's Programming Homepage
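
Here is a minimal sketch in C of what "keeping memory access to a minimum" looks like in practice. The volatile qualifier is there only to stop the compiler from quietly keeping the second counter in a register as well; everything else is made up for illustration.

    #include <stdio.h>

    #define N 50000L    /* kept small enough that the sum fits a 32-bit long */

    static volatile long in_memory;   /* every update must go through memory */

    int main(void)
    {
        long in_register = 0;         /* free to live in a CPU register */
        long i;

        for (i = 0; i < N; i++)
            in_register += i;         /* register add each iteration */

        for (i = 0; i < N; i++)
            in_memory += i;           /* load + add + store each iteration */

        printf("%ld %ld\n", in_register, (long)in_memory);
        return 0;
    }

Both loops compute the same sum, but the first can run entirely out of registers while the second forces a memory load and store on every pass.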


Software
Software, unlike Hardware, consists of all the programs, applications, functions, etc... necessary to make a computer perform specific productive functions and routines.


Voltage Regulators
You've heard me, time and time again, refer to the types of Voltage Regulators and their heatsinks used on motherboards. But what exactly do Voltage Regulators do... and what is the difference between a passive and a switching voltage regulator? A voltage regulator takes the electrical current from your case's power supply and basically regulates the amount of electricity necessary for your motherboard and, most importantly, your CPU to operate properly. In some cases a more advanced voltage regulator is necessary to provide the current to the CPU as well as the motherboard. For example, the Pentium MMX's split voltage specification (sometimes referred to as dual-voltage) dictates that the I/O voltage (current to the rest of the motherboard) must be at or around 3.3 volts while the core voltage (current to the CPU) must be at or around 2.8 volts; it therefore requires a voltage regulator capable of providing two independent voltages, 3.3v I/O and 2.8v core. In most newer motherboards you find Dual Voltage Regulators or Split Rail Voltage Regulators capable of providing two independent voltage settings concurrently. So what is the difference between a passive (or linear) voltage regulator and a switching voltage regulator? It is my understanding that switching voltage regulators sustain a steady current more effectively than passive voltage regulators, and can therefore make up for some shortcomings in your case's power supply or small flaws in your motherboard's design.
