cyrnel's sort of right, but a bit outdated. Welcome to today's lesson: a brief history of cache.
In the beginning, there was nothing. Err, no cache, that is. The concept didn't exist. However, the egg-heads quickly figured out that the transfer rates of RAM at the time were too slow to feed the processor all the data it needed, which is why cache was born. It all started with level 1 cache (which wasn't called level 1 at the time, as there were no other levels). Chip manufacturers figured out that they could include a small amount of fast (but expensive) static RAM in the CPU itself and use it to store data and instructions that are likely to be needed again. The size currently ranges from 8KB to 64KB, although obviously it started at the smaller end of the scale.

Algorithms were developed to let the CPU keep the most commonly used (and therefore most commonly needed) data in that cache, and a great deal of time was spent trying to improve the hit rate. However, it was discovered that the speed of the cache was tied directly to its size: the more cache there was, the more the processor had to search, and the higher the latency became. That left the engineers in a tricky spot. For one, there was only so much memory that could fit on the processor at a time; this was several years ago, and Moore's law hadn't taken us quite as far as it has today. Aside from the fact that there was only so much that would physically fit, too much cache began to effectively destroy the speed benefit. If there was too much, the latency went through the roof; too little, and you began to run into 'capacity misses', when the cache simply couldn't hold all the information the processor needed.
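To put a rough number on that trade-off: the standard back-of-the-envelope figure is the average memory access time, which is the hit time plus the miss rate times the miss penalty. The little sketch below uses made-up cycle counts (they're assumptions purely for illustration, not measurements of any real part), but it shows how a bigger cache with a better hit rate can still come out behind once its extra latency is factored in.

```python
def average_access_time(hit_time, miss_rate, miss_penalty):
    """Average time per memory access: hit time plus the expected miss cost."""
    return hit_time + miss_rate * miss_penalty

# Cycle counts below are invented for illustration only.
small_fast = average_access_time(hit_time=2, miss_rate=0.10, miss_penalty=50)  # 7.0 cycles
large_slow = average_access_time(hit_time=6, miss_rate=0.05, miss_penalty=50)  # 8.5 cycles
print(small_fast, large_slow)  # the bigger cache misses less, yet still loses overall
```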
And thus along came level 2. Level 2 was originally defined as off-chip; that is, on the motherboard. It uses a different set of algorithms to store data; some processors use a level 2 cache that is inclusive (and holds all the information the level 1 cache does) while others are exclusive (any datum that's in level 1 will not be in level 2 and vice versa). The whole system works like this:
The CPU checks its level 1 cache first to see if the datum it needs is in there. If there's a hit, the datum stays where it is (under most algorithms). If it's a miss, the CPU moves on to the larger but slower level 2 cache. If it's a hit there, the processor takes the datum from the level 2 cache and moves it up to level 1; in some cases it'll swap two pieces of data, while in others it'll simply overwrite the least recently used item in the level 1 cache with the new one.
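If it helps to see that lookup sequence as code, here's a minimal toy model of it. It's a sketch under a pile of assumptions, not how any real CPU is built: the entry counts, the dictionary storage and the plain LRU eviction are all made up, and it models the exclusive, swap-on-promotion variant where an L1 victim drops down into L2.

```python
from collections import OrderedDict

class TwoLevelCache:
    """Toy model of an exclusive two-level cache with LRU eviction.
    Sizes and behaviour are illustrative, not taken from any real part."""

    def __init__(self, l1_entries=4, l2_entries=16):
        self.l1 = OrderedDict()  # small and fast: always checked first
        self.l2 = OrderedDict()  # larger but slower: checked on an L1 miss
        self.l1_entries, self.l2_entries = l1_entries, l2_entries

    def read(self, addr, load_from_memory):
        if addr in self.l1:                    # L1 hit: the datum stays where it is
            self.l1.move_to_end(addr)          # just mark it as recently used
            return self.l1[addr]
        if addr in self.l2:                    # L1 miss, L2 hit
            datum = self.l2.pop(addr)          # pull it out of L2 (exclusive)...
            self._insert_l1(addr, datum)       # ...and promote it to L1
            return datum
        datum = load_from_memory(addr)         # miss in both levels: go to main memory
        self._insert_l1(addr, datum)
        return datum

    def _insert_l1(self, addr, datum):
        if len(self.l1) >= self.l1_entries:
            victim_addr, victim = self.l1.popitem(last=False)  # least recently used L1 entry
            if len(self.l2) >= self.l2_entries:
                self.l2.popitem(last=False)    # L2 full too: its oldest entry falls out entirely
            self.l2[victim_addr] = victim      # the L1 victim drops down into L2
        self.l1[addr] = datum

# Hypothetical usage: the lambda stands in for a slow read from main memory.
cache = TwoLevelCache()
value = cache.read(0x1000, load_from_memory=lambda a: f"data@{a:#x}")
```

An inclusive variant would instead leave a copy of the datum in L2 when promoting it, so the two levels overlap; you waste some capacity in exchange for simpler handling when data has to be evicted or checked by other parts of the system.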
So that's how it worked for a while. Then, as memory became smaller and cheaper, manufacturers began to realize that they could boost the speed of level 2 cache by moving it onto the chip, and later onto the die itself (the central part of the CPU). By shortening the distance from the processor to the cache and removing an entire section of the bus, latency times for level 2 cache were brought down dramatically.
So everything was redefined. Level 1 cache kept its job, while level 2 cache was moved off the motherboard and turned into a larger secondary on-die cache. At first, motherboard manufacturers began using a much larger but still slower level 3 cache, but as memory sizes and bus speeds increased it gradually became redundant for the home PC market. It was discovered that there was a functional limit to the amount of cache a home-market multi-tasking PC could effectively use. Further to that, with advancing bus speeds the boost offered by level 3 cache began to diminish; the main memory began to catch up. Level 3 cache was therefore dropped on most home PCs to reduce the price. Since it was either made of static RAM and very expensive, or dynamic RAM and thus lost any speed benefit over the system memory, it was seen as an unnecessary addition for the most part. That's why most modern home PCs lack level 3 cache, as you noted.
There are, however, computers that do use level 3 cache. Servers and other high-powered, high-demand machines often end up executing the same set of instructions over and over, yet those instructions usually won't fit in the small level 2 cache. Therefore, the level 3 cache will be anywhere from 2MB to about 256MB. As you can probably imagine, this can get very expensive.
As has been noted, level 3 cache is now being moved on-chip; Intel has sold the Itanium II for about 2 years now, which in its most recent incarnation carries up to 8MB of cache on the chip. It's worth noting as well that Itaniums are designed to run in multi-processor server platforms; therefore, each processor has up to 8MB of L3 cache, leading to a much higher effective amount of cache depending on whether the system is a 2- or 4-processor architecture.
Whether or not this will lead to a still larger L4 cache, I don't know. For one, there's only so much information that needs to be resident in cache at a time; aside from that, the bigger the cache, the more expensive it is, although Moore's law continues to help ameliorate that. There may already be motherboards out there with L4 cache available, although I'm not up to date enough on the server market to be certain.
After all that, the answer to your question is simply one of legacy. Some motherboards still check for level 3 cache because the architecture for it was never removed from the chipset; it's just been set to 0. Hence the line you see during boot-up.