There are calculations that need to be done for every pixel visible on the screen. Having more pixels means more of those calculations, which reduces frame throughput. That is really the key point; as noted, the extra frame-buffer memory needed at higher resolutions is a minor factor these days, if it matters at all.
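To put rough numbers on it: 1920x1080 is about 2.1 million pixels per frame, while 3840x2160 is about 8.3 million, four times as many, so all of the per-pixel work described below has to be done roughly four times as often for every frame.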
The basic job of the video card is to obtain a color value for every pixel on the screen. So for every pixel we need to figure out which polygon is visible to the camera at that point and what color the polygon is there. (There could also be multiple visible polygons if we have transparency, in which case the colors need to be blended to obtain a final value.) We can render polygons with a scanline algorithm, which goes down the array of pixels line by line and scans across each line to determine which pixels are inside the polygon. Already we see one way higher resolution slows things down: a polygon of a given on-screen size covers more pixels at higher resolution, and every one of them has to be scanned and written to the frame buffer. A minimal sketch of that inner loop is below.
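Here is a hypothetical C sketch (all names are made up for this example). It walks a triangle's bounding box row by row and tests each pixel with edge functions, a simplified stand-in for a true span-based scanline fill; the point is just that the loop body runs once per covered pixel, so doubling the resolution in both directions quadruples the iterations.

#include <stdint.h>
#include <math.h>

/* Signed area test: which side of edge (a->b) is point p on? */
static float edge(float ax, float ay, float bx, float by, float px, float py) {
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax);
}

/* Fill a 2D triangle into a W x H framebuffer (one uint32_t color per pixel).
   Scans the bounding box row by row; more rows and more pixels per row at
   higher resolution means more trips through the inner loop. */
void fill_triangle(uint32_t *fb, int W, int H,
                   float x0, float y0, float x1, float y1,
                   float x2, float y2, uint32_t color)
{
    int ymin = (int)fminf(fminf(y0, y1), y2);         if (ymin < 0) ymin = 0;
    int ymax = (int)ceilf(fmaxf(fmaxf(y0, y1), y2));  if (ymax > H - 1) ymax = H - 1;
    int xmin = (int)fminf(fminf(x0, x1), x2);         if (xmin < 0) xmin = 0;
    int xmax = (int)ceilf(fmaxf(fmaxf(x0, x1), x2));  if (xmax > W - 1) xmax = W - 1;

    for (int y = ymin; y <= ymax; y++) {              /* scan down the rows    */
        for (int x = xmin; x <= xmax; x++) {          /* scan across each row  */
            float px = x + 0.5f, py = y + 0.5f;
            float w0 = edge(x1, y1, x2, y2, px, py);
            float w1 = edge(x2, y2, x0, y0, px, py);
            float w2 = edge(x0, y0, x1, y1, px, py);
            /* inside the triangle if all three tests agree in sign
               (accepts either vertex winding order) */
            if ((w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                (w0 <= 0 && w1 <= 0 && w2 <= 0))
                fb[y * W + x] = color;                /* write the frame buffer */
        }
    }
}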
So what's the proper color? Actually, before we figure that out, we need to answer the question: is this polygon the one closest to the camera at this pixel? Because if not, it's not going to be visible and we shouldn't waste time calculating a color. This is where the z-buffer comes in: for each pixel it holds the z-value (depth into the screen) of the closest surface drawn there so far. We take the z-value of the pixel we're about to shade and compare it to the z-value already in the buffer. If it's closer (which could mean greater or less depending on your convention), we update the z-buffer, calculate the color, and write it to the frame buffer. If something already there is closer, we throw the pixel out and move on.
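In code that step might look roughly like this (a hypothetical sketch, not any particular API; here smaller z means closer to the camera):

#include <stdint.h>

/* Per-pixel depth test. shade() is only called when the pixel survives the
   test, so color math for hidden surfaces is skipped entirely. */
void test_and_shade(float *zbuffer, uint32_t *framebuffer, int W,
                    int x, int y, float z, uint32_t (*shade)(void))
{
    int i = y * W + x;
    if (z < zbuffer[i]) {          /* closer than what's already stored?  */
        zbuffer[i] = z;            /* record the new closest depth        */
        framebuffer[i] = shade();  /* only now compute and write a color  */
    }                              /* otherwise: occluded, throw it out   */
}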
As for the color, the simplest case is flat shading with no textures, where every pixel in a polygon is the same color. But needless to say, this looks ugly and unrealistic, so real renderers almost always combine textures with shading and lighting to achieve a better look. So for every pixel in a polygon, we interpolate the texture coordinates, look up the color value in the texture, and apply shading and lighting to that color. (This could be done by the fixed-function pipeline or by a custom pixel shader.) Finally we can set the color of the pixel in the frame buffer.
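A bare-bones version of that per-pixel work might look like this (the texture struct, the nearest-texel lookup, and the single diffuse lighting term are all assumptions for illustration, nothing like a real shader pipeline):

#include <stdint.h>

typedef struct { int w, h; const uint8_t *rgb; } Texture;  /* 3 bytes per texel */

/* Sample the texture at (u, v) in [0,1] x [0,1], then scale the result by a
   diffuse lighting factor in [0,1]. Returns a packed 0xRRGGBB color. */
uint32_t shade_pixel(const Texture *tex, float u, float v, float diffuse)
{
    int tx = (int)(u * (tex->w - 1));              /* texture coords -> texel  */
    int ty = (int)(v * (tex->h - 1));
    const uint8_t *t = &tex->rgb[(ty * tex->w + tx) * 3];
    uint8_t r = (uint8_t)(t[0] * diffuse);         /* apply lighting to sample */
    uint8_t g = (uint8_t)(t[1] * diffuse);
    uint8_t b = (uint8_t)(t[2] * diffuse);
    return (uint32_t)r << 16 | (uint32_t)g << 8 | b;
}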
All of this is what the card has to work out for every pixel of every polygon (every polygon that makes it through clipping and culling, but let's not get into that), and there are more pixels per polygon at higher resolution. Z-testing, lighting calculations, texture lookups, pixel shaders, and so on are all repeated for every pixel, and more pixels mean more math, more calculation time, and fewer finished frames. What the hardware does is not all that different from a software implementation of a renderer; it's just a lot faster.