Video Resolution vs. Performance?
Can anyone tell me what effect video resolution has on video display performance? I've always "known" -- that is, I've been told -- that 1280x1024 would strain my card more than 800x600 would. However, if I ever notice a difference in performance between the two, it's negligible. I'm a software guru, so hardware is definitely not something I understand fully. I realize the card is drawing that many more pixels on the screen, but does that really matter to the renderer/GPU? It seems more like a monitor issue than a video card issue. I run a GeForce2 (don't get me started) right now, so it's not that the card is just too fast to notice a difference.
Any suggestions? Be as detailed/technical as you gotta be. :)
Yes, a higher resolution is more graphics-intensive than a lower one. The difference between 800x600 and 1280x1024 isn't as noticeable as the jump from 1280x1024 to 1600x1200 or 1920x1200. Especially in games, performance drops as resolution increases.
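One way to see why: the work scales roughly with the total number of pixels per frame. A quick comparison (a rough sketch; real-world scaling also depends on the game and the card):

```python
# Total pixels per frame at some common resolutions.
resolutions = [(800, 600), (1280, 1024), (1600, 1200), (1920, 1200)]

base = 800 * 600  # 480,000 pixels
for w, h in resolutions:
    pixels = w * h
    print(f"{w}x{h}: {pixels:,} pixels ({pixels / base:.2f}x the work of 800x600)")
```

So 1600x1200 pushes four times as many pixels per frame as 800x600, which is why the drop is much more visible at the high end.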
Having a higher resolution means the graphics card needs more memory to hold the color data for each pixel. How much is required depends on the color depth of your screen: at 24-bit color, that's 3 bytes per pixel, so 1600x1200 needs about 5.5 MB of video memory (3 * 1600 * 1200 = 5,760,000 bytes). That used to matter many years ago, but nowadays it's more a function of your monitor, as you say. For 2D, the video card just receives a bitmap prepared by the CPU and forwards it to the monitor (the monitor doesn't store any data; it just redraws the pixels n times every second). For 3D it's more complex, and GPU speed and memory bandwidth matter a lot more, because there the GPU actually does some work.
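The frame-buffer math above is easy to check yourself (a trivial sketch; `framebuffer_bytes` is just a name made up for the example):

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Memory needed to store one uncompressed frame."""
    return width * height * (bits_per_pixel // 8)

# Frame-buffer size at 24 bits per pixel for a few resolutions.
for w, h in [(800, 600), (1280, 1024), (1600, 1200)]:
    mb = framebuffer_bytes(w, h, 24) / (1024 * 1024)
    print(f"{w}x{h} @ 24bpp: {mb:.2f} MB")
```

Even the biggest of those is a few megabytes, which is why frame-buffer size alone stopped being the bottleneck once cards shipped with tens of megabytes of VRAM.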
The reason the video card works harder at higher resolutions is that it's rendering a more detailed image every frame. More detail means more work for the card.
There are calculations that need to be done for every pixel visible on the screen. More pixels means more of those calculations, which lowers the frame rate. This is really the most important part; as noted above, the extra frame-buffer memory at higher resolutions isn't much of a factor these days, if at all.
The basic job of the video card is to obtain a color value for every pixel on the screen. So for every given pixel we need to figure out which polygon is visible to the camera and what color that polygon is at that point. (There could also be multiple visible polygons if some are transparent, in which case the colors need to be blended to obtain a final value.)

We can render polygons with a scanline algorithm, which walks the array of pixels line by line and scans across each line to determine which pixels are inside the polygon. Already we see one way that higher resolution slows things down: a polygon of a given physical size covers more pixels at a higher resolution, and every one of those pixels has to be scanned and set to the proper color in the frame buffer.

So what's the proper color? Actually, before we figure that out, we need to answer another question: is this polygon the one closest to the camera at this point? If not, it won't be visible, and we shouldn't waste time calculating a color. This is where the z-buffer comes in. For each pixel it holds the z-value (depth into the screen) of the closest polygon drawn so far at that pixel. We take the z-value of the point we're about to shade and compare it to the z-value already in the buffer. If it's closer (which could mean greater or less, depending on your convention), we update the z-buffer, calculate the color, and write it to the frame buffer. If it's farther than something already there, we throw it out and move on.

As for the color, the simplest case is flat shading with no textures, where every pixel in a polygon is the same color. But needless to say, this is very ugly and unrealistic, so real renderers almost always use a combination of textures and shading/lighting to achieve a better look.
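The depth-test logic described above can be sketched in a few lines (a toy illustration, assuming smaller z means closer to the camera; `plot` and the buffer names are made up for the example):

```python
# Per-pixel depth test: only the closest surface's color survives.
WIDTH, HEIGHT = 4, 3
z_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, color):
    """Write a pixel only if it's closer than what's already there."""
    if z < z_buffer[y][x]:
        z_buffer[y][x] = z
        frame_buffer[y][x] = color

plot(1, 1, 5.0, (255, 0, 0))   # red fragment at depth 5: drawn
plot(1, 1, 2.0, (0, 255, 0))   # green fragment in front: overwrites red
plot(1, 1, 9.0, (0, 0, 255))   # blue fragment behind: discarded
print(frame_buffer[1][1])      # (0, 255, 0)
```

Note that this compare-and-maybe-write happens once per covered pixel, which is exactly why it scales with resolution.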
So for every pixel in a polygon, we need to get the texture coordinates, look up the color value in the texture, then apply shading and lighting to that color. (This could be done by the fixed-function pipeline or by a custom pixel shader.) Finally we can set the color of the pixel in the frame buffer. All of this is work the card has to do for every pixel in every polygon (that makes it through clipping and culling, but let's not get into that), and there are more pixels per polygon at higher resolutions. Z-testing, lighting calculations, texture lookups, pixel shaders, and so on: all of that math is repeated more often when there are more pixels, and more math means more calculation time and fewer finished frames. What the hardware does is not all that different from a software implementation of a renderer; it's just a lot faster.
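Putting the texture lookup and lighting together, the per-pixel work looks roughly like this in a software renderer (a deliberately simplistic sketch: nearest-neighbor sampling and a single diffuse intensity, with `shade_pixel` and `checker` made up for the example):

```python
def shade_pixel(texture, u, v, light_intensity):
    """Sample a texture and apply simple diffuse lighting."""
    # Nearest-neighbor texture lookup (u, v in [0, 1)).
    tx = int(u * len(texture[0]))
    ty = int(v * len(texture))
    r, g, b = texture[ty][tx]
    # Scale the texel color by the light intensity (0.0 - 1.0).
    return (int(r * light_intensity),
            int(g * light_intensity),
            int(b * light_intensity))

# A 2x2 black-and-white checkerboard texture.
checker = [[(255, 255, 255), (0, 0, 0)],
           [(0, 0, 0), (255, 255, 255)]]
print(shade_pixel(checker, 0.25, 0.25, 0.5))  # half-lit white texel
```

A real pipeline does the same kind of lookup-and-multiply per pixel, just with filtering, multiple lights, and shader programs, and in massively parallel hardware.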
© 2002-2012 Tilted Forum Project