defragging hdd
I have a 60 GB HDD that gets pretty well used with all the downloading, deleting, and burning I do, so when it comes time to defrag the drive, which do you all think would be better?
I have Norton SystemWorks, which comes with their defragger, Speed Disk; I have heard it does a pretty good job in a nice fast time frame. Or should I stick with the standard old MS Defrag option in Windows? Any other suggestions or comments? Thanks.
Try downloading Diskeeper. It is a great defragging program that also reports how much was defragmented, so you know how badly fragmented the drive was. You can analyze the drive first, and it will tell you the percentage of performance you stand to gain from a defrag, along with the current load time in seconds versus the projected post-defrag load time.
Diskeeper is a good program; I've used it for a while.
Recently I've been using Raxco's PerfectDisk, which is supposed to be better and quicker. Either way, if you have a lot of file activity, defragging regularly is a good idea.
Diskeeper is great, but it does not have the safeties that MS Defrag does; I lost half a TB to it.
Someone had to say it.
What is all this defrag nonsense? ;) You could switch to Linux.
Mark
In Linux you still have the fsck command.
fsck [ -sACVRTNP ] [ -t fstype ] filesys [ ... ] [--] [ fsck-options ]

fsck is used to check and optionally repair one or more Linux file systems. filesys can be a device name (e.g. /dev/hdc1, /dev/sdb2), a mount point (e.g. /, /usr, /home), or an ext2 label or UUID specifier (e.g. UUID=8868abf6-88c5-4a83-98b8-bfc24057f7bd or LABEL=root). The fsck program will try to check filesystems on different physical disk drives in parallel to reduce the total amount of time needed to check all of the filesystems.

The Windows Defrag program is based on Peter Norton's original defrag software. Up until Windows ME, the program was actually Norton's defragger, until MS decided to do it themselves. If you're running Windows XP, I believe it tries to do some defragging in the background automatically, or at least it tries to be smarter about where it puts things on the disk.

In a single-user, single-tasking OS, it's best to keep all blocks for a file together, because _most_ of the disk accesses over a given period of time will be against a single file. In this scenario, the read-write heads of your HD advance sequentially through the hard disk. In the same sort of system, if your file is fragmented, the read-write heads jump all over the place, adding seek time to the hard disk access time.

In a multi-user, multi-tasking, multi-threaded OS, many files are being accessed at any time, and, if left unregulated, the disk read-write heads would jump all over the place all the time. Even with 'defragmented' files, there would be as much seek-time delay as there would be with a single-user, single-tasking OS and fragmented files. Fortunately, multi-user, multi-tasking, multi-threaded OSs are usually built smarter than that. Since file access is multiplexed from the point of view of the device (multiple file accesses from multiple, unrelated processes, with no order imposed on the sequence of blocks requested), the device driver incorporates logic to accommodate the performance hits, like reordering the requests into something sensible for the device (i.e. an elevator algorithm).

In other words, fragmentation is a concern when one (and only one) process accesses data from one (and only one) file. When more than one file is involved, the disk addresses being requested are 'fragmented' with respect to the sequence that the driver has to service them, and thus it doesn't matter to the device driver whether or not a file was fragmented.

To illustrate: I have two programs executing simultaneously, each reading two different files. The files are organized sequentially (unfragmented) on disk...

[1.1][1.2][1.3][2.1][2.2][2.3][3.1][3.2][3.3][4.1][4.2][4.3][4.4]

Program 1 reads
    file 1, block 1
    file 1, block 2
    file 2, block 1
    file 2, block 2
    file 2, block 3
    file 1, block 3

Program 2 reads
    file 3, block 1
    file 4, block 1
    file 3, block 2
    file 4, block 2
    file 3, block 3
    file 4, block 4

The OS scheduler causes the programs to be scheduled and executed such that the device driver receives the requests

    file 3, block 1
    file 1, block 1
    file 4, block 1
    file 1, block 2
    file 3, block 2
    file 2, block 1
    file 4, block 2
    file 2, block 2
    file 3, block 3
    file 2, block 3
    file 4, block 4
    file 1, block 3

Graphically, this looks like...

[1.1][1.2][1.3][2.1][2.2][2.3][3.1][3.2][3.3][4.1][4.2][4.3][4.4]
 }------------------------------>[3.1]
[1.1]<--------------------------'
 `----------------------------------------->[4.1]
[1.2]<------------------------------------'
 `-------------------------->[3.2]
[2.1]<----------------'
 `------------------------------->[4.2]
[2.2]<--------------------------'
 `---------------->[3.3]
[2.3]<-----------'
 `------------------------------->[4.4]
[1.3]<---------------------------------------------'

As you can see, the accesses are already 'fragmented' and we haven't even reached the disk yet (up to this point, the accesses have been against 'logical' addresses). I have to stress this: the above situation is _no different_ from an MSDOS single-file physical access against a fragmented file.

So, how do we minimize the effect seen above? If you are MSDOS, you reorder the blocks on disk to match the (presumed) order in which they will be requested. On the other hand, if you are Linux, you reorder the _requests_ into a regular sequence that minimizes disk access, using something like an elevator algorithm. You also read ahead on the drive (optimizing disk access), buffer most of the file data in memory, and you only write dirty blocks. In other words, you minimize the effect of 'file fragmentation' as part of the other optimizations you perform on the _access requests_ before you execute them.

Now, this is not to say that 'file fragmentation' is a good thing. It's just that 'file fragmentation' doesn't have the *impact* here that it would have in MSDOS-based systems. The performance difference between a 'file fragmented' Linux file system and a 'file unfragmented' Linux file system is minimal to none, where the same performance difference under MSDOS would be huge. Under the right circumstances, fragmentation is a neutral thing, neither bad nor good.

As to defragging a Linux filesystem (ext2fs), there are tools available, but (because of the design of the system) these tools are rarely (if ever) needed or used. That's the impact of designing the multi-processing/multi-tasking, multi-user capacity of the OS into its facilities up front, rather than tacking multi-processing/multi-tasking, multi-user support onto an inherently single-processing/single-tasking, single-user system.
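To make the "reorder the requests" idea concrete, here is a minimal sketch of a single elevator-style sweep over that interleaved request stream. It is an illustration only (not the actual Linux I/O scheduler code), and the block positions simply mirror the example layout above.

    # Illustration only: one elevator-style sweep over the example's request stream.
    # Block positions mirror the layout [1.1][1.2][1.3]...[4.4] from above.
    layout = ["1.1", "1.2", "1.3", "2.1", "2.2", "2.3",
              "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "4.4"]
    position = {name: block for block, name in enumerate(layout)}

    # The interleaved order in which the driver receives the requests.
    arrival = ["3.1", "1.1", "4.1", "1.2", "3.2", "2.1",
               "4.2", "2.2", "3.3", "2.3", "4.4", "1.3"]

    def head_travel(order):
        # Total head movement, in blocks, if requests are serviced in this order.
        return sum(abs(position[a] - position[b]) for a, b in zip(order, order[1:]))

    print("serviced as received:", head_travel(arrival), "blocks of seek travel")

    # One elevator sweep: service everything pending in ascending block order.
    swept = sorted(arrival, key=position.get)
    print("after one sweep:     ", head_travel(swept), "blocks of seek travel")

Serviced as received, the head travels 70 block positions; sorted into one sweep it travels 12. That is the whole point of reordering the requests instead of reordering the files.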
I'd recommend O&O Defrag. It does a great job and offers several defragmentation methods depending on what you use your computer for.
OK, bendsley updated his post, which makes this post worthless...
:-)
Quote:
...umm... yeah, what he said. I am trying Diskeeper out :)
I used to use both O&O Defrag and Diskeeper, but I ended up settling on PerfectDisk because of the "Offline Defrag" options it implements. It defrags things such as the pagefile and the MFT zones. I'm not sure whether O&O or Diskeeper can do that (I might just have never noticed it), but it does speed up boot time a decent amount.
Quote:
/bad bad day let me tell you
//37 GIG worth of a bad day
///on the main lab server
////backups also failed
*sigh*
The defrag feature built into Windows 2000/XP isn't Norton or Microsoft. It's a lite version of Diskeeper; I think it corresponds to about version 4.0. Ancient.
Defragmenting can help speed up a badly fragmented filesystem, but it's no magic bullet. Consider the hours wasted defragging and analyzing vs. just buying another HD... Running with a nearly full disk promotes fragmentation. It depends on the disk size and the files you create, but once you fill a filesystem beyond the point where it has adequate contiguous free blocks for new file creation, things go downhill quickly. A safe rule of thumb would be to add disk space at 75% full.
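If you want to keep an eye on that threshold without thinking about it, here is a minimal sketch; the 75% figure is just the rule of thumb above, and the drive letter is only an example.

    import shutil

    # Warn when a volume passes the ~75%-full rule of thumb discussed above.
    # The threshold and the example path are illustrative; adjust for your setup.
    THRESHOLD = 0.75

    def check_volume(path):
        usage = shutil.disk_usage(path)          # named tuple: total, used, free (bytes)
        fraction_used = usage.used / usage.total
        verdict = "time to add disk space" if fraction_used >= THRESHOLD else "ok"
        print(f"{path}: {fraction_used:.0%} full, {usage.free // 2**30} GB free - {verdict}")

    check_volume("C:\\")    # Windows drive; on Linux use "/" or another mount point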
As another point of order, some filesystems are much more prone to fragmentation than others. NTFS, the standard Windows FS, isn't that bad, but FAT32 likes to fragment itself all to hell. I have a FAT32 partition on my laptop, and I have to defrag the thing at least once a week or things get aggravatingly slow.
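For what it's worth, you can check whether that weekly defrag is actually due before running it. This sketch just wraps the analyze switch of the built-in command-line defrag tool; it assumes the Windows XP-era defrag.exe syntax, and the drive letter is only an example.

    import subprocess

    # Sketch: analysis-only pass with the built-in Windows defrag tool, so you can
    # see how fragmented the FAT32 partition is before committing to a full defrag.
    # Assumes the Windows XP defrag.exe syntax; the drive letter is an example.
    DRIVE = "D:"

    # "-a" analyzes and reports only, "-v" gives the verbose report; no files are moved.
    result = subprocess.run(["defrag", DRIVE, "-a", "-v"],
                            capture_output=True, text=True)
    print(result.stdout)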