Tilted Forum Project Discussion Community  

Old 07-14-2005, 05:54 PM   #1 (permalink)
Junkie
 
MontanaXVI's Avatar
 
Location: Go A's!!!!
defragging hdd

I have a 60 GB hard drive that gets pretty heavy use with all the downloading, deleting, and burning I do. When it comes time to defrag the drive, which do you all think would be better?

I have Norton SystemWorks, which comes with their defragger, Speed Disk. I've heard it does a pretty good job in a nice fast time frame.

OR

Should I stick with the standard old MS Defrag option in windows?

Any other suggestions, comments?

Thanks.
__________________
Spank you very much
MontanaXVI is offline  
Old 07-14-2005, 05:57 PM   #2 (permalink)
Go Cardinals
 
soccerchamp76's Avatar
 
Location: St. Louis/Cincinnati
Try downloading Diskeeper. It's a great defragging program that also reports how much was defragmented, so you know how badly fragmented the drive was. You can analyze the drive first, and it estimates the percentage of performance you'll gain from defragging, along with load times in seconds both before and after the defrag.
__________________
Brian Griffin: Ah, if my memory serves me, this is the physics department.
Chris Griffin: That would explain all the gravity.
soccerchamp76 is offline  
Old 07-14-2005, 08:50 PM   #3 (permalink)
SiN
strangelove
 
SiN's Avatar
 
Location: ...more here than there...
Diskeeper is a good program; I've used it for a while.

Recently I've been using Raxco's PerfectDisk, which is supposed to be better and quicker.

Either way, if you have a lot of file activity, defragging regularly is a good idea.
__________________
- + - ° GiRLie GeeK ° - + - °
01110010011011110110111101110100001000000110110101100101
There'll be days/When I'll stray/I may appear to be/Constantly out of reach/I give in to sin/Because I like to practise what I preach
SiN is offline  
Old 07-14-2005, 09:20 PM   #4 (permalink)
Devils Cabana Boy
 
Dilbert1234567's Avatar
 
Location: Central Coast CA
Diskeeper is great, but it doesn't have the safeguards that the MS defragger does; I lost half a TB to it.
__________________
Donate Blood!

"Love is not finding the perfect person, but learning to see an imperfect person perfectly." -Sam Keen
Dilbert1234567 is offline  
Old 07-15-2005, 12:30 AM   #5 (permalink)
Upright
 
Location: US
Someone had to say it.

What is all this defrag nonsense? You could switch to Linux.

Mark
PenguinsRock is offline  
Old 07-15-2005, 05:30 AM   #6 (permalink)
Professional Loafer
 
bendsley's Avatar
 
Location: texas
In Linux you still have the fsck command.

fsck [ -sACVRTNP ] [ -t fstype ] filesys [ ... ] [--] [ fsck-options ]

fsck is used to check and optionally repair one or more Linux file systems. filesys can be a device name (e.g. /dev/hdc1, /dev/sdb2), a mount point (e.g. /, /usr, /home), or an ext2 label or UUID specifier (e.g. UUID=8868abf6-88c5-4a83-98b8-bfc24057f7bd or LABEL=root). The fsck program will try to check filesystems on different physical disk drives in parallel, to reduce the total time needed to check all of the filesystems.

The Windows Defrag program is based on Peter Norton's original defrag software. Up until Windows ME, the program was actually Norton's defragger, until MS decided to do it themselves. If you're running Windows XP, I believe it does some defragging in the background automatically, or at least tries to be smarter about where it puts things on the disk.

In a single-user, single-tasking OS, it's best to keep all blocks for a
file together, because _most_ of the disk accesses over a given period
of time will be against a single file. In this scenario, the read-write
heads of your HD advance sequentially through the hard disk. In the same
sort of system, if your file is fragmented, the read-write heads jump
all over the place, adding seek time to the hard disk access time.

In a multi-user, multi-tasking, multi-threaded OS, many files are being
accessed at any time, and, if left unregulated, the disk read-write
heads would jump all over the place all the time. Even with
'defragmented' files, there would be as much seek-time delay as there
would be with a single-user single-tasking OS and fragmented files.
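To put a rough number on the seek cost described above, here's a toy calculation (my own sketch, not from the thread; the block positions are invented purely for illustration): total head travel for a six-block file laid out contiguously versus scattered across the disk.

```python
# Toy model of seek cost (invented block positions, purely for
# illustration): total head travel, in block positions, to read a
# six-block file laid out contiguously vs. fragmented on disk.
def head_travel(positions, start=0):
    travel, head = 0, start
    for p in positions:
        travel += abs(p - head)  # distance the head moves for this read
        head = p
    return travel

contiguous = [10, 11, 12, 13, 14, 15]
fragmented = [10, 87, 11, 42, 88, 12]

print(head_travel(contiguous))  # 15
print(head_travel(fragmented))  # 316
```

Same six blocks, twenty times the head movement: that's the single-user, single-file case where defragging pays off.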

Fortunately, multi-user, multi-tasking, multi-threaded OSs are usually
built smarter than that. Since file access is multiplexed from the point
of view of the device (multiple file accesses from multiple, unrelated
processes, with no order imposed on the sequence of blocks requested),
the device driver incorporates logic to accommodate the performance hits,
like reordering the requests into something sensible for the device
(e.g., an elevator algorithm).

In other words, fragmentation is a concern when one (and only one)
process accesses data from one (and only one) file. When more than one
file is involved, the disk addresses being requested are 'fragmented'
with respect to the sequence that the driver has to service them, and
thus it doesn't matter to the device driver whether or not a file was
fragmented.

To illustrate:

I have two programs executing simultaneously, each reading two different
files.

The files are organized sequentially (unfragmented) on disk...
[1.1][1.2][1.3][2.1][2.2][2.3][3.1][3.2][3.3][4.1][4.2][4.3][4.4]


Program 1 reads file 1, block 1
file 1, block 2
file 2, block 1
file 2, block 2
file 2, block 3
file 1, block 3

Program 2 reads file 3, block 1
file 4, block 1
file 3, block 2
file 4, block 2
file 3, block 3
file 4, block 4

The OS scheduler causes the programs to be scheduled and executed such
that the device driver receives requests
file 3, block 1
file 1, block 1
file 4, block 1
file 1, block 2
file 3, block 2
file 2, block 1
file 4, block 2
file 2, block 2
file 3, block 3
file 2, block 3
file 4, block 4
file 1, block 3

Graphically, this looks like...

[1.1][1.2][1.3][2.1][2.2][2.3][3.1][3.2][3.3][4.1][4.2][4.3][4.4]
}------------------------------>[3.1]
[1.1]<--------------------------'
`----------------------------------------->[4.1]
[1.2]<------------------------------------'
`-------------------------->[3.2]
[2.1]<----------------'
`------------------------------->[4.2]
[2.2]<--------------------------'
`---------------->[3.3]
[2.3]<-----------'
`------------------------------->[4.4]
[1.3]<---------------------------------------------'

As you can see, the accesses are already 'fragmented' and we haven't
even reached the disk yet (up to this point, the accesses have been
against 'logical' addresses). I have to stress this: the above
situation is _no different_ from an MSDOS single-file physical access
against a fragmented file.

So, how do we minimize the effect seen above? If you are MSDOS, you
reorder the blocks on disk to match the (presumed) order in which they
will be requested. On the other hand, if you are Linux, you reorder the
_requests_ into a regular sequence that minimizes disk access using
something like an elevator algorithm. You also read ahead on the drive
(optimizing disk access), buffer most of the file data in memory, and
you only write dirty blocks. In other words, you minimize the effect of
'file fragmentation' as part of the other optimizations you perform
on the _access requests_ before you execute them.
Now, this is not to say that 'file fragmentation' is a good thing. It's
just that 'file fragmentation' doesn't have the *impact* here that it
would have in MSDOS-based systems. The performance difference between a
'file fragmented' Linux file system and a 'file unfragmented' Linux
file system is minimal to none, where the same performance difference
under MSDOS would be huge.
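The point above can be sketched in a few lines of Python (my own illustration, using the block layout and request order from the diagram): servicing the twelve interleaved requests in arrival order versus after an elevator-style sort by disk position.

```python
# The twelve requests from the illustration above, mapped to disk
# positions per the layout diagram: file 1 occupies positions 0-2,
# file 2 positions 3-5, file 3 positions 6-8, file 4 positions 9-12.
POS = {"1.1": 0, "1.2": 1, "1.3": 2,
       "2.1": 3, "2.2": 4, "2.3": 5,
       "3.1": 6, "3.2": 7, "3.3": 8,
       "4.1": 9, "4.2": 10, "4.4": 12}

# Order in which the device driver receives the requests (see above).
arrival = ["3.1", "1.1", "4.1", "1.2", "3.2", "2.1",
           "4.2", "2.2", "3.3", "2.3", "4.4", "1.3"]

def head_travel(blocks):
    travel, head = 0, 0
    for b in blocks:
        travel += abs(POS[b] - head)  # seek distance for this request
        head = POS[b]
    return travel

# Elevator-style service: one upward sweep in position order.
elevator = sorted(arrival, key=POS.get)

print(head_travel(arrival))   # 76: head jumps all over the place
print(head_travel(elevator))  # 12: one smooth pass across the disk
```

Every file here is perfectly unfragmented, yet the arrival-order head travel is terrible; reordering the requests, not the blocks on disk, is what recovers the performance.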

Under the right circumstances, fragmentation is a neutral thing, neither
bad nor good. As for defragmenting a Linux filesystem (ext2fs), there are
tools available, but (because of the design of the system) these tools
are rarely (if ever) needed or used. That's the impact of designing the
multi-processing/multi-tasking/multi-user capacity of the OS into its
facilities up front, rather than tacking multi-processing/multi-tasking/
multi-user support onto an inherently single-processing/single-tasking
single-user system.
__________________
"You hear the one about the fella who died, went to the pearly gates? St. Peter let him in. Sees a guy in a suit making a closing argument. Says, "Who's that?" St. Peter says, "Oh, that's God. Thinks he's Denny Crane."

Last edited by bendsley; 07-15-2005 at 06:42 AM..
bendsley is offline  
Old 07-15-2005, 05:45 AM   #7 (permalink)
Psycho
 
connyosis's Avatar
 
Location: Sweden - Land of the sodomite damned
I'd recommend O&O Defrag. Does a great job, and gives you several types of defragmentation methods depending on what you use your computer for.
__________________
If atheism is a religion, then not collecting stamps is a hobby.
connyosis is offline  
Old 07-15-2005, 05:50 AM   #8 (permalink)
Psycho
 
connyosis's Avatar
 
Location: Sweden - Land of the sodomite damned
Ok, bendsley updated his post which makes this post worthless...
:-)
__________________
If atheism is a religion, then not collecting stamps is a hobby.

Last edited by connyosis; 07-15-2005 at 11:44 AM..
connyosis is offline  
Old 07-15-2005, 11:40 AM   #9 (permalink)
Junkie
 
MontanaXVI's Avatar
 
Location: Go A's!!!!
Quote:
Originally Posted by bendsley
In Linux you still have the fsck command...
[rest of quoted post snipped]

...umm... yeah what he said, I am trying Diskeeper out
__________________
Spank you very much
MontanaXVI is offline  
Old 07-16-2005, 11:01 AM   #10 (permalink)
Psycho
 
I used to use both O&O Defrag and Diskeeper, but I ended up settling on PerfectDisk because of its "Offline Defrag" options. It defragments things such as the pagefile and the MFT zones. Not sure whether O&O or Diskeeper can do that (I might just never have noticed), but it speeds up boot time a decent amount.
propaganda is offline  
Old 07-17-2005, 09:10 PM   #11 (permalink)
Crazy
 
Quote:
Originally Posted by bendsley
In Linux you still have the fsck command.

fsck [ -sACVRTNP ] [ -t fstype ] filesys [ ... ] [--] [ fsck-options ]

fsck is used to check and optionally repair one or more Linux file systems. filesys can be a device name (e.g. /dev/hdc1, /dev/sdb2), a mount point (e.g. /, /usr, /home), or an ext2 label or UUID specifier (e.g. UUID=8868abf6-88c5-4a83-98b8-bfc24057f7bd or LABEL=root). The fsck program will try to check filesystems on different physical disk drives in parallel, to reduce the total time needed to check all of the filesystems.
Just as a side note: when it warns you that running fsck on a mounted partition may cause data loss, LISTEN TO IT and abort the fsck.

/bad bad day let me tell you
//37 GIG worth of a bad day
///on the main lab server
////backups also failed
*sigh*
TheProf is offline  
Old 07-17-2005, 09:41 PM   #12 (permalink)
Adequate
 
cyrnel's Avatar
 
Location: In my angry-dome.
The defrag feature built into Windows 2000/XP isn't Norton's, or even Microsoft's own. It's a lite version of Diskeeper; I think it synced up at about version 4.0. Ancient.

Defragmenting can help speed up a badly fragmented filesystem, but it's no magic bullet. Weigh the hours wasted defragging and analyzing against just buying another HD...

Running with a nearly full disk promotes fragmentation. It depends on disk size and the files you create, but once you fill a filesystem past the point where it has adequate contiguous free blocks for new file creation, things go downhill quickly. A safe rule of thumb is to add disk space at 75% full.
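For what it's worth, that 75% rule of thumb is easy to script (a minimal sketch; shutil.disk_usage is in the Python standard library, and the path and threshold here are just example values):

```python
# A quick check of the 75% rule of thumb. shutil.disk_usage is in the
# Python standard library (3.3+); path and threshold are examples.
import shutil

def needs_more_space(path, threshold=0.75):
    usage = shutil.disk_usage(path)
    return usage.used / usage.total > threshold

if needs_more_space("/"):
    print("Past 75% full: time to add disk space")
```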
__________________
There are a vast number of people who are uninformed and heavily propagandized, but fundamentally decent. The propaganda that inundates them is effective when unchallenged, but much of it goes only skin deep. If they can be brought to raise questions and apply their decent instincts and basic intelligence, many people quickly escape the confines of the doctrinal system and are willing to do something to help others who are really suffering and oppressed." -Manufacturing Consent: Noam Chomsky and the Media, p. 195
cyrnel is offline  
Old 07-18-2005, 05:39 AM   #13 (permalink)
beauty in the breakdown
 
Location: Chapel Hill, NC
As another point of order, some filesystems are much more prone to fragmentation than others. NTFS, the standard Windows filesystem, isn't that bad, but FAT32 likes to fragment itself all to hell. I have a FAT32 partition on my laptop, and I have to defrag it at least once a week or things get aggravatingly slow.
__________________
"Good people do not need laws to tell them to act responsibly, while bad people will find a way around the laws."
--Plato
sailor is offline  
 

Tags
defragging, hdd



Tilted Forum Project

Powered by vBulletin® Version 3.8.7
Copyright ©2000 - 2024, vBulletin Solutions, Inc.
Search Engine Optimization by vBSEO 3.6.0 PL2
© 2002-2012 Tilted Forum Project
