Maximum PC


All times are UTC - 8 hours




Post new topic Reply to topic  [ 57 posts ]  Go to page 1, 2, 3  Next
 Post subject: Straight to the brains. An Old Question with no answers
PostPosted: Sun Dec 17, 2006 11:08 am 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
Is there someone in here capable of telling me just exactly HOW file fragmentation comes about? Yes, I know that when a file is written to the disk it might require more space than is handy in one chunk, so part gets moved to the next chunk of open space. I also know that for every person who says file fragmentation is nothing to be concerned about, there is another person who says it IS something to be concerned about. But I am also told that once a file is written to the disk, it remains effectively unchanged until such time as it is deleted, and even then actually remains until the space it was in is overwritten. So how is it possible for a system that does nothing more complex than sitting there idle to get its drive fragmented? It has been proven that the simple act of firing up a system creates at least some fragmentation...

I have a theory on just how this is possible, but as of yet have found no one able or willing to point me in the right direction to either prove or disprove this theory. I figured those working at learning programming and such might be able to shine just a wee bit of light into this tunnel.


 Post subject:
PostPosted: Mon Dec 18, 2006 7:29 pm 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
Well, I see there's been a fair number of Views, but no comments. Which tells me one of only a very few things has happened.
1. Either most of you have decided I am simply a raving lunatic and don't wish to risk being infected by my psychosis.
2. Or else no one here KNOWS the answer and lacks the courage to admit it.

I figure the odds are that about 50% here adhere to the belief that file fragmentation is nothing to worry about, if only because today's systems are SO much faster that any loss in speed would never be noticed. While the other 50% most likely defrag on a regular basis, in some cases perhaps even daily.
Anyway, here's my theory...

Use an analogy of an actual desktop as being a hard drive, with each allocation unit being the size of 1 sheet of paper. Each file requires 1 or more pieces of paper to contain it, with all of the various pages being placed side by side. They cannot overlap or be stacked. The FAT table would simply be a list of every file on that desk, and its location on that desk. When you reach out and take 1 file of however many pages, you pick up EVERY page that goes with that file. During the time in which you have that file in hand, or as the PC has that file in active or virtual memory, the FAT table no longer has that file listed as being 'On' the hard drive; the space(s) it takes/took up are listed as being free for other use. Someone comes along and sets another file of whatever size on the desk. Perhaps it took up all of the space effectively opened by your picking up that one file. When you have finished doing whatever you were doing with that first file, and you go to put it back, perhaps the space it was previously in is now taken up by some other file. Or perhaps you have added to that file, causing it to use more pages. Whatever the case or cause, that 1st file no longer fits into the space it was in. So you go down the desktop, lay a page here, and mark its location on the FAT table; the next page goes a bit farther down and its location is also marked. Now you have 1 file on that desktop which is fragmented. Should you need or want to read that file again, you must now go to various places on the desktop to find all of the file.
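If it helps make the discussion concrete, the analogy as stated can be put straight into code. The sketch below (Python; all file names and sizes are invented) simulates the THEORY being described here, where pages leave the desk while a file is "in hand" — it is not a model of how FAT actually behaves.

```python
# Toy model of the desktop/FAT analogy above -- it simulates the
# poster's theory (file leaves the desk while in use), not real FAT.

FREE = "."

def place(desk, fat, name, pages):
    """Put `pages` pages of `name` onto the first free slots,
    recording each slot's index in the FAT."""
    slots = []
    for i, slot in enumerate(desk):
        if slot == FREE:
            desk[i] = name
            slots.append(i)
            if len(slots) == pages:
                break
    fat[name] = slots

desk = [FREE] * 10
fat = {}

place(desk, fat, "A", 3)      # A fills slots 0-2
place(desk, fat, "B", 2)      # B fills slots 3-4

# Per the analogy: "pick up" file A, freeing its slots in the FAT.
for i in fat.pop("A"):
    desk[i] = FREE

place(desk, fat, "C", 2)      # C lands in A's old slots 0-1
place(desk, fat, "A", 4)      # A grew by a page and no longer fits back

print(desk)      # ['C', 'C', 'A', 'B', 'B', 'A', 'A', 'A', '.', '.']
print(fat["A"])  # [2, 5, 6, 7] -- file A is now fragmented
```

The non-contiguous slot list for A is exactly the "go down the desktop, lay a page here" outcome the analogy predicts.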


 Post subject:
PostPosted: Tue Jan 09, 2007 8:00 am 
8086

Joined: Mon Dec 18, 2006 12:45 pm
Posts: 18
I cheat:

"Initially, when a file system is initialized on a partition (the partition is formatted for the file system), the entire space allotted is empty.[1] This means that the allocator algorithm is completely free to position newly created files anywhere on the disk. For some time after creation, files on the file system can be laid out near-optimally. When the operating system and applications are installed or other archives are unpacked, laying out separate files sequentially also means that related files are likely to be positioned close to each other.

However, as existing files are deleted or truncated, new regions of free space are created. When existing files are appended to, it is often impossible to resume the write exactly where the file used to end, as another file may already be allocated there — thus, a new fragment has to be allocated. As time goes on, and the same factors are continuously present, free space as well as frequently appended files tend to fragment more. Shorter regions of free space also mean that the allocator is no longer able to allocate new files contiguously, and has to break them into fragments. This is especially true when the file system is more full — longer contiguous regions of free space are less likely to occur.

To summarize, factors that typically cause or facilitate fragmentation, include:

* low free space.
* frequent deletion, truncation or extension of files."
Wiki
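The two quoted factors can be demonstrated in a few lines of code. What follows is an illustrative first-fit allocator in Python — not any real file system's algorithm; the disk size and file names are invented.

```python
# Appending to a file whose tail block is already taken forces a new
# fragment: the write cannot resume where the file used to end.

FREE = None

def allocate(disk, name, blocks):
    """First-fit: claim free blocks left to right."""
    placed = []
    for i, b in enumerate(disk):
        if b is FREE and len(placed) < blocks:
            disk[i] = name
            placed.append(i)
    return placed

disk = [FREE] * 12
a = allocate(disk, "A", 4)    # A gets blocks 0-3
b = allocate(disk, "B", 4)    # B gets blocks 4-7, right after A

# Append to A: block 4 is occupied by B, so the new data must be
# allocated after B instead -- a second fragment.
a += allocate(disk, "A", 2)   # blocks 8-9

fragments = 1 + sum(1 for x, y in zip(a, a[1:]) if y != x + 1)
print(a, "fragments:", fragments)   # [0, 1, 2, 3, 8, 9] fragments: 2
```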


 Post subject:
PostPosted: Tue Jan 09, 2007 12:18 pm 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
Yoder, unfortunately, your reply doesn't say anything that hasn't already been said. And so far as I've been able to verify, my analogy holds true.
Yes, whenever files are added, deleted, or changed in any way that affects their size, they will get fragmented. As time goes on this happens to greater and greater degrees. Yet how is it possible for files, even those supposedly READ ONLY, to become fragmented? Unless, as in my analogy, when they are 'Read' into active memory, the FAT table no longer 'Sees' them as being written to the disk. In which case the space they previously occupied is now considered 'Open' and the very next file to be written to the disk will land in that space. With the Boot Sector being the ONLY section of the disk remaining 'Safe' from this happening, as that particular space is reserved specifically for those BOOT files. EVERYWHERE else on the disk is effectively 'Fair Game'.


 Post subject:
PostPosted: Sat Jan 13, 2007 4:16 pm 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
And to be disgustingly precise, even a brand new, 'Clean Install' of Windows, of whatever flavor you might prefer, will have file fragmentation even before it completes the install. Simply because a file is written to the disk, then read again for whatever purpose. Perhaps it is the Command.Com itself (although with that portion of the disk being reserved for it, and its size never changing, it's unlikely for IT to ever get fragmented). But in any case, a file is written, or copied to, the disk. Then that file is needed for some action or portion of the install process, so it gets read into active memory. Another file is then written to the disk, perhaps taking up some or all of the space that first file was first written to. So when the system is finished with that first file, it is yet again written to the disk along with any possible changes made to it. But the next open space is perhaps not quite large enough to hold all of it, so it then becomes fragmented...


 Post subject:
PostPosted: Mon Jan 15, 2007 9:57 pm 
King of All Voodoo2 Cards

Joined: Tue Jun 22, 2004 10:41 am
Posts: 9316
RABical wrote:
Yet how is it possible for files, even those supposedly READ ONLY to become fragmented? Unless, as in my analogy, when they are 'Read' into active memory, the FAT table no longer 'Sees' them as being written to the disk.


Well, I've got one drive dedicated to my music, after ripping my entire CD collection to the drive I defragged it once.

Now, I play my music very frequently via WinAmp (ie - "Reading" the files) and Windows Disk Defragmenter reports that the entire drive remains contiguous, despite all these "reads".


So, either there's something wrong with my drive or your analogy is wrong. BTW, if you're going to test this theory of yours it's best to do it on a drive other than the one with Windows installed on it.
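Flytrap7's observation can be sanity-checked in miniature: hash a file, "play" it by reading it a couple of hundred times, and hash it again. This Python sketch (the temp file name and size are arbitrary) only shows that reads leave the file's bytes untouched; it does not inspect the on-disk block layout directly.

```python
# Reading a file, no matter how often, does not modify it on disk.
import hashlib
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "read_test.bin")
with open(path, "wb") as f:
    f.write(os.urandom(1 << 16))        # 64 KB of throwaway data

def digest(p):
    with open(p, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

before = digest(path)
for _ in range(200):                    # simulate repeated "playback"
    with open(path, "rb") as f:
        f.read()
after = digest(path)

print(before == after)                  # True -- reads never rewrite it
os.remove(path)
```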


 Post subject:
PostPosted: Tue Jan 16, 2007 4:16 pm 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
So, Flytrap7, are you editing those MP3 files? Are you making CHANGES to them? Are you adding NEW files while you are PLAYING the old files? The analogy still holds, just as strong as ever. That entire drive is, from what you say, dedicated for no other purpose than to store those music files. To all intents and purposes they are READ ONLY files because you aren't changing or editing them. When you add files to the list, when you cut more MP3s and store them to the same disk, you are adding them, 1 file at a time, complete. And it is doubtful you are doing so WHILE playing some of those already present on the disk. As such, each file can easily fit right back into the space it previously held, and the new files are only added at the END of the disk. In such a case, getting any measurable fragmentation on that drive, while not impossible, would be so highly unlikely as to amount to the same thing. As each file is played FROM physical memory, and written back to the disk, there is no question of it being able to go straight back to the exact same physical location on that disk from whence it came. As the file itself has not changed, nor have any new files been added to the disk since it was first read into active memory.

Try again...


 Post subject:
PostPosted: Tue Jan 16, 2007 5:36 pm 
King of All Voodoo2 Cards

Joined: Tue Jun 22, 2004 10:41 am
Posts: 9316
What you're almost suggesting now is a Schrödinger's cat analogy in which it can never be proved nor disproved.

However, one way to do it is simply to run some system monitoring software while playing back some of those files. If any Disk Writes occur, then what you say would be true. But give this some thought:

What happens when your MP3s are stored on a medium that can't be physically written to, such as a DVD disc? Your analogy no longer holds true.


 Post subject:
PostPosted: Wed Jan 17, 2007 5:39 am 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
"However one way to do it is simply to run some system monitoring software while playing back some of those files, if there are any Disk Writes that occur then what you say would be true" Go ahead and monitor that disk, or all disks. Whichever drive holds the Swap/Paging file will have any number of Reads and Writes to it during both the cutting and playing processes. The drive which stores the files in question will have the initial reads, for each file to be played. Then have the appropriate number of WRITES when the playing is completed. I've run about every test I can track down to run.
"What happens when your MP3s are stored on a medium that can't be physically written to such as a DVD disc? Your analogy no longer holds true" Try again... Using a Read Only medium such as a CD or DVD only increases the number of times when the Swap/Paging file must be accessed. Watch your drives when you're playing those files on the CD. No, the CD won't get fragmented by the process, but unless you have a simply UNGAWDLY amount of physical RAM on the system so that the Swap/Paging file isn't needed, then playing those files from that CD will cause whatever drive holds that Swap/Paging file to become MORE fragmented.

But feel free to keep trying, I'll be happy to let you know if you ever come up with something new...


 Post subject:
PostPosted: Wed Jan 17, 2007 5:49 am 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
BTW, even the OLD systems, as in PRE-Windows, suffered from fragmentation. And they had NO swap file. There was no Virtual Memory to contend with. The advent of Windows, especially later versions, only increased the problem. Early systems would only allow a file to be written contiguously. If that open space wasn't large enough to hold that file complete, then that space was bypassed. With these systems, it wasn't so much the Files being fragmented, as it was the DRIVE being fragmented. It was possible to have hundreds of megs free on a drive, but be unable to write a simple 50K file because there wasn't a 50K space open in one place. Windows came along and addressed this problem by allowing files to be fragmented. Write as much of this file as will fit in the first open space, then move to the next open space, and continue until the entire file had been written...
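The drive-level failure described here (plenty of free space in total, but no single run large enough for the file) is easy to sketch. The code below is purely illustrative of a contiguous-only allocator; it is not a model of any specific early file system.

```python
# A contiguous-only allocator rejects a file even though total free
# space is ample, because no single free run is big enough.

def largest_free_run(disk):
    best = run = 0
    for block in disk:
        run = run + 1 if block is None else 0
        best = max(best, run)
    return best

disk = ["X", None] * 5         # used and free blocks alternate
total_free = disk.count(None)  # 5 blocks free in total
want = 3                       # the new file needs 3 contiguous blocks

print(total_free >= want)              # True: the space exists...
print(largest_free_run(disk) >= want)  # False: ...but not in one run
```

Allowing files to be split across runs, as described above, is exactly the trade that turns this "drive fragmentation" into file fragmentation.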


 Post subject:
PostPosted: Wed Jan 17, 2007 6:09 am 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
Here's a test for YOU... Try to defrag a drive while having only a limited amount of physical RAM. Say your system NORMALLY holds 1 or 2 gigs, whatever. Dig around in that toybox and pull out a little 256MB stick (or whatever the smallest amount your OS claims to be willing and able to operate with), install that in place of those mondo sticks, and see what happens during that defrag. And yes, I mean defragging the C: drive, or whatever drive holds the Swap/Paging file. You will find that even the simplest of defragging jobs takes near to forever, if it completes at all, because the program has to start over thanks to the near constant use of that Swap/Paging file. Restart Due To Disk Writes... In most cases, the defrag program will simply give up and cancel after about the 10th restart.


 Post subject:
PostPosted: Wed Jan 17, 2007 2:15 pm 
King of All Voodoo2 Cards

Joined: Tue Jun 22, 2004 10:41 am
Posts: 9316
RABical wrote:
Go ahead and monitor that disk, or all disks. Whichever drive holds the Swap/Paging file will have any number of Reads and Writes to it during both the cutting and playing processes. The drive which stores the files in question will have the initial reads, for each file to be played. Then have the appropriate number of WRITES when the playing is completed. I've run about every test I can track down to run.


You're talking about two completely different things now. At the beginning of this thread, you were talking about file fragmentation; now you're talking about the swap file. These are two completely different things.

What I suggest you do is figure out exactly what you're trying to say rather than change your theory to whatever suits you.


RABical wrote:
Try again... Using a Read Only medium such as a CD or DVD only increases the number of times when the Swap/Paging file must be accessed. Watch your drives when you're playing those files on the CD. No, the CD won't get fragmented by the process, but unless you have a simply UNGAWDLY amount of physical RAM on the system so that the Swap/Paging file isn't needed, then playing those files from that CD will cause whatever drive holds that Swap/Paging file to become MORE fragmented.


Again, now instead of talking about files on the hard drive (ie - those MP3s) you've switched to the swap file, which on any 2000/XP machine shows up as a locked chunk of space on the disk (that green area in the disk defragmenter) that cannot be moved. Being that it cannot be moved around the disk, it cannot cause fragmentation on the disk (unless of course, some dumbass configured it to be dynamic :P )


 Post subject:
PostPosted: Wed Jan 17, 2007 2:25 pm 
King of All Voodoo2 Cards

Joined: Tue Jun 22, 2004 10:41 am
Posts: 9316
RABical wrote:
BTW, even the OLD systems, as in PRE-Windows, suffered from fragmentation. And they had NO swap file. There was no Virtual Memory to contend with.


Ummm...okay, I'm sure Microsoft was thinking the same thing, which is why the DEFRAG command showed up in DOS 6.0.

Because I'm getting tired of explaining why you're wrong I'll clear it all up for you in one shot. Read this book here, and then think about what you've been posting. Pay extra attention to chapters 2, 6, 7, 9, 10, 11, 12 and 13. That's your homework.


 Post subject:
PostPosted: Thu Jan 18, 2007 12:34 pm 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
Fly, I'll look at your book offering. Although you might consider reading it yourself before you suggest it to others. As of yet, you fail to tell me just where it is that you think I'm wrong. All you've offered so far have been the same old tired lines. YES, we ALL know file fragmentation happens. And yes, we ALL know the basic reason(s) for this happening. What I am asking is HOW this happens. How is it that even a READ ONLY file can become fragmented? The analogy I offered may or may not be correct. That's why I'm asking the question(s), supposedly of those who either already know something or are at least actively trying to LEARN something. In any case, while my analogy may or may not be correct, it DOES answer the question. The only way for a READ ONLY file to become fragmented would be for it to be MOVED. After it was read into active memory, all or some of the physical space it took up on the disk was overwritten with part or all of some other file. If you will recall, a READ ONLY file will NOT be changed (other than by either a corrupted/damaged disk or corrupted RAM, which would cause it to have been read or written incorrectly).

"You're talking about two completely different things now. At the beginning of this thread, you were talking about file fragmentation, now you're talking about the swap file. These are two completely different things." Sorry Dude, but guess again. And after you actually read that book you recommend so highly, you just might realize that I have never left the topic of this thread in the slightest. Because the Swap/Paging file has a very great deal to do with file fragmentation. To be disgustingly precise, it is the single greatest CAUSE of file fragmentation, due to its at or near constant disk reads and writes. WindowsXP calls itself the best about not becoming fragmented, with only 1 change from the first 95 version, or even before.
It insists on keeping a LARGE amount of Slack Space around the Paging file to allow for its fluctuations in size. Here's yet another test for you. Change your settings so that the Swap/Paging file is on a different drive than C: and see what happens with file fragmentation on the C: drive. You will find that it is drastically reduced. Better yet, place that Paging File on the same drive with your MP3 files, then play some of those files. I'll bet you start finding fragmentation on that drive unless every single music file you have is the exact same size. Or within a very few bytes' difference, so that they each require the exact same number of allocation units.

Do your own homework before you start assigning it to others...


 Post subject:
PostPosted: Thu Jan 18, 2007 1:04 pm 
King of All Voodoo2 Cards

Joined: Tue Jun 22, 2004 10:41 am
Posts: 9316
Well, if you take my advice with a grain of salt that's your choice, but I've been reading that book cover to cover since the 4th edition back in the mid 90s and now I'm up to the 17th. Let me sum up a few things for you because you don't seem to be enthusiastic about reading a book:

1. Files are not removed from the disk when read into memory and written back to the disk when unloaded from memory. If you want to test this, play an MP3 in Winamp and while it's playing, pull the plug on your machine. When you power the machine back on, is the MP3 still there? Yup.

2. The Windows Swap/Paging file resides in a locked region of the hard drive. Again, you can see for yourself in the Windows disk defragmenter; it'll show up as a large chunk of green. This file cannot move around the disk at all, because those disk addresses are associated with virtual memory locations.

And one thing that you mentioned that was totally wrong:

Quote:
To be disgustingly precise, it is the single greatest CAUSE of file fragmentation. Due to it's at or near constant disk reads and writes.


All that grinding and thrashing your hard disk is doing (most of the time due to lack of RAM) is all occurring inside that locked area of space on the hard drive, and will not affect files OUTSIDE of said locked area. So no, the system simply reading and writing to the swap file will not cause your data files on the hard drive to fragment, and if you still don't believe me ask someone else, or try picking up a book.


 Post subject:
PostPosted: Fri Jan 19, 2007 2:27 am 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
So then, you are saying that file fragmentation does NOT exist? And quite possibly never has? Or are you just saying that the Swap/Paging file has nothing to do with it? Just what style of Cracker Jacks did you get your certification from? Because I guess I've been looking in the wrong boxes.

It has already been identified that any file, even a TMP file, once written to a hard disk, shall remain until such time as the space it takes up is overwritten. That space will never be overwritten unless/until that space is identified in the FAT table as being open and available for use.
If you read that MP3 file from a disk separate from that which holds the virtual memory, and you never make any changes to that MP3 file, then it just stands to reason that it won't get moved around. Simply because nothing else is likely to be written to that disk before that file's use is complete. It can then be listed as being right back exactly where it came from.
Like I said before, try moving the virtual memory to the same drive as holds those MP3 files, then monitor the fragmentation on all drives. The drive, usually C:, which first held the V/M will show a marked reduction in fragmentation, while the drive which NOW holds it will show a marked INCREASE in fragmentation. This is one of the top reasons why, for so very long, those who wished maximum performance from their systems kept one extra drive on their system which was NOT used for user files. This drive was effectively dedicated for no other purpose than to house the Swap/Paging file. On Windows95 systems, using an old Seagate 127Mb drive was considered best for this purpose, as those drives were tough enough to handle the near constant read/write. (The older LARGE-size drives gave up in dependability what they gained in size.) On systems today, using a drive that small wouldn't work, simply for the lack of size. Windows USUALLY will maintain a Swap/Paging file size at or about double the amount of physical RAM on the system. This, of course, depends a great deal on the amount of RAM present and the use that system is put to. A system which does little more than play Sol will of course have a smaller swap file, one which is working harder having a larger file. Another means of raising the performance of systems was to lock the Swap/Paging file to a specific size, depending of course on the use you planned for that system. Usually up to about 4X the amount of physical RAM on the system, although some few may have set it higher. This had the effect of ensuring that there was ALWAYS enough room for the virtual memory while allowing the user to be certain of just how much of that drive was available for his/her own user programs. It was also found that this practice tended to REDUCE file fragmentation to a measurable extent.


 Post subject:
PostPosted: Fri Jan 19, 2007 2:39 am 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
From the looks and content of your replies here, Fly, I can only figure that I've run into yet another dead end. I've asked this same question of those at Microsoft, also at IBM. I've even gone to various hard drive manufacturers. And all I ever get is the exact same old tired lines. According to what you have said so far, file fragmentation is physically impossible. Because those files are set for life once they are written to the drive, unless for whatever reason they are changed. Yet those same files, even those which are READ ONLY and cannot be changed, are still getting moved and shuffled around. And FRAGMENTED...
I've offered an analogy which would explain this, and you, like so many others, insist that my analogy is wrong. Yet none of you are able to explain how it is possible that something you claim to be physically impossible still happens. I had hoped to find someone here who actually knew something. It would appear that I hoped in vain.


 Post subject:
PostPosted: Sat Jan 20, 2007 1:38 pm 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
A drive like you described, as being ONLY used to store music files, would of course have little or nothing to fear from fragmentation. As I've already at least tried to explain, if no NEW files are being written to that drive WHILE any of its files are in active memory, then of course the old files won't be affected. When their use is complete, they can, and WILL, be written right back to the exact same location, with the exact same file allocation table addresses for however many allocation units they use. The drive which holds the Swap/Paging file will ALWAYS show the greatest amount of fragmentation. Even if that Swap/Paging file is the ONLY file stored there, because even the paging file for XP gets fragmented. And it gets the most use of all files. If the Boot Sector were not dedicated so that NOTHING else were allowed to be written there, then even the Command.COM would get fragmented. CDs and such simply cannot be fragmented, aside from possible Re-Writes, because the files written there are placed contiguously to start with, and each file so written is, at least for the time it is on that CD, given the attribute of READ ONLY. Copy that file to the hard drive, or just into active memory, and it can and will get fragmented till the end of the world as we know it. But the copy on that CD disk is pretty much safe so long as the disk doesn't get damaged beyond repair or recovery.

On your music storage drive, should you start editing any of those files and affecting their size(s), then and only then will that drive be likely to ever suffer from fragmentation. But like I said before, try moving your Swap/Paging file to that drive, then monitor fragmentation and see what happens. Who knows, you just might, accidentally, LEARN something.


 Post subject:
PostPosted: Sat Jan 20, 2007 9:52 pm 
King of All Voodoo2 Cards

Joined: Tue Jun 22, 2004 10:41 am
Posts: 9316
RABical wrote:
So then, you are saying that file fragmentation does NOT exist? And quite possibly never has? Or are you just saying that the Swap/Paging file has nothing to do with it?


Yeah, file fragmentation doesn't exist, that's exactly what I said. If you stopped skimming through my posts before getting diarrhea of the keyboard you would find you wouldn't be repeating the same sad rant over and over.

RABical wrote:
Just what style of Cracker Jacks did you get your certification from? Because I guess I've been looking in the wrong boxes.


Don't be so quick to pass judgement; I have yet to see something intelligible in one of your posts.

RABical wrote:
It has already been identified that any file, even a TMP file, once written to a hard disk, shall remain untill such time as the space it takes up is overwritten. That space will never be overwritten unless/until that space is identified in the FAT table as being open and available for use.


Correct, it stays there until the entry in the FAT tables is deleted, and then it becomes free space.

RABical wrote:
If you read that MP3 file from a disk seperate from that which holds the virtual memory, and you never make any changes to that MP3 file, then it just stands to reason that it won't get moved around. Simply because nothing else is likely to be written to that disk before that file's use is complete. It can then be listed as being right back exactly where it came from.


See, this is where I'm positive you're wrong. When you read a file from a hard disk the entry in the FAT table is not removed, and the file doesn't change at all. The file is still there on the disk, even if the swap file was on the same drive and data was read/written into the swap file. The file isn't removed from that portion of the drive and then written back when you close the program reading the file. It just doesn't happen that way, nor would it make sense to, because of the unnecessary bandwidth that would be eaten up.

RABical wrote:
Like I said before, try moving the virtual memory to the same drive as holds those MP3 files, then monitor the fragmentation on all drives. The drive, usually C:, which first held the V/M will show a marked reduction in fragmentation. While the drive which NOW holds it will show a marked INCREASE in fragmentation. This is one of the top reasons why, for so very long, those who wished maximum performance from their systems kept one extra drive on their system which was NOT used for user files. This drive was effectively dedicated for no other purpose then to house the Swap/Paging file.


Actually for maximum swap file performance the swap file should be striped across several drives not simply dumped onto a dedicated drive. That was even published in Maximum PC and can also be found in Logic's old UltraTech knowledge base.

As for file fragmentation, the virtual memory file doesn't cause it. I've got one system with 2GB of RAM and occasionally I've run it without any virtual memory at all. With this configuration, I'd still get fragmentation on my drives. Why? Because at the very root of this entire thread (which seems to be going on a lot longer than it should), fragmentation occurs simply because hard drives are lazy and will write any new data onto the nearest free space on the drive. If it needs space in several different places, that's where it's going to put that data, which in turn will be fragmented.
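The root cause named here can be shown with no swap file in the picture at all: one deletion plus one oversized newcomer is enough. A deterministic Python toy (file names and block counts invented):

```python
# Delete one file, create a bigger one: first-fit must split the
# newcomer across the scattered free runs. No swap file involved.

disk = [None] * 16
files = {}

def first_fit(name, blocks):
    """Claim the first `blocks` free blocks, wherever they are."""
    placed = [i for i, b in enumerate(disk) if b is None][:blocks]
    for i in placed:
        disk[i] = name
    files[name] = placed

def delete(name):
    for i in files.pop(name):
        disk[i] = None

first_fit("A", 4)     # blocks 0-3
first_fit("B", 4)     # blocks 4-7
first_fit("C", 4)     # blocks 8-11
delete("B")           # free runs are now 4-7 and 12-15

first_fit("D", 6)     # needs 6 blocks; nearest free run holds only 4
print(files["D"])     # [4, 5, 6, 7, 12, 13] -- two fragments
```

Note that A and C never move: only the file written into the churned free space ends up fragmented.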

RABical wrote:
On Windows95 systems, using an old Seagate 127Mb drive was considered best for this purpose. As those drives were tough enough to handle the near constant read/write. (the older LARGE size drives lacked dependability what they gained in size)


I do hope you're referring to Windows 95 OSR2.x because the first two versions could only support 2GB volumes max. It'd really suck to have to partition a 127GB drive 60plus times.

RABical wrote:
Another means of raising the performance of systems was to lock the Swap/Paging file to a specific size, depending of course on the use you planned for that system. Usually up to about 4X the amount of physical RAM on the system, although some few may have set it higher. This had the effect of ensuring that there was ALWAYS enough room for the virtual memory while allowing the user to be certain of just how much of that drive was available for his/her own, user programs. It was also found that this practice tended to REDUCE file fragmentation to a measurable extent


Locking the swap file to a specific size won't reduce file fragmentation; it'll only stop fragmentation of the swap file itself, because it won't be spread out in fragments across the drive when Windows decides to change its size.

I'm pretty sure this has been mentioned many times before on this forum: it's recommended to remove the swap file entirely, defrag the drive, then create a locked swap file. This will ensure that your swap file isn't fragmented at all and, being locked, never gets fragmented.


 Post subject:
PostPosted: Sat Jan 20, 2007 10:01 pm 
King of All Voodoo2 Cards

Joined: Tue Jun 22, 2004 10:41 am
Posts: 9316
RABical wrote:
From the looks and content of your replies here Fly, I can only figure that I've run into yet another Deadend.


It might have something to do with your own convoluted ideas.

RABical wrote:
According to what you have said so far, file fragmentation is physically impossile. Because those files are set for life once they are witten to the drive, unless for whatever reason they are changed. Yet those same files, even those which are READ ONLY and cannot be changed, are still getting moved and shuffled around. And FRAGMENTED...


You'll have to offer up some kind of proof for this or some way it can be tested because I still say that's impossible. A file set to "Read Only" on a drive cannot get fragmented once it's been written, unless it's somehow being changed by either another program or a user. So my guess is that you're either moving stuff around, or you've got some other program running in the background that's moving stuff around, possibly without your own knowledge, which wouldn't really surprise me.


RABical wrote:
I've offered an anology which would explain this, and you, like so many others, insist that my anology is wrong.


Probably because it is, and if IBM, and MS are telling you that you're wrong as well, it's probably time you moved on.


RABical wrote:
Yet none of you are able to explain how it is possible for something that you claim to be physically impossible still happens.


None of you? Last time I checked, I was the only one trying to explain to you how files are written to the hard drive; the only other post was a copy/paste job from Wikipedia.

RABical wrote:
I had hoped to find someone here who actually knew something. It would appear that I hoped in vain.


You sure have; don't let the door hit your ass too hard on the way out.







Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group

© 2014 Future US, Inc. All rights reserved.