
Maximum PC

 Post subject:
PostPosted: Sat Jan 20, 2007 10:21 pm 
King of All Voodoo2 Cards

Joined: Tue Jun 22, 2004 10:41 am
Posts: 9316
RABical wrote:
A drive like you described, as being ONLY used to store music files, would of course have little or nothing to fear from fragmentation. As I've already at least tried to explain. If no NEW files are being written to that drive WHILE any of its files are in active memory, then of course the old files won't be affected. When their use is complete, they can, and WILL, be written right back to the exact same location, with the exact same file allocation table addresses for however many allocation units they use.


Dude, I'm telling you this is wrong and I already gave you a way to prove that it's wrong above.

Open the file in a program
Pull plug from PC

Now if what you're saying were true, the file would be gone from the system, because the drive couldn't write it back before you yanked the power.

When you turn your computer back on you'll find that the file is still there, simply because when you open a file it's not removed from its space on the hard drive and then written back when the program is closed.
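
For anyone who wants to check this for themselves, here's a minimal Python sketch of that test (the "somefile.mp3" path is hypothetical; point it at any file you like): it hashes the file, reads it the way a player would, and hashes it again.

Code:
import hashlib, os

def snapshot(path):
    # Content hash plus size and modification time, as recorded on disk.
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    st = os.stat(path)
    return digest, st.st_size, st.st_mtime

before = snapshot("somefile.mp3")

# "Open the file in a program": just read it, as a media player would.
with open("somefile.mp3", "rb") as f:
    data = f.read()

after = snapshot("somefile.mp3")
print(before == after)   # True: reading never moved or rewrote the file on disk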

RABical wrote:
The drive which holds the Swap/Paging file will ALWAYS show the greatest amount of fragmentation. Even if that Swap/Paging file is the ONLY file stored there because even the paging file for XP gets fragmented.


If the swap file was the only file on the drive, what would get in the way in order to cause fragmentation?

For the swap file to get fragmented you'd need other stuff on the drive. If you've got a clear drive with the swap file being the only thing on it, and the swap file isn't locked to a specific size, Windows will simply extend it.

It's when Windows attempts to extend the swap file and there's another file or two taking up the sectors on the hard drive immediately next to the swap file that the swap file becomes fragmented.



RABical wrote:
If the Boot Sector were not dedicated so that NOTHING else were allowed to be written there, then even the Command.COM would get fragmented.


Not to point out that you don't know what you're talking about, but COMMAND.COM doesn't reside in the boot sector of the hard drive.

On DOS-based machines COMMAND.COM is located in the root directory of the active drive in the system.

On NT-based machines you'll find COMMAND.COM located in the Windows\system32 directory.

Neither of which is part of the boot sector.

RABical wrote:
CD's and such simply cannot be fragmented, aside from possible Re-Writes, because the files written there are placed contiguously to start with and each file so written is, at least for the time it is on that CD, given the attribute of READ ONLY. Copy that file to the hard drive, or just into active memory, and it can and will get fragmented till the end of the world as we know it.


Here's a question for you: how is a file that's only located on a CD and in active memory going to cause fragmentation on a hard drive?


RABical wrote:
On your music storage drive, should you start editing any of those files and affecting their size(s), then and only then will that drive be likely to ever suffer from fragmentation.


Correct, because then they would actually be getting changed.

Your whole theory, where the file is removed from the drive into memory and then placed back on the drive, is bogus, because I can prove it's wrong simply by pulling the plug on any computer with several files open in memory.

RABical wrote:
But like I said before, try moving your Swap/Paging file to that drive, then monitor fragmentation and see what happens. Who knows, you just might, accidentally, LEARN something.


Perhaps you'll eventually learn not to make assumptions about other people's hardware; don't simply assume that I've got a single swap file located on the same drive as my Windows directory.


 Post subject: Re: Straight to the brains. An Old Question with no answers
PostPosted: Sat Jan 20, 2007 11:54 pm 
Boy in Black

Joined: Thu Jun 24, 2004 1:40 pm
Posts: 24322
Location: South of heaven
The question:
RABical wrote:
Is there someone in here capable of telling me just exactly HOW file fragmentation comes about?

Yes, I know that when a file is written to the disk it might require more space than is handy in one chunk, so part gets moved to the next chunk of open space...But I am also told that once a file is written to the disk, it remains effectively unchanged until such time as it is deleted, and then actually remains until the space it was in is overwritten. So how is it possible for a system that does nothing more complex than sitting there idle to get its drive fragmented? It has been proven that the simple act of firing up a system creates at least some fragmentation...

I have a theory on just how this is possible, but as of yet have found no one able or willing to point me in the right direction to either prove or disprove this theory.
RABical wrote:
Well, I see there's been a fair number of Views, but no comments. Which tells me one of only a very few things has happened.
1. Either most of you have decided I am simply a raving lunatic and don't wish to risk being infected by my psychosis.
2. Or else it's just a matter of no one here KNOWING the answer but lacking the courage to admit it.
Or...

3. You will argue with anyone, as long as it keeps your theory "correct". Who wants to comment when every comment is refused?

The question is "how does a disk become fragmented". It's because Windows isn't perfect in its placement. Fragmentation-specific software can arrange things much better than Windows seems to do.

The statement that a file stays untouched until deleted is not true at all. "Read Only" has nothing to do with the system and is being looked at wrong: the system can move the file around at its own will. It just means WE cannot alter it physically. The more you use a file, the higher its priority for placement on the fast section of the platter. If you listen to one song more than the others, it moves, even if you never delete it or do anything else besides play it back.

Why is a fresh install of Windows even fragmented? That timeline is very short at this point in the OS's life span. If you've accessed one application 3 times, and everything else just once...it's "fragmented" if it's in a weird section.

Swap files, while I shouldn't go here for argument's sake, can also become fragmented. Most users simply ignore static settings and leave the option for "Windows to manage swap file size" ticked. By giving it a static size, it should never grow or collapse. That change in size causes most swap-file fragging, because the system has to move items around if they sit in allocated space it wants to shrink. Even dynamics can be fragmented in some manner, which is another reason for aftermarket defrag software. But swap-file fragging is way off subject...


 Post subject:
PostPosted: Sun Jan 21, 2007 9:21 am 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
So far, I've not 'Refused' any arguments, I've only popped holes in those which made little or no sense. I've even offered tests as proof of at least some of what I claim to know. And if I were to claim to know EVERYTHING, there would be no reason for me to ask the question which I have asked. Aside from any possible chance of showing just how big of fools you, who DO claim to know everything, really are. Which you appear to need no help from me doing anyway.
Windows, during a fresh install, or even just sitting there idling, creates and writes any number of temp files. It even keeps a specific folder to write these files into. This is aside from the swap file which is used as virtual memory. These Temporary Files are written, read, changed, and deleted almost constantly. If you need proof of this happening, here is a test simple enough for even the Great and Powerful Flytrap7 to accomplish. And I feel certain that Chumly just might be able to see verifiable results from it as well. I presume that you both have either CCleaner, or some other brand or style of in-depth type of drive cleaning software. Run this program on a system which has no Internet access, and no other means of acquiring 'New' files from any other outside source beyond what you, yourself, might decide to feed it through either a CD, a Floppy, Flash drives or extra hard drives that you might decide to connect. After running the cleaning program, allow that system to sit IDLE for a day or 2. Even turn off the monitor so that not even a screen saver would be run. Although running one would give more drastic results. After that idle time, run your cleaning program yet again and see the results. That cleaning program is not going to mess with the swap file. But it will find any number of TMP files which it will offer to remove. Now it is unlikely that the results will count into the hundreds of Megs of space to be freed up as would be the case should that system have been in active use. But those results will still be viewable, and MEASURABLE.
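
If anyone wants to put rough numbers on that idle test without a cleaning utility, here's a minimal Python sketch (it simply assumes the OS drops its scratch files under the standard temp directory): run it once, let the box idle, run it again and compare the totals.

Code:
import os, tempfile

def temp_usage(folder=None):
    # Count files and total bytes under the system temp directory.
    folder = folder or tempfile.gettempdir()
    count = total = 0
    for root, _dirs, files in os.walk(folder):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
                count += 1
            except OSError:
                pass  # file vanished or is locked; skip it
    return count, total

files, size = temp_usage()
print("%d temp files, %.0f KB in %s" % (files, size / 1024.0, tempfile.gettempdir()))
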
Now for the next test. And this one might be a bit Technical for you 2, but I have faith in your ability to learn, even if learning does go against your training. On this isolated system, run whatever brand of defrag program you have or choose to use. At one time, way back when, Norton's Speed Disk was the best defrag program out there. Nowadays there are any number of programs that are as good or possibly even better than what Norton offers. But choose your favorite brand and run it. Then yet again allow that system to sit IDLE for a day or 2 and then analyze for file fragmentation. No, I do not expect it to say that defragging is needed at that time, but you WILL find an increase in the amount of fragmentation listed from just after having run the defrag program. Now allow that system to sit idle for yet another day or 2 and analyze it yet again. This time you will find an even greater amount of fragmentation. Pretty strange to happen to a system which has done absolutely nothing but sit idle. On a system which is actually in use and working for this length of time, the percent of fragmentation shown would be that much greater. And the harder it is working, the greater the percentage found for the same time period.
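
And for that second test, you don't strictly need a third-party tool to capture the numbers. A rough sketch, assuming Windows XP's command-line defrag.exe (its -a switch analyzes without actually defragmenting; later versions use /A instead), saving the report to a dated text file so runs can be compared:

Code:
import subprocess, time

def analyze(volume="C:"):
    # "defrag <volume> -a" analyzes the volume and prints a report without defragging.
    result = subprocess.run(["defrag", volume, "-a", "-v"],
                            capture_output=True, text=True)
    return result.stdout

report = analyze()
with open(time.strftime("defrag-report-%Y%m%d-%H%M.txt"), "w") as f:
    f.write(report)
print(report)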

Now, in my analogy, I have never stated, nor implied, that a file, after having been read into active memory, was actually deleted from the drive. Only that in the FAT table that space was listed as being available. If no new file is written to that drive during the time that specific file is in use, and that file is not changed in some manner that affects its size so as to need a greater number of allocation units to store it, then when its use is complete, it can easily remain in its original position, with the same address(es) in the FAT table. If that file IS changed, either made larger or smaller, then either a small space will be left open, to be used by PART of the next file to be written to that drive, which of course means that next file will become fragmented. Or, if that first file is made larger, it no longer will fit into the original space, and IT becomes fragmented because whatever is left over will be written into the NEXT open space.
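
That mechanism is easy to model. Here's a toy first-fit allocator in Python - a deliberately simplified sketch, not how any real FAT driver works - showing that once file A grows past its original allocation, the overflow lands in the next open space, after its neighbour:

Code:
disk = [None] * 20            # 20 allocation units; None means free

def allocate(name, units):
    # First fit: place each unit of 'name' in the first free slots found.
    placed = []
    for i, slot in enumerate(disk):
        if slot is None:
            disk[i] = name
            placed.append(i)
            if len(placed) == units:
                return placed
    raise RuntimeError("disk full")

allocate("A", 4)              # file A fills units 0-3
allocate("B", 4)              # file B fills units 4-7, right behind A

allocate("A", 2)              # edit A so it needs 2 more units: they land after B

print([i for i, s in enumerate(disk) if s == "A"])   # [0, 1, 2, 3, 8, 9] -> A is fragmented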

Now, I've tried to keep these tests simple, and I've tried to use only small words and terms, in the hope of keeping things to a level which our dear chumly and Flytrap7 can understand and possibly even implement (that means To DO). To any of the other readers of these posts, please feel free to conduct any or all of these tests and view the results for yourselves. I don't expect anyone to just take MY word for anything. See for yourselves, otherwise you learn nothing.


 Post subject:
PostPosted: Sun Jan 21, 2007 5:30 pm 
Smithfield*

Joined: Fri Jul 09, 2004 9:17 am
Posts: 7159
Location: In HyperTransport
RABical, I would be very careful with your words concerning other forum members from this point forward, particularly those who are moderators and members of "PC".


 Post subject:
PostPosted: Mon Jan 22, 2007 7:30 am 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
gramaton cleric wrote:
RABical, I would be very careful with your words concerning other forum members from this point forward, particularly those who are moderators and members of "PC".

Truth be told, I've had pretty much this same conversation with at least one of the contributing writers right here at MPC, shortly after reading his glowing endorsement of yet another defrag program. He was only too happy to tell me how this program was well worth the cost of acquiring, to fix a problem that he, in the same breath, claimed did not exist.

It's just like with Flytrap7's supposed test. That file written to that disk remains in place until physically moved, deleted, changed, or otherwise overwritten. If a file is currently in use, it is quite likely that it is not only present in active memory but also has been written into Virtual Memory. And we all know that Virtual memory is just another space on the hard drive. So long as the space it first took up is not overwritten, then it's safe right where it was. If it has been written into Virtual Memory then it is doubly safe unless the sudden loss of power causes that portion of the disk to become corrupted. Flytrap7's 'Test' doesn't test anything other than perhaps your hard drive's ability to handle a sudden loss of power. Big Whoopy... So long as the space used by a file is not overwritten, quite possibly by that space being seen in the FAT table as being currently open and available, then that space shall remain with whatever info it had to start with. And if the file that previously held that space is currently written into the Virtual Memory, then it matters little whether or not its original space is overwritten. Because that file still exists ON THAT DISK, and as such can be found by the system simply because the FAT TABLE will reflect that file as being located within the Virtual Memory at whatever specific address(es).
My own suggested tests, on the other hand, actually DO get measurable results, although some do take a bit longer than others. Yet none of those tests require doing anything that stands to place a system at risk of damage.
On this specific system I have one hard drive, formatted into 2 partitions. The 2nd partition is used for only 2 purposes. It is STORAGE of files, about 500 Megs, which are never changed, READ ONLY. And it also holds the Paging File, which I have locked at a size equal to 4X the amount of physical RAM on the system. As a direct result of doing this, both locking the Paging File size, and placing it on a drive other than that which houses Windows (even though both 'Drives' are technically one and the same hard drive), I find the amount and severity of fragmentation on the C: drive measurably reduced, while the extent of fragmentation on that 2nd drive, although currently slight, is increasing, where previously there was NO fragmentation on that drive at all for weeks on end.


 Post subject: Re: Straight to the brains. An Old Question with no answers
PostPosted: Mon Jan 22, 2007 9:22 pm 
I'd rather be modding!

Joined: Fri Jun 25, 2004 3:47 pm
Posts: 3731
Location: Las Vegas
OK I'll Bite....didn't read the whole thread, so forgive me if I repeat.

RABical wrote:
Is there someone in here capable of telling me just exactly HOW file fragmentation comes about? Yes, I know that when a file is written to the disk it might require more space than is handy in one chunk, so part gets moved to the next chunk of open space.


Welp - you just answered your own question. So I guess there is someone out there. BTW - it's not "handy" - it's "open". That's important.

Quote:
I also know that for every person who says file fragmentation is nothing to be concerned about, there is another person who says it IS something to be concerned about.


Yep - I was concerned with this for a while. Turned out that certain folks (who shall remain unnamed) were benching systems before and after defragging - but the testing was flawed. How effective defragging is depends on your drive speed, unused capacity, OS, amount of RAM, typical file size, and third-party utilities.

One thing is certain though, an unfragmented drive is best.


Quote:
But I am also told that once a file is written to the disk, it remains effectively unchanged until such time as it is deleted, and then actually remains until the space it was in is overwritten.


Mainly true

Quote:
So how is it possible for a system that does nothing more complex than sitting there idle to get its drive fragmented? It has been proven that the simple act of firing up a system creates at least some fragmentation...


Here you have made an assumption. First, you equate a "system" with a file (yes, I know what you meant). It's not like that. Second, there is nothing simple about firing up a modern OS. A number of files are written to the drive - especially if you use NT-based systems. And, during that time, parts of the disk may be "reserved" for a bit - causing files to be written around them.


 Post subject:
PostPosted: Mon Jan 22, 2007 9:32 pm 
I'd rather be modding!

Joined: Fri Jun 25, 2004 3:47 pm
Posts: 3731
Location: Las Vegas
RABical wrote:
"What happens when your MP3s are stored on a medium that can't be physically written to such as a DVD disc? Your analog no longer holds true" Try again... Using a Read Only media such as a CD or DVD only increases the number of times when the Swap/Paging file must be accessed.


That's not exactly true - but whatever.

Quote:
Watch your drives when you're playing those files on the CD. No, the CD won't get fragmented by the process, but unless you have a simply UNGAWDLY amount of physical RAM on the system so that the Swap/Paging file isn't needed, then playing those files from that CD will cause whatever drive holds that Swap/Paging file to become MORE fragmented.


The amount of RAM you have on an NT-based system (2000, XP, Vista, etc...) won't matter. The page file will be used. That said, depending on how you set up your system, the fragmentation will be minimal and reset on reboot, assuming all you do is play read-only media. Of course, if you look - you will see some. But try doing the test 50 times with a setup that purges the swap and you will notice it never really increases.
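
"A setup that purges the swap" is just the standard NT setting for clearing the page file at shutdown. A minimal sketch for checking it with Python's winreg module (the value may simply be absent, which means clearing is off):

Code:
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

def pagefile_cleared_at_shutdown():
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        try:
            value, _type = winreg.QueryValueEx(key, "ClearPageFileAtShutdown")
        except FileNotFoundError:
            return False          # value not present: clearing is off by default
        return bool(value)

print("Page file cleared at shutdown:", pagefile_cleared_at_shutdown())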

Oh, and a page file by its very nature is fragmented.

Plus, just to cover the bases - I'm talking about a machine that only reads ROM media (like you are describing): no internet, no 3rd-party media player phoning home, etc...

Quote:
But feel free to keep trying, I'll be happy to let you know if you ever come up with something new...


I'll do the same :)

Really now, if you know all this, why ask?

Manta


 Post subject:
PostPosted: Mon Jan 22, 2007 9:46 pm 
I'd rather be modding!

Joined: Fri Jun 25, 2004 3:47 pm
Posts: 3731
Location: Las Vegas
RABical wrote:
BTW, even the OLD systems, as in PRE-Windows, suffered from fragmentation. And they had NO swap file. There was no Virtual Memory to contend with. The advent of Windows, especially later versions, only increased the problem.


It's not really a problem as long as you manage it. And BTW - old systems didn't have this "problem" - of course, old to you might mean DOS. It's different for me.

Windows didn't create the problem. What created it was the need for performance. The process of fragmenting a drive actually adds to the performance of a system - until, of course, it goes too far.

Before you disagree, consider that it is not magic that a system can write fragmented files. It's intentional.

Quote:
Early systems would only allow a file to be written contiguously.


See, you already knew that.

Quote:
If that open space wasn't large enough to hold that file complete, then that space was bypassed. With these systems, it wasn't so much the Files being fragmented, as it was the DRIVE being fragmented. It was possible to have hundreds of megs free on a drive, but be unable to write a simple 50K file because there wasn't a 50K space open in one place. Windows came along and addressed this problem by allowing files to be fragmented. Write as much of this file as will fit in the first open space, then move to the next open space, and continue until the entire file had been written...

See - again - you knew that. It wasn't Windows alone though. And before you think the old system was better - consider - you could not delete a file larger than the free space of the disk. And, a drive more than half full could lead to data loss (trust me - it sucked). Back then, drives were written to like tape - it was simple and easy - but not fast.

Now, I'm not saying that the old way is not something to revisit - it had advantages - we should look back into them.

Manta


 Post subject:
PostPosted: Mon Jan 22, 2007 9:57 pm 
I'd rather be modding!

Joined: Fri Jun 25, 2004 3:47 pm
Posts: 3731
Location: Las Vegas
RABical wrote:
What I am asking is HOW this happens.


You have already covered that. It's pretty simple.

Quote:
How is it that even a READ ONLY file can become fragmented?

A read only file (at the system level) can only become fragmented if it is copied, or written in a fragmented state. A read only file at the user level is not actually read only - so it doesn't count.
Quote:
The only way for a READ ONLY file to become fragmented would be for it to be MOVED.


Correct - and one way to do that (at the system level) is to copy it (or move it) during a defragmentation session (ironically).
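
That's also easy to see from the outside. Here's a hedged sketch (Linux, and it assumes e2fsprogs' filefrag utility is installed; "bigfile.bin" is a hypothetical path): the read-only original keeps its layout, while a fresh copy gets whatever free space the allocator finds, which may well be fragmented.

Code:
import shutil, subprocess

def extent_count(path):
    # filefrag prints e.g. "bigfile.bin: 3 extents found"
    out = subprocess.run(["filefrag", path], capture_output=True, text=True).stdout
    return int(out.rsplit(":", 1)[1].split()[0])

shutil.copy("bigfile.bin", "bigfile-copy.bin")
print("original:", extent_count("bigfile.bin"), "extent(s)")
print("copy:    ", extent_count("bigfile-copy.bin"), "extent(s)")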

Quote:
To be disgustingly precise, it is the single greatest CAUSE of file fragmentation.


Here's one of those examples of not mattering. Unless you purge the page file, it will always be fragmented. It will also not matter. A fragmented pagefile is actually better than one that is not - until you reboot - but then it won't matter - because it should be mostly overwritten. Vista makes some changes to this. But you won't notice without a hybrid drive.

Manta


 Post subject:
PostPosted: Mon Jan 22, 2007 10:08 pm 
I'd rather be modding!

Joined: Fri Jun 25, 2004 3:47 pm
Posts: 3731
Location: Las Vegas
Quote:
That space will never be overwritten unless/until that space is identified in the FAT table as being open and available for use.


Step up to a modern file system, really. Not that there is anything wrong with FAT, but if you are so dead set on not listening, at least be using a system from this millennium (and no, not Windows ME :) - The Manta likes puns).

Quote:
Like I said before, try moving the virtual memory to the same drive as holds those MP3 files, then monitor the fragmentation on all drives. The drive, usually C:, which first held the V/M will show a marked reduction in fragmentation.


Of course it will, but it doesn't matter.

Quote:
While the drive which NOW holds it will show a marked INCREASE in fragmentation. This is one of the top reasons why, for so very long, those who wished maximum performance from their systems kept one extra drive on their system which was NOT used for user files. This drive was effectively dedicated for no other purpose than to house the Swap/Paging file.


Yep - it's called a "scratch" drive - and we don't use them anymore unless we deal with 1GB+ files - because RAM is cheap now.

Quote:
Windows USUALLY will maintain a Swap/Paging file size at or about double the amount of physical RAM on the system. This, of course, depending a great deal on the amount of RAM present and the use that system is put to. A system which does little more then play Sol will of course have a smaller swap file, one which is working harder having a larger file. Another means of raising the performance of systems was to lock the Swap/Paging file to a specific size, depending of course on the use you planned for that system. Usually up to about 4X the amount of physical RAM on the system, although some few may have set it higher. This had the effect of ensuring that there was ALWAYS enough room for the virtual memory while allowing the user to be certain of just how much of that drive was available for his/her own, user programs. It was also found that this practice tended to REDUCE file fragmentation to a measurable extent.


That contradicts what you said before, as I see it - and in fact, you can't compare 95 to NT systems when it comes to page files. The system is so different it makes no sense to do so. They are not the same - not even close. One is prioritized, the other is little more than "extra RAM".

Manta


 Post subject:
PostPosted: Mon Jan 22, 2007 10:29 pm 
I'd rather be modding!

Joined: Fri Jun 25, 2004 3:47 pm
Posts: 3731
Location: Las Vegas
RABical wrote:
It's just like with Flytrap7's supposed test. That file written to that disk remains in place until physically moved, deleted, changed, or otherwise overwritten.


Yes it does. One of those must happen. If not by you, then by the system. It's not magic.

Quote:
If a file is currently in use, it is quite likely that it is not only present in active memory but also has been written into Virtual Memory. And we all know that Virtual memory is just another space on the hard drive.


Another assumption - this is not always true. Who cares if it is though?


Quote:
So long as the space it first took up is not overwritten, then it's safe right where it was. If it has been written into Virtual Memory then it is doubly safe unless the sudden loss of power causes that portion of the disk to become corrupted.

Flytrap7's 'Test' doesn't test anything other than perhaps your hard drive's ability to handle a sudden loss of power.


The HD doesn't "handle" the power loss. It stops writing.

Quote:
Big Whoopy... So long as the space used by a file is not overwritten, quite possibly by that space being seen in the FAT table as being currently open and available, then that space shall remain with whatever info it had to start with.


Of course - because reading a file does not delete its FAT or file system header - or the file itself.

Quote:
And if the file that previously held that space is currently written into the Virtual Memory, then it matters little whether or not its original space is overwritten. Because that file still exists ON THAT DISK,


Except that virtual memory's allocation table is rewritten on boot (unless you have a fancy setup). So the files (which are just fragments anyway) will be overwritten.

Quote:
and as such can be found by the system simply because the FAT TABLE will reflect that file as being located within the Virtual Memory at whatever specific address(es).


Care to toss me a reference - cuz it's not like that AFAIK - and IK

Quote:
My own suggested tests on the other hand, actually DO get measurable results.


The problem is your point. What is it?

So you get some fragmentation results. You seem to already know (despite your misconceptions) what is causing them. But what do the results mean? What does it mean to performance?

Or better - what was your question?

At the end of the day, if you are pursuing an issue - I can tell you that you really need to brush up on what an NT page file really does (compared to that of FAT) - I also suggest looking at Linux swaps. Then I suggest looking at embedded OS's and COWloops. After all, an embedded OS can't be fragmented, right? Look into that.

In the meantime - if you have a real question, ask it. But thus far, you seem to come off (intentionally or not) as someone who asked a question but doesn't want an answer (unless he/she already knows it).

My apologies if I've misread.


 Post subject:
PostPosted: Tue Jan 23, 2007 5:47 am 
Java Junkie

Joined: Mon Jun 14, 2004 10:23 am
Posts: 24218
Location: Granite Heaven
RABical wrote:
From the looks and content of your replies here Fly, I can only figure that I've run into yet another Deadend. I've asked this same question to those at Microsoft, also at IBM. I've even gone to various Hard drive manufacturers. And all I ever get is the exact same old tired lines.


So, you insult the experts as much as you do the people here?

Given your ridiculous attitude, I'm shocked this thread has progressed as far as it has.

Use google .. you're not worth our time.


 Post subject:
PostPosted: Thu Jan 25, 2007 7:37 pm 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
Jipstyle wrote:

So, you insult the experts as much as you do the people here?

Given your ridiculous attitude, I'm shocked this thread has progressed as far as it has.

Use google .. you're not worth our time.

I make no insults. I don't need to. And most of the comments made so far, your own included, simply aren't worth the effort of insulting. All I do is poke fingers through the assortment of gaping holes in the arguments offered as excuses to simply accept what those so-called 'Experts' say (to define an expert, simply break down the word itself: you have an 'X', which is a HAS BEEN, and a SPERT {spurt}, which is a DRIP UNDER PRESSURE). So far, the only one here who has asked any intelligent questions has been Manta, with his recent input. So I know at least one person here is actually interested in LEARNING SOMETHING. Or is at least thinking about it anyway. For ones such as yourself, Jip? Before you post again, please bring a brain. Even if you have to go borrow one. I despise having a battle of wits with unarmed opponents.

Now, MantaBase,
1. the ONLY time when Virtual Memory is NOT just one more section or space on a hard drive is when the space used to create that virtual memory is, itself, virtual. Such as using a RAM-Drive. Which is not a hard drive itself. If there are any other methods of creating Virtual Memory, I've not yet heard of them. Currently, the only RAM-Drive systems that I know of are not considered large enough to be worth the extreme cost of attaining. One such was written up, I believe right here at M-P at some point during the last year. With the +'s being extra fast access speeds, and the -'s being extreme cost per gig and limited size. I'm uncertain but I think it was maxxed out at 4 gig, but priced in a range at or near that of a complete system. That price was just for the card and software, NOT the memory sticks to be added to it. If a file, any file, is in use, the only time it would NOT have been written to the virtual memory would be if there were sufficient RAM that the extra memory were not considered needed at that time by the system.
And a hard drive's ability to 'Handle' a sudden power loss is the primary reason for the purchase of a UPS. Because data CAN be lost from a drive due to power loss, either total or 'Brown Out'. Drives can, and have, been turned into little better than oversized fishing weights, with ALL data contained being lost and unrecoverable as a direct result of power outages.
2. ANY and EVERY file located in that system, regardless of which drive, or how many drives, it may be contained in, is listed in the FAT table. This table may or may not be NAMED as a FAT table in the usual sense. Without that listing, the system cannot know where on the drive to look for the file in question. This includes all of the various BOOT and STARTUP files. Every Mac, every Linux, every Windows, every O/S of any kind has its own version of this table. Without it, without a list of what file is located where, not much would happen when a system was started up. And running an app could take hours or even days just for the app to be loaded before you could begin doing anything with it.

"In the mean time - if you have a real question ask it. But thus far, you seem to come off (intentionally or not) as someone who asked a question but doesn't want and answer (unless he/she already knows it) "
I already DID ask it. And if I already KNEW the answer to the question I asked, there would have been no reason to ask the question. I did offer up an analogy which DOES answer the question, but I do not know how to test it out to prove one way or another. So that analogy remains as nothing better than a Theory...
As for the various 'Embeded' O/S's, if EVERY file is effectively READ ONLY, UN-movable, then yes, under those conditions file fragmentation would not be a concern. Simply because if a file never changes and never moves, well, that pretty much covers it.
As for there being some great worlds of difference between a SWAP file as used in earlier versions of Windows, and the PAGING file used by WinXP: there is actually very little difference between them aside from the NAME of the file itself. BOTH are used for the exact same purpose and in the exact same fashion. They are the VIRTUAL MEMORY which allows Windows to function at the speeds it does. The only real difference between them being that XP keeps an open area, Slack Space if you will, around the Paging File, which allows the room for it to expand or contract as needed by the system. Provided of course the SYSTEM is in control of its size and location.
And with fragmentation on the OLDer systems, as in PRE-Win95, the fragmentation was of the DRIVE, not the FILES written on it. If there were no section of the drive, in contiguous allocation units, large enough to hold the new file you wished written to the drive, you simply got back the error message about insufficient space and the operation was aborted. With the advent of Win95, a file could be written, or even SPLATTERED, just anywhere that was open. This was a great help against DRIVE fragmentation, as no space was then wasted or lost. But this same system allowed FILES to become fragmented instead.
FAT 16, FAT 32, NTFS, it really doesn't much matter what specific file system you use. Because they are all, regardless of the O/S in question, just various ways of accomplishing the same task using much the same equipment. They are all aimed at the same end result. Some get there a bit faster, or handle the 'Journey' a bit better. But the end results are the exact same. They are all simply a means of handling the various files on and used by that Computer. They may not have a FAT table that is called a FAT table, but they all have a table or file which performs the EXACT SAME TASK as a FAT table. And that task is nothing more nor less than maintaining a list of the address(es) of each file or portion thereof contained on that disk.
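
To make that concrete, a FAT volume literally keeps its table at a fixed, discoverable spot. A minimal Python sketch that reads the BIOS Parameter Block of a FAT12/16 volume image ("floppy.img" is a hypothetical file name) and works out where the FAT itself starts:

Code:
import struct

with open("floppy.img", "rb") as f:
    boot = f.read(512)                      # boot sector, which holds the BPB

bytes_per_sector,   = struct.unpack_from("<H", boot, 11)
sectors_per_cluster = boot[13]
reserved_sectors,   = struct.unpack_from("<H", boot, 14)
num_fats            = boot[16]
sectors_per_fat,    = struct.unpack_from("<H", boot, 22)

fat_offset = reserved_sectors * bytes_per_sector    # byte offset of the first FAT copy
print(bytes_per_sector, "bytes/sector,", sectors_per_cluster, "sectors/cluster")
print(num_fats, "FAT copies of", sectors_per_fat, "sectors each, first FAT at byte", fat_offset)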


 Post subject: Re: Straight to the brains. An Old Question with no answers
PostPosted: Thu Jan 25, 2007 7:46 pm 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
RABical wrote:
Is there someone in here capable of telling me just exactly HOW file fragmentation comes about? Yes, I know that when a file is written to the disk it might require more space than is handy in one chunk, so part gets moved to the next chunk of open space. I also know that for every person who says file fragmentation is nothing to be concerned about, there is another person who says it IS something to be concerned about. But I am also told that once a file is written to the disk, it remains effectively unchanged until such time as it is deleted, and then actually remains until the space it was in is overwritten. So how is it possible for a system that does nothing more complex than sitting there idle to get its drive fragmented? It has been proven that the simple act of firing up a system creates at least some fragmentation...

I have a theory on just how this is possible, but as of yet have found no one able or willing to point me in the right direction to either prove or disprove this theory. I figured those working at learning programming and such might be able to shine just a wee bit of light into this tunnel.
Manta, here's a repeat of the startup question. What I'm trying to find out is how it is possible for even READ ONLY and unmovable type files to get fragmented. The only files that appear immune to this are, of course, the BOOT files. Which have a section of the disk reserved specifically for them and nothing else is allowed to be written there. One time to the next, when defrag is run (or whatever brand of defrag software you use), even those files which are marked as being unmovable will be found in a different location(s). Some earlier versions of defrag software allowed the user to hover over the 'Picture' of the drive and be informed of just what file was located in that portion. I'm unsure if any of the newer versions do this as well. I know that Windows Defrag does not, nor ever has.


 Post subject:
PostPosted: Thu Jan 25, 2007 8:41 pm 
I'd rather be modding!

Joined: Fri Jun 25, 2004 3:47 pm
Posts: 3731
Location: Las Vegas
RABical wrote:

Now, MantaBase,
1. the ONLY time when Virtual Memory is NOT just one more section or space on a hard drive is when the space used to create that virtual memory is, itself, virtual. Such as using a RAM-Drive. Which is not a hard drive itself. If there are any other methods of creating Virtual Memory, I've not yet heard of them.


We actually used to use floppies. In the old school, virtual memory or ram driving was a way of simply dealing with the fact that a system could not hold much RAM. It wasn't a performance thing.

There was also the AST 6-pak - a virtual memory card - much like what you are thinking of as a RAM-Drive. In fact, it's no coincidence that Gigabyte's option is called a "RAM-Drive". It's not new - it's just repackaged.

Quote:
Currently, the only RAM-Drive systems that I know of are not considered large enough to be worth the extreme cost of attaining. One such was written up, I believe right here at M-P at some point during the last year. With the +'s being extra fast access speeds, and the -'s being extreme cost per gig and limited size. I'm uncertain but I think it was maxxed out at 4 gig, but priced in a range at or near that of a complete system. That price was just for the card and software, NOT the memory sticks to be added to it.


Actually, it's not so pricey. Last I saw, about $700 with 4 gigs of RAM on board. And, if you are like me and use files in the GB range daily, it's worth the money. MPC's review was tunnel-visioned and didn't look at all of the uses. Only those they were accustomed to.

Quote:
If a file, any file, is in use, the only time it would NOT have been written to the virtual memory would be if there were sufficient RAM that the extra memory were not considered needed at that time by the system.


This is a slip on your part. 10 years back, the statement would be true. And, I'm sure (in fact, from fielding questions from gamers, I'm very sure) many still think it is. But it is not.

NT systems will use a page file regardless of how much RAM you have. It's intentional. While I am no MS fanboi, it's a bit of genius. A modern NT system uses the page file for far more than holding what won't fit in RAM. It stages bits of code in a fast access zone. Slower than RAM, but faster than a normal HDD read. This is very useful to keep RAM overhead low - while keeping performance up.
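
That's easy to see on a live box. A small sketch, assuming the third-party psutil package (pip install psutil); on an NT-family system the page-file figure is normally above zero even with plenty of physical RAM free:

Code:
import psutil

ram = psutil.virtual_memory()
swap = psutil.swap_memory()

print("RAM:       %.0f MB free of %.0f MB" % (ram.available / 2**20, ram.total / 2**20))
print("Page file: %.0f MB used of %.0f MB" % (swap.used / 2**20, swap.total / 2**20))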

Quote:
And a hard drive's ability to 'Handle' a sudden power loss is the primary reason for the purchase of a UPS. Because data CAN be lost from a drive due to power loss, either total or 'Brown Out'. Drives can, and have, been turned into little better then oversized fishing weights, with ALL data contained being lost and unrecoverabe as a direct result of power outages.


True, but I have a feeling it's not because of the reason you assume. A sudden power loss or surge at the right time can do any number of things to a drive. However, during a read, the file (or any part of it) is not removed from the drive, reallocated, or modified in what you call the FAT and then rewritten. At least not under normal circumstances. If it were, the lifetime of a drive would drop considerably. And by that, I mean considerably - like it would last months instead of years.

Quote:
2. ANY and EVERY file located in that system, regardless of which drive, or how many drives, it may be contained in, is listed in the FAT table. This table may or may not be NAMED as a FAT table in the usual sense.


I'll give you that for now - but I suspect I will tell you why I nitpick about the term FAT pretty quick..

Quote:
Without that listing, the system cannot know where on the drive to look for the file in question. This includes all of the various BOOT and STARTUP files. Every Mac, every Linux, every Windows, every O/S of any kind has it's own version of this table. Without it, without a list of what file is located where, not much would happen when a system was started up. And running an app could take hours or even days just for the app to be loaded before you could begin doing anything with it.


Actually, the boot sector is an exception, methinks.

Quote:
"In the mean time - if you have a real question ask it. But thus far, you seem to come off (intentionally or not) as someone who asked a question but doesn't want and answer (unless he/she already knows it) "
I already DID ask it. And if I already KNEW the answer to the question I asked, there would have been no reason to ask the question. I did offer up an analogy which DOES answer the question, but I do not know how to test it out to prove one way or another. So that analogy remains as nothing better then a Theory...


Well, maybe I'm blind, but I didn't see the question you asked that was not answered. Granted, I have limited time to read threads. What was the question?

Quote:
As for the various 'Embeded' O/S's, if EVERY file is effectively READ ONLY, UN-movable, then yes, under those conditions file fragmentation would not be a concern. Simply because if a file never changes and never moves, well, that pretty much covers it.


Quote:
As for there being some great worlds of difference between a SWAP file as used in earlier versions of Windows, and the PAGING file used by WinXP. There is actually very little difference between them aside from the NAME of the file itself. BOTH are used for the exact same purpose and in the exact same fashion. They are the VIRTUAL MEMORY which allows Windows to function at the speeds it does. The only real difference between them being that XP keeps an open area, Slack Space if you will, around the Paging File. Which allows the room for it to expand or conract as needed by the system. Provided of course the SYSTEM is in control of it's size an location.


That's not true. The NT system is far more complex. Ponder the use of a hybrid drive. Under your assumption, it's almost useless. However, the page file system in modern boxes is more intelligent. It's not simply an area for dumping stuff that could go into RAM - code is intentionally dumped there in favor of putting it in RAM. Think about it. Why put something you will rarely access into RAM? And yet, why not put it in a quick access area for the few times you know you will need it during a normal session? It's not as simple as you have been informed.


Quote:
And with fragmentation on the OLDer systems, as in PRE-Win95, the fragmentation was of the DRIVE, not the FILES written on it. If there were no section of the drive, in contigious allocation units, large enough to hold the new file you wished written to the drive, you simply got back the error message on insufficient space and the operation was aborted.


Which was sucktastic. A nearly empty drive could appear full when saving a large file. Worse, those systems required a buffer to remove files (stupid, really), so you could find yourself with a nearly full drive and no way to free up space. Thus started the (current) myth that a drive should never be more than half full - because, back then, it mattered.

Quote:
With the advent of Win95, it allowed a file to be written, or even SPLATTERED, just anywhere that was open. This was a great help against DRIVE fragmentation, as no space was then wasted or lost. But this same system allowed FILES to become fragmented instead.


And so? So files are fragmented? It saves space. And so the files are fragmented instead of the drive - it's much faster. Wanna go contiguous? Then be prepared to wait during saves and - as well - for data loss. Fragmentation is opportunistic - but quick.

Quote:
FAT 16, FAT 32, NTFS, it really doesn't much matter what specific file system you use. Because they are all, regardless of the O/S in question, just various ways of accomplishing the same task using much the same equipment. They are all aimed at the same end result. Some get there a bit faster, or handle the 'Journey' a bit better. But the end results are the exact same. They are all simply a means of handling the various files on and used by that Computer. They may not have a FAT table that is called a FAT table, but they all have a table or file which performs the EXACT SAME TASK as a FAT table. And that task is nothing more nor less than maintaining a list of the address(es) of each file or portion thereof contained on that disk.

If that were true, we could have one - and forget the others. FAT is small, simple, and very fast. Unfortunately, it's also easy to corrupt. NTFS is large (10 megs minimum) and slower than FAT. However, it's also far more robust and useful. That's because it's not the same as FAT. Where you can think of FAT as a Table (which it is), you can look at NTFS (and other modern systems) as a somewhat self-healing file database. It's far more complex - and useful - than FAT. The trade-off is performance.

At any rate, the folks around here actually know stuff. You might get further if you considered that your knowledge might need a tad of tuning.

I understand that FAT and NTFS are similar - but they are different enough not to think of them as the same. I also understand "virtual memory" seems simple enough (RAM substitute), but that is not what it is.


Manta


 Post subject: Re: Straight to the brains. An Old Question with no answers
PostPosted: Thu Jan 25, 2007 8:49 pm 
I'd rather be modding!

Joined: Fri Jun 25, 2004 3:47 pm
Posts: 3731
Location: Las Vegas
RABical wrote:
Manta, here's a repeat of the startup question. What I'm trying to find out is how it is possible for even READ ONLY and unmovable type files to get fragmented. The only files that appear immune to this are, of course, the BOOT files. Which have a section of the disk reserved specifically for them and nothing else is allowed to be written there. One time to the next, when defrag is run (or whatever brand of defrag software you use), even those files which are marked as being unmovable will be found in a different location(s).


Ah.....well - that's better - why not just say that? Some of that is about system use - and yes, a type of virtual memory. You recall, temp files are not in virtual memory? Certain system files are not either. They cannot be moved during a defrag because they are in use - but they are temporary.

Quote:
Some earlier versions of defrag software allowed the user to hover over the 'Picture' of the drive and be informed of just what file was located in that portion. I'm unsure if any of the newer versions do this as well. I know that Windows Defrag does not, nor ever has.


There are many different defrag options. Can't tell you one that would do that off the top of my head. Something to keep in mind though - get two different defrag programs and you can flip-flop between them over and over, with each defragging the other's work. This is because defragging is no longer about simply putting files back together in contiguous blocks. Now, the programs try to optimize somewhat and actually place certain files in the faster-performing areas on the outside edge of the drive.

Hope that helps

Manta


 Post subject:
PostPosted: Fri Jan 26, 2007 6:57 am 
I'd rather be modding!

Joined: Fri Jun 25, 2004 3:47 pm
Posts: 3731
Location: Las Vegas
lol - I woke up this morning and realized what I think your real question is and why you are saying some strange things that don't seem to be true - but in fact are at very specific times. And then you are generalizing them.

I'll try to answer your question later today if I have time. However, it's the very folks you insult (like Jip) who will have to confirm that my speculation is right. I think you are talking about "open" files. A very special file state. Anyways, maybe later.

And I don't see why the answer is so important - but hey, it's interesting.

Manta


 Post subject:
PostPosted: Fri Jan 26, 2007 8:44 am 
I'd rather be modding!

Joined: Fri Jun 25, 2004 3:47 pm
Posts: 3731
Location: Las Vegas
When you go to defrag a drive, there are files that are "unmovable". These are usually described as "system files". On a typical system, they make up a small percentage of the drive. The drive is defragmented around them. If, after a defrag, you shut the system down and reboot and then go back to the defrag utility, you might find that these files have disappeared - or moved.

Obviously, this would make one think that the files are actually movable and beg the question as to why they were not moved during the defrag operation.

Here is my speculation (based on what I know). These files are actually "open". If you have any experience with programming, you know that opening a file is not the same as reading it. Jip can elaborate if he wants.

Point: "Open" is not "read" or "write"

While a file is open, some odd things go on. If the file already existed and you made changes to it after opening it, the changes will not be written to the file (formally) until it is closed. So, if your system lost power, the changes (even though they were written to disk) would be gone.
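
Here's a minimal Python sketch of that open-vs-written distinction ("scratch.txt" is just a throwaway name): a short write through an open handle sits in the buffer, and the file on disk doesn't see it until the handle is closed.

Code:
import os

f = open("scratch.txt", "w")            # the file is now "open" (and truncated)
f.write("changes made while open")      # a short write sits in the userspace buffer

print(os.path.getsize("scratch.txt"))   # 0 - nothing has formally been written yet

f.close()                               # closing flushes the buffer to the file system
print(os.path.getsize("scratch.txt"))   # now the bytes are really there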

I know on older systems (experience from an Apple II - but I suspect early DOS might be the same), you could lose the entire file because it was "open" and writing over the original file. New OS's are a bit safer, likely because they have the head room to spare.

Now, back to the unmovable files. These files are open. If you want to move them you must close them. And that is a PITA, because they are in use. In fact, many of them will never even be closed and turned into real files.

It's kinda like virtual memory - sort of. Not really. It's closer to a COWloop where the write sequence never occurs.

Why not move them to RAM or virtual memory? If you have enough RAM, I bet that would be pretty easy. But, having enough RAM is a big assumption. As far as the page file goes, maybe on an older system you could. I think doing it on a new NT system would be complicated. Add to the complication that each program using the file would have to be notified and would have to be able to accept the notification. Otherwise, the program would lose its working data and simply keep writing to the same place.

Why not shut each process down for a moment, close the file, move it, then open it? You would think that if you are using the Windows Defrag utility on a Windows machine, it would have a strategy for putting the system in a safe state for doing this. But no - and I think I know why:

1. It's a PITA for such little gain
2. At some point you are moving into a realm where a power failure would really cause a problem
3. OS's (especially Windows) are not written in an integrated manner. Think about it like this: in order to do what we are talking about, you would have to have intimate knowledge of the OS, and the OS would have to allow the utility some pretty deep access. The defrag utility for Windows was not originally written by MS - it's bought - like many other pieces of the OS. The same can be said for just about any OS with bundled utilities - they are 3rd party. The folks who wrote it (in Windows' case) had pretty deep knowledge, but not so deep that they could shut down pieces of the system. Further, it's not just "system files" that are the problem. Any program with files in the open state is a problem. That's why you typically defrag with as few things running as possible.

IF I got any of that wrong, folks around here will tell me.

Long and short of it, you COULD move those unmovable files. However, it's a total PITA and the gains are so small it's not worth it.

They appear to move because when they are closed, they get written - and are no longer open - or they are never closed and simply go byebye.

Here's one to think about. I have a program and AFAIK it does the following upon start.

It opens an existing file and begins to append to it. When it finishes, it saves the file.

What it's actually doing is more complicated.....it opens a NEW file and reads the content of the old file into it. Then it appends data to the new file. Then it erases the old file's entry in the file system and writes the new file to the drive (likely in a totally different place).
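
That pattern is the usual safe-save idiom. A hedged Python sketch of it ("notes.txt" is a hypothetical file that must already exist); os.replace atomically swaps the new copy in for the old one, so the data ends up wherever the allocator happened to put the new file:

Code:
import os, shutil

def append_by_rewrite(path, extra_text):
    tmp = path + ".tmp"              # new file, allocated wherever free space is found
    shutil.copyfile(path, tmp)       # read the old content into the new file
    with open(tmp, "a") as f:
        f.write(extra_text)          # append the new data to the new file
    os.replace(tmp, path)            # drop the old entry; the new file takes its name

append_by_rewrite("notes.txt", "\nline added on save")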

Anyways -

Manta


 Post subject:
PostPosted: Sat Jan 27, 2007 9:59 pm 
8086

Joined: Wed Nov 01, 2006 3:06 pm
Posts: 93
Manta, most all of the Norton Speed Disk versions could/would tell you what specific file was located at whatever specific location on the graphical image of the drive you hovered over or clicked on. Either during or just after the defrag process was complete. Although doing so DURING the process most often caused the process to be restarted. This option or extra seems to have been left out in those versions compatible with XP. Or at least I haven't specifically tried it yet. I'll try it with my next defrag of this laptop. I no longer use Norton's apps.
Unless the version of Defrag that came with my install of XP on this laptop, or the other Install disk for XP that I have for my Desktop system, is different than that supplied with all others, watch it do its thing sometime. Or at least pay attention to the finished work. More often than not, those same files that the legend tells you are immovable got shuffled by defrag. If not totally lumped together, at least they have been placed closer together. Am I the only one who gets these Special Versions? Or am I just the only one who pays attention to what they are doing?
And exactly as I said before, the primary purpose of ANY file system, be it FAT16, 32, NT, or even NTFS, remains the EXACT SAME. They are nothing more than varying ways to reach the exact same end. Some do so more efficiently, yes. And some offer extra capabilities as well. But the primary purpose of ALL of them is to maintain the files on/in the system. Both those used specifically by that system in its basic operations, and by YOU, the User.
These basic purposes are as follows.
1. Efficiently use the space available on your hard drive to store the necessary data
2. Catalog all the files on your hard drive so that retrieval is fast and reliable. (hmmm... can you say FAT TABLE? sure you can)
3. Provide methods for performing basic file operations, such as delete, rename, copy, and move.
4. Provide some kind of data structure that allows a computer to boot off the file system.
http://www.pcnineoneone.com/howto/filesystems1.html
The only REAL difference between them is in how efficiently they handle those 4 basic tasks. If they cannot handle those 4 basic tasks, then anything else they might offer is useless because they simply won't work at that point.
And just for the record, while the virtual memory does get used regardless of whether it is actually NEEDED or not, a system with higher amounts of physical RAM installed will use it less often. Depending of course on the apps being used on that system as well. Few would be likely to build a system with 2 or even 4 gigs of RAM that was only used to play Solitaire. But those who would need or want a system with that much RAM are just as likely as anyone else to stop everything else for a time and do something as simple as play a game of Sol. And during that time, their system will access, read and write to, the virtual memory MUCH less often.


 Post subject:
PostPosted: Sun Jan 28, 2007 9:11 am 
I'd rather be modding!

Joined: Fri Jun 25, 2004 3:47 pm
Posts: 3731
Location: Las Vegas
RABical wrote:
Manta, most all of the Norton Speed Disk versions could/would tell you what specific file was located at whatever specific location on the graphical image of the drive you hovered over or clicked on. Either during or just after the defrag process was complete. Although doing so DURING the process most often caused the process to be restarted. This option or extra seems to have been left out in those versions compatible with XP. Or at least I haven't specifically tried it yet.


It's my understanding that Norton didn't supply the defrag utility for XP - so not "left out", more like "not there".

Quote:
I'll try it with my next defrag of this laptop. I no longer use Norton's apps.
Unless the version of Defrag that came with my install of XP on this laptop, or the other Install disk for XP that I have for my Desktop system, is different then that supplied with all others, watch it do it's thing sometime. Or at least pay attention to the finished work. More often then not, those same files that the legend tells you are immovable got shuffled by defrag. If not to be totally lumped together, at least they have been placed closer together. Am I the only one who gets these Special Versions? Or am I just the only one who pays attention to what they are doing?


It's generally such a minor operation that most folks don't pay it much attention - I typically don't. I'll look at it next time I do the operation.

QUESTION: Does this "movement" happen just after the defrag, or after a restart?

Quote:
And exactly as I said before, the primary purpose of ANY file system, be it FAT16, 32, NT, or even NTFS, remains the EXACT SAME......et al


I suppose you can say that, but it's like saying a bicycle and a Toyota Prius are basically the same. After all, the point of them is transportation of some kind. Without that, they have little purpose. Since file systems don't seem to be your point, I guess lumping them all together is fine.

Quote:
And just for the record, while the virtual memory does get used regardless of whether it is actually NEEDED or not, a system with higher amounts of physical RAM installed will use it less often. Depending of course on the apps being used on that system as well. Few would be likely to build a system with 2 or even 4 gigs of RAM that was only used to play Solitaire. But those who would need or want a system with that much RAM are just as likely as anyone else to stop everything else for a time and do something as simple as play a game of Sol. And during that time, their system will access, read and write to, the virtual memory MUCH less often.


A bit circular. You can say the same about any system resource - RAM, CPU (in most cases), GPU.

The NT system at some levels doesn't care about how much RAM the machine has. As an example, my machine has about the same commit on boot whether I have 512MB or 1GB on board. I even wrote a little program to make sure I wasn't getting a false reading - because I didn't understand how the system worked at the time, and it made me mad that more RAM didn't have the effect I thought it would. After a bit of self-education I understood why, and realized that adding more RAM helped me even more than I thought - it just isn't reflected in the way it used to be.
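
For what it's worth, here's a sketch of such a check on Windows using ctypes and the GlobalMemoryStatusEx API (ullTotalPageFile is the commit limit; the difference from ullAvailPageFile approximates the current commit charge):

Code:
import ctypes
from ctypes import wintypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [("dwLength", wintypes.DWORD),
                ("dwMemoryLoad", wintypes.DWORD),
                ("ullTotalPhys", ctypes.c_ulonglong),
                ("ullAvailPhys", ctypes.c_ulonglong),
                ("ullTotalPageFile", ctypes.c_ulonglong),
                ("ullAvailPageFile", ctypes.c_ulonglong),
                ("ullTotalVirtual", ctypes.c_ulonglong),
                ("ullAvailVirtual", ctypes.c_ulonglong),
                ("ullAvailExtendedVirtual", ctypes.c_ulonglong)]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

print("Physical RAM:  %.0f MB" % (status.ullTotalPhys / 2**20))
print("Commit charge: ~%.0f MB" % ((status.ullTotalPageFile - status.ullAvailPageFile) / 2**20))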

However, we're talking about defrag here. So does the change happen before or after a restart? I suspect I know why either way - I suspect that by now, you do as well. Yet, I don't see why it's so important to you. OTOH, I collect bottle caps *shrug*.

Manta

