1. The ONLY time Virtual Memory is NOT just one more section or space on a hard drive is when the space used to create that virtual memory is, itself, virtual, such as a RAM-Drive, which is not a hard drive at all. If there are any other methods of creating Virtual Memory, I've not yet heard of them.
We actually used to use floppies. In the old days, virtual memory (or RAM-driving) was simply a way of dealing with the fact that a system could not hold much RAM. It wasn't a performance thing.
There was also the AST 6-pak - a virtual memory card - much like what you are thinking of as a RAM-Drive. In fact, it's no coincidence that the Gigabyte option is called a "RAM-Drive". It's not new - it's just repackaged.
Currently, the only RAM-Drive systems I know of are not considered large enough to be worth the extreme cost of attaining, with the pluses being extra-fast access speeds and the minuses being extreme cost per gig and limited size. One such was written up, I believe right here at M-P, at some point during the last year. I'm uncertain, but I think it maxed out at 4 GB, yet was priced in a range at or near that of a complete system - and that price was just for the card and software, NOT the memory sticks to be added to it.
Actually, it's not so pricey. Last I saw, it was about $700 with 4 GB of RAM on board. And if you are like me and use files in the GB range daily, it's worth the money. MPC's review was tunnel-visioned and didn't look at all of the uses - only those they were accustomed to.
If a file, any file, is in use, the only time it would NOT have been written to virtual memory is when there is sufficient RAM that the system does not consider the extra memory needed at that time.
This is a slip on your part. Ten years back, the statement would have been true. And I'm sure (in fact, from fielding questions from gamers, I'm very sure) many still think it is. But it is not.
NT systems will use a page file regardless of how much RAM you have. It's intentional. While I am no MS fanboi, it's a bit of genius. A modern NT system uses the page file for far more than holding what won't fit in RAM. It stages bits of code in a fast-access zone - slower than RAM, but faster than a normal HDD read. This is very useful for keeping RAM overhead low while keeping performance up.
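To put a rough number on the idea, here's a toy Python sketch of that trade-off. The costs and names are invented purely for illustration - the real NT memory manager is vastly more sophisticated than this - but the ordering is the point: RAM beats page file beats a cold read from the original file.

[code]
# Toy model of tiered access cost. The relative numbers are made up;
# only the ordering matters: RAM < page file < cold file read.
COSTS = {
    "ram": 1,        # resident in RAM: fastest, but ties up RAM all session
    "pagefile": 10,  # staged in the page file: slower than RAM, faster
                     # than seeking back to the original file on disk
    "cold": 100,     # re-read from the original file: slowest
}

def session_cost(hits, location):
    """Total cost of 'hits' accesses to code staged in 'location'."""
    return hits * COSTS[location]

# A chunk of code touched only twice in a session:
print(session_cost(2, "ram"))       # 2   -- fast, but RAM is tied up
print(session_cost(2, "pagefile"))  # 20  -- nearly as fast, RAM stays free
print(session_cost(2, "cold"))      # 200 -- the price of no staging at all
[/code]

That's the whole trick: for code you'll only touch a handful of times, the page file is "good enough," and the RAM it frees up does more useful work elsewhere.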
And a hard drive's ability to 'handle' a sudden power loss is the primary reason for the purchase of a UPS, because data CAN be lost from a drive due to power loss, either total or 'brown out'. Drives can, and have, been turned into little better than oversized fishing weights, with ALL data contained on them lost and unrecoverable as a direct result of power outages.
True, but I have a feeling it's not for the reason you assume. A sudden power loss or surge at the right time can do any number of things to a drive. However, during a read, the file (or any part of it) is not removed from the drive, reallocated, or modified in what you call the FAT and then rewritten - at least not under normal circumstances. If it were, the lifetime of a drive would drop considerably. And by that, I mean considerably - like it would last months instead of years.
2. ANY and EVERY file located in that system, regardless of which drive, or how many drives, it may be contained on, is listed in the FAT table. This table may or may not be NAMED a FAT table in the usual sense.
I'll give you that for now - but I suspect I'll get to why I nitpick about the term FAT pretty quickly.
Without that listing, the system cannot know where on the drive to look for the file in question. This includes all of the various BOOT and STARTUP files. Every Mac, every Linux, every Windows, every O/S of any kind has its own version of this table. Without it, without a list of what file is located where, not much would happen when a system was started up. And running an app could take hours or even days just for the app to load before you could begin doing anything with it.
Actually, the boot sector is an exception, methinks - it sits at a fixed location, so it can be found without consulting any table.
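To make the "table" idea concrete, here's a toy FAT-style lookup in Python. Everything here is simplified and the names are invented; real FAT entries also encode free and bad clusters, file sizes, attributes, and so on.

[code]
# A directory entry names a file's first cluster; the allocation table
# then chains cluster -> next cluster until an end-of-chain marker.
EOC = -1  # end-of-chain (real FAT uses reserved values like 0xFFFF)

directory = {"BOOT.SYS": 2, "GAME.EXE": 5}   # filename -> first cluster
fat = {2: 3, 3: 4, 4: EOC,                   # BOOT.SYS: clusters 2, 3, 4
       5: 9, 9: 6, 6: EOC}                   # GAME.EXE: clusters 5, 9, 6

def clusters_of(name):
    """Walk the table to find every cluster a file occupies."""
    chain, cluster = [], directory[name]
    while cluster != EOC:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

print(clusters_of("BOOT.SYS"))  # [2, 3, 4] -- contiguous on disk
print(clusters_of("GAME.EXE"))  # [5, 9, 6] -- fragmented on disk
[/code]

Without that table (or its equivalent), the system has a pile of clusters and no idea which belong to which file - which is exactly the point being made above.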
"In the mean time - if you have a real question ask it. But thus far, you seem to come off (intentionally or not) as someone who asked a question but doesn't want and answer (unless he/she already knows it) "
I already DID ask it. And if I already KNEW the answer to the question I asked, there would have been no reason to ask it. I did offer up an analogy which DOES answer the question, but I do not know how to test it to prove it one way or the other. So that analogy remains nothing better than a theory...
Well, maybe I'm blind, but I didn't see a question you asked that went unanswered. Granted, I have limited time to read threads. What was the question?
As for the various 'Embedded' O/S's: if EVERY file is effectively READ-ONLY and unmovable, then yes, under those conditions file fragmentation would not be a concern, simply because if a file never changes and never moves, that pretty much covers it.
As for there being some great world of difference between the SWAP file used in earlier versions of Windows and the PAGING file used by WinXP: there is actually very little difference between them aside from the NAME of the file itself. BOTH are used for the exact same purpose and in the exact same fashion. They are the VIRTUAL MEMORY which allows Windows to function at the speeds it does. The only real difference is that XP keeps an open area, slack space if you will, around the Paging File, which allows room for it to expand or contract as needed by the system - provided, of course, the SYSTEM is in control of its size and location.
That's not true. The NT system is far more complex. Ponder the use of a hybrid drive: under your assumption, it's almost useless. However, the page file system in modern boxes is more intelligent. It's not simply an area for dumping stuff that could go into RAM - code is intentionally placed there in favor of putting it in RAM. Think about it. Why put something you will rarely access into RAM? And yet, why not put it in a quick-access area for the few times you know you will need it during a normal session? (The toy cost model above illustrates exactly that trade-off.) It's not as simple as you have been informed.
And with fragmentation on the OLDER systems, as in PRE-Win95, the fragmentation was of the DRIVE, not the FILES written on it. If there was no section of the drive, in contiguous allocation units, large enough to hold the new file you wished written to the drive, you simply got back an insufficient-space error and the operation was aborted.
Which was sucktastic. A nearly empty drive could appear full when saving a large file. Worse, since those systems required buffer space just to remove files (stupid, really), you could find yourself with a nearly full drive and no way to free up space. Thus started the (still current) myth that a drive should never be more than half full - because back then, it mattered.
With the advent of Win95, a file could be written, or even SPLATTERED, just about anywhere there was open space. This was a great help against DRIVE fragmentation, as no space was then wasted or lost. But this same system allowed the FILES to become fragmented instead.
And so? So files are fragmented? It saves space. And so the files are fragmented instead of the drive - it's much faster. Wanna go contiguous? Then be prepared to wait during saves, and for data loss as well. Fragmentation is opportunistic, but quick. (See the sketch below.)
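Here's a toy Python allocator showing both policies side by side - the pre-Win95 "must be contiguous" rule versus the opportunistic scatter. A sketch only; no real driver was written like this.

[code]
def free_runs(bitmap):
    """Yield (start, length) for each contiguous run of free clusters."""
    run_start = None
    for i, used in enumerate(bitmap + [True]):  # sentinel closes final run
        if not used and run_start is None:
            run_start = i
        elif used and run_start is not None:
            yield run_start, i - run_start
            run_start = None

def alloc_contiguous(bitmap, need):
    """Old style: fail unless one free run can hold the whole file."""
    for start, length in free_runs(bitmap):
        if length >= need:
            return list(range(start, start + need))
    return None  # "insufficient space" -- even with plenty free in total

def alloc_scattered(bitmap, need):
    """Win95 style: take any free clusters, wherever they are."""
    free = [i for i, used in enumerate(bitmap) if not used]
    return free[:need] if len(free) >= need else None

disk = [True, False] * 8   # half free, but no run longer than one cluster
print(alloc_contiguous(disk, 3))  # None      -- the save is refused
print(alloc_scattered(disk, 3))   # [1, 3, 5] -- saved, but fragmented
[/code]

Half the toy disk is free, yet the old policy refuses a three-cluster file. The scatter policy saves it every time - the cost is exactly the file fragmentation being argued about.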
FAT 16, FAT 32, NTFS - it really doesn't much matter which specific file system you use, because they are all, regardless of the O/S in question, just various ways of accomplishing the same task using much the same equipment. They are all aimed at the same end result. Some get there a bit faster, or handle the 'journey' a bit better, but the end results are the exact same. They are all simply a means of handling the various files on and used by that computer. They may not have a FAT table that is called a FAT table, but they all have a table or file which performs the EXACT SAME TASK as a FAT table. And that task is nothing more nor less than maintaining a list of the address(es) of each file or portion thereof contained on that disk.
If that were true, we could have one and forget the others. FAT is small, simple, and very fast. Unfortunately, it's also easy to corrupt. NTFS is large (10 megs minimum) and slower than FAT. However, it's also far more robust and useful. That's because it's not the same as FAT. Where you can think of FAT as a table (which it is), you can look at NTFS (and other modern systems) as a somewhat self-healing file database. It's far more complex - and useful - than FAT. The trade-off is performance.
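A rough Python sketch of that difference, with the caveat that neither half resembles the real on-disk formats and the field names below are invented: FAT is one flat next-cluster table shared by everything, while an NTFS-like design gives each file a self-describing record that a repair pass can cross-check.

[code]
EOC = -1

def walk(fat, first):
    """Follow a FAT chain until end-of-chain or a missing entry."""
    chain, cluster = [], first
    while cluster != EOC and cluster in fat:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

fat = {5: 9, 9: 6, 6: EOC}  # GAME.EXE occupies clusters 5, 9, 6

# NTFS-like record: the file carries its own extent list plus metadata.
record = {
    "name": "GAME.EXE",
    "size": 3 * 4096,
    "runs": [(5, 1), (9, 1), (6, 1)],  # (start cluster, run length)
}

del fat[9]              # corrupt a single entry in the shared table
print(walk(fat, 5))     # [5] -- the rest of the chain is simply gone
print(record["runs"])   # the per-file record still knows every extent
[/code]

That per-file redundancy (plus journaling) is what buys the robustness - and it's also exactly what costs the size and the speed.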
At any rate, the folks around here actually know stuff. You might get further if you considered that your knowledge might need a tad of tuning.
I understand that FAT and NTFS are similar - but they are different enough not to think of them as the same. I also understand "virtual memory" seems simple enough (a RAM substitute), but that is not all it is.