Maximum PC

 Post subject: Cross platform RAID - feasible?
PostPosted: Tue Sep 08, 2009 10:08 pm 
Smithfield

Joined: Sun Sep 05, 2004 9:01 am
Posts: 8091
It sucks that this is the place to put this. I think this thread deserves attention from the HH, Linux, and Windows forums, but whatever.

I am currently looking at migrating from 6 small-capacity SATA drives to some kind of redundant storage solution. Multiple RAID 1s seems the easiest, but I've had one large RAID 5 on my mind for a while. The problem is that when I think RAID 5 (or almost any RAID, for that matter) I think about the problems of migrating it to new motherboards, etc. The way I see it, there are the following options:

Buying a card - too expensive for hardware XOR, etc. I won't spend $100+ on a RAID card I might have to buy a second of for migration/replacement. This option seems possible if I just need more SATA ports for a software-driven RAID.

Mobo built-in driver-based RAID - locked into that chipset lineup's compatibility, but should work in Windows/Linux, no?

Windows-based RAID - purely software driven, but should be transportable to any Windows installation I'm likely to consider. Kills Linux support, I think, though.

I'm considering dual booting to Linux in the near future, or at least running it on another computer and having Windows shares from my main computer available. No, I don't want to deal with FTP, etc., for basic file sharing on my home network. This presents the problem of reading/writing to the RAID from Linux, both at the level of recognizing the RAID and at the filesystem level (NTFS write support is still not done in Linux, IIRC).

Right now I'm thinking that since I'll probably stick with Intel for the foreseeable future, it would make sense to build a 4-disk RAID 5 for raw storage with the ICH RAID functionality. That should be recognizable in Linux, right? Which just leaves the file system. As long as I can make files larger than 12 GB I'm happy, since Blu-ray rips are the biggest thing that should be on there. Unfortunately that kind of rules out FAT32. Does anyone know if there's any hacked support in Windows XP/Vista/7 for file systems such as ext2/3, etc.?

Or is Linux NTFS support mature enough to trust at least reading from it, and hopefully writing to it too? Has that one-man project stalled out and dropped off the face of the earth, or are NTFS volumes something most Linux users deal with frequently now?

I'm just not willing to shell out more than $100 for an external solution.


 Post subject:
PostPosted: Wed Sep 09, 2009 6:58 am 
Smithfield

Joined: Sun Sep 05, 2004 9:01 am
Posts: 8091
I'm not going away.


 Post subject: Re: Cross platform RAID - feasible?
PostPosted: Wed Sep 09, 2009 8:12 am 
Super Mario Banhammer

Joined: Fri Aug 25, 2006 11:20 am
Posts: 595
urmumsacow wrote:
It sucks that this is the place to put this. I think this thread deserves attention from the HH, Linux, and Windows forums, but whatever.

I am currently looking at migrating from 6 small-capacity SATA drives to some kind of redundant storage solution. Multiple RAID 1s seems the easiest, but I've had one large RAID 5 on my mind for a while. The problem is that when I think RAID 5 (or almost any RAID, for that matter) I think about the problems of migrating it to new motherboards, etc. The way I see it, there are the following options:

Buying a card - too expensive for hardware XOR, etc. I won't spend $100+ on a RAID card I might have to buy a second of for migration/replacement. This option seems possible if I just need more SATA ports for a software-driven RAID.

Mobo built-in driver-based RAID - locked into that chipset lineup's compatibility, but should work in Windows/Linux, no?

Windows-based RAID - purely software driven, but should be transportable to any Windows installation I'm likely to consider. Kills Linux support, I think, though.

I'm considering dual booting to Linux in the near future, or at least running it on another computer and having Windows shares from my main computer available. No, I don't want to deal with FTP, etc., for basic file sharing on my home network. This presents the problem of reading/writing to the RAID from Linux, both at the level of recognizing the RAID and at the filesystem level (NTFS write support is still not done in Linux, IIRC).

Right now I'm thinking that since I'll probably stick with Intel for the foreseeable future, it would make sense to build a 4-disk RAID 5 for raw storage with the ICH RAID functionality. That should be recognizable in Linux, right? Which just leaves the file system. As long as I can make files larger than 12 GB I'm happy, since Blu-ray rips are the biggest thing that should be on there. Unfortunately that kind of rules out FAT32. Does anyone know if there's any hacked support in Windows XP/Vista/7 for file systems such as ext2/3, etc.?

Or is Linux NTFS support mature enough to trust at least reading from it, and hopefully writing to it too? Has that one-man project stalled out and dropped off the face of the earth, or are NTFS volumes something most Linux users deal with frequently now?

I'm just not willing to shell out more than $100 for an external solution.


@urmumsacow

Well, I currently have a RAID 5 solution using an add-in SATA II card in a PCIe slot. There are cards out there that you can do this on for less, but my controller card of choice is the Areca 1210, a PCIe x8 4-port SATA II card.
It's a bit more than $100, but it has migrated across 4 different systems over its lifetime and has seen 3 sets of drives, starting with a set of 4 WD 250GB drives and now up to 4 WD 750GB drives. It is currently running in a dual-boot configuration, with Win 7 RTM on the mobo-configured RAID 0 VelociRaptors and Ubuntu loading off the RAID controller itself.

Now, admittedly this is a bit of overkill for a home system, but it works very well for my needs, and I have migrated it successfully through two different versions of XP, 3 different Linux distros, Vista, and currently Win 7, so migrating it was no problem at all. The only issues I ever had with it were spurious S.M.A.R.T. failures on all 4 of the then-500GB WD drives under Vista 64-bit Ultimate (a Creative Sound Blaster X-Fi PCIe card was interfering with the controller and causing dropouts due to a faulty Creative driver, go figure).

The card is rock solid and gives me all the info I need through the ArcHTTP server that comes bundled with the card. For under $300 you aren't going to find many better controllers out there that do true XOR and are this stable. Oh, BTW, it has an optional battery backup module that you can buy for an additional $160 or so that provides a safe harbor for write-back caching.

If you are looking for RAID 5 with 4 SATA ports, I highly recommend taking a look at that one.

EDIT:
I have two of these cards. The second one is running under Ubuntu Server 9.04 Jaunty in RAID 5 with 4 500GB WDs in a box I built as a NAS, and it's running Samba so it can be read from my gaming rig (the other Areca-card rig, which is NTFS).


 Post subject:
PostPosted: Wed Sep 09, 2009 8:40 am 
Smithfield

Joined: Sun Sep 05, 2004 9:01 am
Posts: 8091
Out of curiosity, what filesystem are you using on the (I assume) RAID 5? NTFS? How good was Linux support for NTFS functionality? No corrupted data whatsoever?

Thanks for your experience, but I can't bring myself to spend >$100 for a RAID 5 card... I'm morally opposed to the idea.


 Post subject:
PostPosted: Wed Sep 09, 2009 11:32 am 
Super Mario Banhammer

Joined: Fri Aug 25, 2006 11:20 am
Posts: 595
urmumsacow wrote:
Out of curiosity, what filesystem are you using on the (I assume) RAID 5? NTFS? How good was Linux support for NTFS functionality? No corrupted data whatsoever?

Thanks for your experience, but I can't bring myself to spend >$100 for a RAID 5 card... I'm morally opposed to the idea.


Depends on which of the two controller cards we are discussing. My gaming rig has 2x WD 150GB 10,000 RPM VelociRaptors in RAID 0 (controlled by the onboard JMicron chip) booting to Windows 7 RTM, and the Areca 1210 booting Ubuntu 9.04 Jaunty Jackalope from 4 WD7501AALS 750GB drives configured in RAID 5 with read-ahead and write-back caching enabled (battery backup module installed on the Areca card). (Note: at the time I installed Jaunty I had to recompile the Areca drivers, as Jaunty had been out for less than a week and Areca had not yet released a driver build for Jaunty, but the source is available so that you can recompile your own driver if you need to. I used yacc to compile the driver from source. Since that time Areca has released a compiled driver for Jaunty and I am told it works well :D ). When booted into Jaunty I can read and write to the RAID 0 array just fine, and I can read and write to the RAID 5 array just fine.

If we are discussing the NAS box I built, it runs Kubuntu Server 9.04 with a KDE desktop, in RAID 5 with 4x WD5000AAKS 500GB drives, running Samba so it is visible on the network as a collection of Windows shares, although at its heart it is a Kubuntu 9.04 file server. The main board there is a Foxconn Destroyer 780a with an AMD Phenom II X3 720 chip and 2x Intel EXPI9404PT quad-port server NICs in two of the 4 PCIe slots, with the primary PCIe slot reserved for the Areca 1210. The NIC ports are teamed on each card (each card representing one team) and each separate card is set up as a load-balancing/failover NIC, so that throughput to the Netgear 16-port fibre-capable switch is an effective 4Gbps on that network segment (the wife is a photography buff and all of her digital pics are transmitted from her PC to the NAS box in RAW format -- the teaming and throughput and the jumbo frames support on the NICs and the switch help with this). Realistically she can post her photos to the Samba share while I play WoW or another online game or stream a video and not see any latency on the local network.

Of course, if you are morally opposed to spending over $100 on a RAID 5 card, I doubt you are going to be that enthusiastic about $700 per card for two server-grade NICs with 4 ports per NIC.


 Post subject:
PostPosted: Wed Sep 09, 2009 12:40 pm 
Million Club [PC]

Joined: Sat Mar 28, 2009 11:38 am
Posts: 7690
urmumsacow wrote:
Out of curiosity, what filesystem are you using on the (I assume) RAID 5? NTFS? How good was Linux support for NTFS functionality? No corrupted data whatsoever?

Thanks for your experience, but I can't bring myself to spend >$100 for a RAID 5 card... I'm morally opposed to the idea.

NTFS support is perfect on Ubuntu 9.04. I have two dual-boot machines with shared NTFS partitions, and it works perfectly in both OSes.


 Post subject:
PostPosted: Wed Sep 09, 2009 1:02 pm 
Super Mario Banhammer

Joined: Fri Aug 25, 2006 11:20 am
Posts: 595
Food for thought on this one, urmumsacow. Most of the SATA controller cards you will find that offer RAID for under $100 are not true RAID controllers. At worst they will be nothing more than a PCIe card with SATA ports; at best they will probably be a software or fakeRAID solution with an XOR engine on it, but one that offloads read/write calculations to the CPU rather than processing them on board. Once you get into the $275+ range you start to see SATA II, multiple internal ports, onboard processors to handle the read, write, and parity calculations, upgradable memory modules for caching the reads, writes, and parity calculations, and even fibre/eSATA external ports. If you were building an enterprise or data-center-class server storage solution you could literally spend $10-20k on a pair of highly reliable PCI-X SAS/SATA or SCSI controller cards for RAID solutions.


 Post subject:
PostPosted: Thu Sep 10, 2009 6:04 am 
Smithfield

Joined: Sun Sep 05, 2004 9:01 am
Posts: 8091
I understand the difference between hardware functionality and a driver/software-driven one. As I said, I definitely won't be buying a $100+ card for hardware functionality. I simply won't stress the RAID enough for it to be cost effective. If some video I'm working on writes 15-20% slower because one of my cores is working on the RAID functionality, I'm fine with that. It won't happen often.

What I'm looking to do is create a *reasonably* safe big old gob of storage for movies/music that I can watch, add to, and share over sneaker-net.

As long as Linux can deal with NTFS, that's 50% of my concerns gone.

Now I just need to find out if it's possible to create a RAID with a cheap 4-port card and read/write to it from Windows AND at least read from it in Linux. Does anyone know if you can do that with an Intel ICH9-created RAID?


 Post subject:
PostPosted: Thu Sep 10, 2009 6:32 am 
Super Mario Banhammer

Joined: Fri Aug 25, 2006 11:20 am
Posts: 595
urmumsacow wrote:
I understand the difference between hardware functionality and a driver/software-driven one. As I said, I definitely won't be buying a $100+ card for hardware functionality. I simply won't stress the RAID enough for it to be cost effective. If some video I'm working on writes 15-20% slower because one of my cores is working on the RAID functionality, I'm fine with that. It won't happen often.

What I'm looking to do is create a *reasonably* safe big old gob of storage for movies/music that I can watch, add to, and share over sneaker-net.

As long as Linux can deal with NTFS, that's 50% of my concerns gone.

Now I just need to find out if it's possible to create a RAID with a cheap 4-port card and read/write to it from Windows AND at least read from it in Linux. Does anyone know if you can do that with an Intel ICH9-created RAID?

IIRC, ICH9 has the capability you are looking for. I know for a fact that ICH10R has that capability, as the DFI LANParty has two onboard RAID chips: one is the JMicron chip (which I'm using in my gamer to pair the VelociRaptors) and the other is the ICH10R. As a test of the board's capabilities I ran the ICH10R under Ubuntu Intrepid Ibex in a RAID 5 configuration and had no issues with it recognizing the RAID array once the drivers for it were installed.

It sounds as if it should be able to do what you need to do.

I preferred the hardware solution as it offloaded parity calculations from my CPU, and when you end up with a degraded array (as I did under Vista), rebuilding an array with a replaced drive took over 18 hours under the ICH10R, while the Areca card rebuilt the same array in less than 4 hours. But then again I'm an impatient person. Also, for some odd reason, and I can't fault the ICH10R on this one as it was a corrupted Sound Blaster driver that ended up being the culprit, I had 4 older WD 500GB HDDs start reporting S.M.A.R.T. errors under the ICH10R. Those same HDDs under the Areca card are still servicing the file server's storage needs quite nicely, with nary a S.M.A.R.T. error to be found.


 Post subject:
PostPosted: Thu Sep 10, 2009 6:53 am 
Smithfield

Joined: Sun Sep 05, 2004 9:01 am
Posts: 8091
So the RAID you created in Linux worked fine in Windows as well?

I would expect something with a full-blown dual/quad-core processor and 2+ gigs of RAM to rebuild faster than some ~400MHz, 128MB dedicated card. Then again, my friend with an older nForce4 mobo and a decent processor said it took a while to rebuild his array.

That sucks about SMART errors not showing up with your card =/ - unless you thought the ICH10R was incorrectly reporting them?

I'd love to spend the money on 2 good RAID cards (one as a backup), but I have other things in my life that need it first.

It's sad; I used to think that with people's ever-expanding storage needs, consumer hardware RAID 5 cards would become commodities, but since 1.5+ TB drives have become so cheap and most people's data fits onto just one of them fine... I doubt it will happen =/


 Post subject:
PostPosted: Thu Sep 10, 2009 7:40 am 
Super Mario Banhammer

Joined: Fri Aug 25, 2006 11:20 am
Posts: 595
urmumsacow wrote:
So the RAID you created in Linux worked fine in Windows as well?


It did, except for the fact that it was Vista 64, and after just three days the infamous update from Microsoft corrupted the Creative X-Fi driver, which in turn seemed to interfere with the ICH10R.

urmumsacow wrote:
I would expect something with a full-blown dual/quad-core processor and 2+ gigs of RAM to rebuild faster than some ~400MHz, 128MB dedicated card. Then again, my friend with an older nForce4 mobo and a decent processor said it took a while to rebuild his array.


I expected it would also. Surprisingly, it did not. I later found, after doing some further research, that the XOR engine used in a discrete card, since it is designed ONLY to handle parity calculations and read/write assignments, is much more efficient than the CPU. Thus the delay with the ICH10R chip. Not to mention the data had to go across the SATA bus and through main memory and get into the FIFO stream before the parity could be calculated. With each block being 64k and a total of 1.38TB of data space to calculate through, that was a loooooong time.
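
If it helps to picture what that XOR engine is actually grinding through, here is a rough sketch in Python (purely illustrative: the data is random, the 4-drive layout just mirrors my array, and this is not anything from Areca's firmware):

Code:
# Toy RAID 5 parity math: one 64KB stripe on a 4-drive array
# (3 data blocks + 1 parity block). A rebuild repeats this for every stripe.
import os

BLOCK = 64 * 1024  # 64KB stripe unit

def xor_blocks(blocks):
    """XOR equal-sized byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [os.urandom(BLOCK) for _ in range(3)]   # blocks on the 3 data drives
parity = xor_blocks(data)                      # block on the parity drive

# Pretend drive 1 died: regenerate its block from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("stripe recovered; now repeat for every stripe in ~1.38TB")

The dedicated engine does exactly that, block after block, without bothering the host CPU.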

urmumsacow wrote:
That sucks about SMART errors not showing up with your card =/ - unless you thought the ICH10R was incorrectly reporting them?


I think that the corrupted X-Fi driver (which caused a host of other issues that I had to sort through) caused the ICH10R to begin polling the HDDs, which were not enterprise-class HDDs; the drives' firmware would take longer than 8 seconds to report back, and the ICH10R would record a S.M.A.R.T. event and then drop that drive from the array. The Areca 1210 controller cards have never reported any of my HDDs as having had a S.M.A.R.T. error. And since the ones that were reported as having a S.M.A.R.T. error under the ICH10R spun up and just plain worked without any modification under the Areca controller card, I chalked it up to a "perfect storm" situation where multiple events happening simultaneously caused my issues and pointed at exactly the wrong place for the type of errors I had.

urmumsacow wrote:
I'd love to spend the money on 2 good RAID cards (one as a backup), but I have other things in my life that need it first.

It's sad; I used to think that with people's ever-expanding storage needs, consumer hardware RAID 5 cards would become commodities, but since 1.5+ TB drives have become so cheap and most people's data fits onto just one of them fine... I doubt it will happen =/


I agree. I always thought that if the HDD manufacturers also had an interest in the discrete controller market, consumer RAID would become a thing of advertised necessity. But, sadly, you are correct. With the advent of 1.5TB+ drives, most people see that as enough storage space, and all they see is the amount of storage. Rare is the individual who sees that 1.5TB mechanical device for what it is: a mechanical device, bound at some point by the laws that govern engineering to fail.

At some future time, when I least need it to happen, a drive will fail. If all the data I have written to that drive is only on that drive, then I am doomed to data loss. If I implement a backup plan, then only the time to replace the drive and restore the data is lost. I don't like to waste time; I am impatient by nature. But if I am a belt-and-suspenders type of guy, and I am, and I have multiple drives with the same data written to two or more of them simultaneously, then I have lost only the time it takes to swap out the failed drive for a known-good one and for the XOR engine to recalculate parity and rebuild the array. In the case of the controller card going out, I still have the backup from which to restore after I replace the card. It's just a damn shame that the average consumer doesn't see this or isn't informed of this up front when buying a PC. If they were, I would wager that most folks would opt for a RAID 1 or RAID 5 or RAID-something solution that would protect their time and their data simultaneously.


 Post subject:
PostPosted: Thu Sep 10, 2009 8:19 am 
Smithfield

Joined: Sun Sep 05, 2004 9:01 am
Posts: 8091
I'm unclear on what happened with your RAID under Windows - did the Creative driver prevent the RAID from showing up? Or did it slow down a rebuild?

I remember hearing about hardware XOR calculation, but never hearing that the efficiency was orders of magnitude better than P4s and AMD's best from 4 years ago.

It really sucks that I didn't have the money to build this last June (god, time flies). I could have bought 2 Netcell cards with hardware XOR for ~$60. For the pair. Sure, they don't have DDR slots on them, but c'mon, that was still a bargain. Now I can't even find them for sale =/

BTW thanks for your time and input on this.


 Post subject:
PostPosted: Thu Sep 10, 2009 9:39 am 
Super Mario Banhammer

Joined: Fri Aug 25, 2006 11:20 am
Posts: 595
urmumsacow wrote:
I'm unclear on what happened with your RAID under Windows - did the Creative driver prevent the RAID from showing up? Or did it slow down a rebuild?


It actually did quite a lot of damage that had me hotfooting it about trying to correct one situation after another: BSODs, drives dropping out of the array, an undetectable array, the card unbootable, and when I did get it back up and running, first one, then another, then another drive reported errors through the ICH10R. I went through 3 degraded-array rebuilds, finally said the hell with it, pulled out everything but the video card, the two VelociRaptors, and the RAM, and started reconnecting things one at a time. When I got down to the SATA array through the ICH10R, it booted, detected, did everything it was supposed to do, and revealed no drive errors. Reinstalled the sound card: it worked. Reinstalled the driver, and the ICH10R immediately began seeing errors on drive 3, the last drive in the array. Went AHA!, uninstalled the sound card driver, and the SMART error went away.

Never, ever, ever have I had anything like that happen. Turns out that Microsoft had released an update patch the day before. Said patch had code in it that interacted with the PCIe bus when a Creative X-Fi card was installed on an X58 chipset board with a WHQL-certified driver. So the bottom line there was: bad driver from Creative + bad certification process from WHQL + faulty patch update from Microsoft + sensitive ICH10R chip = a whole nest of snakes to untangle. As a direct answer to the question, though: it did not prevent the array from showing up; it caused the RAID chip to think that the drives were throwing errors, when in fact the drives were throwing no such errors.

urmumsacow wrote:
I remember hearing about hardware XOR calculation, but never hearing that the efficiency was orders of magnitude better than P4s and AMD's best from 4 years ago.

It's not just that the XOR chip is more efficient; it's that a discrete hardware RAID controller has onboard cache memory (256MB in the case of the Areca 1210, upgradable to 2GB) that allows parity calculations to happen, be processed, and be sent to the appropriate platter and head independently of the other calculations happening on the CPU, and all of that read/write I/O processing is handled with a single command from the CPU. Main memory dumps to the memory modules on the controller card, and the controller card takes it from there, allowing other applications and services to continue as if the data they need were already written. Add to that the ability of the XOR chips to initiate pre-fetch routines (read-ahead caching or write-back caching) and the order-of-magnitude difference in rebuilding an array becomes very apparent. The ICH9 and ICH10R chips still rely on the CPU to do something that is best left to a dedicated XOR engine.
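
And if the write-back caching part sounds abstract, here is a toy model of the idea in Python (nothing like the card's real firmware; the capacity figure is just the 256MB cache divided into 64KB blocks):

Code:
# Toy write-back cache: the host gets its acknowledgement as soon as a block
# lands in the controller's cache; the slow disk write (and the parity work)
# happens later, in the background.
from collections import deque

def slow_disk_write(lba, block):
    pass  # stand-in for the actual SATA write plus parity update

class WriteBackCache:
    def __init__(self, capacity=4096):      # ~256MB worth of 64KB blocks
        self.capacity = capacity
        self.pending = deque()

    def write(self, lba, block):
        """Host-visible write: returns as soon as the block is cached."""
        if len(self.pending) >= self.capacity:
            self.flush_one()                 # cache full: pay the disk cost now
        self.pending.append((lba, block))
        return "ack"

    def flush_one(self):
        """Background work: commit one cached block to the platters."""
        lba, block = self.pending.popleft()
        slow_disk_write(lba, block)

cache = WriteBackCache()
print(cache.write(0, b"\x00" * 65536))       # "ack" before the disk is touched

Those cached-but-not-yet-written blocks are also exactly why the battery backup module I mentioned earlier matters.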


urmumsacow wrote:
It really sucks that I didn't have the money to build this last June (god, time flies). I could have bought 2 Netcell cards with hardware XOR for ~$60. For the pair. Sure, they don't have DDR slots on them, but c'mon, that was still a bargain. Now I can't even find them for sale =/


I feel your pain. Wasn't the Netcell Revolution the controller card that was used in the MPC Dream Machine '05, the one where they did everything dual? Dual-core processors on a dual-processor workstation board with 8GB of DDR2 (new at the time, I think), a Netcell Revolution controller card with 5 Hitachi Deskstar :wink: :wink: nudge nudge say no more 7K500s in RAID 3, running under Windows XP 32-bit with two nVidia 7800 GTXs in SLI? I was warming up to my first Dream Machine build of my own back then and wanted one of those cards so bad I could taste it.

urmumsacow wrote:
BTW thanks for your time and input on this.

Not a problem. I'm a RAID aficionado, if you hadn't already guessed, and I consider this type of conversation fun.


 Post subject:
PostPosted: Thu Sep 10, 2009 10:49 am 
Smithfield

Joined: Sun Sep 05, 2004 9:01 am
Posts: 8091
Hang on though: if a CPU and its gigabytes of RAM aren't up to anything else and 95%+ of the CPU's resources are free... I just can't see it being at that big of a disadvantage unless CPUs really, REALLY suck at XOR. I mean, sure, if you're talking about video cards and discrete RAM vs. system RAM, where latency differences in the ms/ns range make a big difference.

But with RAID writing/rebuilding you shouldn't need more than 300MB/sec (maybe 600 if you count both directions from CPU to controller), and you have 1000MB/sec between the northbridge/southbridge/CPU/memory controller - I would think that for the lowly task of rebuilding an array, bandwidth wouldn't be a problem. Latency... eh, I'd like to see the full explanation as to why that would cripple things so badly.
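
Just to put rough numbers on my own hand-waving, here's a quick back-of-the-envelope in Python, using the 500GB drives mentioned earlier as the example (the throughput figures are guesses for illustration, not measurements from anyone's setup):

Code:
# Rough floor on rebuild time for one failed disk in a RAID 5:
# every byte of the replacement drive gets recomputed by XORing the survivors,
# so the best case is limited by sustained drive throughput, not bus bandwidth.
drive_gb = 500  # capacity of the disk being rebuilt

for label, mb_per_s in [("streaming at ~80 MB/s", 80),
                        ("throttled background rebuild at ~8 MB/s", 8)]:
    hours = drive_gb * 1000 / mb_per_s / 3600
    print(f"{label}: roughly {hours:.1f} hours")

So on paper bandwidth shouldn't be the bottleneck; it looks more like a question of whether the rebuild gets to run flat out or sits throttled behind everything else.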

I think I DO remember MPC using it at some point, or at least giving it a nice big review and hyping it. I'll check the archives when I get home. I was really excited when I saw a company making a hardware RAID solution oriented towards consumers and not corporations ($$$). I figured not only would a Netcell fit my needs just fine, but in time it should attract competition and depreciate. Sadly I was only right about the second point, and supplies have dried up completely =/


 Post subject:
PostPosted: Thu Sep 10, 2009 12:04 pm 
Super Mario Banhammer

Joined: Fri Aug 25, 2006 11:20 am
Posts: 595
urmumsacow wrote:
Hang on though: if a CPU and its gigabytes of RAM aren't up to anything else and 95%+ of the CPU's resources are free... I just can't see it being at that big of a disadvantage unless CPUs really, REALLY suck at XOR. I mean, sure, if you're talking about video cards and discrete RAM vs. system RAM, where latency differences in the ms/ns range make a big difference.

But with RAID writing/rebuilding you shouldn't need more than 300MB/sec (maybe 600 if you count both directions from CPU to controller), and you have 1000MB/sec between the northbridge/southbridge/CPU/memory controller - I would think that for the lowly task of rebuilding an array, bandwidth wouldn't be a problem. Latency... eh, I'd like to see the full explanation as to why that would cripple things so badly.

I think I DO remember MPC using it at some point, or at least giving it a nice big review and hyping it. I'll check the archives when I get home. I was really excited when I saw a company making a hardware RAID solution oriented towards consumers and not corporations ($$$). I figured not only would a Netcell fit my needs just fine, but in time it should attract competition and depreciate. Sadly I was only right about the second point, and supplies have dried up completely =/


OK, there is one possible reason I left out, and it is a BIG hit on resources. The Areca card allows me to rebuild the array through its BIOS. The ICH10R chip had to complete rebuilding the array after the OS loaded. So the Areca rebuild probably went so much faster because it was doing it all in the foreground, whereas under Vista the task was limited to the background only, and although I could change the priority, the ICH10R chip would only use one thread (of the 8 available on a Core i7 920) and it was limited to a maximum of 20% in the background. I didn't actually do anything with the PC while it was rebuilding under Vista, and unnecessary processes were killed, but System Idle was still the most expensive CPU operation during that time. Went to bed at 1:00 AM and left the PC rebuilding, with no network access at all during that time period (unplugged both of the connectors from the back of the PC and disabled both NICs); got up at 6:30 AM and the ICH10R had yet to complete its rebuild (35%). Went to work, got home at 6:30 PM, still not completed (85%). At 9:00 PM that night or thereabouts the ICH10R completed the rebuild. ~18 hours.

On the Areca card I replaced an older drive with a newer drive and rebuilt the array. Started the rebuild process at 6:30 PM on a Tuesday night, went to my bowling league with my wife, returned home at 10:45, walked the dog, and when I got back in from walking the dog the Areca array had finished its rebuild. ~4 hours. Of course, the ONLY thing happening with the PC at that point was that the regular BIOS had been loaded and the Areca array controller had loaded its BIOS and was running the calculations and rebuilding the array; Windows had not even loaded. So one way to look at it is that that night it took me 4+ hours to boot my PC.

Think of it this way: the only actual processing happening during the Areca rebuild was that the drives were spinning, the drive read/write heads were moving, and the XOR chip on the controller card was rebuilding the array. The motherboard was not involved at all, main memory was not involved, and no other buses were involved; just the card, the cables, and the drives, all operating below the operating-system level. With the exception of the status bar being updated through the video card, there was no other activity.

And yes, CPUs generally suck at XOR operations.


 Post subject:
PostPosted: Thu Sep 10, 2009 12:25 pm 
Willamette

Joined: Fri Jul 06, 2007 9:29 am
Posts: 1447
I'm just pulling this out of my hat. But essentially:

RAID 5 involves 2 writes at minimum. It writes to a stripe of data on the hard drives (a data block and a parity block). It's not writing everything at once; it's writing in chunks the size of that stripe. If there's data already in that stripe, it has to recalculate the parity for the block (so more back and forth).

So at its simplest:
Hardware RAID 5: Write command -> RAID controller -> hard drive data & parity.
Software RAID 5: Write command -> software RAID -> CPU to get parity -> software RAID -> hard drive data & parity.

Logically speaking, you'd think the latency might not be as big a deal with large RAM + a high-GHz CPU. However, you're bringing in all sorts of unknowns with a software solution (you are never giving full priority to something in a multitasking OS, versus a hardware solution).
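
To make the "recalculate the parity" step concrete, the math for overwriting a single block in an existing stripe is just XOR (a minimal Python sketch; the byte values are made up):

Code:
# RAID 5 small write ("read-modify-write"): read the old data block and the
# old parity block, then new_parity = old_parity XOR old_data XOR new_data.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

old_data   = bytes([0x11] * 16)  # what's on the target disk right now
old_parity = bytes([0xAB] * 16)  # current parity block for that stripe
new_data   = bytes([0x22] * 16)  # what the host wants to write

new_parity = xor(xor(old_parity, old_data), new_data)

# Net cost: two reads plus two writes for one logical write --
# the classic RAID 5 small-write penalty.
print(new_parity.hex())

Whether that XOR runs on a dedicated chip or on one of the host's cores is the whole hardware-vs-software argument above.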


 Post subject: Re: Cross platform RAID - feasible?
PostPosted: Fri Sep 11, 2009 6:22 pm 
SON OF A GUN

Joined: Mon Nov 01, 2004 5:41 am
Posts: 11605
urmumsacow wrote:
It sucks that this is the place to put this. I think this thread deserves attention from the HH, Linux, and Windows forums, but whatever.
Didn't like my response?

urmumsacow wrote:
Out of curiosity, what filesystem are you using on the (I assume) RAID 5? NTFS? How good was Linux support for NTFS functionality? No corrupted data whatsoever?
No corrupted data? That is kinda silly IMO. Corrupt data happens. RAID won't protect you from that.

urmumsacow wrote:
I understand the difference between hardware functionality and a driver/software-driven one. As I said, I definitely won't be buying a $100+ card for hardware functionality.
I am not sure you can NOT buy the card and do it properly.

Like I said in Alt.OS, if you want cross-platform, software RAID is out. You HAVE to do it in hardware, which means buying either a motherboard or an expansion card that supports it, which means spending money you don't want to spend.

Software RAID will work great in a single-OS environment. You can still access it from computers running any OS (as the host system handles that access), but you can't do software RAID and dual boot and expect both OS installs to see the same RAID array.


 Post subject: Re: Cross platform RAID - feasible?
PostPosted: Wed Sep 16, 2009 1:22 pm 
Monkey Federation (Top 10)*

Joined: Sun May 22, 2005 8:28 am
Posts: 3673
Location: The Blue Nowhere
CrashTECH wrote:
Like I said in Alt.OS, if you want cross-platform, software RAID is out. You HAVE to do it in hardware, which means buying either a motherboard or an expansion card that supports it, which means spending money you don't want to spend.

Software RAID will work great in a single-OS environment. You can still access it from computers running any OS (as the host system handles that access), but you can't do software RAID and dual boot and expect both OS installs to see the same RAID array.

This.


If you want an array that shows up on the same machine across operating systems, you are going to have to go with hardware RAID. Now, that being said, why not repurpose an old machine, put the array on that, toss it in the basement, and just share it out over the network? This way it would not matter what the file system is, nor would it matter what OS is running. You don't have to use FTP for that; just use Windows shared folders or Samba. Both can be set up in a matter of minutes and tuned as specifically as you like.


 Post subject: Re: Cross platform RAID - feasible?
PostPosted: Wed Sep 16, 2009 5:19 pm 
Smithfield

Joined: Sun Sep 05, 2004 9:01 am
Posts: 8091
CrashTECH wrote:
No corrupted data? That is kinda silly IMO. Corrupt data happens. RAID won't protect you from that.

Corrupt data related to cross-platform RAID issues, not corrupt data, period.

Lodis4 wrote:
If you want an array that shows up on the same machine across operating systems, you are going to have to go with hardware RAID. Now, that being said, why not repurpose an old machine, put the array on that, toss it in the basement, and just share it out over the network? This way it would not matter what the file system is, nor would it matter what OS is running. You don't have to use FTP for that; just use Windows shared folders or Samba. Both can be set up in a matter of minutes and tuned as specifically as you like.

Because that solution sucks ass and isn't what I described above?

Oh, and BTW, motherboard RAID, the kind I've been talking about, is NOT hardware RAID.


 Post subject:
PostPosted: Thu Sep 17, 2009 4:30 am 
Monkey Federation (Top 10)*

Joined: Sun May 22, 2005 8:28 am
Posts: 3673
Location: The Blue Nowhere
In that case, what you have effectively stated is that you are not willing to do the ONLY thing that will get you the functionality you are looking for.


Not that I care, but is there some reason this has to be local to the machine you are working on? I understand the arguments both ways (local and remote), but reading this thread, you seem to have some knowledge of computer systems beyond just point-and-click, yet you are not willing to make a compromise.

It comes down to money and features; one is at one side of the scale and the other at the opposite. More money, more features, less money... well, you know the rest.

