Yes, the Marvell is a proper RAID chip/controller that is used on separate controller cards. While hardware RAID generally tends to be more reliable and perform better than comparable software RAID, software RAID (like Intel's ICH) isn't exactly slow in RAID 0 or 1, where no parity is involved. And the CPU overhead for RAID isn't a huge performance drain, even when the CPU has to calculate RAID 5 parity; today's CPUs are far more powerful than the RAID chip will ever be, but software RAID has to play by the OS's rules. That can cause scheduling conflicts and, in theory, bit corruption.
And since it's software based, the CPU has to calculate the parity across all the disks in a RAID 5 setup; the more disks you have, the more it has to "figure out" before sending the data to the drives. A dedicated RAID card tends to work better: all it has to worry about is which drive to send the data to, and for parity it doesn't have to fight or wait for resources from the OS to start the calculations, since it all happens at the hardware level.
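To make the parity part concrete, here's a toy sketch (purely illustrative, not how any particular controller or driver actually implements it): RAID 5 parity is just a byte-wise XOR across the data disks, and a lost disk is recovered by XOR-ing the survivors with the parity block.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR across equal-length blocks (RAID 5 style parity)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data "disks", one stripe each
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])

# "Lose" d1, then rebuild it from the surviving disks plus parity
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1
```

Every extra disk adds another term to that XOR for every write, which is the "more to figure out" cost that the CPU eats in software RAID but a dedicated card handles on its own silicon.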
But this doesn't account for array failures if, say, power is lost. RAID levels with parity are notorious for dropping out after power loss during write operations, and then you have to spend hours waiting for the array to resilver or be rebuilt. Inexpensive controller cards tend not to have any dedicated non-volatile RAM to mitigate this issue; you have to spend some money for that feature. With it, if power is lost during a write, you turn the machine back on, what was stored in the buffer is still good, and the process continues as though the power loss never happened. There might be some need to resilver, but the array doesn't have to be completely rebuilt bit by bit.
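Here's a tiny simulation of why that power loss is so dangerous (a hypothetical sketch with made-up names, reusing the XOR idea): if power dies after the new data block hits the disk but before the matching parity is rewritten, a later rebuild silently "recovers" garbage.

```python
from functools import reduce

def parity(blocks):
    """Byte-wise XOR parity across equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# A consistent 3-disk stripe: two data blocks plus their parity
d0, d1 = b"old0", b"old1"
p = parity([d0, d1])

# Update d0, but "power is lost" before the parity write lands
d0 = b"new0"            # the data block reaches the platter...
# p = parity([d0, d1])  # ...but this parity update never happens

# Later d1 fails; reconstructing it from d0 and the stale parity is wrong
rebuilt_d1 = parity([d0, p])
print(rebuilt_d1 == b"old1")  # False: the array rebuilt bad data
```

A battery- or flash-backed write buffer closes that gap by replaying the missing parity write on power-up, which is exactly the feature the cheap cards skip.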
The biggest issue with standard RAID, especially with parity, is the write hole, which can be caused by out-of-order caching and/or power loss during parity writes (though it's not limited to RAID levels with parity). RAID with some sort of non-volatile RAM buffer (onboard or a separate SSD) and/or a backup power supply limits write-hole issues and/or the performance loss.

ZFS software "raid"
A great software-based "RAID-like" system is the non-standard ZFS, which is actually a file system and a volume/disk manager in one, so it controls both OS-level operations and disk management. It's a very powerful and expansive file system that doesn't require specialized hardware or software to work (although such hardware exists, built by Sun Microsystems, which is now part of Oracle... that huge company behind Java and enterprise-level server solutions). ZFS is not supported under Windows, but it is under BSD and Solaris-based server OSes. I only bring it up because it's software based yet more capable than hardware-based, standardized RAID.
For a software-based "RAID 5" or "RAID 6" setup using parity, it's not as fast as a proper RAID card, but it doesn't suffer the same horrible write-speed losses that ICH does, because the way ZFS handles information from the OS level down to the drives is much more streamlined (especially if you have their specialized and expensive hardware to go with it). It also gains performance as you add more disk volumes to the pool, just like RAID setups on hardware controllers.
And because it's software based, you can build "RAID" pools across various SATA controllers instead of being limited to whatever a single RAID controller can support. This means you could combine the ports of the ICH with your Marvell controller and build an even bigger array. And since Oracle suggests using non-RAID controller cards with ZFS, that cuts down your overhead costs. But Oracle does have its own specialized controllers/hardware for ZFS for large enterprise solutions.
It also offers far more data integrity protection, using checksums or hash values during every write operation; it may seem like that would limit performance/speed, but the impact is actually very small because ZFS handles information differently and more efficiently than standardized RAID. You do need high-IOPS drives to really benefit from this, so 7200 RPM, 10k, and 15k disks are suggested. Not only that, but it supports even higher parity redundancy: up to 3 parity drives instead of RAID 6's 2, which means you can create even bigger pools in one large array while keeping a sufficient parity-to-disk ratio (one parity disk for every 7 disks is suggested).
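To sketch the checksum-on-write idea (ZFS actually stores checksums in parent block pointers up the tree; this toy version, with made-up names, just keeps them alongside each block): corrupted data is caught on read instead of being handed back to the application.

```python
import hashlib

store = {}  # block_id -> (data, checksum): a toy "disk"

def write_block(block_id, data):
    """Store the block together with a hash of its contents."""
    store[block_id] = (data, hashlib.sha256(data).hexdigest())

def read_block(block_id):
    """Re-verify the hash on every read; refuse to return bad data."""
    data, checksum = store[block_id]
    if hashlib.sha256(data).hexdigest() != checksum:
        raise IOError(f"checksum mismatch on block {block_id}")
    return data

write_block(0, b"important data")
assert read_block(0) == b"important data"

# Simulate silent bit rot on the "disk": flip a byte behind the FS's back
data, checksum = store[0]
store[0] = (b"importent data", checksum)
try:
    read_block(0)
except IOError:
    print("corruption detected")  # a plain RAID read would have returned it
```

In a real pool with redundancy, a failed check triggers a repair from a good copy or from parity; the point here is just that every read is verified, which standard RAID doesn't do.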
To sum this up: software-based Intel ICH sucks for RAID 5 but works well enough for RAID 0 and 1. A RAID controller card tends to work better for RAID 5, and since it's supposed to run on the PCIe bus with more bandwidth, it isn't limited to Intel's total 6Gb/s allocation set by the chipset (which is for the entire SATA disk pool, not per drive).
If what you say is true and the new Marvell SE92xx controllers used on Intel motherboards have fixed the PCIe problems, that's fantastic! I'd say give it a try: test Intel's ICH vs. the Marvell in a RAID 0 config and report back.