
Data isn't data unless it's in (at least) two places (preferably far apart!)
Fully agree with the other "Old Timer" comments on here.
RAID should NEVER be considered a backup.
Skimping on the RIGHT hardware to save a few bucks is always a false economy once you weigh it against the cost of downtime and potential data loss.
El Cheapo "Fake RAID" cards should be avoided like the plague, with the possible exception of ICH southbridge onboard RAID 1 for boot drives ONLY in small systems. Disks created that way are generally highly portable to another box if the motherboard dies, and it's also handy for getting around the braindead ESX/XenServer installer decisions.
Battery backup on a RAID controller is nice, but even nicer are dual power supplies fed from A & B rails (fully independent power). If you're not in a data center, two separate UPSes will do at a pinch. It may be overkill for a home NAS serving up movies, but if your job is on the line then make a strong case for it. If you're overridden, get it in writing so that if things do go south you know you'll still be employed... ;-)
On the subject of backups, if you even half care about protecting your data you must have, at minimum, "Offline" backups in a fireproof safe, or preferably at a remote site (LVM with snapshots is great for this).
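For anyone who hasn't played with the LVM snapshot trick, here's a rough sketch of the idea in Python. The volume group, LV names, mount point and target path are all made up for illustration, so adjust to taste; it needs to run as root with the LVM userland tools installed and the /backup directory already in place.

    #!/usr/bin/env python3
    """Rough sketch of an LVM-snapshot-based offline backup.
    The volume group, LV, mount point and target path are placeholders;
    run as root with the LVM userland tools installed."""
    import os
    import subprocess
    from datetime import date

    VG, LV = "vg0", "data"            # hypothetical volume group / logical volume
    SNAP = LV + "_snap"
    MOUNT = "/mnt/snap"               # temporary mount point for the snapshot
    TARGET = "/backup/%s-%s.tar.gz" % (LV, date.today())  # ship this off-site afterwards

    def run(*cmd):
        # Raise immediately if any step fails so we never archive a half-made snapshot.
        subprocess.run(cmd, check=True)

    os.makedirs(MOUNT, exist_ok=True)

    # 1. Freeze a point-in-time copy of the LV (512M of copy-on-write space here).
    run("lvcreate", "--size", "512M", "--snapshot", "--name", SNAP, "/dev/%s/%s" % (VG, LV))
    try:
        # 2. Mount the snapshot read-only and archive its contents.
        run("mount", "-o", "ro", "/dev/%s/%s" % (VG, SNAP), MOUNT)
        try:
            run("tar", "-czf", TARGET, "-C", MOUNT, ".")
        finally:
            run("umount", MOUNT)
    finally:
        # 3. Drop the snapshot so its copy-on-write space doesn't fill up and invalidate it.
        run("lvremove", "-f", "/dev/%s/%s" % (VG, SNAP))

The snapshot gives you a consistent point-in-time image to archive without taking the filesystem offline; just remember the resulting tarball isn't a backup until a copy of it leaves the building.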
The classic error is to assume that just because you have two copies of data on the LAN you're golden. While unlikely, a decent fire or lightning strike will quickly show you the error of your ways.
Lightning's rare, but I've had to deal with the aftermath of a big strike 30ft from a rack of servers. Induced current travelled over the Cat5, the power lines and a whole host of other "Interesting" paths, and succeeded in destroying every hard disk at the site - both in the server rack and in every PC on the LAN, along with pretty much everything else IT related. The 20x disks in the SAN were actually turned into magnets! Yup, you could pick up screws with them...
All in all, about 30TB of data was blown away. Once new hardware was sourced, it took less than a day to get everything back up and running, and no luck was involved.
There's a reason it's called "Disaster Recovery PLANNING"... ;-)
Unfortunately, it's not until they've had their fingers burnt at least once that most people start to pay attention to this unglamorous part of keeping systems running. In my case it was cutting corners when building a big RAID6 array (many moons ago). Being young and inexperienced at the time, I overlooked the small matter that with 8 IDE disks on a 4-port controller, when the drive that fails is the Master you'll lose the Slave as well... Oops.
Glad this worked out for you, but consider yourself bloody lucky you dodged a bullet here. It's easy for everyone to lecture about a rookie mistake, but I'm still astonished how many "Pros" have no idea what it really takes to avoid data loss.
Hopefully this story and the subsequent comments will help at least one reader learn from the mistakes of others, but I'm sure there will also be more than one who scoffs now and gets bitten by something similar later.