Does it make sense to build a RAID from enterprise-class SSDs if performance is already sufficient?
We have several virtual machine hosts, each with a single Intel S3500 SSD (240 GB). Performance is more than enough, but I worry about data safety in the event of a sudden disk death. The obvious solution is to assemble a mirror (write performance won't drop much), but a quick Google search turns up the following pitfalls:
1) Some sources claim the RAID will have to be broken apart in order to update the drives' firmware.
2) If the same data is written to both SSDs, won't they wear out and die at the same time?
3) There is no hardware RAID controller and none is planned; will software RAID via Intel ESRT2 become an additional performance bottleneck?
The hosts are not required to have 100% uptime, and a couple of hours of downtime for recovery outside business hours is perfectly acceptable. Right now we protect against data loss with scheduled VM replication to a separate server with RAID 10 on hard drives. The only thing that bothers me is a possible USN rollback during recovery (the domain controllers are virtualized; there is no physical DC yet, but one is planned). Is it worth reinventing the wheel with SSD arrays in this situation?
So, in order:
1. All disks die eventually. Any of them. Just with varying degrees of probability.
1.2. Write performance of a mirror will equal the write performance of the slowest disk (minus a small synchronization overhead), so for a two-disk SSD mirror this is not critical; see the sketch after this list.
(With many disks in a stripe, and spinning drives rather than SSDs, yes, there can be a significant drop, especially if one of the disks is half-dead but still alive enough that the controller keeps it in the group instead of ejecting it.)
2. Building a two-disk mirror purely for the sake of data protection is not the only option (there are other RAID levels with different characteristics and areas of application).
3. A firmware update most often does not require breaking the array apart, provided the update itself goes without data loss.
4. Disks "die" for a dozen different reasons, from factory defects to the fact that the controllers distribute writes across memory cells differently, so the wear on the two disks will differ (pure math and statistics: with trillions of memory cells per disk, it is impossible for wear to be distributed identically on both).
5. Software RAID has its downsides, but if you disable the write caches that are usually enabled by default, there is nothing to be afraid of: data will still go straight to the disks.
6. In any case, RAID is needed on the hosts (assuming it is economically justified): how can you run production without it? As for the backup server, RAID 6 would be enough there; why you use RAID 10 for backups is unclear (a rough comparison is sketched below).
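On point 1.2, here is a minimal back-of-the-envelope sketch of the usual mirror throughput model: writes are bounded by the slowest member, reads can be spread across members. The MB/s figures are invented placeholders, not benchmarks of the S3500.

```python
# Rough model of RAID1 (mirror) throughput, for illustration only.
# The MB/s figures below are assumed placeholders, not measured values.

def raid1_write_mbps(member_write_mbps, sync_overhead=0.02):
    # Every write goes to all members, so the slowest one sets the pace,
    # minus a small overhead for keeping the mirror in sync.
    return min(member_write_mbps) * (1 - sync_overhead)

def raid1_read_mbps(member_read_mbps):
    # Reads can be balanced across members, so they roughly add up.
    return sum(member_read_mbps)

# Two hypothetical SSDs with slightly different write speeds.
writes = [450, 430]   # MB/s, assumed
reads = [500, 500]    # MB/s, assumed

print(f"mirror write ~{raid1_write_mbps(writes):.0f} MB/s")  # ~421 MB/s
print(f"mirror read  ~{raid1_read_mbps(reads):.0f} MB/s")    # ~1000 MB/s
```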
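And on point 6, a quick comparison under the usual assumptions: RAID 6 gives up two disks' worth of capacity but survives any two failures, while RAID 10 gives up half its capacity and is only guaranteed to survive one failure (a second failure in the same mirror pair is fatal). The disk count and size here are made-up examples, not your actual backup server.

```python
# Compare usable capacity and guaranteed fault tolerance of RAID 6 vs RAID 10.
# Disk count and size below are arbitrary example values.

def raid6(n_disks, disk_tb):
    # Dual parity: capacity of (n - 2) disks, any 2 disks may fail.
    return {"usable_tb": (n_disks - 2) * disk_tb, "guaranteed_failures": 2}

def raid10(n_disks, disk_tb):
    # Mirrored pairs striped together: half the raw capacity,
    # only 1 failure is guaranteed survivable (a second failure
    # is fatal if it lands in the same mirror pair).
    return {"usable_tb": n_disks * disk_tb / 2, "guaranteed_failures": 1}

n, size_tb = 6, 2  # e.g. six 2 TB drives (assumed)
print("RAID 6 :", raid6(n, size_tb))   # 8 TB usable, survives any 2 failures
print("RAID 10:", raid10(n, size_tb))  # 6 TB usable, guaranteed 1 failure
```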
I recommend deploying a clustered file system such as GlusterFS or Ceph across all hosts and storing the virtual machine images on it. You get a bunch of benefits right away: fault tolerance, simple and very fast migration, easy expansion of disk space, snapshots, and other goodies.