I don't think it's possible to give a general answer to your question.
Firstly, some systems write checksums to the disks (the algorithms are vendor-specific) that let the controller judge the reliability of the data in a given block, somewhat like parity. Based on this information it can then decide what to do: recalculate the checksum or overwrite the data block.
In such cases, most of the time spent on array initialization goes to calculating these checksums.
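As a rough illustration only, here is a minimal Python sketch of how such per-block checksums could be used to choose between the two actions; the hash algorithm and all function names are assumptions, since real controllers use vendor-specific, undocumented schemes.

```python
import hashlib

def checksum(block: bytes) -> bytes:
    # Stand-in for a vendor-specific per-block checksum.
    return hashlib.sha1(block).digest()

def repair_block(block: bytes, stored_sum: bytes, mirror_block: bytes) -> tuple[bytes, bytes]:
    """Return (block to keep, checksum to store) for one suspect block."""
    if checksum(block) == stored_sum:
        # Block and checksum agree: nothing to do.
        return block, stored_sum
    if checksum(mirror_block) == stored_sum:
        # The mirror copy matches the stored checksum: overwrite the data block.
        return mirror_block, stored_sum
    # Neither copy matches: recalculate and store a new checksum for the block we have.
    return block, checksum(block)
```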
Secondly, LSI controllers, for example, pick one of the two drives to write to first. The choice appears to be random and does not depend on the slot number, the disk, or the OS.
The data is written to this disk, the write completes, and only after that is the data written to the second (mirror) disk. If a data corruption error occurs while writing to the first disk, the controller will not allow the write to the mirror disk.
If a hardware-level error occurs on the first disk, it is corrected if possible, then the write attempt is repeated. If the error on the first disk cannot be corrected, the array goes into the "degraded" state, usually followed by writing to the second disk.
If the "second" disk fails earlier, then in the algorithm above they are swapped.
If such documentation exists, study your hardware's RAID 1 logic; most likely it is the same for RAID 10.