Questions about NVMe SSDs for 1C
Good afternoon!
Colleagues, please share your experience and knowledge of using NVMe SSDs for 1C.
1. How do you make RAID 1 on NVMe SSDs in ESXi?
Thoughts:
a) It seems there is no point in using RAID controllers in this case: with NVMe we are precisely trying to get rid of the intermediaries and cabling that limit speed, moving to PCIe with direct access to the CPU.
b) So the only option left is to pass two NVMe SSDs through to the VM and build a software RAID inside it? (On platforms that support PCI passthrough.)
c) But a simple RAID controller would still be needed, if only for a RAID 1 holding ESXi itself and a VM without any heavy disk load. Without a RAID controller there is no way to get RAID 1 for ESXi itself, right?
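For point 1b, the in-VM software RAID can be sketched with mdadm on a Linux guest. The device names /dev/nvme0n1 and /dev/nvme1n1 are assumptions; how the passed-through drives enumerate depends on the guest, so check with `lsblk` first:

```shell
# Sketch: RAID 1 over two passed-through NVMe drives inside a Linux VM.
# Device names below are assumed -- verify with `lsblk` before running.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/nvme0n1 /dev/nvme1n1
mkfs.ext4 /dev/md0                                # any filesystem will do
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # persist across reboots
mdadm --detail /dev/md0                           # watch initial sync state
```

This mirrors inside the guest only; the ESXi host datastore itself is not protected by it, which is the point raised in 1c.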
2. For NVMe SSDs, is RAID 1 for reliability the only level worth using, with the other RAID levels not economically justified?
Thoughts:
a) For productive work of a 1C server with many users, what is needed first is random read speed on a large number of small blocks, and second, random write speed on a large number of small blocks. Or is it the other way around? Under peak load, which one usually becomes the bottleneck first?
b) All RAID levels give an advantage only in linear read/write speed, which in our case is practically unnecessary. So RAID is needed only for reliability (a mirror) or to enlarge the array (though what stops you from making several RAID 1 pairs if a single large volume is not strictly required?)
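The read-vs-write question in 2a can be settled empirically by benchmarking random small-block reads and writes separately with fio. The file path, block size, and queue depth below are illustrative assumptions, not 1C-specific numbers:

```shell
# Random read test with small blocks and direct I/O (bypasses the page cache).
# The filename, 8k block size, and depth/job counts are placeholder choices.
fio --name=randread --filename=/path/to/testfile --size=4G \
    --rw=randread --bs=8k --iodepth=32 --numjobs=4 \
    --direct=1 --ioengine=libaio --runtime=60 --time_based --group_reporting
# Re-run with --rw=randwrite to compare the write side under the same load.
```

Comparing the two IOPS figures against the database's actual read/write ratio (visible in SQL Server or PostgreSQL I/O statistics) shows which side bottlenecks first.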
3. Since enterprise-grade NVMe SSDs are quite expensive, is there a way to save money while still keeping reliability?
For example, by somehow using slower but many times cheaper SSDs.
Plain RAID 1 is no longer an option then, because the slower SSD would become the bottleneck.
Is there a solution where the NVMe SSD is the primary drive and always delivers maximum performance (both in normal operation and under short-term peak loads), while the other SSDs receive the writes through a queue only during peaks? (The rest of the time they would stay in sync, since peak load occurs only about 5% of the disks' total operating time.)
If the NVMe SSD dies, the pending write queue is flushed from a buffer (a RAM disk of some kind would be acceptable) to the other SSD, and the system keeps working on the slower SSD until a new NVMe SSD is installed.
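The scheme described here resembles what lvmcache (dm-cache) in writeback mode already provides: a fast NVMe device absorbs writes and hot reads in front of a cheaper, slower SSD. A minimal sketch, assuming /dev/nvme0n1 is the fast drive and /dev/sdb the cheap SATA SSD (exact lvcreate syntax varies by LVM version, so treat this as an outline):

```shell
# Sketch: NVMe as a writeback cache in front of a cheaper SATA SSD (lvmcache).
pvcreate /dev/sdb /dev/nvme0n1
vgcreate vg_data /dev/sdb /dev/nvme0n1
lvcreate -n lv_data -l 100%PVS vg_data /dev/sdb           # slow backing LV
lvcreate --type cache --cachemode writeback \
         -n lv_cache -l 100%PVS vg_data/lv_data /dev/nvme0n1
# Caveat: in writeback mode, losing the NVMe cache before dirty blocks are
# flushed loses data -- the opposite of the reliability goal stated above.
```

Note the caveat: the question assumes the fast drive's death is survivable, but with a plain writeback cache it is not, so the RAM-buffer failover described above would still have to be built separately.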
---
Not described very technically, but something along those lines. That way only one expensive disk wears out instead of two, and the second sits on the shelf for a quick replacement. Let the cheaper SSDs wear out and be replaced; they are many times cheaper anyway.
Or is there no point to this, because enterprise SSD TBW ratings are so enormous that over one such drive's lifetime we would go through a pile of cheaper SSDs and the total cost would end up the same?
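The economics of this point can be checked with simple cost-per-terabyte-written arithmetic. The prices and TBW ratings below are made-up placeholder figures, not quotes for real models:

```shell
# Cost per TB written = price / TBW rating (placeholder figures only).
awk 'BEGIN { printf "enterprise: $%.3f per TB written\n", 800 / 8000 }'
awk 'BEGIN { printf "consumer:   $%.3f per TB written\n", 150 / 600 }'
```

With these placeholder numbers the enterprise drive actually comes out cheaper per terabyte written, which is exactly the outcome the question suspects; the comparison only tips toward consumer drives when their price-to-TBW ratio beats the enterprise one.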
4. How do you hot-swap an NVMe SSD?
Intel has U.2; SuperMicro also offers ready-made solutions.
Are there adapters that allow hot-swapping without a performance penalty from the cables/adapter?
5. Will an NVMe SSD work in an old X9DRi-LN4F+ (https://www.supermicro.com/products/motherboard/xe...)?
Does anyone have experience with this?
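On question 5, the usual limitation with boards of that era is the lack of NVMe boot support in the BIOS; as a pure data drive, an NVMe card in a PCIe x4-or-wider slot generally just appears to the OS. Whether it is detected, and at what link speed, can be checked from a running Linux system (the PCI address in the second command is an assumption; take the real one from the first command's output):

```shell
lspci | grep -i 'non-volatile\|nvme'   # is the NVMe controller visible at all?
# Check negotiated PCIe link speed/width (address 03:00.0 is assumed):
lspci -vv -s 03:00.0 | grep -i 'LnkSta:'
ls /dev/nvme*                          # did the kernel create block devices?
```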
"Raid controllers make no sense to use in this case"
Why not?
"But a simple Raid controller will only need Raid 1 for ESXi itself"
Add more RAM; SQL loves it.
The disks are ordinary SATA SSDs. I replace them as they die, which for me will be in about 3 years, not sooner (it depends, of course, on the volume of "changing" data).
If it's for ESXi, I would do this: pass the disks directly to the virtual machine and assemble a software RAID there.
Or use KVM virtualization on Linux; again, a software RAID managed by the system itself.