Storage Area Network
Sergey, 2017-08-01 11:09:00

Which low-budget ("ghetto") storage should I choose for VHDX?

What we have now:
Servers:
Supermicro X10RDi, 2x 6-core Xeon, 64 GB RAM
Supermicro X10RDi, 2x 6-core Xeon (weaker CPUs), 64 GB RAM
Supermicro X7DVL-3, 2x 2-core Xeon, 16 GB RAM
All three run Windows Server 2012 R2 Standard.
There is also a pile of small servers for minor tasks, all of them feeble or even built on desktop hardware; we can mostly ignore them. The network is 1 Gbit everywhere, with HP managed switches.
The servers run DHCP, two domain controllers, DNS, and AD RMS, plus some auxiliary software; one has a hypervisor with a few virtual machines on it (Zabbix, Redmine). There is no heavy load on the servers; the hungriest is the file server, which encrypts and decrypts a lot of CAD files. The file server needs about 3 TB of space, with room for growth.
All of this has become hard to manage, it is difficult to add new services, and there is no isolation: it should not be the case that, say, PDF conversion software and a print server end up on a domain controller. I know for sure that virtualization has to be implemented :)
Now, the question.
What is needed:
A budget (as low-budget as possible) storage for virtual machines and for a single file share (right now the files are spread across all the servers). The options I see and understand:
1) Build an iSCSI SAN. In theory we do not need new switches, only a shelf and disks. The problems: one LUN should be attached to one server, so in practice we would have to make two LUNs for virtual machines plus one for files. If we lose the shelf (a controller or the power supply), there is no clear recovery plan. The budget definitely will not cover two shelves, and even one is expensive. On the plus side, I know for sure that it works and that a RAID10 LUN over gigabit will be enough in terms of speed (from experience and the current load); a sketch of the host-side commands is below.
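For reference, attaching such a LUN on the Windows host side is only a couple of commands with the built-in initiator; the portal address and IQN below are made up, so this is just a sketch:

# Point the built-in iSCSI initiator at the shelf (address is a placeholder)
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.50
# List the targets the shelf advertises
Get-IscsiTarget
# Connect persistently so the LUN comes back after a reboot (IQN is a placeholder)
Connect-IscsiTarget -NodeAddress "iqn.2000-01.com.example:shelf1-lun0" -IsPersistent $true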
2) Use a DAS shelf, attach it to one or two servers, and build the array with Storage Spaces. Minus: we have to buy HBAs and the shelf itself. The shelf is again a single point of failure without a clear plan for what to do if it dies, and cost-wise it is not much different from the iSCSI solution.
3) Buy the cheapest 2U server with a lot of drive bays (12+), build a file server on it with Storage Spaces, serve VHDX files to the virtual machine hosts from it over SMB3, and keep the file share in a VHDX as well (see the sketch below). From what I have tested (see the PS), the speed is enough for us. A Storage Spaces pool is easy to move to another server or machine if the server fails; the main thing is having enough SATA ports. We would have to buy the server and a license for it (and CALs?).
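Roughly what I have in mind for option 3; every name, path, and group below is a placeholder, so treat it as a sketch rather than a recipe:

# Collect all poolable disks into a single Storage Spaces pool
New-StoragePool -FriendlyName "VMPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
# Carve out a mirrored space from the pool
New-VirtualDisk -StoragePoolFriendlyName "VMPool" -FriendlyName "VMDisk" -ResiliencySettingName Mirror -UseMaximumSize
# ...then initialize, partition and format the new disk (say, as D:)
# Publish it over SMB3 for the Hyper-V hosts (the group name is a placeholder)
New-SmbShare -Name "VMStore" -Path "D:\VMStore" -FullAccess "DOMAIN\Hyper-V-Hosts"

The usual gotcha with Hyper-V over SMB is permissions: the computer accounts of the Hyper-V hosts need Full Control on both the share and NTFS.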
4) Roughly the same as the previous option, but we make hardware RAID LUNs and publish the volumes to the other servers over iSCSI from under Windows (sketch below). I do not see any obvious pluses; the minus is that the volumes will be harder to migrate.
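And the Windows-side iSCSI target from option 4 would look roughly like this; the paths, sizes, and the initiator IQN are made up:

# The iSCSI target role ships with Windows Server
Install-WindowsFeature FS-iSCSITarget-Server
# Back the LUN with a VHDX file (path and size are placeholders)
New-IscsiVirtualDisk -Path "D:\iSCSI\vm-lun1.vhdx" -SizeBytes 500GB
# Create a target and allow one host's initiator to see it (IQN is a placeholder)
New-IscsiServerTarget -TargetName "hv1-target" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hv1.example.local"
# Map the virtual disk to the target
Add-IscsiVirtualDiskTargetMapping -TargetName "hv1-target" -Path "D:\iSCSI\vm-lun1.vhdx"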
What would you choose? What other options are there? What have I missed? Maybe you know specific models of hardware?
PS I installed WS2012R2 on an ordinary desktop with 4 SATA HDDs in Storage Spaces and tested it: read/write speed is limited by the gigabit network, not by the disks (1 Gbit/s is roughly 110-120 MB/s of payload, which even a small mirror of ordinary HDDs can saturate on sequential I/O).


2 answers
athacker, 2017-08-01
@athacker

Consider hyperconverged solutions, where the same servers provide both the hypervisors and the storage. Open source: Ceph; paid: ScaleIO, ONTAP Select, StoreVirtual. ScaleIO can be deployed and run for free, but there will be no technical support.

Sergey, 2017-08-01
@NewAccount

I thought about that. But how do you see it with this number of servers?
Say I have 3 servers with 8 disks each. If we lose a server, we lose a third of the "array". Are there solutions that will survive that?
