Automatic, 2017-04-03 11:41:04

Affordable fault-tolerant storage: is it possible?

Hello comrades!
In short, the question is this: I need to store about 40 TB of data; 90% of it is video/photo files, the rest is documents on departmental file shares, user profile shares, or profile backups. High availability is a must. File access over SMB 2.0 and 3.0; block access over iSCSI may be needed in the future. Snapshots and deduplication. A cluster of 2-4 nodes, each with redundant power supplies and a single processor, with a 10 GbE interconnect. And most importantly, the price: brand-name storage vendors publish no open price lists, and the budget is a million rubles at most.
Is it realistically possible to build a fault-tolerant storage system from inexpensive components, such as consumer-grade but reliable SATA HDDs/SSDs chosen based on failure-rate data?
Without hardware RAID controllers, I want software-defined storage (SDS). Transparent failover of disks/nodes is desirable, along with automatic rebalancing. Support for SSD tiering/caching and deduplication. No old-school enterprise FC. No SAS drives.
So far I am considering a set of 3-4 SSG-5018D8-AR12L nodes, each with 4 x 400 GB SSDs and 12 x 3 TB HDDs, plus 1-2 10 GbE switches (Netgear/D-Link/TP-Link). As the operating system and management software: Windows Server 2016 with Storage Spaces Direct.
Another option, which looks rather rough, is Hyper-V in conjunction with ScaleIO or StarWind.
I would be happy to replace Storage Spaces Direct with Nutanix CE (lots of goodies, but there is a limit on disks :( — it can apparently be bypassed somehow, but I could not find out how on the Internet), or with Nexenta, ScaleIO, StarWind, or something else reasonably priced that I don't know about.
Perhaps a brand-name storage system from fashionable vendors would also work, but it must offer file access, since installing separate filers is an additional cost. It will be a plus if lightweight virtual machines can run on top of the hosts.
The most extreme option is a Dell PE 730XD stuffed full of disks, backed up by a dirt-cheap storage box assembled on the knee from a pile of disks.
Does it make sense to run a mirror/cluster of 2-4 Synology, QNAP, or similar units? Has anyone used this? The lack of redundant power supplies is very worrying. In my experience with the Promise ns6700, two units died.
Trendy metrics like IOPS and latency are not really relevant here; the only hard requirement is 24/7 availability.


5 answer(s)
athacker, 2017-04-03

Well, think for yourself what you would say to someone who says: "I want to assemble an industrial storage system, with all the trimmings (switches) and disks, on my knee, with a total budget of one million rubles (~$17,000)."
Here, dear friend, you can have either knee-built or industrial storage, especially given the wish list of tiering, deduplication, snapshots, and the rest.
1) SATA drives under heavy load will suffer from silent errors, i.e. data corruption that nobody notices until it is too late (for example, during an array rebuild). So the software must be able to detect these errors offline (like scrub in ZFS or ScaleIO's background device scanner).
2) Price lists for branded storage can be requested; for example, here: stss.ru/products.html
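The scrub idea mentioned in point 1 (as in ZFS or ScaleIO's background device scanner) boils down to storing a checksum next to every block and periodically re-reading and re-verifying it. A minimal illustrative sketch, not any real product's API (the in-memory `store` layout and function names are made up for the example):

```python
import hashlib

def write_block(store, block_id, data):
    # Store the data together with a checksum of its contents.
    store[block_id] = (data, hashlib.sha256(data).hexdigest())

def scrub(store):
    # Re-read every block and re-verify its checksum; silent
    # corruption shows up as a mismatch long before a rebuild.
    bad = []
    for block_id, (data, checksum) in store.items():
        if hashlib.sha256(data).hexdigest() != checksum:
            bad.append(block_id)
    return bad

store = {}
write_block(store, 0, b"video frame")
write_block(store, 1, b"user profile")
store[1] = (b"user prof1le", store[1][1])  # simulate a silent bit flip
print(scrub(store))  # prints [1]: the corrupted block is detected
```

The point is that without such a background pass, the bit flip above would surface only when the block is next read — possibly during a rebuild, when the redundant copy is already gone.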
3) If you are going to buy ScaleIO, do not expect reasonable prices there either. A few months ago I asked for a quote: $44k for 50 TB of raw capacity. Once you build an array across 4 nodes, you will get less than 20 TB of usable capacity. You can also run ScaleIO without paying, but then forget about support, and if you have a genuinely serious production environment, running without support is asking for trouble.
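For reference, the "less than 20 TB out of 50 TB raw" figure is consistent with two-copy mesh mirroring of the kind ScaleIO uses: every byte is stored twice, and roughly one node's worth of raw capacity is reserved as spare for rebuilds. A back-of-the-envelope check (the spare policy here is an assumption for illustration, not ScaleIO's exact formula):

```python
def usable_capacity(raw_tb, nodes, copies=2, spare_nodes=1):
    # Reserve one node's worth of raw capacity as rebuild spare,
    # then divide by the number of data copies (mesh mirroring).
    spare = raw_tb / nodes * spare_nodes
    return (raw_tb - spare) / copies

print(usable_capacity(50, 4))  # prints 18.75 -- indeed less than 20 TB
```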
4) Why on earth do you want deduplication? Given the kind of data you are going to store (documents/photos/videos), the savings from deduplication will be next to nothing, while the memory load and the potential for glitches will be considerable.

5tgb5tgb, 2017-04-03

Look at NetApp FAS with high-capacity NL-SAS disks, for example a NetApp FAS2554 with 4 TB disks.

pred8or, 2017-04-03

Without hardware RAID controllers, I want software-defined storage (SDS)

If you look up the definition of SDS on Wikipedia and read a few other sources, you will find that the term is the same marketing buzzword as everything else prefixed SD-*. In essence it means abstraction and virtualization of the available hardware storage. The same old SAN or NAS.
So decide what you really need, and then everything can be solved within a fairly modest budget, without resorting to EMC or, say, IBM.

Fiasco, 2017-04-04

Have you read this article?
Or buy an unlimited StarWind license for two nodes for ~500k rubles and build a Windows cluster on top of it; it can do everything you want, and it will probably fit within a million.
