What is a storage system for, and is it reliable at all?
I asked about clusters in a parallel question here and decided to split this part out on its own.
The gist of what I'm questioning: the reliability of storage systems.
What I mean by a storage system: a box of some kind that you insert disks into, with a network interface. The interface of choice these days is optical, right? That way I can, roughly speaking, store files in one place and access them from another. Basically, like a file server.
So what are the advantages over a file server? You can do RAID there too. You can put optics in there too. I don't get it.
And the important part... reliability. I've had the honor of dealing with support from various vendors and suppliers. How does it usually go? Something breaks on Monday at the height of the working day. Everyone is chewing out the IT people because work has ground to a halt. You're on the phone with support. Support drags its feet and after a few hours (that's the best case; often it's days, I'm not exaggerating) comes back with "we don't have that riser in stock, we have to order it from the USA and wait a month." And that's it, everything's sunk.
If a file server fails, you can just move the disks over to another one. Well, that's simplifying things... And if a storage system fails? Often you can't do that - the on-disk format may well be some cunning proprietary thing. So you go hunting for a server with the same disk interface, stick the disks in, and pour the backups back on?
In short, colleagues... I don't get it. Explain it to me, will you?
A storage system, like many other things, is one of those cases where... if you're even asking "why would I need a storage system," then you definitely don't need one :-)
The benefit of a storage system is that it is a multiply redundant system. It has two to N controllers, each disk shelf is connected by at least two paths, each disk is connected by two paths (SAS disks, of course), and the disks use mechanisms like T10-PI to guarantee data integrity. The disks run patched firmware whose logic is: "better to fail a disk on a false alarm than to let a real failure slip through and lose data."
There are also all sorts of functional goodies like instant snapshots and replication mechanisms (two geographically separated volumes with exactly the same content, kept in sync almost in real time). A storage system can also do tiered storage, where the hottest data is cached on SSDs or in RAM.
But you need to understand that the storage system itself and the consumables for it will cost as much as a wing off an F-35, and a storage system should be bought ONLY together with the appropriate service contracts. Without a service contract any failure will cost you a pretty penny, and the repair time can be measured in months. On the other hand, storage systems are quite reliable and failures are not that frequent. On the third hand, vendors love charging money for thin air. IBM, for example, has this practice: if your service contract has expired and you didn't renew it, and then you had a failure and decided to buy the contract back, the cost of a one-year contract gets added on top as a penalty. That is, if after N years you decide to buy a service contract for 2 years, you will in fact pay as if for 3 - you are fined that "one conditional year."
A storage system is needed only when you have business-critical processes whose downtime costs you so much that those losses exceed the cost of the storage system and its service contracts. In all other cases I would recommend self-built storage, with the operational processes organized so as to minimize the risks and consequences of possible failures.
For pure file storage, for example, Windows Storage Spaces works well in a configuration with two servers and one dual-controller DAS enclosure connected to both servers. It is much cheaper than a storage system, and such a configuration can do quite a lot: there is no single point of failure, it can do RAID across the disks, it can do tiered storage after a fashion, and it can take snapshots via VSS on Windows.
On FreeBSD, High Availability support in ctld is also being actively worked on right now. I expect it will land in 11.0, and then it will be possible to build fault-tolerant iSCSI storage on FreeBSD + ZVOL + ctld - an excellent option for storing virtual machines. On top of that, ZFS can take snapshots too, so if Veeam learned to work with them, it would be a killer combination... :-)
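To make the ZFS side of that a bit more concrete, here is a minimal sketch of the "snapshots plus replication with OS tools" part. The pool name, ZVOL name, and replica host are my own invented examples, and it assumes root access, a working zfs, and ssh keys to the second machine; ctld would then export the ZVOL as an iSCSI LUN.

```python
#!/usr/bin/env python3
"""Sketch: create a ZVOL (which ctld would export as an iSCSI LUN),
snapshot it, and replicate the snapshot to a second host with plain
zfs send/receive. Names and sizes are invented for the example."""
import subprocess

POOL = "tank"                  # hypothetical pool name
ZVOL = f"{POOL}/vm-disk0"      # hypothetical ZVOL that ctld would export
REPLICA = "backup-host"        # hypothetical second box reachable over ssh


def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# 1. Create a 20 GiB ZVOL; it shows up as /dev/zvol/tank/vm-disk0 for ctld.
run(["zfs", "create", "-V", "20G", ZVOL])

# 2. Take an instant snapshot (the thing Veeam would ideally hook into).
snap = f"{ZVOL}@replica-1"
run(["zfs", "snapshot", snap])

# 3. Ship the snapshot to the second machine using nothing but OS tools.
send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(["ssh", REPLICA, "zfs", "receive", "-F", f"{POOL}/vm-disk0-copy"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```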
"What are the advantages over a file server?" That it is not a file server.
1. A support contract with a decent response time (8x5 at a minimum).
2. Redundancy. If you have only one storage system and nowhere else to deploy the data, you don't need a storage system - start by building a proper process first.
The advantage is better utilization of resources. It's like virtualization: there is a certain pool of resources (CPU / memory / disk space). When only one process or system has access to a resource, resources are used less efficiently than when the resource is shared by several processes. In both cases you must keep a reserve, but in the second case it is a shared reserve.
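A toy sketch of that shared-reserve point, with numbers invented purely for illustration:

```python
# Toy illustration of the shared-reserve argument. All numbers are invented;
# the only point is that a pool sized for the combined peak is smaller than
# the sum of per-system reserves, as long as the peaks do not coincide.

# Peak demand (say, IOPS) of each system if it gets its own dedicated storage.
peaks = {"database": 800, "mail": 300, "file server": 400, "build farm": 500}

dedicated = sum(peaks.values())        # every system sized for its own peak

# On a shared pool the peaks rarely line up; assume the observed combined
# peak is about 60% of the sum (again, a made-up figure for illustration).
pooled = 0.6 * dedicated

print(f"dedicated provisioning: {dedicated} IOPS")
print(f"shared pool sized for the combined peak: {pooled:.0f} IOPS")
```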
As a bonus, a good storage system can do a lot of nice extra things.
Disclaimer 1: I deliberately didn't read the other comments; I suspect they say something quite different from what I'm about to say.
Disclaimer 2: I have dealt, and still deal, with the most diverse storage systems, including high-end solutions.
Now the gist:
Once upon a time (when sugar was sweeter and water was wetter), mankind faced an intractable problem: you cannot get more than 100-200 IOPS out of a single hard drive. The only way to get a large number of IOPS was to increase the number of drives. Since it makes no sense to shove dozens or even hundreds of disks into one server, someone came up with the idea of putting them into a separate box and giving individual servers access to it, so that each server could potentially get thousands of IOPS. There was simply no other logical solution at the time. Then the concept of big boxes full of disks got overgrown with marketing bullshit (functionality, support, fault tolerance, etc.). Now, with the advent of SSDs, all of this has lost its meaning, but the corporations making very large profits on storage systems are not ready to admit it.
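A quick back-of-the-envelope on that 100-200 IOPS figure (the target workload below is my own example, not a number from the thread):

```python
# How many spindles does a given random-IOPS target take, at the
# 100-200 IOPS per HDD quoted in the answer above?
import math

target_iops = 20_000            # hypothetical workload requirement

for per_disk in (100, 200):     # the per-HDD range quoted in the answer
    disks = math.ceil(target_iops / per_disk)
    print(f"at {per_disk} IOPS per disk you need about {disks} HDDs")

# A single SSD delivers tens of thousands of random IOPS on its own, which is
# why the "big box full of spindles" justification has weakened.
```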
IMHO:
The only real plus of a storage system is the enclosures / expanders
- I can put several controllers / expanders into any server and get the same redundancy
- I can also create different arrays on them
- I can also use ZFS
- I can set up replication using OS tools
- not sure about tiering
And all that without paying hundreds of thousands or millions of rubles for support / repairs / upgrades, because one fine day the storage system will be discontinued, and some time later the warehouses will run out of parts for it - and then it's a new storage system, new contracts, new expenses.
A self-built system gives much more freedom of choice: in controllers (and backward compatibility between older and newer models seems to be in place), expanders, motherboards / CPUs / memory / PSUs, and, yes, the drives themselves.
And if problems do come up, it will be much easier to deal with them yourself.
P.S. My dream is a "mobile disk shelf" at home, possibly in a home-made case with an expander bought from the "vendors".