SBB (Storage Bridge Bay) - how to build the perfect NAS?
Briefly about the subject
Storage Bridge Bay (SBB) is a specification created by the Storage Bridge Bay Working Group. It describes the requirements for a disk enclosure to which various "controllers" (canisters) attach directly; most often these are ordinary servers in a form factor that is also defined by the specification.
Roughly speaking, a typical SBB enclosure consists of two servers interconnected by two 10 Gbps links and attached to a single disk shelf over a shared SAS bus, with the option of attaching additional disk shelves through external SAS ports.
Given
The most popular (and practically the only well-known*) hardware on the Russian market: a Supermicro 6036ST-6LR, populated with CPUs, disks and everything else.
* according to my observations
I would like
To get a fault-tolerant NAS without excessive data duplication (i.e. disks in RAID, no DRBD). I want to use it as NFS/CIFS storage and as an iSCSI target, and... that seems to be all. But it absolutely must be fault-tolerant.
Available thoughts
So far, only one: boot the OS from some external device (an onboard SATA port, for example), combine the shared disks into RAID arrays, build a VG on them with LVM, carve out LVs, and then activate or deactivate them on each node with Pacemaker; a rough sketch of such a configuration is below.
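A rough sketch of what that Pacemaker configuration could look like, in crm shell syntax (the VG name vg_sbb, the LV, the IQN and the IP address are made-up placeholders; it assumes the stock ocf:heartbeat resource agents are installed):

  # Hypothetical example: everything needed to serve one LV over iSCSI
  # lives in a single group that runs on one node at a time.
  crm configure primitive p_vg_sbb ocf:heartbeat:LVM \
      params volgrpname=vg_sbb exclusive=true \
      op monitor interval=30s
  crm configure primitive p_ip_iscsi ocf:heartbeat:IPaddr2 \
      params ip=192.168.10.50 cidr_netmask=24 \
      op monitor interval=10s
  crm configure primitive p_target ocf:heartbeat:iSCSITarget \
      params iqn=iqn.2013-01.local.sbb:tgt0 \
      op monitor interval=30s
  crm configure primitive p_lun0 ocf:heartbeat:iSCSILogicalUnit \
      params target_iqn=iqn.2013-01.local.sbb:tgt0 lun=0 \
             path=/dev/vg_sbb/lv_iscsi0 \
      op monitor interval=30s
  # The group moves as a unit (VG activation, floating IP, target, LUN)
  # when the active node fails.
  crm configure group g_iscsi p_vg_sbb p_ip_iscsi p_target p_lun0

Clients connect to the floating IP, so from their point of view there is always a single target that just pauses briefly during a failover.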
But there are some doubts:
1) It is not clear how the iSCSI initiator will behave if the target stops responding for a relatively long time (up to 10 seconds); see the timeout sketch after this list.
2) How much data can be lost during a failover?
3) Is iSCSI multipath possible with a scheme where the client connects to two iSCSI targets, each of which is in turn attached to the disks over SAS?
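For reference on the first point: on a Linux client with open-iscsi, that behaviour is controlled by a few timeouts in /etc/iscsi/iscsid.conf. A sketch of the relevant knobs (values are purely illustrative):

  # /etc/iscsi/iscsid.conf (open-iscsi)
  # How long a dead session is kept before outstanding I/O is failed
  # up to the SCSI layer.
  node.session.timeo.replacement_timeout = 120
  # NOP-Out "pings" used to detect a dead connection.
  node.conn[0].timeo.noop_out_interval = 5
  node.conn[0].timeo.noop_out_timeout = 5

With dm-multipath on top, replacement_timeout is usually lowered so that I/O fails over to another path quickly instead of hanging.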
Do Khabrovites have any ideas or use cases?
You are looking at this from the wrong angle. You need a single failover iSCSI target, not several independent ones. A good example of a failover iSCSI target is in WinSvr2012+; I recommend reading up on it even if you will not use it.
You can try it your way, but there is no guarantee it will work reliably or predictably.
On your doubts:
1) Depends solely on the initiator. The MS iSCSI initiator normally rides out an outage of roughly 10 seconds; the only question is whether the software writing to that target falls over in the meantime.
2) Any amount. Under normal conditions the storage is reachable over two paths, and MPIO itself notices that one path has dropped and sends I/O only along the other. With any manual "switching" and "activating/deactivating" there are no guarantees.
3) As I said above, in WinSvr2012 this works as a standard feature: there is one target, and when a cluster node dies all of its services (including the IP address) move to the other node, and the client keeps working without noticing anything.
For a non-FO/HA target, MPIO works in exactly the same way (several connections to different IPs of the same target), which is how failover is achieved at the level of network connections.
In your scheme everything comes down to whether you can persuade MPIO on the clients to see the two targets as one device with two paths (a rough client-side sketch follows).
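For anyone who wants to try that variant on a Linux client, a rough sketch, assuming both nodes export the same LV and report the same SCSI WWID/serial (the addresses below are placeholders):

  # Discover and log in to the LUN through both controller nodes.
  iscsiadm -m discovery -t sendtargets -p 192.168.10.51
  iscsiadm -m discovery -t sendtargets -p 192.168.10.52
  iscsiadm -m node --login

  # /etc/multipath.conf - dm-multipath merges block devices into one
  # multipath device only if they report the same WWID.
  defaults {
      user_friendly_names  yes
      path_grouping_policy failover   # one active path, the other on standby
      no_path_retry        10         # queue I/O briefly instead of failing it
  }

  # Check that both paths landed under a single multipath device.
  multipath -ll

If the two targets present different WWIDs (likely, unless the target software is explicitly configured to pin the serial number), the client will see two separate disks rather than one device with two paths, which is exactly the problem described above.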