What is the best way to organize fault-tolerant file storage when the servers are geographically distributed?
Good day. There are 3 offices and a data center.
We have set up a site-to-site IPsec VPN between all nodes.
There is also file storage. Right now it is DFS: one node in the DC, one in the largest office (100 people). But it clearly does not meet our needs, because it does not handle shared Excel files; it is still two different sources with replication between them.
The data center has a terminal server and file storage. About 150 people work with it through the terminal server.
In the first branch, about 100 people work with it,
and in the other two branches, 25 each.
The daily growth in file volume is about a gigabyte: scans, plus collaborative work in Excel files.
Which solution should we be looking at so that the whole thing works smoothly and the files are on all storage nodes "more or less in real time"?
The channel between the DC and the office with 100 employees is 100 Mbit/s;
between the DC and the additional offices, 25 Mbit/s each.
How will a Windows file cluster behave given this setup? Has anyone implemented this? Thank you.
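A quick back-of-envelope check of the link capacity, assuming only the figures from the question (~1 GB of new data per day, 100 and 25 Mbit/s links); the numbers are purely illustrative:

```python
# Rough estimate of how long the daily file delta would take to replicate
# over each site-to-site link. Assumes ~1 GB of new data per day (from the
# question) and ignores protocol overhead and other traffic on the links.

DAILY_DELTA_BYTES = 1 * 1024**3   # ~1 GB of new scans and Excel files per day

LINKS_MBIT = {
    "DC <-> main office (100 users)": 100,
    "DC <-> branch office (25 users)": 25,
}

for name, mbit in LINKS_MBIT.items():
    bytes_per_sec = mbit * 1_000_000 / 8        # link throughput in bytes/s
    minutes = DAILY_DELTA_BYTES / bytes_per_sec / 60
    print(f"{name}: ~{minutes:.0f} min to push the whole daily delta")
```

Even on the 25 Mbit/s links the raw daily volume would replicate in a few minutes, so bandwidth itself is less of a problem here than concurrent editing of the same files.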
Isn't it easier to handle the Excel part with Excel itself? Excel 2016 supports co-editing of documents and shows who changed what.
If you subscribe to Office 365, each employee gets a terabyte of storage. 1 GB a day, you say? That is enough for about 1000 days, roughly three years. How often do you need documents that are three years old? You can either delete them or move them elsewhere: to another drive account or to an archive on your own server.
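A small sanity check of that capacity estimate, using only the figures already stated (1 TB per user, ~1 GB of new files per day):

```python
# Sanity check of the capacity estimate above: 1 TB per user,
# ~1 GB of new files per day (figures taken from the question).

capacity_gb = 1024        # 1 TB expressed in GB
growth_per_day_gb = 1     # daily growth from the question

days = capacity_gb / growth_per_day_gb
print(f"{days:.0f} days ≈ {days / 365:.1f} years")   # 1024 days ≈ 2.8 years
```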
MirrorFolder (major features)
Supports automatic synchronization of mirror folders at idle time, system startup, shutdown, logon, logoff, or when a removable/network drive is connected.
Fedor Ananin
Two people work with a file on one server, and two more work with the same file on the other. Say the first group saves the file after an hour, and the second group saves it after two hours.
During synchronization, the file with the first group's changes will be overwritten by the file with the second group's changes.
Therefore, a cluster is needed to avoid these collisions.
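To make the collision concrete, here is a minimal sketch of what a naive "newest copy wins" rule, as used by plain file-level replication, does to the first group's edits; the file contents and timestamps are purely illustrative:

```python
# Toy model of two replicas of the same workbook being edited independently
# and then synced with a naive "last writer wins" rule, roughly how plain
# file-level replication resolves conflicts.

from dataclasses import dataclass

@dataclass
class FileVersion:
    content: str
    mtime: int          # minutes since the working day started

# Both sites start from the same file.
site_a = FileVersion("base report", mtime=0)
site_b = FileVersion("base report", mtime=0)

# Group 1 saves its changes on site A after 1 hour,
# group 2 saves different changes on site B after 2 hours.
site_a = FileVersion("base report + group 1 edits", mtime=60)
site_b = FileVersion("base report + group 2 edits", mtime=120)

def sync_last_writer_wins(a: FileVersion, b: FileVersion) -> FileVersion:
    """Replication keeps only the copy with the newer modification time."""
    return a if a.mtime >= b.mtime else b

merged = sync_last_writer_wins(site_a, site_b)
print(merged.content)   # -> "base report + group 2 edits": group 1's work is lost
```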
Yes, not a simple one, but one that will not fall apart after, say, a day without Internet access)
I don't really want to move to the cloud because of the lack of AD integration, but to solve the Excel collaboration problem it may still be necessary.
xmoonlight, thanks, we will look into it. Sounds interesting, especially RAID 1 over the network and copying only the changed bytes rather than the whole file.