Xen
alexander007, 2014-06-19 12:29:39

How to build a budget storage system for virtualization?

Good day to all. This question may already be well-worn, but I have not managed to find an answer online. So, a request to everyone familiar with the subject: please share your experience.
The task is as follows: build storage for a small virtualization setup. Initially two servers are planned, each running 20-30 lightly loaded virtual machines (mostly Linux, plus a couple of Windows guests). Later it should expand to four servers. Xen or KVM is planned as the hypervisor.
At the moment I see the following technologies for building a SAN:
- Fibre Channel: technically the best option, but too expensive, we can't afford it;
- InfiniBand: cheaper, but still too expensive;
- 10 Gb Ethernet.
It is the last option I would like to know more about. How well would the following configuration work? Ideally, I am looking not for a qualitative assessment (good/bad) but a quantitative one (this many IOPS, this much bandwidth, this much CPU load):
- software iSCSI target on *BSD / Linux / Windows (whichever works best);
- Intel X540-T2 network cards;
- the cards are connected directly to each other without a switch (there is no money for a switch yet), see the diagram below;
- the storage box will run RAID 10 built from SSDs; large capacity is not needed, speed matters more.
(Diagram: the storage server is connected directly to each virtualization host over its own 10G link, no switch.)
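Before layering iSCSI on top, I intend to at least verify the raw link. A minimal sketch of the check I have in mind, assuming iperf3 is installed on both machines and that the hypothetical addresses 10.0.0.1 (storage) and 10.0.0.2 (host) are assigned to the X540-T2 ports:

```python
# Rough raw-link check for the direct 10G connection (addresses are placeholders).
# Run "iperf3 -s" on the storage box first, then this script on the host.
import json
import subprocess

STORAGE_IP = "10.0.0.1"  # hypothetical address of the storage-side X540-T2 port

# 4 parallel TCP streams for 30 seconds, JSON output for easy parsing.
result = subprocess.run(
    ["iperf3", "-c", STORAGE_IP, "-P", "4", "-t", "30", "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

bits_per_second = report["end"]["sum_received"]["bits_per_second"]
print(f"Raw link throughput: {bits_per_second / 1e9:.2f} Gbit/s")
```

If this reports much less than the roughly 9.4 Gbit/s a 10GbE TCP stream can realistically reach, the problem is in the network layer rather than in iSCSI or the disks.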
How much CPU will iSCSI over 10G consume on the target?
How much CPU will iSCSI over 10G consume on the initiator?
Roughly what throughput can be expected?
Has anyone used such a configuration? How did it work out in the end?
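In case nobody has exact figures, here is roughly how I plan to measure it myself: run fio locally on the RAID 10 array first, then again from the initiator against the exported iSCSI LUN, and compare. A minimal sketch, assuming fio is installed; the device path is just a placeholder for my setup:

```python
# Measure 4K random-read IOPS, bandwidth and CPU usage with fio.
# WARNING: point it only at a test device/file; write tests destroy data.
import json
import subprocess

def run_fio(target_device: str, rw: str = "randread") -> dict:
    """Run a 60-second fio job against target_device and return key metrics."""
    cmd = [
        "fio", "--name=bench", f"--filename={target_device}",
        f"--rw={rw}", "--bs=4k", "--iodepth=32", "--numjobs=4",
        "--ioengine=libaio", "--direct=1", "--runtime=60", "--time_based",
        "--group_reporting", "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    job = json.loads(out.stdout)["jobs"][0]
    side = "read" if "read" in rw else "write"
    return {
        "iops": job[side]["iops"],
        "bw_MiB_s": job[side]["bw"] / 1024,  # fio reports bw in KiB/s
        "cpu_usr_pct": job["usr_cpu"],
        "cpu_sys_pct": job["sys_cpu"],
    }

if __name__ == "__main__":
    # e.g. /dev/md0 on the storage box, the exported LUN on the initiator (placeholders)
    print(run_fio("/dev/md0"))
```

The gap between the local run and the run over iSCSI should show what the 10G link and the software target add, and fio's usr_cpu/sys_cpu fields give a first answer to the "how much CPU does it eat" question.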


3 answer(s)
Cool Admin, 2014-06-20
@ifaustrue

Colleague, it is not surprising that the question is still unanswered. You are essentially asking the following:
"We want to build a racing track, and there are three options for the surface:
Asphalt grade 1 - technically the best, but expensive.
Asphalt grade 2 - cheaper, but still expensive, and we have never worked with it.
Asphalt grade 3 - we will go with this one; the ratio of mineral filler to binder in it seems optimal.
Tell us what speed our sports cars will reach (no matter which cars)? How much fuel will be lost to friction? And how long will it take them to get to a hundred?"
And now seriously.
It is almost impossible to answer your question from the data you provided; this is not reading fortunes in coffee grounds (although you can't answer it without coffee either). A modern transport medium will easily handle certain loads and completely fail to handle certain others. In general your setup looks fine, and the cards most likely have a hardware initiator, so there should be no "friction" overhead on their side.
10G bandwidth is more than enough for you for a great many load patterns. I'm not sure whether you will run into problems with sequential bursts of 4M writes, but if you do, then of course everything gets more complicated =)
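Purely as a back-of-the-envelope estimate (my own rough numbers, not a measurement), this is the kind of arithmetic I mean:

```python
# Rough estimate of what a single 10GbE link can carry for iSCSI traffic.
link_gbit = 10.0              # raw line rate
protocol_overhead = 0.10      # assumed ~10% for TCP/IP + iSCSI framing
usable_bytes_per_s = link_gbit * 1e9 / 8 * (1 - protocol_overhead)

block = 4 * 1024 * 1024       # the 4M sequential blocks mentioned above
print(f"Usable bandwidth: ~{usable_bytes_per_s / 1e6:.0f} MB/s")
print(f"4M blocks per second: ~{usable_bytes_per_s / block:.0f}")

# A small RAID 10 of decent SATA SSDs can read sequentially at around 1 GB/s
# or more, so for purely sequential bursts the 10G link, not the disks,
# becomes the ceiling.
```

So for purely sequential 4M bursts the single 10G link itself is the limit well before a decent SSD array is, which is the one case where "more than enough" stops being true.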
Targets differ, and a lot depends on 1) the environment, 2) tuning, 3) the RAID setup, 4) the I/O pattern. Saying flat out that "the target on FreeBSD is slow, but on Windows it's fine" (sic!) is probably not possible; they have different purposes after all.
At the very least, tell us the number of disks! Better yet, contact me on Skype =)

baskoy, 2014-07-05
@baskoy

Oh, come on!
You can afford Fibre Channel now: just look on eBay for 4 Gbit FC cards, they are dirt cheap. A couple of cards is enough to start with if the storage is self-built, for example www.smallnetbuilder.com/nas/nas-howto/31485-build-...
And later, as you grow and expand, buying a 4 Gb FC switch also looks very reasonable, given its penny price.

Pavel, 2016-04-25
@ispecialist

How about this budget storage option? efsol.ru/articles/storage-system.html
