Calculating file server parameters?
Good day.
Task: migrate the heavy file content from the main server to a new dedicated one.
Content parameters: predominantly audio files, 5 to 200 MB each, about 800 GB in total at the moment, and the volume will keep growing.
Monthly traffic: ~10,000,000 page views, ~5,000,000 visits, ~1,000,000 unique visitors.
Daily traffic: ~350,000 page views, ~160,000 visits, ~100,000 unique visitors.
Simultaneously on the site: ~3,000 to ~10,000 people (according to Chartbeat).
Average channel load: from 1 Gbps to 4 Gbps.
Based on this data, I would like to approach the hardware requirements for the new server as competently as possible: memory, disk subsystem, CPU.
Software-wise, the server will run nginx; any other services will not serve users directly.
Current estimate: CPU - 4 x 2.6 GHz, memory - 24 GB, disk subsystem - 2 TB to start with, channel - 10 Gbps.
Budget: ideally within 100,000 rubles/month.
Given that this is pure static file serving, what pitfalls can there be, and what technology should the disk subsystem be built on? What should I read up on, and what information should I be collecting now? At the moment everything on the portal runs on a universal cluster. It all holds up comfortably, but architecturally and ideologically it is wrong and will not withstand constant vertical scaling. Currently: 24 processors at 2.2 GHz, 48 GB of memory, 1 TB RAID 5 + 100 GB 2xSSD.
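For reference, a rough per-listener sanity check of the channel figures (the bitrate that falls out is an implied average, not a measured one):

GBIT = 1_000_000_000                 # bits
peak_channel_bps = 4 * GBIT          # observed peak channel load
for users in (3_000, 10_000):        # concurrent users according to Chartbeat
    per_user_kbps = peak_channel_bps / users / 1_000
    print(users, round(per_user_kbps), "kbps per listener")
# ~1333 kbps at 3,000 concurrent users, ~400 kbps at 10,000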
You need to start not from the general parameters but from the disk load.
How many IOPS do you need at peak? What is the CPU and RAM load on the current machines?
What will you do if the new single server goes down?
It is better to use a server cluster: it is both more reliable and cheaper than buying one super-powerful server.
Mainstream servers have two 1G network cards; they can be bonded to get 2 Gbps per server (realistically 1.5-1.8 Gbps without packet loss).
Also, modern inexpensive 1U cases can hold 4 drives, which is quite adequate for the task; you can even combine them all into RAID 1 to increase read performance.
The CPU is almost irrelevant for serving static content; it is better to take fewer cores but a higher frequency - that has a positive effect on the speed of processing network interrupts.
Stuff in as much memory as possible; for mainstream motherboards that is 16 GB.
Be sure to pay attention to the network card: it should be either Intel or Broadcom, and under no circumstances Realtek or the various Marvells.
Balance the traffic not randomly but so that the same file is always served from the same server - that way RAM is used more efficiently for the file cache; roughly speaking, it sums up across all servers.
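A minimal sketch of such sticky balancing, assuming the request path identifies the file and the backend host names below are placeholders (nginx can do the same with a hash-based upstream; this only illustrates the idea):

import hashlib

BACKENDS = ["files1.example", "files2.example", "files3.example"]  # placeholder hosts

def backend_for(path):
    # Hash the file path so a given file is always served by the same backend,
    # which keeps each backend's file cache holding a distinct subset of files.
    digest = int(hashlib.md5(path.encode("utf-8")).hexdigest(), 16)
    return BACKENDS[digest % len(BACKENDS)]

print(backend_for("/audio/track-0001.mp3"))  # same host every time for this file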
SSDs are a separate story; in your case (a small amount of content) it may even be preferable to take one 480 GB SSD for 500 instead of 4 HDDs, and you can save on memory.
In general, this is already a CDN; it may be easier and cheaper for you to use the CDN services already on the market.
If you only serve content without processing it, you hardly need any CPU. In practice your main bottleneck is the disks.
Such a solution is more reliable, and usually cheaper, when built not on one powerful server but on several fairly simple ones.
It may make sense to set up a couple of main storage servers and several caching frontends with SSDs, if some files are requested much more often than others.
If requests are spread evenly across all files, it makes sense to take several simple single-CPU 1U servers with 4 drives each and plenty of memory, and distribute requests between them.
Such solutions are both more reliable and usually cheaper than a single server that can handle the same load. In addition, it is easier to scale later, gradually adding standard and inexpensive servers, with the same hardware and software configuration.
One server is fine, but what about fault tolerance? It's better to take care of it right away. As a budget option, I would put another server next to it with the same roles in a standby (powered-down) state and synchronize the file structure with rsync. If the first one goes down, we bring up the roles on the second and remap the ports on the router, or change its IP to the one that belonged to the first.
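A minimal sketch of the sync side, assuming passwordless SSH from the standby to the primary and a hypothetical content path /srv/audio (a plain cron job running rsync does the same job):

import subprocess
import time

PRIMARY = "files-main.example"  # hypothetical primary host
SRC = PRIMARY + ":/srv/audio/"  # hypothetical content directory on the primary
DST = "/srv/audio/"             # local mirror on the standby

while True:
    # Mirror the primary's file tree; --delete propagates removals as well.
    subprocess.run(["rsync", "-a", "--delete", SRC, DST])
    time.sleep(600)  # re-sync every 10 minutes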
Current load snapshot: CPU - less than 50% of 2400% in use (24 cores), memory - 12 GB of 48 in use.
iostat:
avg-cpu: %user 1.91, %nice 0.00, %system 2.28, %iowait 0.26, %steal 0.00, %idle 95.55
tps 244.74, Blk_read/s 18145.77, Blk_wrtn/s 1446.57, Blk_read 39002384910, Blk_wrtn 3109236520
FS: ext3, ext4
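For reference, converting those iostat figures into more familiar units (assuming the 512-byte blocks iostat reports by default):

BLOCK = 512  # bytes per iostat block
blk_read_s = 18145.77
blk_wrtn_s = 1446.57
read_mb_s = blk_read_s * BLOCK / 1024 ** 2   # about 8.9 MB/s of reads
write_mb_s = blk_wrtn_s * BLOCK / 1024 ** 2  # about 0.7 MB/s of writes
print(round(read_mb_s, 1), round(write_mb_s, 1))  # alongside ~245 transfers/s (tps)

That is under 10 MB/s of reads, consistent with the near-zero %iowait above.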
How exactly the file server is scaled is not a big concern; given its specifics, both vertical and horizontal scaling are fairly simple to do.
The requirements for svc, IOPS and concurrent streams depend heavily on the access technology; I can't give specific numbers.
Files are accessed mainly (95%) via direct links and dynamic per-user rewrite links in nginx.
If the new single server goes down, we will be sad and deploy a new one from backup, staying without files in the meantime. Unpleasant, but not critical. For the deployment period, if the new server fails, its functions will again be taken over by the main one.
The volume is not very large and reads are more or less sequential, although in many parallel streams.
It seems to me the easiest option is to take several boxes with 8 x 450 GB 10k rpm SAS drives each, 16+ GB of RAM, 4 cores.
For your budget it will more likely be one server than two. If it goes down, you will have to bring it back up and not serve files until it is up again.
As for the rest: to push out 10G, one i7 2700K processor or equivalent is enough (980, 3770, but not a Phenom!).
I assume you will use a network card like the Intel X520, so there should be no problem with the NIC slowing things down.
The easiest way to squeeze out 10G is to use SSDs. For your money, that will be something like 8 x 240 GB, for example these: hotline.ua/computer-diski-ssd/ocz_agt3-25sat3-240g/ (or the Vertex series; there will not be much difference for this task).
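A rough check that 8 such SSDs cover the channel, assuming roughly 300 MB/s of sustained reads per SATA SSD of that class (an assumed figure, not a measurement):

channel_bps = 10 * 1_000_000_000        # the 10 Gbps uplink
per_ssd_bps = 300 * 1024 ** 2 * 8       # ~300 MB/s per SSD, expressed in bits/s
ssds_to_saturate = channel_bps / per_ssd_bps
print(round(ssds_to_saturate, 1))       # about 4.0, so 8 SSDs leave headroom for random reads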
Choose a motherboard with 6 SATA ports (SATA3 does not matter), built-in video and at least 2 PCI-E x16 slots (in practice they will run as one x8 and one x4).
To connect eight drives you need to add a controller. I recommend this one: hotline.ua/computer-kontrollery-raid/adaptec_raid_1430sa/. Do not use SiI (Silicon Image) cards despite the same chipset - the server will hang. Take 4 ports rather than 2, so that a separate system drive also fits.
So in total you plug the network card into the x8 slot and this controller into the x4 slot.
Use XFS as the file system (with default settings; specify noatime when mounting), and correspondingly a recent Linux (for Ubuntu that means 12.04). Don't install old versions - their network subsystem may not be as good. Don't install FreeBSD either; recent Linux works much faster with the network.
Don't use RAID; distribute the files across the drives yourself. If there is more data than 4 SSDs can hold, it is better to scatter files randomly and separately keep a second copy of the ~20% most popular files. How many copies, and for what percentage of files, will in practice need to be tuned based on the load.
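A minimal sketch of that placement logic, assuming hypothetical mount points /mnt/ssd0../mnt/ssd7 and that you maintain the set of "hot" file names yourself:

import hashlib

DISKS = ["/mnt/ssd%d" % i for i in range(8)]  # hypothetical mount points

def placements(name, hot=False):
    # The primary disk is chosen by hashing the file name; popular files get a
    # second copy on the next disk so reads of hot content spread out.
    h = int(hashlib.md5(name.encode("utf-8")).hexdigest(), 16)
    primary = h % len(DISKS)
    disks = [DISKS[primary]]
    if hot:
        disks.append(DISKS[(primary + 1) % len(DISKS)])
    return disks

print(placements("track-0001.mp3", hot=True))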
4 or 8 GB of memory is enough; the working set won't fit in it anyway, so as a disk cache it is practically useless for this task.
Install nginx, disable sendfile, enable aio.
These tips are for a site serving, say, mp3 or online movies, with a total content size of 0.8-1.5 TB.
If you describe how many files you have in total and what percentage of their volume generates 50% of the traffic (better also give the figure for 80% of the traffic), the configuration may need to be adjusted.