Render farm: rack vs powerful system unit?
Hello.
Please advise, those of you who have practical experience with systems that have more than 2 CPUs.
Task: build a powerful render farm. The budget is not unlimited, but all the options below are quite feasible. So.
Options:
1. Build a powerful system unit containing up to 8 Intel Xeon (10-core) CPUs.
For example, on this board here.
I have never seen so many CPUs in one case. What about heat dissipation? Does anyone have real experience, links, or examples of how this can even be squeezed into any kind of case? Has anyone seen working examples?
2. Or build a rack (with capacity for 10 blades), where later one could simply buy and insert new blades (for example, each blade with 2-4 Intel Xeon 10-core CPUs and 128 GB of DDR3).
For example, something like this.
I would very much prefer the rack option, where you can then just buy more blades.
Questions:
- How exactly, in practice, are the computing resources of a rack combined? How does it all work? I have no experience with this. Please explain, those who know for sure.
In other words: if it is a rack, then to create a single pool of CPU and RAM, will I need to deploy something like OpenStack, or does the hardware in the rack share the resources of all the blades natively, at the hardware level?
- If it is possible to build a single computing cloud (OpenStack): will it be enough to buy 3ds Max Backburner + V-Ray licenses for one machine?
- Is there a way to use Intel Xeon Phi or GPGPU (OpenCL or CUDA) computing resources for V-Ray rendering?
If something is not expressed clearly, or you have questions, feel free to ask.
Thank you.
Some blade enclosures, like the HP C3000, logically look like an ordinary set of servers connected to an L3 Ethernet switch and a Fibre Channel switch for access to the SAN.
All administration is carried out as with regular servers. The plus is that you can do it directly from the browser: see the state of the system in full detail (power consumption, temperatures in different zones, fan operation), mount a disk image from your own machine (without having to run to the rack), and, trivially, see the blade's virtual "monitor" and press the power/reset buttons.
By supplementing this enclosure with a storage system and deploying a GFS2-type cluster file system on it, you can get a good number cruncher.
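For illustration: the same out-of-band monitoring and power control that such a chassis exposes in the browser is usually also scriptable, for example over IPMI. A minimal sketch, assuming the blades' management controllers speak standard IPMI over LAN and ipmitool is installed; the host names and credentials are placeholders, not anything HP-specific.

```python
# Minimal sketch: query power state (and optionally sensors) of several
# blades over IPMI via ipmitool. Host names and credentials are placeholders;
# assumes each blade's BMC speaks standard IPMI over LAN.
import subprocess

BLADES = ["blade01-bmc", "blade02-bmc"]   # hypothetical BMC host names
USER, PASSWORD = "admin", "secret"        # placeholder credentials

def ipmi(host: str, *args: str) -> str:
    cmd = ["ipmitool", "-I", "lanplus", "-H", host,
           "-U", USER, "-P", PASSWORD, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

for blade in BLADES:
    print(blade, "->", ipmi(blade, "chassis", "power", "status").strip())
    # Full sensor dump (temperatures, fan speeds, power draw):
    # print(ipmi(blade, "sensor"))
```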
A render farm is a set of servers (or ordinary system units) connected by a network. Memory and processors are not shared between them.
What you call a "powerful system unit containing up to 8 Intel Xeon (10-core) CPUs" is a supercomputer, in which memory and processors are combined and function as one. You most likely do not need it, and you do not have the money for it.
The price difference between these solutions is several tens of times.
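To make that distinction concrete: on a farm the nodes stay separate machines with their own RAM and CPUs, and parallelism comes from a render manager handing out independent jobs (frames, buckets) over the network. A rough sketch of that idea only; the node names and the remote render command are hypothetical, not any particular product's API.

```python
# Rough sketch of how a render farm parallelizes work: each node is an
# independent machine, and the manager only distributes self-contained
# jobs (here, frame numbers) over the network. Node names and the
# "render_scene" command are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor
import subprocess

NODES = ["node01", "node02", "node03"]   # independent render servers
FRAMES = list(range(1, 101))             # frames of the animation

def render_on(node: str, frame: int) -> None:
    # In reality this would be a Backburner/Deadline-style job submission;
    # here we just pretend to launch a renderer on a remote host via ssh.
    subprocess.run(["ssh", node, "render_scene", "--frame", str(frame)],
                   check=True)

with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
    for i, frame in enumerate(FRAMES):
        pool.submit(render_on, NODES[i % len(NODES)], frame)
```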
I would advise you to spend some money and order a hardware specification from people who do this professionally, so that later it is not excruciatingly painful.
In fact, the picture is now clear :) After looking at the T-Platforms website, there are no more questions :)
Thank you all very much for the detailed answers :)
The topic can be closed
Go to an integrator; with questions like these you will pile up more problems than solutions on your own.
To answer the question, it is worth studying the features of the renderers themselves that you are going to run on this farm.
For example, the very popular V-Ray, even with network rendering, is not able to compute the light cache on several machines. As a result, on complex scenes you spend 40 minutes on the leading node computing the light cache, and then render everything else in 10 minutes.
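This matters more than it looks: a phase that runs on one node caps how much extra blades can help. A rough Amdahl's-law style estimate, using the 40-minute serial and 10-minute distributed figures from the example above; the node count and the assumption that the distributed part scales linearly are illustrative guesses, not measurements.

```python
# Rough estimate of how a non-distributable light-cache phase limits the
# benefit of adding render nodes. The 40-minute serial phase and 10-minute
# distributed phase come from the example above; the observed node count
# and linear scaling are assumptions for illustration only.
SERIAL_MIN = 40.0          # light cache, computed on the leading node only
PARALLEL_MIN_AT_N = 10.0   # rest of the render, observed with N nodes
N_OBSERVED = 10            # assumed farm size in that observation

parallel_work = PARALLEL_MIN_AT_N * N_OBSERVED  # node-minutes of distributable work

for nodes in (1, 5, 10, 20, 40):
    total = SERIAL_MIN + parallel_work / nodes
    speedup = (SERIAL_MIN + parallel_work) / total
    print(f"{nodes:3d} nodes: ~{total:5.1f} min per frame, speedup ~{speedup:.1f}x")
```

Past a certain farm size, the total time approaches the 40 minutes of light cache no matter how many blades you add.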
Regarding V-Ray and GPU: there is a variant, V-Ray RT, that can use both the GPU and the CPU.
Anyway, from personal experience: using Amazon's c3.8xlarge virtual machines can be a very profitable solution if you don't keep them running all the time :)
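A quick way to sanity-check that is a back-of-the-envelope break-even calculation. The hourly rate, hardware price, and monthly render load below are made-up placeholder assumptions, not quotes; substitute your own numbers.

```python
# Back-of-the-envelope comparison of renting render capacity vs. buying it.
# All three figures are placeholder assumptions, not real prices.
HOURLY_RATE_USD = 1.70        # assumed on-demand price for one c3.8xlarge
OWN_NODE_COST_USD = 8000.0    # assumed price of a comparable 2-CPU server
RENDER_HOURS_PER_MONTH = 120  # assumed actual rendering load

monthly_cloud_cost = HOURLY_RATE_USD * RENDER_HOURS_PER_MONTH
months_to_break_even = OWN_NODE_COST_USD / monthly_cloud_cost

print(f"Cloud: ~${monthly_cloud_cost:.0f}/month at {RENDER_HOURS_PER_MONTH} h of rendering")
print(f"An owned node pays for itself after ~{months_to_break_even:.0f} months at that load")
```

The lower and burstier your actual rendering load, the more the rented option wins.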
www.tilera.com is the manufacturer's website; I don't know the new owner's site. Each computer has 8 processors. You can look at the 200-core processors there. The server has 500,000 virtual cores. This processor was made before the company was bought out and is already obsolete. After that, 512-core processors also appeared. I don't know how many virtual cores such a computer has, presumably about two and a half times more. The information about this processor is outdated, and most likely an even more powerful processor has already been released, with more still in development.
You can compare this with computing on MSI GeForce Ti video cards, installed 4 to a rack. It is more profitable to buy video cards now. These processors are faster than video cards. If you find something newer, let me know, and don't delete the answer.