Is it possible to create a virtual core on Linux if there is an ARM cluster?
I connected one cable from the router to an eight-port switch, one cable to the laptop, and six cables to Orange Pi Zero Plus boards, which are equipped with Allwinner H5 processors (4 cores at 1.44 GHz). I'm wondering about offloading work from my Linux laptop. When it comes to clustering, the question is how to transfer running tasks to these single-board computers. All network interfaces are 1 Gb/s.
I understand that clusters are used for specific tasks, for example when there is a web service that does a lot of computation. The thing is, I don't want to tie my laptop to one place. But when I'm at home, I want to plug in the cable and take the load off the processor, and maybe off the RAM. What if Linux could be taught to address the right node and redirect the data stream to the cluster for processing, deciding what is cheaper for the laptop processor: to compute it itself or to hand it to the cluster? The way I see it, the cluster gets 6 * 4 = 24 cores at 1.44 GHz, while my laptop has 2 cores at 1.5 GHz without Intel HT. You could get rid of heavy processes if you manage the tasks: for example, my local server sees the load of all the ARM Linux boxes and starts processing on the free ones. But I still don't understand how to do it. I would also like to understand how Linux knows that it has 2 cores.
I am very interested in your opinion on this; I'll be glad to hear any ideas about what can be done.
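As a side note on the last point: the core count Linux reports is just what the kernel enumerated at boot, and it is easy to read back. A quick sketch (plain standard library, nothing cluster-related):

```python
import os

# What the running kernel reports about its own CPUs.
print("os.cpu_count():", os.cpu_count())
print("cores this process may run on:", sorted(os.sched_getaffinity(0)))

# The same information, straight from /proc/cpuinfo.
with open("/proc/cpuinfo") as f:
    entries = sum(1 for line in f if line.startswith("processor"))
print("processor entries in /proc/cpuinfo:", entries)
```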
Clustering is a very diverse and non-trivial field. While multiprocessor/multi-core systems have long since become commonplace, clustering has not yet settled into standard, off-the-shelf solutions.
There is a variant of clustering that mimics multiprocessor/multi-core systems:
Each computer (a computer is called a "node", i.e. a member of the cluster) runs its own kernel with clustering support. These kernels regularly exchange information with each other to compare their queues of runnable tasks (processes). If a significant imbalance of priorities is detected (that would take long to explain, so let's skip the details), the kernels start transferring tasks from one node to another to even out the queues. To do that, they have to transfer the executable code, the task's data, and the kernel control structures associated with the task (for example, open file descriptors). Migrating a task to a new node is expensive work, so it is only started when the imbalance is very strong. (Migrating tasks between cores on one machine is also done sparingly; it is much cheaper there, but it is still desirable to do it less often.)
Transferring a task to a new node is only possible if the processors are identical (or if the task's code is interpreted; for example, JVM bytecode).
As a rule, multi-threaded tasks are not spread across several nodes, because threads often exchange data through shared memory, and synchronizing memory between nodes is too expensive.
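For comparison, moving a task between cores inside one machine is something you can already do from user space; a minimal sketch using Python's standard library, just to show how cheap the within-node case is compared to migrating a process to another node:

```python
import os

# Restrict the current process to CPU core 0 of this machine.
# This is the cheap, within-node counterpart of the expensive
# node-to-node migration described above.
os.sched_setaffinity(0, {0})
print("allowed cores:", sorted(os.sched_getaffinity(0)))

# Allow it to run on all online cores again.
os.sched_setaffinity(0, range(os.cpu_count()))
```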
But there is another type of clustering:
Each node has program code compiled for it. The program (compiled separately for the different nodes) is written so that it can communicate with its copies on other nodes (how it learns the list of nodes is a separate issue altogether) and redistribute the work. That is, the original problem has to be framed as processing a large array of data, where the data is divided into pieces that can be processed independently. Different pieces of data are then distributed among the nodes.
For example, brute-force password guessing is exactly such a task and can be scattered across nodes with processors of different architectures. There are no complex data flows at all: there is a huge array of initial data in the form of "all possible passwords", and it is easy to split into completely independent blocks.
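A minimal sketch of that idea in Python (the target hash, alphabet, password length and node count below are made up for illustration; actually shipping each slice to an Orange Pi, e.g. over SSH or with MPI, is left out):

```python
import hashlib
from itertools import product

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
LENGTH = 4
NODES = 6
# Hypothetical target: the SHA-256 of the password we are trying to recover.
TARGET = hashlib.sha256(b"zzzz").hexdigest()

def candidates():
    """Every possible password of the given length, in a fixed order."""
    for chars in product(ALPHABET, repeat=LENGTH):
        yield "".join(chars)

def search_slice(node_index, node_count):
    """A node checks only every node_count-th candidate, starting at its
    own index - the slices are completely independent of each other."""
    for i, password in enumerate(candidates()):
        if i % node_count != node_index:
            continue
        if hashlib.sha256(password.encode()).hexdigest() == TARGET:
            return password
    return None

# On the cluster, node k would run search_slice(k, NODES); here all six
# slices are simply run one after another on the local machine.
for k in range(NODES):
    hit = search_slice(k, NODES)
    if hit is not None:
        print("found", hit, "in slice", k)
        break
```

The important property is that the slices share nothing: no node ever needs another node's memory, so the 1 Gb/s links only have to carry the tiny "which slice is yours" message and the final answer.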
I think I have loaded you up enough. We should talk further once you specify what kind of heavy tasks you want to parallelize.
It will only work on tasks that you implement for it yourself.
From your description, you expect some kind of magic that unloads the laptop of its own tasks; it won't work like that.
CPU cores from other machines cannot be "mounted" into your system in any way.
A cluster is several machines united by one task and by software that distributes that specific task among them.
It will also be useful to know the following table: https://gist.github.com/jboner/2841832
Well, firstly, 1.5 GHz on x86 and 1.5 GHz on ARM are two very different things; the x86 will be faster. Although on tasks that parallelize well you really can win thanks to the larger total number of cores.
Secondly, even if you figure out how to write a patch for the Linux kernel so that under certain conditions some processes are sent over the network instead of being launched locally, the ARM processor still will not be able to execute code compiled for x86_64.
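You can see that incompatibility directly in the binaries: an ELF header records which machine the code was compiled for in the 16-bit e_machine field at offset 18. A small sketch (it assumes little-endian ELF files, which both x86_64 and AArch64 Linux use):

```python
import struct

# A few common e_machine values; the full list is in the ELF specification.
MACHINES = {0x03: "x86", 0x28: "ARM (32-bit)", 0x3E: "x86_64", 0xB7: "AArch64"}

def elf_machine(path):
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        return "not an ELF file"
    (e_machine,) = struct.unpack_from("<H", header, 18)
    return MACHINES.get(e_machine, hex(e_machine))

# Prints x86_64 on the laptop and AArch64 on an Orange Pi Zero Plus.
print(elf_machine("/bin/ls"))
```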
And now a bit of pure theory. I'll put forward one interesting hypothesis, though actually testing it is not rational if you compare the effort to the payoff.
You could write a hypervisor that runs on the bare metal of our Orange Pis (Raspberries, Bananas) and exposes their resources to a networked virtual machine which, running on several boards at once, presents them as one multi-core machine. In principle it would even be possible to emulate x86, although that would be rather slow; then again, against the background of network delays when accessing memory it might not be all that noticeable.
On top of this virtual machine you could then run Linux, or FreeBSD, or even Windows if we decide to emulate x86 after all.
You can sponsor this theory, and I will find people who will implement a prototype of the idea within a year or so.