Are the processes running on different processor cores or not?
At the moment I'm studying the Message Passing Interface (MPI) and, for example, I have a program that runs as two processes. Will it run correctly and as fast as possible on my single dual-core processor? Or do I need two physical processors for maximum speed and correctness, with the number of cores not mattering for programs that have many processes? Please explain in plain language or point me to something I can read about it.
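For example, something like this minimal two-process MPI program (a rough sketch, assuming an MPI implementation such as Open MPI or MPICH):

/* Minimal sketch of a two-process MPI program (hypothetical example). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime           */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's number (0 or 1)  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes       */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

Built and run with something like mpicc hello.c -o hello && mpirun -np 2 ./hello.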
Yes.
From the program's point of view there are no "physical" processors.
There is only a number: the maximum count of threads that can execute simultaneously.
Even if the program spawns more threads than can execute at once, the operating system will simply divide execution time among all of them.
In other words, a multi-threaded program will still run even on a single-processor computer.
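A rough sketch of that, assuming a POSIX system: the program below starts more threads than a dual-core machine can execute at once, and all of them still run; the OS just time-slices them over the available cores.

/* Sketch (POSIX assumed): spawn more threads than the machine has cores --
   the OS scheduler simply time-slices them over the cores available. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *work(void *arg)
{
    long id = (long)arg;
    printf("thread %ld running\n", id);
    return NULL;
}

int main(void)
{
    long ncores = sysconf(_SC_NPROCESSORS_ONLN);   /* cores visible to the OS  */
    enum { NTHREADS = 8 };                         /* deliberately more than 2 */
    pthread_t t[NTHREADS];

    printf("cores: %ld, threads: %d\n", ncores, NTHREADS);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, work, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}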
You will only start to run into the hardware details when it comes to big (really big) servers with many sockets and NUMA.
There you will have to take various factors into account: it is desirable to hard-pin processes to cores so that they do not jump between NUMA regions, and memory physically attached to one processor is of course accessible from another processor, but at the cost of transactions over QPI or a similar interconnect bus.
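A minimal Linux-only sketch of that kind of pinning, using sched_setaffinity (MPI launchers usually expose the same idea through options such as Open MPI's --bind-to core):

/* Sketch (Linux/glibc assumed): hard-pin the calling process to core 0. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                        /* allow this process on core 0 only */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* 0 = calling process   */
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to core 0, now running on core %d\n", sched_getcpu());
    return 0;
}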
In short, as long as you are experimenting on a PC or laptop, it will all behave the same for you; don't worry about it.
Roughly speaking, a dual-core processor = two single-core processors under one lid (not counting the cache, which is shared).
In short: don't worry))) everything will be OK.
From the software's point of view, there is no difference between a "dual-core processor" and a "dual-processor" system.
As for speed, that has to be discussed for specific systems and tasks.
Multitasking is handled by the operating system. The program itself knows nothing about it; it can only use the language's tools (create threads), and whether those threads end up on one core, one processor, or different ones is invisible to the program.
The OS decides which process and which thread will run where. For the fastest possible execution, keep in mind that creating more threads than there are cores will not do much to speed up computation.
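A sketch of that advice (assuming a POSIX system): ask the OS how many cores it sees and create exactly that many CPU-bound workers.

/* Sketch (POSIX assumed): size the worker count to the number of cores the OS
   reports, since extra CPU-bound threads only add switching overhead. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *crunch(void *arg)
{
    (void)arg;
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)   /* CPU-bound busy work */
        x += i;
    return NULL;
}

int main(void)
{
    long n = sysconf(_SC_NPROCESSORS_ONLN);   /* e.g. 2 on a dual-core CPU */
    if (n < 1) n = 1;
    if (n > 64) n = 64;
    pthread_t workers[64];

    for (long i = 0; i < n; i++)
        pthread_create(&workers[i], NULL, crunch, NULL);
    for (long i = 0; i < n; i++)
        pthread_join(workers[i], NULL);
    printf("ran %ld CPU-bound workers (one per core)\n", n);
    return 0;
}

On the dual-core machine from the question this creates two workers; creating eight instead would only add scheduler switching on top of the same two cores.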
For the OS, there is almost no fundamental difference between a process and a thread. When a process is created, a new chunk of memory is allocated for it; when a thread is created, that does not necessarily happen: a thread has access to its parent's memory, while a process does not. So, in the context of this question, we can limit ourselves to talking about threads.
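A small sketch of that memory difference, assuming a POSIX system: a thread's write to a global variable is visible to its parent, while a forked process only changes its own copy.

/* Sketch (POSIX assumed): thread shares memory with its parent, process does not. */
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared = 0;

static void *thread_fn(void *arg)
{
    (void)arg;
    shared = 42;                 /* same address space: the parent sees this write */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, thread_fn, NULL);
    pthread_join(t, NULL);
    printf("after thread:  shared = %d\n", shared);   /* prints 42 */

    shared = 0;
    pid_t pid = fork();
    if (pid == 0) {              /* child process writes to its own copy */
        shared = 42;
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after process: shared = %d\n", shared);   /* still 0 in the parent */
    return 0;
}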
There can be as many threads as memory allows, even on a single-processor and single-core piece of hardware.
The activity of threads can be controlled at several levels. Firstly, the OS can stop the execution of a thread at any time. This can in principle be prevented, but it is not easy and rarely necessary. Runtimes sometimes let you specify how many threads to use to execute a program. For example, the Erlang or Go runtime allows programs to create not the expensive OS threads, which take a long time to create, but lighter ones whose life cycle the runtime itself manages. N program-level threads are mapped onto M OS threads, which are requested once, at startup.
A thread can be in one of several states: sleeping, when the scheduler has stopped it; active, when the thread has been given control; blocked, when the thread is waiting for some event or for some action to finish, for example data arriving over the network; and error, when an error is being handled, resources are released, and the thread dies.
The question of whether it is optimal to create more than one thread/process should always be considered in the context of a specific task. Even with non-blocking communication between threads, thread switching has a time cost, unless the number of threads is less than or equal to the number of cores. I suspect that a multi-threaded program outperforms a single-threaded one only while the number of threads that must run in parallel does not exceed the number of available cores. As soon as there are more threads, the scheduler has to step in, and naturally it does not work instantly.
The question of the optimal architecture also, in my opinion, has no general answer. For computing tasks a single dual-core processor and a motherboard with two processor sockets may be comparable, while for, say, running a web server and its database, two independent computers may well be the best option.