How to use 6 video cards at once for calculations?
I have a former mining rig with six GTX 1060 video cards. The plan now is to use it to compute face descriptors with dlib. The OS is Ubuntu 18.04, though installing Windows is also an option. The question is how to harness the power of all the cards at once for dlib. As much detail as possible would be appreciated. After all, mining software manages to use all the cards at the same time.
I solved it as follows: I run a separate copy of the script for each video card. Before starting each copy, in the console: export CUDA_VISIBLE_DEVICES="0", where 0 is the card's index, then python script.py.
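A minimal sketch of that launcher approach, assuming a worker script named script.py and six cards numbered 0 through 5 (both taken from the setup described above):

    import os
    import subprocess

    NUM_GPUS = 6  # six 1060 cards, per the question

    # Launch one copy of the worker script per card. Each child
    # process sees exactly one GPU, because CUDA_VISIBLE_DEVICES
    # hides all the others from the CUDA runtime.
    procs = []
    for gpu in range(NUM_GPUS):
        env = os.environ.copy()
        env["CUDA_VISIBLE_DEVICES"] = str(gpu)
        procs.append(subprocess.Popen(["python", "script.py"], env=env))

    # Wait for all workers to finish.
    for p in procs:
        p.wait()

Inside each child, the single visible card always appears as device 0, so script.py itself needs no per-GPU logic.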
NVIDIA SLI tops out at three cards, though maybe something has changed since then.
Start by building dlib with CUDA support. Then look into multi-GPU support in CUDA; it is either available out of the box or via something like NCCL.
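To verify the build, dlib's Python bindings expose a compile-time flag and a device count; a quick check (assuming dlib was built from source with CUDA enabled):

    import dlib

    # True only when dlib was compiled with CUDA support.
    print("CUDA build:", dlib.DLIB_USE_CUDA)

    if dlib.DLIB_USE_CUDA:
        # Should report all six cards if the driver sees them.
        print("GPUs visible to dlib:", dlib.cuda.get_num_devices())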
A quick search suggests that people have indeed run into this problem and have not found an elegant solution: https://github.com/davisking/dlib/issues/1482
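Absent a built-in multi-GPU mode, one workaround (a sketch, not dlib's official recipe) is to fan the work out to one process per card and pin each worker with dlib.cuda.set_device; the model file and image folder below are placeholders:

    import glob
    import multiprocessing as mp

    import dlib

    NUM_GPUS = 6

    def worker(gpu_id, paths):
        # Pin this process to one card before any CUDA work happens.
        dlib.cuda.set_device(gpu_id)
        # mmod_human_face_detector.dat is dlib's published CNN face
        # detector model; substitute whatever model you actually use.
        detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")
        for path in paths:
            img = dlib.load_rgb_image(path)
            print(gpu_id, path, len(detector(img, 1)))

    if __name__ == "__main__":
        # "spawn" avoids forking an already-initialized CUDA context.
        mp.set_start_method("spawn")
        images = sorted(glob.glob("faces/*.jpg"))  # hypothetical folder
        # Deal images round-robin so each card gets an even share.
        chunks = [images[i::NUM_GPUS] for i in range(NUM_GPUS)]
        procs = [mp.Process(target=worker, args=(i, chunks[i]))
                 for i in range(NUM_GPUS)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()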