Valentine, 2020-06-18 15:54:26
Python

Module dlib.cuda not found, but dlib.DLIB_USE_CUDA=True — is this normal?

I wanted to play around a little with face recognition, using face_recognition, dlib, and Python 3. But I'm not sure whether I've configured everything correctly to actually use the CUDA cores. So far I haven't run anything other than this example from GitHub, substituting my own photo:

import face_recognition
import cv2
import numpy as np

# Get a reference to webcam #0 (the default one)
video_capture = cv2.VideoCapture(0)

# Load a sample picture and learn how to recognize it.
obama_image = face_recognition.load_image_file("my_photo.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]

# Load a second sample picture and learn how to recognize it.
biden_image = face_recognition.load_image_file("biden.jpg")
biden_face_encoding = face_recognition.face_encodings(biden_image)[0]

# Create arrays of known face encodings and their names
known_face_encodings = [
    obama_face_encoding,
    biden_face_encoding
]
known_face_names = [
    "Barack Obama",
    "Joe Biden"
]

# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
process_this_frame = True

while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()
    if not ret:
        break

    # Resize frame of video to 1/4 size for faster face recognition processing
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_small_frame = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)

    # Only process every other frame of video to save time
    if process_this_frame:
        # Find all the faces and face encodings in the current frame of video
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

        face_names = []
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"

            # Or instead, use the known face with the smallest distance to the new face
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            if matches[best_match_index]:
                name = known_face_names[best_match_index]

            face_names.append(name)

    process_this_frame = not process_this_frame


    # Display the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4

        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)

        # Draw a label with a name below the face
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    # Display the resulting image
    cv2.imshow('Video', frame)

    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()
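For reference, the matching step in the loop above comes down to a Euclidean distance comparison. This is a minimal numpy-only sketch of what compare_faces and face_distance compute (assuming the library's default tolerance of 0.6; the toy 3-dimensional vectors stand in for dlib's 128-dimensional encodings):

```python
import numpy as np

def face_distance(known_encodings, encoding):
    # Euclidean distance between the candidate encoding and each known one
    return np.linalg.norm(np.asarray(known_encodings) - encoding, axis=1)

def compare_faces(known_encodings, encoding, tolerance=0.6):
    # A face "matches" when its distance is within the tolerance
    return [bool(d) for d in face_distance(known_encodings, encoding) <= tolerance]

# Toy 3-dimensional "encodings" instead of dlib's 128-dimensional ones
known = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])]
candidate = np.array([0.1, 0.0, 0.0])
print(compare_faces(known, candidate))                   # [True, False]
print(int(np.argmin(face_distance(known, candidate))))   # 0
```

The argmin step in the example then just picks whichever known face is closest, so a match is reported only when that closest face is also within the tolerance.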


Apparently dlib does use CUDA, because when I evaluate dlib.DLIB_USE_CUDA in the Python console I get True, and when I run nvidia-smi I get the following output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 446.14       Driver Version: 446.14       CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 105... WDDM  | 00000000:01:00.0  On |                  N/A |
|  0%   41C    P0    N/A / 120W |    552MiB /  4096MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU                  PID   Type   Process name                  GPU Memory |
|                                                                  Usage      |
|=============================================================================|
|    0                 1068      C   ...Files\Python37\python.exe    N/A      |
|    0                 1096    C+G   Insufficient Permissions        N/A      |
|    0                 2160    C+G   ...w5n1h2txyewy\SearchUI.exe    N/A      |
|    0                 3680    C+G   ...y\ShellExperienceHost.exe    N/A      |
|    0                 5936    C+G   ...es.TextInput.InputApp.exe    N/A      |
|    0                 7968    C+G   C:\Windows\explorer.exe         N/A      |
|    0                 8540    C+G   ...ty\Common7\IDE\devenv.exe    N/A      |
|    0                 8904    C+G   ...lPanel\SystemSettings.exe    N/A      |
|    0                 9568    C+G   ...ekyb3d8bbwe\YourPhone.exe    N/A      |
|    0                13280    C+G   ...ub.ThreadedWaitDialog.exe    N/A      |
+-----------------------------------------------------------------------------+
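As a quick sanity check of the dlib build itself, you can query dlib's own CUDA bindings rather than nvidia-smi. In CUDA-enabled builds, dlib exposes a cuda submodule with get_num_devices(); describe_cuda below is just an illustrative helper, not part of any library:

```python
def describe_cuda(dlib_module):
    """Summarize CUDA support for a dlib-like module (illustrative helper)."""
    if not getattr(dlib_module, "DLIB_USE_CUDA", False):
        return "dlib was built without CUDA support"
    # dlib.cuda only exists in CUDA-enabled builds of dlib
    n = dlib_module.cuda.get_num_devices()
    return f"dlib sees {n} CUDA device(s)"

try:
    import dlib
    print(describe_cuda(dlib))
except ImportError:
    print("dlib is not installed")
```

If DLIB_USE_CUDA is True but dlib.cuda cannot be imported as a standalone module, that alone doesn't prove a broken install — the submodule is accessed through the dlib package, not imported separately.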


At the same time, my video card's load never goes above 1-2% while the code is running, and when I try to import dlib.cuda I get an error. Does all this mean I installed something incorrectly, or is everything okay?
