Python
art1636203, 2017-09-08 19:35:28

How to add GPU usage to Python programs?

I started learning Python a couple of months ago, and there are many different scripts written in it, for example this one, which iterates over characters to search for the seed in AES-256: https://github.com/daedalus/misc/blob/master/crack...
As usual, it is written to use only CPU power by default.
The actual question is: how do you add the ability to use the GPU to such programs? Do you need to rewrite the code completely?
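For reference, the CPU-only structure such scripts share can be sketched roughly like this (a hypothetical stand-in: `hashlib.sha256` replaces AES-256, and the function name `crack_serial` is made up for illustration):

```python
import hashlib
import itertools
import string

def crack_serial(target_hex, alphabet=string.ascii_lowercase, length=3):
    """Serial brute force: test every candidate seed one at a time."""
    for chars in itertools.product(alphabet, repeat=length):
        seed = "".join(chars)
        if hashlib.sha256(seed.encode()).hexdigest() == target_hex:
            return seed
    return None

target = hashlib.sha256(b"abc").hexdigest()
print(crack_serial(target))  # -> abc, found on a single CPU core
```

Each candidate test is independent of the others, which is exactly what makes this kind of loop a candidate for GPU offloading.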


2 answers
SolidMinus, 2017-09-08
@art1636203

PyCUDA https://documen.tician.de/pycuda/
No, not the whole thing, only the part that searches for the seed. That is the part you need to parallelize across cores. How exactly it is done in Python I don't know; I use CUDA from C, but I'm sure it's even easier with PyCUDA.
P.S. For this to work you need an Nvidia video card with the CUDA toolkit installed.
UPD: I googled it. Python does not support the CUDA API directly. PyCUDA wraps Nvidia's C interface for parallel programming. You will have to rewrite all the brute-force code in these terms and execute it through PyCUDA. The documentation has an example of such an implementation. So the verdict: if you want to use the GPU, you will have to learn how to code for the GPU.
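To give an idea of the rewrite this answer describes: on the GPU, each thread checks one candidate, so the serial loop is split into an independent per-candidate function (the kernel body) and a dispatcher that launches one thread per candidate. A plain-Python sketch of that decomposition (in PyCUDA the kernel body would actually be written in CUDA C inside a `SourceModule`; the names and the `hashlib.sha256` stand-in here are illustrative):

```python
import hashlib

def kernel_body(thread_id, candidates, target_hex, results):
    """What one GPU thread would do: test exactly one candidate seed."""
    if hashlib.sha256(candidates[thread_id].encode()).hexdigest() == target_hex:
        results[thread_id] = True

def launch(candidates, target_hex):
    """Stand-in for a kernel launch: on a GPU these 'threads' run at once."""
    results = [False] * len(candidates)
    for tid in range(len(candidates)):  # serial here, parallel on the GPU
        kernel_body(tid, candidates, target_hex, results)
    return [candidates[i] for i, hit in enumerate(results) if hit]

target = hashlib.sha256(b"cab").hexdigest()
print(launch(["abc", "cab", "bca"], target))  # -> ['cab']
```

The point of the restructuring is that `kernel_body` has no loop and no shared mutable state beyond its own slot in `results`, so thousands of copies can run simultaneously.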

Anton B, 2017-09-10
@nizzit

Here is an article on the subject. Besides PyCUDA there is also Numba, a JIT compiler that can run your code on both the CPU and the GPU. It will be slower than PyCUDA, but you won't have to rewrite the code.
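A minimal sketch of the Numba approach this answer mentions: an unchanged Python loop is decorated with `@njit` so it gets JIT-compiled (the function name is illustrative, and the sketch falls back to plain Python when Numba is not installed so it stays runnable):

```python
try:
    from numba import njit            # third-party JIT compiler: pip install numba
except ImportError:                   # fallback so the sketch runs without Numba
    def njit(func):
        return func

@njit
def count_hits(values, target):
    # The loop body is ordinary Python; Numba compiles it to machine
    # code, and the same idea extends to the GPU via numba.cuda.jit.
    n = 0
    for v in values:
        if v == target:
            n += 1
    return n

print(count_hits((1, 2, 3, 2, 2), 2))  # -> 3
```

This is why the answer says you won't have to rewrite the code: the decorator changes how the loop is executed, not what it says.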
