GPGPU
Eugene Lerner, 2021-10-28 17:25:24

Do Neural Networks Use Reduced Bit Calculations?

A bit depth of 64, or even 32, is larger than neural networks need. For example, Google's TPU tensor processor computes at 8-bit precision.
The question really has two parts: 1) Do you know of programs that compute on a CPU or GPU at 8-bit precision? A 64-bit processor could do 8 such additions or multiplications at a time. In particular, it would be interesting to know whether 8-bit calculations are used in the most common packages, such as TensorFlow.
2) Do you know of programs where a neural network is first trained at 8-bit precision and the coefficients are then refined at 16 or 32 bits, something like a greedy algorithm?


2 answers
freeExec, 2021-10-28
@ehevnlem

The NVIDIA library only supports float (32-bit) and double (64-bit).
Of course, you could loop over bytes yourself, but why bother, when the accuracy would be nil?

Alexander Skusnov, 2021-10-28
@AlexSku

In fact, the hardware simply processes 4 32-bit float numbers at a time (a special 128-bit format, enabled when the corresponding flag is set).
typedef __m128 XMVECTOR;
When passing vectors as function parameters, FXMVECTOR, GXMVECTOR, HXMVECTOR, and CXMVECTOR are used, and the function itself must have the XM_CALLCONV calling convention.
