RuntimeError: CUDA out of memory — how can I make this code run on my graphics card?
I'm trying to run OpenAI Jukebox, but my graphics card can't complete the task (GPU with 16384 MB of VRAM).
Full error text:
RuntimeError: CUDA out of memory. Tried to allocate 44.00 MiB (GPU 0; 15.90 GiB total capacity; 14.80 GiB already allocated; 43.62 MiB free; 15.04 GiB reserved in total by PyTorch)
How can I make this code run on such a weak card?
The code:
# Imports assumed from the standard Jukebox sample notebook
import torch
from jukebox.hparams import Hyperparams, setup_hparams
from jukebox.make_models import MODELS, make_vqvae, make_prior
from jukebox.utils.dist_utils import setup_dist_from_mpi

rank, local_rank, device = setup_dist_from_mpi()

model = "5b_lyrics"  # or "1b_lyrics"
hps = Hyperparams()
hps.sr = 44100
hps.n_samples = 3 if model == '5b_lyrics' else 8
hps.name = 'samples'
chunk_size = 16 if model == "5b_lyrics" else 32
max_batch_size = 3 if model == "5b_lyrics" else 16
hps.levels = 3
hps.hop_fraction = [.5, .5, .125]

# Build the VQ-VAE and the top-level prior on the GPU — this is where
# the allocation fails on a 16 GB card with the 5b_lyrics model
vqvae, *priors = MODELS[model]
vqvae = make_vqvae(setup_hparams(vqvae, dict(sample_length=1048576)), device)
top_prior = make_prior(setup_hparams(priors[-1], dict()), vqvae, device)
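The usual remedies on a 16 GB card are to switch to the smaller `1b_lyrics` model and to lower `n_samples`/`max_batch_size`, clearing PyTorch's allocator cache before building the models. A minimal sketch (the exact values here are assumptions, not tested recommendations):

```python
import torch

# Sketch: memory-friendlier settings mirroring the names in the snippet above.
model = "1b_lyrics"   # the 5b_lyrics prior alone barely fits in 16 GB
n_samples = 2         # fewer samples resident in GPU memory at once
max_batch_size = 2

# Release cached allocator blocks back to the driver before loading models,
# then check how much memory is actually in use.
if torch.cuda.is_available():
    torch.cuda.empty_cache()
    print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.1f} MiB")
    print(f"reserved:  {torch.cuda.memory_reserved() / 2**20:.1f} MiB")
```

Note that `empty_cache()` only returns *cached* blocks; memory that is genuinely allocated by live tensors (the 14.80 GiB in the error message) can only be freed by dropping references to them.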
I ran into a similar GPU memory overflow, but in my case it didn't happen immediately — only after processing several batches. So the cause wasn't the size of the input tensor: at one stage I was saving metrics as scalar tensors (still attached to the GPU and the autograd graph) and only extracting their values on the CPU later, so memory grew with every batch.
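The leak described above can be reproduced with a tiny loop — this sketch runs on CPU, but on a GPU each kept tensor also pins its whole autograd graph in device memory; calling `.item()` at save time avoids it:

```python
import torch

# Simulated training loop: accumulating loss *tensors* keeps every
# batch's autograd graph alive, so memory grows with each iteration.
model = torch.nn.Linear(10, 1)
losses_bad = []
losses_good = []

for _ in range(3):
    x = torch.randn(4, 10)
    loss = model(x).pow(2).mean()
    losses_bad.append(loss)          # leak: tensor still holds the graph
    losses_good.append(loss.item())  # fix: a plain Python float, no graph
```

`loss.item()` (or `loss.detach().cpu()` for non-scalar metrics) breaks the reference to the graph immediately, instead of deferring the CPU transfer until after many batches have accumulated.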