Does it make sense to (almost) always use short instead of int?
As far as I know, the short data type takes 2 bytes and int takes 4, and short can hold values up to about 32,000. For most variables that is more than enough, so it seems you could almost always use short instead of int.
But my tutorial uses the int data type almost everywhere. Maybe there is some catch that I don't know about.
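For reference, a minimal sketch that prints the actual sizes and limits (both are implementation-defined; the values in the question are typical for a mainstream desktop compiler where short is 2 bytes and int is 4):

```cpp
#include <climits>
#include <iostream>

int main() {
    // Print the platform's actual sizes and ranges rather than assuming them.
    std::cout << "sizeof(short) = " << sizeof(short) << " bytes, range "
              << SHRT_MIN << ".." << SHRT_MAX << '\n';
    std::cout << "sizeof(int)   = " << sizeof(int) << " bytes, range "
              << INT_MIN << ".." << INT_MAX << '\n';
}
```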
If the task is to write a highly optimized program, or one that stores tens or even hundreds of millions of integer values that fit in 1-2 bytes, then it might make sense; otherwise I don't see the point. Even 20 or so years ago, when computers had about 256 MB of memory, the standard recommendation was to use int everywhere.
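A rough illustration of the scale at which the 2-byte saving starts to matter (the count of 100 million elements is an assumption made up for this example):

```cpp
#include <cstddef>
#include <iostream>

int main() {
    const std::size_t n = 100000000; // 100 million values, an illustrative count
    std::cout << "array of short: ~" << n * sizeof(short) / (1024 * 1024) << " MB\n"
              << "array of int:   ~" << n * sizeof(int)   / (1024 * 1024) << " MB\n";
    // On a typical platform this prints ~190 MB vs ~381 MB: the saving only
    // shows up at this scale, not for a handful of loop counters.
}
```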
You can use short if you're not too lazy. You can also use unsigned short, which gives an even larger positive range.
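A small sketch of the ranges involved, using std::numeric_limits (the exact values are platform-dependent, but on mainstream compilers unsigned short goes up to 65535):

```cpp
#include <iostream>
#include <limits>

int main() {
    std::cout << "short:          " << std::numeric_limits<short>::min() << ".."
              << std::numeric_limits<short>::max() << '\n';          // typically -32768..32767
    std::cout << "unsigned short: " << std::numeric_limits<unsigned short>::min() << ".."
              << std::numeric_limits<unsigned short>::max() << '\n'; // typically 0..65535
}
```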
It is useful if you are writing firmware for something like an AVR microcontroller.
In new code, neither int nor short should be used: only int32_t, int64_t and the other fixed-width types from <cstdint>. According to the standard, int can be 16, 32 or 64 bits (and even that is a simplification).