C++ / C#
dandropov95, 2018-08-16 21:51:28

Why does printf produce the wrong output?

In S. Prata's book on the C programming language there is a code example showing that in certain cases, even with the correct specifier, the result is not what you would originally expect.
The general logic seems clear, but some points of the explanation escape me. Here is the example:

#define _CRT_SECURE_NO_WARNINGS

#include <stdio.h>
#include <conio.h>

int main(void)
{
  float a = 3.0;
  double b = 3.0;
  long c = 2000000000;
  long d = 1234567890;

  printf("%ld %ld %ld %ld", a, b, c, d);

  _getch();

  return 0;
}

The results are then shown, and even after the explanation it is not entirely clear where they come from. The book says that the argument values are written to the stack according to their types, not according to the specifiers. So "a" (64 bits, since a float is passed as a double) is written to the stack first, then "b" (64 bits), then "c" (32 bits) and finally "d" (32 bits). When printing, however, the function is guided by the specifiers (here %ld everywhere, i.e. 32 bits each) as it reads the stack.

This is where I get lost. A stack is read from the end, yet the specifiers are listed in the same order in which the arguments were pushed. When the first specifier is encountered it should be matched with the first argument, but the last argument is the one that would be popped from the stack first. I don't understand what actually happens when the stack is read and the specifiers are filled in. (I sketch my mental model of this at the end of the question.)
(picture from the book: 5b75c4aa3ab02844925938.png)
The book's picture only adds to the confusion. The writing to the stack seems to be shown correctly, but the reading is shown starting from the wrong end, as if this were not a stack at all but a queue.
In my run, the floating-point numbers printed with %ld came out as zeros, while the long variables were printed as they should. (In the book the result is: 0, an unexpected number, 0, an unexpected number.)
Something doesn't add up: the explanation in the book, the result in the book, and my own results are all different.
Please explain how this example works, preferably with a graphical representation of memory.
P.S. Different compilers produce different results, and even a different order of values: in one case the long values are printed first and then the wrong double values; in the other case it is the opposite.
P.P.S. I have read that the standard does not define this behavior, but I would still like to understand how compilers actually handle this situation.
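
To check my understanding, here is a rough model of what the book seems to describe, using a flat byte buffer instead of the real stack. It is only a sketch: it assumes a 32-bit, stack-based calling convention, a 32-bit long and little-endian byte order, and the buffer layout is my own illustration, not what any particular compiler is guaranteed to do.

#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
  float a = 3.0f;
  double b = 3.0;
  long c = 2000000000;
  long d = 1234567890;

  /* Write the arguments into a buffer by their own types,
     with float promoted to double (default argument promotion). */
  unsigned char args[24];
  double pa = a;
  int32_t c32 = (int32_t)c, d32 = (int32_t)d;

  memcpy(args + 0, &pa, 8);
  memcpy(args + 8, &b, 8);
  memcpy(args + 16, &c32, 4);
  memcpy(args + 20, &d32, 4);

  /* Read the buffer back the way four %ld specifiers would on a
     system with 32-bit long: four consecutive 4-byte integers. */
  int32_t v[4];
  for (int i = 0; i < 4; ++i)
    memcpy(&v[i], args + 4 * i, 4);

  printf("%" PRId32 " %" PRId32 " %" PRId32 " %" PRId32 "\n",
         v[0], v[1], v[2], v[3]);
  return 0;
}

On a little-endian machine this sketch prints 0 1074266112 0 1074266112, which matches the kind of result the book describes, even though my real compilers print something else.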


1 answer
Mercury13, 2018-08-16
@Mercury13

I got: 0 1074266112 0 1074266112
Or, in hexadecimal: 0 40080000 0 40080000.
This happens for the following reasons.
1. float arguments are written to the stack as double.
2. x86 uses Intel (little-endian) byte order, so the least significant byte comes first in memory.
3. Floating-point numbers are stored without the leading mantissa bit (it is always 1), and in memory the bytes run mantissa, exponent, sign (Intel byte order).
4. For a number of the form x.xx…·2⁰ the stored exponent is 011…11.
3 = 1.10…0₂ · 2¹, so with the leading bit dropped the mantissa is 10…0,
and the stored exponent is 011…11 + 1 = 100…0 (the snippet below checks these fields).
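
A minimal check of those fields, assuming IEEE 754 doubles (the shifts and masks below are the standard 1/11/52-bit split):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
  double b = 3.0;
  uint64_t bits;

  /* Reinterpret the 8 bytes of the double as one 64-bit integer. */
  memcpy(&bits, &b, sizeof bits);

  printf("raw      = %016llx\n", (unsigned long long)bits);                  /* 4008000000000000 */
  printf("sign     = %llu\n", (unsigned long long)(bits >> 63));             /* 0 */
  printf("exponent = %03llx\n", (unsigned long long)((bits >> 52) & 0x7FF)); /* 400, i.e. 011…11 + 1 */
  printf("mantissa = %013llx\n", (unsigned long long)(bits & 0xFFFFFFFFFFFFFULL)); /* 8000000000000 */
  return 0;
}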
In memory (Intel byte order, lowest byte first), the double 3.0 looks like this:
• six zero bytes: the low part of the mantissa;
• 0000.1000: the lower nibble is the top of the mantissa, the upper nibble holds the low bits of the exponent;
• 0100.0000: the sign bit and the remaining seven exponent bits.
That is, 00.00.00.00.00.00.08.40.
Split that into two 4-byte chunks:
[00.00.00.00] [00.00.08.40]
Remembering that integers are also stored little-endian, reading each 4-byte chunk as an integer gives 0 and 40080000 (hex), i.e. 0 and 1074266112.
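
The same last step as a small sketch, assuming a little-endian machine and IEEE 754 doubles:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
  double b = 3.0;
  uint32_t lo, hi;

  /* Copy the two 4-byte halves of the double, lowest address first. */
  memcpy(&lo, (unsigned char *)&b, sizeof lo);
  memcpy(&hi, (unsigned char *)&b + 4, sizeof hi);

  printf("%u %u\n", lo, hi);   /* 0 1074266112 (on little-endian x86) */
  printf("%x %x\n", lo, hi);   /* 0 40080000 */
  return 0;
}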
