Why does the library grow when all sources are compiled as one file?
I decided to run an experiment: combine all the source files of a library into one file and build the library from that. In theory this should significantly speed up compilation and, possibly, give the optimizer more room to work.
I took a couple of libraries (about 20-40 source files each) and built each of them both the old way and the new way.
Environment: Linux (RHEL), g++ 3.4.6
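For context, a minimal sketch of what such a "unity build" translation unit can look like (the file and library names here are hypothetical, not from the actual libraries tested):

```cpp
// all.cpp -- hypothetical unity-build file: instead of compiling
// each .cpp separately, include them all into one translation unit
// and compile only this file into the library.
#include "parser.cpp"
#include "lexer.cpp"
#include "codegen.cpp"
// ... the remaining sources of the library

// Build commands (one big object instead of many small ones):
//   g++ -O2 -c all.cpp -o all.o
//   ar rcs libfoo.a all.o
```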
Compilation time both sped up and didn't speed up at the same time :-)
CPU time dropped significantly, roughly 4 times.
At the same time, since a per-file build parallelizes compilation very well (the machine has 4 cores), the difference in wall-clock time was small: real compilation time decreased by only about 10%.
I expect that on single-CPU machines the effect will be dramatic (provided the compiler has enough memory to cope with one huge source file).
Now about the size of the resulting library. I expected it to shrink, since there should be more room for optimization. In fact, the library grew by about 10% when built from one big source file. This is what I don't understand: I thought it would get smaller.
At the same time, if we look at the object files alone, their total size does drop noticeably: one large object file takes 1.5-2 times less space than the sum of the small ones, which is exactly what I expected.
Question: what can cause the library to grow?
And what tools can I point at the library to see where the size comes from (nm, objdump, etc.)?
So far my only suspicion is "more aggressive" inlining... or maybe something with templates?..
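To compare where the bytes went, commands along these lines are a reasonable start (the archive and object names are placeholders for whatever your build actually produces):

```shell
# Per-section sizes (.text, .data, .bss) of every object in the archive
size libfoo.a

# Symbols sorted by size, demangled, decimal radix; run this on both
# the unity-build archive and the per-file archive and diff the tails
nm --size-sort --demangle -t d libfoo.a | tail -20

# Section headers of the one big object file
objdump -h all.o
```

Comparing the `nm --size-sort` output of the two builds symbol by symbol usually shows quickly whether the growth comes from a few functions getting much bigger (inlining) or from many extra template instantiations appearing.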
Regards,
Andrey
Most likely the compiler simply inlined everything it could, and the code ended up fatter as a result.
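A sketch of the suspected effect, with hypothetical names: in a per-file build, callers in other translation units see only a declaration of `helper()`, so each call compiles to a single call instruction into one shared copy of the body. In a unity build the definition is visible everywhere, so the optimizer may expand the body at every call site.

```cpp
// util.cpp -- in a per-file build, other .cpp files see only
// "int helper(int);" and emit a plain call to this one copy.
int helper(int x) {
    int r = 0;
    for (int i = 0; i < x; ++i)  // non-trivial body
        r += i * i;
    return r;
}

// In a unity build this definition is visible to every caller, so
// with N call sites the compiler may emit N inlined copies of the
// loop instead of one copy plus N call instructions -- the total
// .text can easily grow even though per-object overhead shrinks.
```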