How are arithmetic operations implemented?
Hello.
Preamble: I need to develop a fast algorithm, or improve existing ones by making them faster.
The question itself: I don't have a clear understanding of how division, multiplication, root extraction and other operations are actually implemented, or which of them are faster and which are slower. I never thought about it before and want to fill the gap. Please recommend literature and point me at where to dig.
Thank you!
The implementation is platform-dependent. You can see what the operations you are interested in compile to under a particular compiler by using a disassembler (Visual Studio has one built in; on Unix there is objdump -D > somefile.asm).
In general, optimizing at the level of arithmetic instructions should be the last resort. First, make sure there is no algorithm with better asymptotic complexity (counting each arithmetic operation as costing 1).
When processing homogeneous data, you can get a speedup from vector operations in the SSE* processor extensions, or on the GPU.
Read Charles Petzold's "Code" ("Code: The Hidden Language of Computer Hardware and Software").
In roughly 250 pages you will find answers to all of these questions.
In short:
All operations are performed at the bit level.
Addition - the basic operation through which all the others are implemented.
A standard adder is built from a set of logic gates: AND, OR, NOR, XOR. Addition proceeds bit by bit, from the least significant bit to the most significant.
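As a software sketch (not the hardware itself, and the function names are my own), here is a one-bit full adder built from the same XOR/AND/OR gates, chained into a ripple-carry adder that works low bit to high bit:

```python
def full_adder(a, b, carry_in):
    """One-bit full adder built from XOR, AND and OR gates."""
    s = a ^ b ^ carry_in                         # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry bit
    return s, carry_out

def ripple_add(x, y, width=8):
    """Add two unsigned integers bit by bit, low order to high order."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result  # carry out of `width` bits is discarded (overflow)
```

Real hardware evaluates all bits in parallel with carry-lookahead tricks; this loop only models the logic, not the timing.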
Subtraction - implemented via the two's complement (bit-level inversion plus 1) followed by addition. Roughly speaking, 5 - 2 == 5 + (-2).
Multiplication - repeated addition (in practice, shift-and-add).
Division - repeated subtraction (in practice, shift-and-subtract).
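To make the "repeated addition / repeated subtraction" idea concrete, here is a sketch of shift-and-add multiplication and shift-and-subtract (long) division for non-negative integers; the function names are mine:

```python
def multiply(a, b):
    """Shift-and-add: add a shifted copy of `a` for each set bit of `b`."""
    result = 0
    while b:
        if b & 1:
            result += a
        a <<= 1   # the next bit of b weighs twice as much
        b >>= 1
    return result

def divide(a, b):
    """Binary long division; returns (quotient, remainder)."""
    quotient = 0
    for i in range(a.bit_length() - 1, -1, -1):
        if (b << i) <= a:        # does b * 2**i fit into what's left?
            a -= b << i
            quotient |= 1 << i
    return quotient, a
```

Note that both loops run once per bit, not once per unit: naive "add b times" repetition would be exponentially slower for large operands.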
In terms of speed, addition and subtraction are logically the fastest; multiplication is somewhat slower, and division is usually the slowest of the four on modern CPUs.
Any books (often from the Soviet era) on the mathematical foundations of computers, down to circuit design (decoders, registers, binary codes and number systems are described there in great detail), plus any basic computer-science textbook (operations in binary codes, complement codes, number systems, how information is represented in a computer). From the books of our compatriots and of Western and Eastern colleagues of that period you can grasp, very accessibly, the entire low-level layer of computer operation - regardless of platform or architecture, since this is the foundation.
Take the processor specifications - everything is written there, and the algorithms are always described. Alternatively, look for fast implementations for the Z80: it cannot multiply or divide out of the box, and while developing for the Z80 I found fast implementations of both.