Gipard Valerievich, 2016-08-25 19:25:31
Programming

How expensive is getting the current time (microseconds since 1970) in modern languages?

Imagine that we need to get nanoseconds or microseconds very often through a standard language function.
How costly is this operation?
Is it more expensive to get nanoseconds than microseconds or milliseconds?
Or are these operations cheap because the values are available to the language runtime or the OS at any moment, without any calls?
That is the whole question; I'm counting on your experience, and perhaps someone has actually benchmarked this in one of the languages.



1 answer
nirvimel, 2016-08-25
@zoceb

On most (all?) modern OSes, getting the time is a kernel function. So for a user process in userspace it takes exactly one call into the kernel, i.e. a syscall, which is a software interrupt. On the kernel side the code is usually trivial: it reads the variable that stores the current time (this variable is incremented from the timer interrupt handler) and returns the value via sysret, i.e. a return from the interrupt. The main cost therefore comes down to executing syscall/sysret and saving/restoring the user process context on entering and leaving the kernel.
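As a rough illustration (my sketch, not part of the original answer): a minimal C program that times a large number of clock_gettime() calls to estimate the per-call cost described above. The loop count and the choice of CLOCK_REALTIME are arbitrary; on older glibc you may need to link with -lrt.

```c
/* Sketch: estimate the average cost of one clock_gettime() call. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    const long N = 1000000;              /* number of calls to average over */
    struct timespec start, end, t;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < N; i++)
        clock_gettime(CLOCK_REALTIME, &t);   /* the call being measured */
    clock_gettime(CLOCK_MONOTONIC, &end);

    long long total_ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                       + (end.tv_nsec - start.tv_nsec);
    printf("average cost per call: ~%lld ns\n", total_ns / N);
    return 0;
}
```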
The call cost is practically independent of the units requested (nano/micro/milli). Converting between units happens on the user-process side; the libraries and runtime of the particular language are responsible for it, but those calculations are only a few machine instructions and make no significant contribution to the total call time.
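For example, here is what that conversion amounts to, assuming a POSIX struct timespec as the starting point (again my sketch, not from the answer): plain integer arithmetic.

```c
/* Sketch: convert a CLOCK_REALTIME reading into nano/micro/milliseconds since 1970. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);

    long long ns = ts.tv_sec * 1000000000LL + ts.tv_nsec;
    long long us = ns / 1000;        /* microseconds since the epoch */
    long long ms = us / 1000;        /* milliseconds since the epoch */

    printf("%lld ns  %lld us  %lld ms\n", ns, us, ms);
    return 0;
}
```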
In the Linux kernel there are several timers: high and regular resolution, and clocks that can be set backwards versus monotonic ones, but the time is always read through the single clock_gettime function, however differently it may be wrapped in different libraries.
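A sketch of that point, assuming glibc on Linux: the different kernel clocks are selected purely by the clockid argument to the same clock_gettime call (the _COARSE ids are Linux-specific and trade resolution for speed).

```c
/* Sketch: one entry point, several clocks, chosen by clockid_t. */
#define _GNU_SOURCE          /* make the Linux-specific clock ids visible */
#include <stdio.h>
#include <time.h>

static void show(const char *name, clockid_t id)
{
    struct timespec ts;
    if (clock_gettime(id, &ts) == 0)
        printf("%-24s %lld.%09ld\n", name, (long long)ts.tv_sec, ts.tv_nsec);
}

int main(void)
{
    show("CLOCK_REALTIME", CLOCK_REALTIME);                 /* wall clock, can be set back */
    show("CLOCK_MONOTONIC", CLOCK_MONOTONIC);               /* never goes backwards */
    show("CLOCK_REALTIME_COARSE", CLOCK_REALTIME_COARSE);   /* lower resolution, cheaper */
    show("CLOCK_MONOTONIC_COARSE", CLOCK_MONOTONIC_COARSE);
    return 0;
}
```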
On Windows, the standard GetSystemTime and GetTickCount mechanisms only return time with millisecond precision, and the high-resolution timer QueryPerformanceCounter is only for measuring time intervals; its absolute value is meaningless.
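A sketch for the Windows side (my illustration, assuming a Win32 toolchain): GetSystemTime exposes only a millisecond field, GetTickCount returns milliseconds since boot, and QueryPerformanceCounter readings are meaningful only as a difference.

```c
/* Sketch: the three Windows mechanisms mentioned above. */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    /* Millisecond-precision wall clock. */
    SYSTEMTIME st;
    GetSystemTime(&st);
    printf("UTC %02u:%02u:%02u.%03u\n",
           st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);

    /* Milliseconds since boot, also millisecond precision. */
    printf("GetTickCount: %lu ms\n", (unsigned long)GetTickCount());

    /* High-resolution counter: only differences are meaningful. */
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    Sleep(10);                            /* something to measure */
    QueryPerformanceCounter(&t1);
    printf("elapsed: %.1f us\n",
           (t1.QuadPart - t0.QuadPart) * 1e6 / (double)freq.QuadPart);
    return 0;
}
```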
Historical background: in x86 real mode, the BIOS programs the hardware timer (by default; it can be reconfigured) to tick every 55 ms, i.e. at 18.2 Hz, and on each tick the interrupt handler increments a four-byte counter at absolute address 0:046C. So a user program (the term "process" does not quite apply here) can get the time value instantly, simply by reading that value from memory, without any calls.
It is worth noting that this functionality has nothing to do with DOS or any OS at all: it is built into the BIOS of any x86-compatible (even modern) computer and works every time the machine boots, until the processor is switched into protected mode during boot by the kernel of a modern OS.
