C++ / C#
Vladimir Korshunov, 2021-08-10 17:21:01

How to simulate the correct speed of various sorts with usleep?

I am writing a visualization of various sorting algorithms in C++ with SFML. I want the sorting to run slowly enough to be visible, and I would also like the visualization to reflect the relative running times of different sorts: for example, insertion sort and selection sort should finish in roughly the same time on the same array. I slow the sorts down artificially; here is an example:

// Requires #include <unistd.h> for usleep().
void selection_sort(elem a[], int size)
{
    for (int i = 0; i < size; i++)
    {
        int min_index = i;   usleep(90);
        int min = a[i].value; usleep(90);
        for (int j = i + 1; j < size; j++)
        {
            usleep(90);
            if (a[j].value < min)
            {
                min = a[j].value; usleep(90);
                min_index = j;    usleep(90);
            }
        }
        swap_elements(a, i, min_index); usleep(90);
    }
}

In the algorithms book I am currently learning from, running time is estimated by noting that, say, line n, which writes one number to the array, and line n + 1, which adds two numbers, may take different amounts of time to execute. I simply pause the sort for a fixed interval after every step. Even so, the running times of sorts with the same complexity differ, sometimes by 30 percent. Are there other ways to slow down the sorting? If not, how can I slow it down more intelligently, so that the visualization shows the real difference between the sorts?
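As a side note, `usleep()` is POSIX-only and marked obsolescent; since C++11 the portable way to get the same per-step delay is `std::this_thread::sleep_for`. A minimal sketch (the helper name `delay_us` is my own):

```cpp
#include <chrono>
#include <thread>

// Portable drop-in replacement for usleep(us):
// sleeps for at least `us` microseconds.
inline void delay_us(long long us)
{
    std::this_thread::sleep_for(std::chrono::microseconds(us));
}
```

Any `usleep(90)` in the sort can then be replaced with `delay_us(90)` without touching the rest of the code.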


1 answer(s)
Wataru, 2021-08-10
@GashhLab

You have stumbled upon an important property of asymptotic analysis. O(n^2) means that, for sufficiently large n, the running time is proportional to n^2 up to a constant factor. That constant can be anything: 0.5 or 10,000,000.
Compare these two loops:

// n(n-1)/2 increments.
for (i = 0; i < n; ++i) {
    for (j = i + 1; j < n; j++) {
        cnt1++;
    }
}

// n^2 increments.
for (i = 0; i < n; ++i) {
    for (j = 0; j < n; j++) {
        cnt2++;
    }
}

Both loops are O(n^2), but the first does about half as many operations.
The bottom line: even though two algorithms have the same asymptotic complexity, they can run for different amounts of time. Things get especially odd for small n: an O(n log n) algorithm can easily be slower than an O(n^2) one, which is why real-world sort implementations fall back to simple quadratic algorithms for small arrays.

If you only want to show the asymptotics, take a larger n; those 30% will then be invisible next to the tenfold slowdown of O(n^2) relative to O(n log n).

Alternatively, normalize your sorts: artificially speed some up and slow others down until you get exactly the picture you want. It is somewhat of a cheat, but if that is what you are after, give each sort a time budget of C\*n^2 or C\*n\*log n milliseconds. First run each algorithm without visualization and count how many slow-down points it hits (the same code you have, but with sleep_count++ instead of usleep()). Then compute the delay factor: how long each usleep must sleep so that sleep_count sleeps add up to the given budget. Finally, run each sort with its computed usleep parameter.
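The calibration step described above can be sketched like this for the asker's selection sort (the function names and the way the budget is chosen are my own, for illustration):

```cpp
#include <utility>
#include <vector>

// Dry run: the same selection sort, but instead of usleep() we count how
// many sleep points would be hit for this particular input. The array is
// taken by value because the dry run actually sorts it.
long long count_sleep_points(std::vector<int> a)
{
    long long sleep_count = 0;
    int n = (int)a.size();
    for (int i = 0; i < n; i++) {
        int min_index = i; sleep_count++;
        int min = a[i];    sleep_count++;
        for (int j = i; j < n; j++) {
            sleep_count++;
            if (a[j] < min) {
                min = a[j];    sleep_count++;
                min_index = j; sleep_count++;
            }
        }
        std::swap(a[i], a[min_index]); sleep_count++;
    }
    return sleep_count;
}

// Given the input array and a total time budget in microseconds
// (e.g. C * n * n for a quadratic sort), return the value to pass
// to usleep() at every sleep point during the visualized run.
double calibrate_usleep(const std::vector<int>& a, double budget_us)
{
    return budget_us / count_sleep_points(a);
}
```

For example, `calibrate_usleep(a, 5.0 * n * n * 1000.0)` gives every quadratic sort the same 5·n² ms budget, so they finish at the same time regardless of their constant factors; an O(n log n) sort would get a C\*n\*log n budget instead.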
