Almost any algorithm can be slightly modified so that its best-case running time is drastically reduced. How?
I started reading Cormen. The book poses this question, but gives no answer. My guess is: "handle the best case separately (via an if)".
I'm probably wrong, which is why I'm asking here.
Yes: just handle some case separately by precomputing it. It doesn't have to be the best case, just any one case. As a result, that case becomes the best one, and its running time becomes linear (you still have to compare the input data against it).
This also answers why it is only "almost any" algorithm: the trick reduces the running time of one case to linear. So if the algorithm's best case is already no worse than linear, it cannot be improved this way.
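Here is a minimal Python sketch of the trick; the input, its precomputed answer, and the function names are all made up for the example:

```python
# A hardcoded input and its answer, precomputed offline.
SPECIAL_INPUT = [5, 3, 8, 1]
SPECIAL_ANSWER = [1, 3, 5, 8]

def slow_sort(arr):
    """Stand-in for any algorithm slower than linear time."""
    return sorted(arr)

def modified_sort(arr):
    # The equality check reads the whole input, so this (new best)
    # case runs in linear time.
    if arr == SPECIAL_INPUT:
        return list(SPECIAL_ANSWER)
    return slow_sort(arr)
```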
There are, however, special cases where you can answer after inspecting only part of the input. For example, in binary search over a sorted array you can first compare the sought element with the first two: if it lies strictly between them, it cannot occur anywhere in the array, and you have the answer immediately (see the sketch below).
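A sketch of that pre-check in Python, assuming a sorted list and using the standard bisect module for the general case:

```python
import bisect

def search_with_precheck(arr, x):
    """Binary search with an O(1) pre-check on the first two elements."""
    if len(arr) >= 2:
        if x == arr[0]:
            return 0
        if x == arr[1]:
            return 1
        if arr[0] < x < arr[1]:
            # A sorted array has nothing between its first two
            # elements, so we can answer without reading the rest.
            return -1
    i = bisect.bisect_left(arr, x)
    return i if i < len(arr) and arr[i] == x else -1
```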
But you will not speed up, say, an algorithm that sums all the numbers in an array or finds the minimum: every element has to be read, so its best case is already linear.
Since the best case is much rarer than the others, you can add a hardcoded condition for it right away, but this is a drop in the ocean: the overall performance gain will be negligible.
I think frequency heuristics with "accumulation of knowledge" would help more here. That is, the most frequent inputs are remembered together with their outputs and checked against new input, short-circuiting all of the redundant computation (or some part of it). The result is a dynamic auto-optimization of the algorithm "on the fly", so to speak (a sketch follows below).
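One way to read that idea is a self-tuning cache around the slow function. This is only a sketch; the wrapper class, its name, and the promotion threshold are all invented for illustration:

```python
from collections import Counter

class SelfTuningWrapper:
    """Remembers answers for inputs that keep recurring and
    short-circuits the expensive computation for them."""

    def __init__(self, func, promote_after=3):
        self.func = func              # the underlying slow algorithm
        self.promote_after = promote_after
        self.seen = Counter()         # how often each input occurred
        self.cache = {}               # answers for "hot" inputs

    def __call__(self, arr):
        key = tuple(arr)              # inputs must be hashable to cache
        if key in self.cache:
            # Best case: build the key and do one dict lookup --
            # linear in the input, as noted above.
            return self.cache[key]
        self.seen[key] += 1
        result = self.func(arr)
        if self.seen[key] >= self.promote_after:
            self.cache[key] = result  # promote a frequent input
        return result

# Usage: wrap any function of a sequence, e.g. finding the minimum.
fast_min = SelfTuningWrapper(min)
```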