Loop of 100,000 iterations vs smart loop?
Imagine a loop of 100,000 iterations over an array. We need to perform an action only on those iterations where the i-th array cell is non-zero; in other words: if (Array[i] == 0) continue;
So: which will be faster — a plain loop of 100,000 iterations, or its analogue with an algorithm?
The essence of the algorithm: initially the iteration step is not one but ten thousand: 0, 10000, 20000, 30000, 40000, and so on. At each step we check whether Array[i] is filled. If it turns out to be empty, we have "overshot" the end of the filled region, but only by at most 10,000 iterations; so we set i -= 10000 and switch the step back to one.
Thus, with this check the first loop always performs 100,000 iterations, while the second performs roughly as many iterations as there are filled cells, plus up to 10,000 extra.
P.S. The method only pays off when far fewer than all 100,000 cells are filled — say, up to 30%.
Although, if you ask why the extra cells are needed at all, or if all 100,000 cells turn out to be filled, we still end up doing 100,000 iterations :) So this can be seen as a temporary crutch for when resources are scarce and so on.
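To make the discussion concrete, here is a minimal sketch of the scheme described above (my own illustration, with a hypothetical helper name findBoundary; it assumes the filled cells form one contiguous block at the start of the array and that 0 marks an empty cell):

```php
<?php
// Sketch of the question's scheme (assumption: all non-zero cells form
// one contiguous block at the start of the array, and 0 means "empty").

// Find the index of the first empty cell: jump by 10000 while cells are
// filled, back up once we overshoot, then scan by 1 to the exact edge.
// Cost: about (filled / 10000) + 10000 checks at most.
function findBoundary(array $arr): int
{
    $n = count($arr);

    $i = 0;
    while ($i < $n && $arr[$i] != 0) {  // coarse phase, step 10000
        $i += 10000;
    }

    $i = max(0, $i - 10000);            // we overshot by at most 10000

    while ($i < $n && $arr[$i] != 0) {  // fine phase, step 1
        $i++;
    }
    return $i;                          // first empty index (or $n)
}

// Usage: touch only the filled prefix instead of all 100000 cells.
$arr = array_merge(array_fill(0, 30000, 7), array_fill(0, 70000, 0));
$boundary = findBoundary($arr);
for ($i = 0; $i < $boundary; $i++) {
    // ... perform the action on $arr[$i] ...
}
```

Note that if the filled cells are scattered rather than contiguous, this boundary search is meaningless — which is what several answers below point out.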
To be honest, it seems like some kind of nonsense was written here.
There is the good old "bisection method" (binary search) — google it — if I interpreted your scribbles correctly.
P.S. In general, to be honest, there is no clear picture of what you tried to do and what you want to get.
What is known about filled cells? Do they form a continuous fragment? Is it pinned to the beginning of the array, or to the end, or does it have a minimum length?
If your algorithm is applied to an array in which cells from 10001 to 19999 are filled, it will not notice them and will say that the array is empty. Is that how it's supposed to be?
If a continuous section of length L (unknown in advance) is filled, then there is an easy way to find it in O(N/L) operations (and then process it in O(L)).
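That "easy way" might look something like this (a sketch under my own assumptions: 0 marks an empty cell, and findRun is a hypothetical helper name; the probe stride is halved until some probe is guaranteed to land inside the run, which costs about 2 + 4 + ... + 2N/L = O(N/L) probes in total):

```php
<?php
// Locate a contiguous non-zero run of unknown length L in O(N/L) probes.
// Probe every $stride-th cell; halve the stride each round. Once
// $stride <= L, some probe must land inside the run; then expand to the
// run's edges in O(L). Returns [first, last] indices, or null.
function findRun(array $arr): ?array
{
    $n = count($arr);
    for ($stride = max(1, intdiv($n, 2)); $stride >= 1; $stride = intdiv($stride, 2)) {
        for ($i = 0; $i < $n; $i += $stride) {
            if ($arr[$i] != 0) {
                $lo = $i;                                     // expand left
                while ($lo > 0 && $arr[$lo - 1] != 0) $lo--;
                $hi = $i;                                     // expand right
                while ($hi + 1 < $n && $arr[$hi + 1] != 0) $hi++;
                return [$lo, $hi];
            }
        }
    }
    return null; // the array contains no non-zero cells
}
```

For the example with cells 10001 to 19999 filled in a 100,000-cell array, the strides 50000 and 25000 miss, the stride 12500 hits at index 12500, and the expansion recovers the whole run.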
Something I did not understand at all.
On the first iteration we look at Array[10000], and if it != 0, what do we do next — go back or forward? Where do you get the information that there is no 0 in the first or any subsequent interval?
As I understand it, you are trying to implement something like classical dichotomy (binary search). But it seems to me that in your case it will not work, because you don't know exactly where the zeros are, and in any case you will have to go through all the cells and perform the action for each one. So don't sweat it and keep the plain loop.
There is no information about what kind of array, where it came from. To apply such an algorithm, you need to know about the patterns of information location in the array. You did not provide such information, so it is not possible to say whether your algorithm is effective. If the placement of data in the array is random, then exhaustive search will solve the problem.
Alternatively, you can try to sort the array by value, determine the position of the first non-zero element and start working from there. Or sort in descending order and traverse the array until the first zero element is met. But the performance of such a solution must be tested first, since performance is what worries you.
Let me interpret the author's stream of consciousness:
Given: ?
Solution: something.
Question: is this solution right?
The answer is worth about as much as the question — "what exactly are you trying to save?"
But curiosity won over laziness)
As far as I understand, the author needs to process non-empty array elements. Big array.
Remember the acronym KISS: go through all the elements in order and do not reinvent a wheel in which you will then spend two days hunting for bugs. This also minimizes the risk of being cursed by the person who inherits your code.
If the array arrives sorted, go through it from the filled edge.
If not, there is no point in sorting: you will in any case spend more time on the sort than on a single pass through the array.
If you can influence the filling of the array with values, do not insert empty ones, but only those that need to be processed.
In general, for normal optimization in any case, you need more input data.
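To illustrate the point about not inserting empty values (my own sketch, assuming you control how values arrive; addValue is a hypothetical helper): rather than a sparse 100,000-cell array where 0 means "empty", collect only the values that actually need processing:

```php
<?php
// Instead of a sparse 100000-cell array with 0 = empty, collect only
// the values that actually need processing as they arrive.
$toProcess = [];

function addValue(array &$toProcess, int $value): void
{
    if ($value != 0) {        // never store "empty" values
        $toProcess[] = $value;
    }
}

addValue($toProcess, 7);
addValue($toProcess, 0);      // ignored: nothing empty gets stored
addValue($toProcess, 42);

// The processing loop now touches exactly the filled values.
foreach ($toProcess as $v) {
    // ... perform the action on $v ...
}
```

With this layout the question of skipping empty cells disappears entirely: the loop length equals the number of values to process.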
100,000 iterations — are you serious?

$arr = \range(0, 100000);
$res = [];
$start = \microtime(true);
for ($i = 0; $i < 100000; ++$i) {
    if ($arr[$i] != 0) {
        $res[$i] = $arr[$i];
    }
}
$end = \round(\microtime(true) - $start, 2);
echo $end;