bash
zdravnik, 2018-11-07 09:38:13

Why does the loop break unexpectedly?

There is a script:

#!/bin/bash

file="/tmp/test.csv"    # 879,314 lines

for ((;;))
do

index=0
while read line; do
        array[$index]="$line"
        index=$(($index+1))
done < $file

for ((a=0; a < ${#array[*]}; a++))
do
echo "${array[$a]}" >> /tmp/test-cycle.csv
done

echo > /tmp/test-cycle.csv
done

The idea of the script: /tmp/test.csv is read and its lines are appended one by one to /tmp/test-cycle.csv; once the whole file has been copied, the output file is reset (echo > /tmp/test-cycle.csv) and the whole process repeats.
Even if I comment out the echo > /tmp/test-cycle.csv line, the re-reading loop still does not complete: if I watch /tmp/test-cycle.csv with tail, it reports "file truncated".
Note that my source file has 879,314 lines. With a smaller file of, say, 10k lines everything works like clockwork, but with the large file an unexpected truncation occurs, each time in a different place (roughly the same region, but never identical).
Why does this happen? Please help me do it right.

1 answer
Saboteur, 2018-11-07
@zdravnik

I suspect you're simply hitting the memory limit that the shell allocates for an array.
In ksh93, for example, it is about 4 MB by default. Offhand I can't say what the limit is in bash or how to check it.
But why build an array in this script at all? You can write each line straight to the output file:

while IFS= read -r line; do
  echo "$line" >> /tmp/test-cycle.csv
done < "$file"
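As a quick way to try the array-free approach, here is a self-contained sketch; the mktemp paths and three-line sample input are placeholders for illustration, not the asker's real data:

```shell
#!/bin/sh
# Stream a file line by line into another file without buffering
# it in a shell array (placeholder paths created with mktemp).
src=$(mktemp)
dst=$(mktemp)
printf 'line1\nline2\nline3\n' > "$src"   # sample input

# IFS= and -r keep leading whitespace and backslashes intact
while IFS= read -r line; do
    printf '%s\n' "$line" >> "$dst"
done < "$src"

# Arithmetic expansion strips any padding some wc builds emit
count=$(( $(wc -l < "$dst") ))
echo "copied $count lines"
rm -f "$src" "$dst"
```

Since nothing is held in memory between lines, this runs in constant memory regardless of how many lines the input file has.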
