How do I read a file (or pass data down a pipeline) in blocks of N lines in bash (awk, sed)?
I need to feed a multi-gigabyte CSV file to a DBMS via a web API, not line by line but in batches of at least 1000 lines; on the other hand, memory is limited, so the whole file cannot be loaded at once. Each block of CSV rows then needs to be wrapped in an API call.
Is this possible in bash (awk, sed), or will I have to use a general-purpose programming language?
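One way to sketch this in plain shell is GNU coreutils `split` with its `--filter` option: `split -l N` cuts the input into N-line chunks, and `--filter` runs a command once per chunk with the chunk on its stdin, so only one batch is in flight at a time and no temporary files accumulate. The API endpoint below is a placeholder assumption; here the filter is `wc -l` just to show the chunking, and the sample file is generated with `seq` to stand in for the real multi-gigabyte CSV.

```shell
# Stand-in for the real multi-gigabyte CSV (2500 rows for illustration)
seq 1 2500 > data.csv

# Cut data.csv into 1000-line blocks; --filter runs the command once per
# block, streaming that block on stdin. Here the filter just counts lines
# (prints 1000, 1000, 500); for the real task the filter would be an API
# call on a hypothetical endpoint, e.g.:
#   curl -s -X POST --data-binary @- https://example.com/api/rows
split -l 1000 --filter='wc -l' data.csv
```

`--filter` is a GNU extension, so this needs GNU coreutils (standard on Linux; on macOS install `gsplit` via coreutils). A portable alternative in the same spirit is `xargs -L 1000` when the per-batch command accepts rows as arguments, or an awk loop that accumulates N lines into a buffer and pipes it to a command via `print | cmd; close(cmd)`.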