Depending on what is in the file, you can consider processing it as a stream.
This is actually easy to google:
(xml || json || other data format) streaming / stream parser / reader.
Streaming parsers are the standard solution for reading files that are larger than RAM.
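For XML, for example, PHP's built-in XMLReader walks the document node by node without loading the whole file. A minimal sketch, assuming the file name `dump.xml` and the element name `item` (both placeholders):

```php
<?php
// Sketch: stream-parse a large XML file with XMLReader.
// 'dump.xml' and the element name 'item' are illustrative placeholders.
$reader = new XMLReader();
$reader->open('dump.xml');

$doc = new DOMDocument();

while ($reader->read()) {
    // React only to each opening <item> element.
    if ($reader->nodeType === XMLReader::ELEMENT && $reader->name === 'item') {
        // Expand just this node into a small SimpleXML fragment and process it.
        $node = simplexml_import_dom($doc->importNode($reader->expand(), true));
        // ... handle $node (e.g. insert a row into the database) ...
    }
}

$reader->close();
```

Memory use stays roughly constant regardless of the file size, because only one element is materialized at a time.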
For non-standard data, the fallback is line-by-line reading, or reading in fixed-size chunks via fread (in that case the line, or the chunk, becomes your unit of work; otherwise the output will be a mess). If the file contains really strange data that is indivisible and spread across lines, then most likely you are out of luck, or you will have to write your own variant of stream reading and come up with anchor points for parsing and navigating through the file. Both basic approaches are sketched below.
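A minimal sketch of both approaches for plain data; the file names and the chunk size are arbitrary placeholders:

```php
<?php
// Line-by-line: memory use stays at roughly one line at a time.
$fh = fopen('big.log', 'rb');                  // placeholder file name
while (($line = fgets($fh)) !== false) {
    // ... process one line ...
}
fclose($fh);

// Fixed-size chunks via fread, when the data has no line structure.
$fh = fopen('big.bin', 'rb');                  // placeholder file name
while (!feof($fh)) {
    $chunk = fread($fh, 8192);                 // 8 KB per iteration, adjust as needed
    // ... process $chunk, buffering any record that is split across chunks ...
}
fclose($fh);
```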
I parsed a 25 GB XML dump into a database. The script had to run twice: the first pass inserted only the ids, and once everything was added I rebooted the server; the second pass then used the ids already in the database to fill in everything else. The hardware was a Core i7 with 24 GB of RAM. You can also try splitting the file into several pieces at regular boundaries so as not to break the structure; when splitting, write to a temporary file the line where the previous chunk ended, so that if the server goes down you can restart from the same place. A rough sketch of that resume trick follows.
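A hedged sketch of resuming after a restart by persisting the current offset; the file names and process_line() are hypothetical placeholders:

```php
<?php
// Sketch: resume processing after a crash/reboot by persisting the byte offset.
// 'progress.txt', 'dump.xml' and process_line() are hypothetical names.
$progressFile = 'progress.txt';
$offset = is_file($progressFile) ? (int) file_get_contents($progressFile) : 0;

$fh = fopen('dump.xml', 'rb');
fseek($fh, $offset);                               // jump to where we stopped last time

while (($line = fgets($fh)) !== false) {
    process_line($line);                           // e.g. insert into the database
    file_put_contents($progressFile, ftell($fh));  // remember the new position
    // (in practice you would persist the offset less often, e.g. every N lines)
}

fclose($fh);
```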
Yes, no problem: processed in parts, a file of any size can be handled. To stay within the limits both on execution time and on load, process it in fairly small parts with sufficient intervals in between. The simplest option is to output a bit of JS to the browser after each processed piece, which immediately (or after a delay) kicks off processing of the next one, like so:
// $strt holds the position in the file to continue from on the next request
echo "<script>
setTimeout(\"window.location.href='another.php?s=$strt';\", 1000);
</script>";