How can I open a 140GB SQL dump in Ubuntu?
And in general, will the hardware handle it: an average laptop, 2GB of RAM, 2 cores. There is plenty of free disk space.
Ideally it would be imported into MySQL, but I'm afraid the laptop won't survive that process, so at the very least I'd like to open the dump and find the text I need in it :)
The main question is: why? If you just need to find some text, you don't need to open the file at all. Describe the actual problem. Otherwise, you can try vim (the main thing is to run it without plugins, it will be much faster).
Uninstall Ubuntu, install Windows and open it with Notepad: "the main thing is to run it without plugins, it will be faster" :D
Spin up a MySQL server, feed the dump to it, go through it with phpMyAdmin, delete the databases that are 100% unneeded, and dump what's left.
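A rough sketch of that workflow as commands. This assumes a running MySQL server; the credentials, the `junk_db` name, and the paths are all placeholders, not anything from the question:

```shell
# Feed the whole dump to the server (takes a long time on 140GB)
mysql -u root -p < /path/to/dump.sql
# See what the dump actually contained
mysql -u root -p -e 'SHOW DATABASES;'
# Drop a database you are sure you do not need ("junk_db" is hypothetical)
mysql -u root -p -e 'DROP DATABASE junk_db;'
# Dump what remains into a much smaller file
mysqldump -u root -p --all-databases > /path/to/trimmed.sql
```

The `DROP DATABASE` step can also be done by clicking through phpMyAdmin, as the answer suggests; the command-line form is just the same operation without the web UI.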
140GB? Whew, I'm afraid to ask what kind of dump this is.
The first thing that comes to mind is to split the dump into two or three dozen smaller parts; a quick googling shows how to do it on Linux:
split --bytes=5G /path/to/large/file /path/to/output/file/prefix
where 5G is the size of each part (about 28 parts for a 140GB file; the 1m from the googled example would produce millions of 1-megabyte files). Then I think you'll figure it out.
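One caveat with plain `--bytes`: it can cut an SQL statement in half at a chunk boundary. `split --line-bytes` keeps whole lines together. A small runnable sketch of the idea (the sizes and the `seq`-generated stand-in file are illustrative; for the real dump you'd use something like `--line-bytes=5G`):

```shell
# Make a small stand-in for the big dump
tmp=$(mktemp -d)
seq 1 200000 > "$tmp/dump.sql"
# --line-bytes splits only on line boundaries, so no line is cut in half
split --line-bytes=200K "$tmp/dump.sql" "$tmp/dump.part."
ls "$tmp" | grep -c 'part'        # number of parts produced
# The parts concatenate back to the exact original
cat "$tmp"/dump.part.* > "$tmp/restored.sql"
cmp -s "$tmp/dump.sql" "$tmp/restored.sql" && echo "identical"
```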
To open: nano or vi.
To import, use mysql's source command:
$ mysql -u user -ppass
mysql> source /path/to/file
Will it take a long time? Yes, but the machine will survive it, unlike trying to open the whole dump at once.
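The same import can be done without the interactive client, which is easier to leave running unattended. This needs a live MySQL server; the user, database name, and paths are placeholders:

```shell
# Non-interactive equivalent of "source /path/to/file"
mysql -u user -p mydb < /path/to/dump.sql
# If pv is installed, piping through it adds a progress bar,
# which helps a lot on a many-hour 140GB import
pv /path/to/dump.sql | mysql -u user -p mydb
```

If the dump already contains `CREATE DATABASE`/`USE` statements, the `mydb` argument can be dropped.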
Ahh demon! Bite him back! :)
140GB oh****! :)
"and then I don't even know how to search for smaller ones :)"
cat /path/to/directory_with_many_small_files/* | grep "pattern"
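Worth noting that grep streams its input rather than loading it, so it works directly on the 140GB file with 2GB of RAM, no splitting or "opening" needed. A small runnable demo (the stand-in dump contents are made up):

```shell
# Stand-in for the dump
tmp=$(mktemp -d)
printf 'CREATE TABLE t;\nINSERT INTO t VALUES ("desired text");\n' > "$tmp/dump.sql"
# -n prints the matching line number
grep -n "desired text" "$tmp/dump.sql"
# → 2:INSERT INTO t VALUES ("desired text");
# Or search a whole directory of split parts without the useless cat
grep -rn "desired text" "$tmp"
```

`grep -m 5` is also handy here: it stops after the first five matches instead of scanning all 140GB.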