MySQL
Mikhail Krasilnikov, 2012-09-17 18:29:48

A file that mirrors a MySQL DB dump?

Our programmers often need to save snapshots of the various sites hosted on our server (FreeBSD) into SVN. These snapshots should include the site files and a dump of the site's database (MySQL). The files are more or less straightforward, but making the dumps is a somewhat tedious task (which is why programmers often forget about them). So a silly idea was born: what if we could place a special file in the site's home folder (inaccessible via the web, of course) which, when read, would return a text dump of the database? Then a programmer downloading the site files from the server would get the dump along with them.
Searching for ready-made solutions (for example, a way to create files that invoke a script when read) turned up nothing (maybe I was searching for the wrong thing?). The next thought was to write my own device driver and put a symbolic link to it in each site's folder. But here a difficulty arises: there are many databases on the server, so how would the driver know which one to dump? Writing the database name to the device before reading is not an option, for obvious reasons. Creating a separate device for each database is not the most convenient solution either.
Does anyone have any bright ideas on how to solve the problem?
Update. Apparently this needs to be explained right here: this is not a production problem. It's more a matter of curiosity, i.e. is it even possible to do something like this? If it works out, good; if it works out elegantly, even better. So alternatives like "write a script that will do this from cron" are not interesting, because they are too boring.


6 answer(s)
Sergey, 2012-09-17
@seriyPS

You are doing something stupid there ...
But in general, what you need is a FIFO,
something like this:

DUMP_FILE=/path/to/fifo
mkfifo "$DUMP_FILE"
while true; do
    # blocks until someone opens the FIFO for reading, then streams a fresh dump
    mysqldump dbname > "$DUMP_FILE"
done

That is, mysqldump will only start emitting data when someone starts reading from /path/to/fifo. Since it runs in a loop, a fresh dump will be produced each time the file is read. But I'm not sure that mysqldump won't lock the database the moment it starts.
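For illustration, reading such a FIFO from the programmer's machine could look like this (a minimal sketch; the host name, path and output file are placeholders):

# opening the FIFO for reading unblocks mysqldump on the server side
ssh user@site-server 'cat /path/to/fifo' > site_db.sql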
But it would be better for you to just write a dump script and not deal with garbage.

egorinsk, 2012-09-17
@egorinsk

Attention, correct answer.
Instead of downloading files manually and, even more so, writing your own file systems, you should write a script (in any language you like; I would choose bash) that connects to the server via ssh, dumps the database, and commits all the necessary files to SVN. It will do all the work itself, nothing manual. Then the "forgot to commit the file" problem disappears automatically.
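A minimal sketch of such a script, assuming SSH access and hypothetical host, database and working-copy names (mysqldump credentials are assumed to come from the server's ~/.my.cnf):

#!/bin/sh
SERVER=user@site-server          # hypothetical host
DB=site_db                       # hypothetical database name
WC=/home/me/sites/site_wc        # local SVN working copy

# dump the database over ssh straight into the working copy
ssh "$SERVER" "mysqldump $DB" > "$WC/db_dump.sql"

# add the dump on the first run (--force silences "already versioned"), then commit
svn add --force "$WC/db_dump.sql"
svn commit -m "Site snapshot with DB dump" "$WC"

The site files themselves could be fetched in the same script (e.g. with rsync) before the commit; the sketch only covers the dump.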

gaelpa, 2012-09-17
@gaelpa

Maybe start by putting the dump commands themselves into a script file? Then it won't feel like such a chore anymore.

AntonioK, 2012-09-17
@AntonioK

You're reinventing the wheel.
Crontab, a dump every night during off-peak hours, a rotation script (which deletes the oldest dumps when space runs low), and nothing has to be done by hand.
The dump files themselves should be compressed and copied to a remote server (or something like Dropbox) for safety.
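For illustration, a nightly variant could be as small as the sketch below; the paths and names are hypothetical, and the rotation is simplified to keeping the last 14 dumps rather than reacting to low disk space:

#!/bin/sh
# /usr/local/bin/nightly_dump.sh -- hypothetical paths and names
BACKUP_DIR=/var/backups/mysql
DB=site_db
TODAY=$(date +%F)

# dump, compress, date-stamp
mysqldump "$DB" | gzip > "$BACKUP_DIR/${DB}_${TODAY}.sql.gz"

# rotation: keep the 14 newest dumps, delete the rest
ls -1t "$BACKUP_DIR"/${DB}_*.sql.gz | tail -n +15 | xargs rm -f

# copy the fresh dump to a remote host for safety
scp "$BACKUP_DIR/${DB}_${TODAY}.sql.gz" backup@remote-host:/backups/

# crontab entry, e.g. every night at 03:30:
# 30 3 * * * /usr/local/bin/nightly_dump.sh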

AxisPod, 2012-09-17
@AxisPod

Well, you're unlikely to find anything ready-made, but try searching for the keywords "file system".

SleepingLion, 2012-09-18
@SleepingLion

You can write a script that presents itself as a file system using FUSE.
In this virtual FS the databases are represented as files, and each read operation prepares and returns a dump.
The only catch is determining the file size: the actual size of a dump is known only once it has been generated.
