MySQL
inheaven, 2010-12-15 12:17:52

HELP: How to work with a large amount of data? Oracle or MySQL?

There is a MySQL database with an InnoDB table of 120 million rows, and somehow everything works slooooowly.
I wrote a stored procedure: a cursor iterates over a chunk of the data (1 million rows) from the big table; for each record it runs 10 simple queries by indexed keys against the same table, and if a condition passes, a record is inserted into another table (on average about 1 in 200 records).
The first 1000 records go quickly, then it gets slower and slooooower.
I still don't understand where the delay comes from. It seems to me it would be faster if rewritten via JDBC, although I was sure that if everything runs natively in the database, it should be super fast. Maybe it's some quirk of cursors, or not enough memory somewhere, or some setting to adjust. I could split the big table into several smaller ones, although I think indexes already cover that. I assumed the limit would be the speed of reading data from the hard drive, but in fact it has been running for 10 hours, and mysqld fully loads one core.
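
A simplified sketch of the procedure described above (big_table, result_table and indexed_key are placeholder names, not the real schema, and the 10 lookups are collapsed into one for brevity):

    -- Sketch only: placeholder table/column names, one lookup shown.
    DELIMITER //
    CREATE PROCEDURE process_chunk()
    BEGIN
      DECLARE done INT DEFAULT 0;
      DECLARE cur_id BIGINT;
      DECLARE hits INT;
      DECLARE cur CURSOR FOR
        SELECT id FROM big_table WHERE id BETWEEN 1 AND 1000000;
      DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
      OPEN cur;
      scan: LOOP
        FETCH cur INTO cur_id;
        IF done THEN LEAVE scan; END IF;
        -- one of the ~10 simple lookups by an indexed key
        SELECT COUNT(*) INTO hits FROM big_table WHERE indexed_key = cur_id;
        IF hits > 0 THEN
          INSERT INTO result_table (id) VALUES (cur_id);  -- ~1 row in 200
        END IF;
      END LOOP;
      CLOSE cur;
    END //
    DELIMITER ;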
And does anyone know how an Oracle database compares on data this big?


10 answer(s)
pentarh, 2010-12-16
@pentarh

Have you checked whether there is an I/O bottleneck? iostat -dkx 3, for example. If there is one (%util > 90), Oracle will not save you.
In general, stored procedures in MySQL are in rather poor shape. They are said to be precompiled, but in fact they are kept as plain source text...

JeanLouis, 2010-12-15
@JeanLouis

I had the same problem with cursors in MySQL. In my case there were a lot of inserts, so I thought it was caused by rebuilding the indexes on every insert. But in your case it's 1 insert per 200 records… so it must be something else.

Georgy Khromchenko, 2010-12-15
@Mox

I would not count on Oracle saving you. In my experience, it can slow down in such cases perfectly well :)
- You'll have to spend a lot of time fiddling with it.
- And on top of that it's paid: do you also have the money for extra RAM, Xeons and SSD drives?
I don't know how much RAM the server has, but look at the index files and their sizes, and in the MySQL settings allocate, if possible, slightly more memory for indexes than the total size of those files (see the snippet at the end of this answer).
Maybe the 10 queries can be combined into one transaction / one query (if that's not done already)?
The table could probably be partitioned somehow, but that pays off mainly if the different parts sit on different disks.
Or maybe try Drizzle or some other MySQL fork?
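
For the memory-for-indexes point, a quick way to compare the index footprint with the current buffer settings (mydb and big_table are placeholder names; the buffer variables themselves are set in my.cnf):

    -- Compare index size with buffer sizes; placeholder names.
    SHOW TABLE STATUS FROM mydb LIKE 'big_table';   -- see the Index_length column
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';  -- InnoDB data+index cache
    SHOW VARIABLES LIKE 'key_buffer_size';          -- MyISAM index cache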

Georgy Khromchenko, 2010-12-15
@Mox

Oh, and I almost forgot: if the table is mostly read from, maybe try a MyISAM table? Just as an experiment, to see what happens.
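
If you do try it, a sketch of experimenting on a copy rather than converting the live 120M-row table in place (big_table is a placeholder name):

    -- Build a MyISAM copy and run the workload against it first.
    CREATE TABLE big_table_myisam LIKE big_table;
    ALTER TABLE big_table_myisam ENGINE = MyISAM;
    INSERT INTO big_table_myisam
      SELECT * FROM big_table LIMIT 1000000;  -- test slice only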

yktoo, 2010-12-15
@yktoo

A hundred or two hundred million rows is a very modest database for Oracle. I have not noticed any performance degradation on long operations, even very complex ones.
In any case, you should try to pack as much as possible into standard DML statements with joins, without extra procedural code, cursor loops, etc. With very large volumes of changes, split the work into several transactions to limit undo tablespace usage. And of course, enable parallelism and study the query execution plan carefully.
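
For instance, the cursor loop from the question could collapse into a single set-based statement along these lines (placeholder names; only one of the ~10 conditions is shown):

    -- Cursor loop rewritten as one set-based statement; the per-row
    -- lookups become EXISTS/JOIN predicates.
    INSERT INTO result_table (id)
    SELECT t.id
    FROM big_table t
    WHERE t.id BETWEEN 1 AND 1000000
      AND EXISTS (SELECT 1 FROM big_table x WHERE x.indexed_key = t.id);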

pwlnw, 2010-12-15
@pwlnw


> The first 1000 records are fast, then slower and slower

Maybe just enable autocommit?
It's just a guess. If it does not help, then you will have to investigate thoroughly.
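
A sketch of both variants (illustrative only; row_counter is a placeholder):

    -- If the procedure runs in one giant transaction, either turn
    -- autocommit on for the session...
    SET autocommit = 1;
    -- ...or keep it off and commit explicitly every few thousand rows
    -- inside the cursor loop, e.g.:
    --   IF MOD(row_counter, 10000) = 0 THEN COMMIT; END IF;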

Niazza, 2010-12-15
@Niazza

Have you considered migrating to MS SQL Server?

Niazza, 2010-12-15
@Niazza

Security, and speed. Simply switching to Oracle will be more expensive...

Niazza, 2010-12-15
@Niazza

Also, one more point: many Western companies with Oracle databases have, as they say, already been there and come back, and are switching to MS SQL Server en masse.
