Java
Genghis, 2013-09-16 14:56:51

Ways to store and read large amounts of data in a Java application for Android

Good day, everyone.

After reading various articles, I have only found that Android locally supports SQLite, with no mention of any other database. Since SQLite's functionality is limited, a reasonable question arises: are there more capable alternatives, or other ways of storing information with fast read/write access?
In particular, I am most interested in multi-row inserts. Since they are not available in SQLite, what do you do with a data volume of 20-30 thousand rows? That many single inserts into the database takes a very long time, and when 3-5 such batches have to be run per day, the usability of the Android application suffers badly.
For now I have settled on XML files, with XPath for querying, as the only acceptable way of storing and working with the data at the moment; however, such queries are noticeably slower than equivalent ones against a database (MySQL, for example).
What ways of reading and writing large amounts of tabular data in a Java application for Android can you recommend?
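For reference, the XPath-over-XML approach described here looks roughly like this with the JDK's built-in javax.xml.xpath API (the document and query are invented for the illustration):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XPathDemo {
    // Parse an XML string and evaluate an XPath expression against it,
    // returning the text content of the first match.
    static String lookup(String xml, String expr) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return XPathFactory.newInstance().newXPath().evaluate(expr, doc);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<rows>"
                + "<row id='1'><name>foo</name></row>"
                + "<row id='2'><name>bar</name></row>"
                + "</rows>";
        System.out.println(lookup(xml, "/rows/row[@id='2']/name")); // prints "bar"
    }
}
```

Note that every lookup here re-parses and re-walks the whole document, which is part of why XPath queries fall behind an indexed database as the data grows.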


5 answer(s)
SabMakc, 2013-09-16
@SabMakc

What insert speed are you getting with SQLite?
Do the inserts run within a single transaction?
Did you use prepared statements? They let the query be parsed once, with only the bound insert data changing between executions.
P.S. Falling back to parsing XML in cases where SQLite is supposedly not enough is a strange solution. XPath will, in my opinion, take much longer, and you may also run out of memory.
For what purposes will you use this data? If you need conditions in your selections, there is no getting away from SQLite.
In general, SQLite is specialized storage, so its speed is hard to beat. Only if your use of the data is very specific is there a chance of building a custom store that outperforms SQLite.
In one project I used protobuf, though not to speed up reads but for prototyping purposes: you can immediately generate classes for working with the data. In the end I stayed with protobuf; the speed turned out to be quite acceptable.
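Taken together, the transaction and prepared-statement advice above can be sketched roughly like this with Android's SQLiteDatabase API (the "items" table and the Row class are made up for the illustration):

```java
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteStatement;
import java.util.List;

// Sketch: bulk insert using one transaction and one compiled statement.
public final class BulkInsert {
    public static void insertAll(SQLiteDatabase db, List<Row> rows) {
        // The SQL is parsed once; only the bound values change per row.
        SQLiteStatement stmt =
                db.compileStatement("INSERT INTO items (id, name) VALUES (?, ?)");
        db.beginTransaction();
        try {
            for (Row row : rows) {
                stmt.clearBindings();
                stmt.bindLong(1, row.id);
                stmt.bindString(2, row.name);
                stmt.executeInsert();
            }
            db.setTransactionSuccessful(); // commit all rows at once
        } finally {
            db.endTransaction();
        }
    }

    public static final class Row {
        final long id;
        final String name;
        Row(long id, String name) { this.id = id; this.name = name; }
    }
}
```

With 20-30 thousand rows, the single transaction is the important part: without it SQLite syncs the journal to storage after every insert, and that per-row sync is usually where the time goes.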

timka05, 2013-09-16
@timka05

Well, for SQLite, when inserting large volumes, play with PRAGMA settings.
PRAGMA synchronous=OFF disables waiting for data to actually reach the file system,
and the next two move the journal file and temporary buffers into memory, saving accesses to the disk (built-in storage / memory card):
PRAGMA journal_mode=MEMORY
PRAGMA temp_store=MEMORY
More at sqlite.org/pragma.html
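Assuming Android's SQLiteDatabase API, the pragmas above could be applied once after opening the database; a sketch, with the caveat that synchronous=OFF risks losing or corrupting data if the device dies mid-write:

```java
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

// Sketch: apply the speed-oriented pragmas listed in the answer above.
public final class FastWriteConfig {
    public static void apply(SQLiteDatabase db) {
        db.execSQL("PRAGMA synchronous=OFF");
        db.execSQL("PRAGMA temp_store=MEMORY");
        // journal_mode returns a result row, so it is read via rawQuery
        // rather than execSQL, which rejects statements that return data.
        Cursor c = db.rawQuery("PRAGMA journal_mode=MEMORY", null);
        c.close();
    }
}
```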

Genghis, 2013-09-17
@Genghis

Thank you for the suggested solutions. I'll try them out and post the results.

Vladimir Smirnov, 2013-09-17
@bobzer

Ways to store and read large amounts of data
This is exactly what DBMSs are designed for, and that is what you should use.
The only currently acceptable way to store and work with data is XML files.
It is not clear where XML comes into this at all. The standard was developed to unify formats for information exchange, with a focus on human readability; it is therefore redundant, resource-intensive, and completely unsuited to efficient data storage. Even if you decide to reinvent the wheel, XML is perhaps the worst of all possible options. In that case you might as well dump everything into a delimiter-separated file and write your own save and fetch mechanisms: it will work faster than XML and require many times fewer resources. That said, whether it is XML, delimited files, or anything else, if you are not using a DBMS to "store and read large amounts of data", then with 90% probability you can assume you are doing something wrong (unless you are Google, of course).
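The delimiter-separated alternative mentioned above can be sketched in plain Java: one row per line, fields separated by tabs. (The assumption that values contain no tabs or newlines is mine; a real implementation would escape them.)

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a minimal tab-separated store: encode rows to a string,
// decode the string back into rows.
public class TsvStore {
    public static String encode(List<List<String>> rows) {
        StringBuilder sb = new StringBuilder();
        for (List<String> row : rows) {
            sb.append(String.join("\t", row)).append('\n');
        }
        return sb.toString();
    }

    public static List<List<String>> decode(String data) {
        List<List<String>> rows = new ArrayList<>();
        for (String line : data.split("\n")) {
            if (line.isEmpty()) continue;
            rows.add(List.of(line.split("\t", -1)));
        }
        return rows;
    }

    public static void main(String[] args) {
        // Round-trip a question-sized data set (20 thousand rows).
        List<List<String>> rows = new ArrayList<>();
        for (int i = 0; i < 20_000; i++) {
            rows.add(List.of("id" + i, "value" + i));
        }
        long t0 = System.nanoTime();
        List<List<String>> back = decode(encode(rows));
        System.out.println(back.size() + " rows round-tripped in "
                + (System.nanoTime() - t0) / 1_000_000 + " ms");
    }
}
```

This is still a flat file with no indexing, so any selection with conditions means a full scan; it only beats XML on parsing cost, not on querying.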
And the server cannot be loaded with such procedures, since about 150 devices with this application work with it. Imagine how many such batches of insert requests it would have to handle per day, if each of them is 10-30 thousand rows, 3-5 times a day.
Most modern DBMSs, installed on an average modern server, can easily swallow a couple of million inserts per day. Per day? Just the day before yesterday my Oracle handled 15 million inserts into a table with three dozen fields in an hour, and that was in a test environment.
on a tablet ... 10-30 thousand, 3-5 times a day
If this is a tablet, then the data is generated by a user, not by some automated sensor. So the question is: what could a user possibly generate in those volumes? Maybe you should rework the application's architecture? Or normalize the stored data? I'm just asking, just in case; you never know... You do have a competent architect on the project, right?
