[.Net] Record creation speed in MySQL compared to MS SQL (test results + question)
I ran into the following requirement: insert 100 (relatively simple) records per second into a relational database. To my surprise, MySQL, when used from .NET, cannot keep up with this.
To be concrete, here is my experiment in detail:
Hardware
RAM 2 GB, Celeron 2-core processor, 1.87 GHz.
Software
Windows 7 Ultimate 32-bit. The latest version of MySQL with default settings (during installation I chose the "developer machine" mode). The connector is from the official site.
ORM - EntityFramework.
Table in database (2 fields):
Id int identity primary key,
Value text
Code. The same code was used for all databases; only the connection string was changed. Here it is:
using System;
using System.Data.Entity;

namespace CodeFirstProj
{
public class TestEntity
{
public int Id { get; set; }
public string Value { get; set; }
}
class Program
{
public class TestContext : DbContext
{
public TestContext()
: base("postgreSQL")
{
}
public DbSet<TestEntity> TestEntitySet { get; set; }
}
static void Main(string[] args)
{
// The constant is 2,000 characters long
const string value = "01234567890...";
using(var testContext = new TestContext())
{
Console.WriteLine(DateTime.Now.ToString()); // Stopwatch would also work, but it would not change anything
for (int i = 0; i < 1000; i++)
{
var testEntity = new TestEntity();
testEntity.Value = value + i;
testContext.TestEntitySet.Add(testEntity);
testContext.SaveChanges();
}
Console.WriteLine(DateTime.Now.ToString());
}
}
}
}
The size of the inserted record (the string) is 2,000 characters (2 KB).
Here are the results (the usage code is the same):
MySQL: 20 records per second
PostgreSQL: 100 records per second (for comparison)
MS SQL Express: 166 records per second (for comparison)
I assume the problem is in the connector; MySQL itself is not that slow. What can I try tuning to reach 100 records per second?
Well, for starters, move testContext.SaveChanges(); outside the loop.
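A hedged sketch of that suggestion, reusing the TestContext/TestEntity classes and the value constant from the question: with a single SaveChanges() call after the loop, EF flushes all pending inserts in one change-set instead of making 1,000 separate round trips.

```csharp
// Sketch: SaveChanges() moved outside the loop so Entity Framework
// flushes all pending inserts at once instead of once per record.
// Assumes the TestContext/TestEntity classes and `value` from the question.
using (var testContext = new TestContext())
{
    for (int i = 0; i < 1000; i++)
    {
        testContext.TestEntitySet.Add(new TestEntity { Value = value + i });
    }
    // One flush: a single change-set, far fewer round trips to the server.
    testContext.SaveChanges();
}
```

Note this changes the semantics slightly: all 1,000 rows now succeed or fail together rather than one at a time.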
You can explore what results will be if you wrap everything in a transaction.
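One way to try the transaction idea, as a sketch: wrap the whole run in a TransactionScope (System.Transactions), so the server performs a single commit instead of one per insert. Class and property names are taken from the question's code; whether the MySQL connector enlists in TransactionScope correctly is an assumption worth verifying.

```csharp
// Sketch: wrap the whole batch in one ambient transaction via
// TransactionScope, so the server commits once instead of per insert.
// Requires a reference to System.Transactions.
using (var scope = new TransactionScope())
using (var testContext = new TestContext())
{
    for (int i = 0; i < 1000; i++)
    {
        testContext.TestEntitySet.Add(new TestEntity { Value = value + i });
        testContext.SaveChanges(); // still a statement per row, but inside one commit
    }
    scope.Complete(); // a single COMMIT on the server
}
```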
It also matters which storage engine MySQL is using. If it is InnoDB, you should go into the settings and increase the amount of memory available to it, and so on. Like this: www.google.ru/search?q=mysql+innodb+optimization
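For illustration, a my.ini fragment with the InnoDB knobs most often tuned for insert throughput; the values are example figures for a 2 GB machine, not recommendations:

```ini
# Illustrative my.ini fragment; values are examples, tune to your hardware.
[mysqld]
innodb_buffer_pool_size = 512M      # more memory for InnoDB data/index pages
innodb_log_file_size = 128M         # larger redo log, less frequent flushing
innodb_flush_log_at_trx_commit = 2  # flush the log roughly once per second
                                    # (trades some durability for insert speed)
```

The last setting is usually the decisive one for a commit-per-row workload like the test in the question, because with the default value of 1 InnoDB fsyncs the log on every commit.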
P.S. Most importantly, why do you need MySQL at all when PostgreSQL and MS SQL are available?
This is rather horrifying... back when I wrote .NET database code against MySQL, write throughput was around 200 requests per second, and not against a bare table but one with indexes and so on, and that was on an ancient, weak machine.
Are you sure you don't need to configure anything extra when passing the connection string name to base(...)?
Have you tried it without EF? What throughput do you get on the same machine with plain ADO.NET?
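A sketch of such a baseline test with MySQL Connector/Net and no ORM, reusing the `value` constant from the question. The connection string and the table name `TestEntity` are placeholders (EF may have created the table under a pluralized name), so adjust both before running:

```csharp
// Sketch: the same 1,000-row insert via plain ADO.NET with MySQL
// Connector/Net, bypassing EF. Connection string and table name are
// placeholders; adjust to your setup.
using MySql.Data.MySqlClient;

using (var conn = new MySqlConnection(
    "server=localhost;database=test;uid=root;pwd=***"))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        var cmd = new MySqlCommand(
            "INSERT INTO TestEntity (Value) VALUES (@value)", conn, tx);
        cmd.Parameters.Add("@value", MySqlDbType.Text);
        for (int i = 0; i < 1000; i++)
        {
            // Reuse one parameterized command instead of rebuilding SQL each time.
            cmd.Parameters["@value"].Value = value + i;
            cmd.ExecuteNonQuery();
        }
        tx.Commit(); // single commit for the whole batch
    }
}
```

Comparing this figure with the EF number would show how much of the slowdown is the ORM and how much is the connector or the server.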
I experimented some more.
First, I checked from Java, using EclipseLink as the ORM. The result was identical.
Second, using MySQLInstanceConfig.exe I removed the existing instance and created a new one with maximum settings (a transactional database only, because transactions are needed).
The result was about the same. Moreover, during the test (which lasts about 40 seconds), the laptop still has 700 MB of free memory and CPU usage stays at 20-30%.
I also tested in a multithreaded environment. While a single thread can insert 20 records per second, running 100 threads raised the total to 250 records per second.