Redis
Roman, 2021-09-06 10:40:24

Why is Redis so slow?

Hello everyone.

Input data:
VDS: 6 cores (Xeon E3 12xx v2), 16 GB RAM, 60 GB SSD, Debian 10
Traffic is low, about 6k visits per day.

apt-get install redis-server

# Server
redis_version:5.0.3
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:94145a25ce04923
redis_mode:standalone
os:Linux 4.19.0-17-amd64 x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:8.3.0
process_id:422
run_id:f20ad51d8b6ae05a9ba843b2165e23707347f574
tcp_port:6379
uptime_in_seconds:7466
uptime_in_days:0
hz:10
configured_hz:10
lru_clock:3522888
executable:/usr/bin/redis-server
config_file:/etc/redis/redis.conf

# Clients
connected_clients:19
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0

# Memory
used_memory:1744808
used_memory_human:1.66M
used_memory_rss:6344704
used_memory_rss_human:6.05M
used_memory_peak:64245904
used_memory_peak_human:61.27M
used_memory_peak_perc:2.72%
used_memory_overhead:1151378
used_memory_startup:796712
used_memory_dataset:593430
used_memory_dataset_perc:62.59%
allocator_allocated:1862528
allocator_active:2310144
allocator_resident:5468160
total_system_memory:16424943616
total_system_memory_human:15.30G
used_memory_lua:41984
used_memory_lua_human:41.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:allkeys-lru
allocator_frag_ratio:1.24
allocator_frag_bytes:447616
allocator_rss_ratio:2.37
allocator_rss_bytes:3158016
rss_overhead_ratio:1.16
rss_overhead_bytes:876544
mem_fragmentation_ratio:3.77
mem_fragmentation_bytes:4662840
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:354290
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0

# Persistence
loading:0
rdb_changes_since_last_save:1681
rdb_bgsave_in_progress:0
rdb_last_save_time:1630905374
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0

# Stats
total_connections_received:6023
total_commands_processed:8276
instantaneous_ops_per_sec:0
total_net_input_bytes:5139874
total_net_output_bytes:12357655
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:2852
keyspace_misses:0
pubsub_channels:2
pubsub_patterns:3
latest_fork_usec:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

# Replication
role:master
connected_slaves:0
master_replid:56dbf25d2fef0a3114f1a436001cae747f20aabe
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:14.781089
used_cpu_user:10.658056
used_cpu_sys_children:0.000000
used_cpu_user_children:0.000000

# Cluster
cluster_enabled:0

# Keyspace
db0:keys=7,expires=0,avg_ttl=0


redis-benchmark -c 3000 -q -n 10 -t get,set

I get the response "Could not connect to Redis at 127.0.0.1:6379: Can't create socket: Too many open files". I ran ulimit -n 65535 and also tried setting the limits via /etc/security/limits.conf:
* soft nproc 65535
* hard nproc 65535
* soft nofile 100000
* hard nofile 100000
root soft nofile unlimited
root hard nofile unlimited

Yes, I rebooted with shutdown -r now. The setting is not applied: ulimit -n still shows 1024. Anyway...
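
For what it's worth, /etc/security/limits.conf is only read by pam_limits for login sessions, while on Debian 10 the redis-server service is started by systemd, which does not read that file at all; so the benchmark shell and the server each need their limit raised separately. A rough sketch, assuming the stock redis-server.service unit from the Debian package:

# Raise the limit in the current shell before running redis-benchmark
# (it is the client that fails to create its 3000 sockets)
ulimit -n 65535

# Raise the limit for the redis-server service itself via a systemd drop-in
systemctl edit redis-server
#   in the editor add:
#   [Service]
#   LimitNOFILE=65535
systemctl restart redis-server

# Check the effective limit of the running server process
grep 'open files' /proc/$(pidof redis-server)/limits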

redis-benchmark -c 3000 -q -n 10 -t get,set
SET: 111.11 requests per second
GET: 111.11 requests per second


In the evening it is around 80-90 requests per second, no more. Why so slow? I tried a VDS from another provider and got:
SET: 588.24 requests per second
GET: 625.00 requests per second
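
A side note on the benchmark flags themselves: -n is the total number of requests, so -c 3000 -n 10 spends nearly all of its time opening 3000 connections and then issues only 10 commands, which makes the requests-per-second figure mostly meaningless. A run along these lines (the numbers are arbitrary, just large enough to matter) should be more representative:

redis-benchmark -q -n 100000 -c 50 -t get,set          # 100k requests over 50 connections
redis-benchmark -q -n 100000 -c 50 -t get,set -P 16    # same, pipelining 16 commands per round trip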

But I am already settled here and don't want to migrate my projects =) Could the host machine my VDS runs on simply be overloaded?
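
One way to check that from inside the VDS, as a rough indicator only, is CPU steal time, i.e. how much CPU the hypervisor takes away from the guest:

vmstat 1 10   # the last column, "st", is CPU time stolen by the hypervisor
top           # "st" in the %Cpu(s) line; consistently high values point to an oversold host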

The site's tech stack is Node.js + Nuxt.js SSR + Express.js + Redis + MongoDB (5.5 GB of memory). Page caches are built on the fly and validated against a timestamp. There are three cache implementations: fs, Redis, and Mongo. If I plug in Redis (plain GET/SET of JSON.stringify/JSON.parse strings), the site starts to slow down terribly; the response time reported in Yandex.Webmaster is about 11 seconds. So for now I go through fs. You would expect file operations to be the slow ones, and yet Redis turns out to be slower... WHAT?! The same goes for sessions: stored in Mongo, the site responds in 300-400 ms; stored in Redis, it takes about 5 seconds. All of this is further aggravated by a socket.io chat and by running the site in cluster mode under pm2, so the application instances talk to each other via socket.io-redis. Moreover, 2 instances run slower than 1. Apparently the doubled load is just too much for Redis?
Any thoughts? What other metrics should I collect?
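
A few redis-cli probes, run on the VDS itself, that would show whether the delay sits in Redis, in the VM, or in the application layer (suggested starting points, nothing more):

redis-cli --latency                # average round-trip time to the server, in milliseconds
redis-cli --latency-history        # the same, reported in 15-second windows
redis-cli --intrinsic-latency 30   # latency of the machine itself for 30 seconds, no Redis commands involved
redis-cli slowlog get 10           # the 10 slowest commands recorded by the server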
