How to optimize a counter in PostgreSQL?
I keep some statistics in my application: the number of registered users, the number of users currently online, and so on.
The table has more than 5 million rows, and INSERTs happen very often.
Methods I have already tried:

COUNT
This method is not suitable because the query takes about 1 second:
SELECT COUNT(*) FROM users
Trigger-maintained aggregate table

drop table if exists user_aggregate;
drop trigger if exists user_aggregate_trigger on user_info;
drop function if exists user_aggregate();

create table user_aggregate (
    id int not null primary key,
    user_aggregate int not null
);

create or replace function user_aggregate()
returns trigger
as $$
begin
    insert into user_aggregate (id, user_aggregate)
    values (1, (select count(id) from user_info))
    on conflict (id) do
        update set user_aggregate = excluded.user_aggregate;
    return new;  -- the return value is ignored for AFTER row triggers
end
$$ language plpgsql;

create trigger user_aggregate_trigger
after insert or delete on user_info
for each row execute procedure user_aggregate();

-- seed the counter once
insert into user_aggregate (id, user_aggregate)
values (1, (select count(id) from user_info))
on conflict (id) do
    update set user_aggregate = excluded.user_aggregate;
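Note that the trigger above still runs a full COUNT on every insert and delete, so it does not remove the bottleneck. A cheaper variant (a sketch, reusing the table and function names from the code above) adjusts the stored counter by one per row instead of recounting:

```sql
-- O(1) per row: increment or decrement the stored counter
-- instead of recomputing COUNT over the whole table.
create or replace function user_aggregate()
returns trigger
as $$
begin
    if tg_op = 'INSERT' then
        update user_aggregate set user_aggregate = user_aggregate + 1 where id = 1;
    elsif tg_op = 'DELETE' then
        update user_aggregate set user_aggregate = user_aggregate - 1 where id = 1;
    end if;
    return null;  -- return value is ignored for AFTER triggers
end
$$ language plpgsql;
```

The trade-off: under heavy concurrent inserts all writers now serialize on that single counter row, which can itself become a contention point.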
SELECT MAX(id) FROM users
This is fast, but it only approximates the row count: gaps left by deleted rows and rolled-back inserts make MAX(id) drift away from COUNT(*).
Use a separate key-value table (keys such as the number of registered users, the number of users online, etc., with integer values). When a user logs in, run update ... set value = value + 1 where key = 'online_users'; on logout, value = value - 1 — but at the application level, not in triggers. Over time the values drift because of errors and unaccounted-for states, so periodically (every hour or day) recompute the keys in a background job via SELECT COUNT(*) FROM users, etc. These counters can also be kept outside PostgreSQL, in Redis, memcached, and the like.
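In PostgreSQL the scheme described above might look like this (a sketch; the table name stats and the key names are assumptions):

```sql
-- Key-value counter table maintained by the application.
create table stats (
    key   text primary key,
    value bigint not null default 0
);
insert into stats (key, value) values
    ('registered_users', 0),
    ('online_users', 0);

-- On login:
update stats set value = value + 1 where key = 'online_users';
-- On logout:
update stats set value = value - 1 where key = 'online_users';

-- Background job (hourly or daily) corrects accumulated drift:
update stats
set value = (select count(*) from users)
where key = 'registered_users';
```

Reads then become a cheap primary-key lookup (select value from stats where key = 'online_users') instead of a scan over millions of rows.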