DB performance of frequently updated table
Michael Šimáček
msimacek at redhat.com
Thu Jul 23 09:19:07 UTC 2015
On 2015-07-22 10:48, Miroslav Suchý wrote:
> Dne 2.7.2015 v 10:49 Michael Šimáček napsal(a):
>> We've been facing some DB performance issues in Koschei production machine recently. Our central table (package) has
>> only ~10000 rows but sequential scan of the table was taking unreasonably long (~4s). Other tables that are orders of
>> magnitude bigger were faster to query. I was investigating the problem and it turned out that the table occupied very
>> large amount of disk space:
>
> So why are you not using indexes?
>
I am. But a) for some queries I need to access all rows (summing
priorities). It's not in the frontend, so not performance-critical, but
8s is a lot even for background tasks.
b) indices make it better for some time, but over time they seem to
degrade the same way as the main table, and index-using queries also
become slow.
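For context, this kind of degradation is expected under PostgreSQL's
MVCC: every UPDATE writes a new row version, and the old versions (dead
tuples) linger in both the heap and the indexes until vacuumed. If the
table is updated every few seconds, dead tuples can easily outnumber
live rows. A quick way to check (a sketch; "package" is the table from
this thread, and the exact numbers will of course differ on your box):

```sql
-- Dead-tuple counts and last autovacuum runs for the table
SELECT n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
  FROM pg_stat_user_tables
 WHERE relname = 'package';

-- Total on-disk size, including indexes and TOAST
SELECT pg_size_pretty(pg_total_relation_size('package'));
```

If n_dead_tup dwarfs n_live_tup and last_autovacuum is old, autovacuum
is probably not keeping up with the update rate.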
Currently, we just work around the problem by running VACUUM FULL on the
single table from cron multiple times per day. In the next release, I
tried to make the updates to the table much less frequent (storing
recalculated priorities now happens every 5 minutes instead of every few
seconds), so the problem should be reduced.
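Another option that might avoid the exclusive lock VACUUM FULL takes
would be to tune autovacuum per-table so a plain vacuum runs often
enough to keep the bloat bounded. A sketch (the thresholds here are
illustrative guesses, not tested values):

```sql
-- Make autovacuum much more aggressive on just this one table
ALTER TABLE package SET (
    autovacuum_vacuum_scale_factor = 0.01,  -- vacuum after ~1% of rows are dead
    autovacuum_vacuum_threshold = 50,
    autovacuum_analyze_scale_factor = 0.02
);

-- Plain VACUUM marks dead space reusable without rewriting the table
-- or blocking reads/writes the way VACUUM FULL does
VACUUM (VERBOSE) package;
```

Plain VACUUM doesn't shrink the file, but it stops it from growing
further, which is usually enough once the steady-state bloat is capped.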
--
Michael