Why does PostgreSQL query performance drop over time, but get restored when the index is rebuilt?

Posted by Jim Rush on Stack Overflow, 2010-03-06

According to this page in the manual, indexes don't need to be maintained. However, we are running a PostgreSQL table with a continuous rate of updates, deletes and inserts that, over time (a few days), sees significant query degradation. If we delete and recreate the index, query performance is restored.
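For what it's worth, one quick way to confirm that the index itself is bloating is to compare its on-disk size before and after a rebuild. A minimal diagnostic sketch (these functions exist in PostgreSQL 8.1+; note that plain REINDEX locks out writes to the table while it runs):

-- Size of the index on disk before the rebuild
SELECT pg_size_pretty(pg_relation_size('icl_contacts_idx'));

-- Rebuild the index (blocks writes to the table for the duration)
REINDEX INDEX icl_contacts_idx;

-- Size after the rebuild; a large drop indicates the index was bloated
SELECT pg_size_pretty(pg_relation_size('icl_contacts_idx'));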

Some details about our setup:

- We are using out-of-the-box settings.
- The table in our test starts out empty and grows to half a million rows.
- The rows are fairly large (lots of text fields).
- Our searches are based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions).
- The table is used as a persistent store for a single process.
- We are running PostgreSQL on Windows with a Java client.

I'm willing to give up insert and update performance to keep query performance up.
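Given that trade-off, one setting worth experimenting with is fillfactor (a sketch, not something from the original post; the values are illustrative). Leaving free space on each page lets PostgreSQL 8.3+ perform HOT updates, which avoid creating new index entries, though only when the update touches no indexed column:

-- Keep 30% of each heap page free so updated rows can stay in place
ALTER TABLE icl_contacts SET (fillfactor = 70);
ALTER INDEX icl_contacts_idx SET (fillfactor = 70);

-- Note: HOT only applies when no indexed column changes, so updates that
-- modify currentstate would still create new entries in icl_contacts_idx.
-- fillfactor affects newly written pages; existing pages are unchanged.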

We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work and I suspect we are missing something basic in our configuration or usage.

We have considered forcing vacuuming and rebuilds to run at certain times, but I suspect the locking period for such an action would cause our queries to block. This may be an option, but there are some near-real-time requirements (windows of 3-5 seconds) that would require other changes in our code.
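Note that plain VACUUM (without FULL) does not take locks that block queries or writes; only VACUUM FULL and plain REINDEX do. If the rebuild itself is the concern, one way to sidestep the lock (a sketch assuming PostgreSQL 8.2+; the _new suffix is just an illustrative name) is to build a replacement index with CREATE INDEX CONCURRENTLY and then swap it in:

-- Build a replacement index without blocking reads or writes;
-- cannot be run inside a transaction block
CREATE INDEX CONCURRENTLY icl_contacts_idx_new
  ON icl_contacts
  USING btree
  (xmlscheduledtime, currentstate, campaignfqname);

-- Swap it in; these statements only hold an exclusive lock briefly
DROP INDEX icl_contacts_idx;
ALTER INDEX icl_contacts_idx_new RENAME TO icl_contacts_idx;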

Additional information: Table and index

CREATE TABLE icl_contacts
(
  id bigint NOT NULL,
  campaignfqname character varying(255) NOT NULL,
  currentstate character(16) NOT NULL,
  xmlscheduledtime character(23) NOT NULL,
...
25 or so other fields, most of them fixed or varying character fields.
...
  CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
)
WITH (OIDS=FALSE);
ALTER TABLE icl_contacts OWNER TO postgres;

CREATE INDEX icl_contacts_idx
  ON icl_contacts
  USING btree
  (xmlscheduledtime, currentstate, campaignfqname);

Explain analyze:

Limit  (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
  ->  Index Scan using icl_contacts_idx on icl_contacts  (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
        Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))
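To see why performance degrades between rebuilds, it may also help to watch the table's dead-tuple statistics and autovacuum activity (a diagnostic sketch; these columns assume PostgreSQL 8.3 or later):

-- Dead tuples accumulate from updates and deletes until vacuum reclaims them;
-- a steadily growing n_dead_tup suggests autovacuum is not keeping up
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'icl_contacts';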

And, yes, I am aware that there are a variety of things we could do to normalize and improve the design of this table. Some of these options may be available to us.

My focus in this question is on understanding how PostgreSQL manages the index and query over time (understanding why, not just fixing it). If it were to be done over or significantly refactored, there would be a lot of changes.
