path: root/src/backend/commands
Age  Commit message  Author
2013-06-13  Only install a portal's ResourceOwner if it actually has one.  (Tom Lane)
In most scenarios a portal without a ResourceOwner is dead and not subject to any further execution, but a portal for a cursor WITH HOLD remains in existence with no ResourceOwner after the creating transaction is over. In this situation, if we attempt to "execute" the portal directly to fetch data from it, we were setting CurrentResourceOwner to NULL, leading to a segfault if the datatype output code did anything that required a resource owner (such as trying to fetch system catalog entries that weren't already cached). The case appears to be impossible to provoke with stock libpq, but psqlODBC at least is able to cause it when working with held cursors. Simplest fix is to just skip the assignment to CurrentResourceOwner, so that any resources used by the data output operations will be managed by the transaction-level resource owner instead. For consistency I changed all the places that install a portal's resowner as current, even though some of them are probably not reachable with a held cursor's portal. Per report from Joshua Berry (with thanks to Hiroshi Inoue for developing a self-contained test case). Back-patch to all supported versions.
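A minimal standalone sketch of the guarded assignment the fix describes; Portal, ResourceOwner, and run_portal here are simplified stand-ins invented for illustration, not the backend's actual definitions:

    #include <stdio.h>

    /* Simplified stand-ins for PostgreSQL's types; not the real definitions. */
    typedef struct ResourceOwner { const char *name; } ResourceOwner;
    typedef struct Portal { ResourceOwner *resowner; } Portal;

    /* Global current owner, as CurrentResourceOwner is in the backend. */
    static ResourceOwner xact_owner = { "transaction" };
    static ResourceOwner *CurrentResourceOwner = &xact_owner;

    static void run_portal(Portal *portal)
    {
        ResourceOwner *save = CurrentResourceOwner;

        /* The fix: install the portal's owner only if it has one.  A held
         * cursor's portal has resowner == NULL once its creating
         * transaction is over; assigning unconditionally would leave
         * CurrentResourceOwner NULL and crash any output function that
         * needs to track resources. */
        if (portal->resowner)
            CurrentResourceOwner = portal->resowner;

        printf("executing under owner: %s\n", CurrentResourceOwner->name);

        CurrentResourceOwner = save;
    }

    int main(void)
    {
        Portal held = { NULL };     /* a WITH HOLD cursor after commit */
        run_portal(&held);          /* stays under the transaction owner */
        return 0;
    }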
2013-04-29  Ensure ANALYZE phase is not skipped because of canceled truncate.  (Kevin Grittner)
Patch b19e4250b45e91c9cbdd18d35ea6391ab5961c8d attempted to preserve existing behavior regarding statistics generation in the case that a truncation attempt was canceled due to lock conflicts. It failed to do this accurately in two regards: (1) autovacuum had previously generated statistics if the truncate attempt failed to initially get the lock rather than having started the attempt, and (2) the VACUUM ANALYZE command had always generated statistics. Both of these changes were unintended, and are reverted by this patch. On review, there seems to be consensus that the previous failure to generate statistics when the truncate was terminated was more an unfortunate consequence of how that effort was previously terminated than a feature we want to keep; so this patch generates statistics even when an autovacuum truncation attempt terminates early. Another unintended change which is kept on the basis that it is an improvement is that when a VACUUM command is truncating, it will use the new heuristic for avoiding blocking other processes, rather than keeping an AccessExclusiveLock on the table for however long the truncation takes. Per multiple reports, with some renaming per patch by Jeff Janes. Backpatch to 9.0, where the problem was created.
2013-04-25  Avoid deadlock between concurrent CREATE INDEX CONCURRENTLY commands.  (Tom Lane)
There was a high probability of two or more concurrent C.I.C. commands deadlocking just before completion, because each would wait for the others to release their reference snapshots. Fix by releasing the snapshot before waiting for other snapshots to go away. Per report from Paul Hinze. Back-patch to all active branches.
2013-02-23  Correct tense in log message  (Peter Eisentraut)
2013-02-13  Fix bogus when-to-deregister-from-listener-array logic.  (Tom Lane)
Since a backend adds itself to the global listener array during Exec_ListenPreCommit, it's inappropriate for it to remove itself during Exec_UnlistenCommit or Exec_UnlistenAllCommit --- that leads to failure when committing a transaction that did UNLISTEN then LISTEN, since we end up not registered though we should be. (This leads to missing later notifications, or to Assert failures in assert-enabled builds.) Instead deal with deregistering at the bottom of AtCommit_Notify, when we know the final state of the listenChannels list. Also, simplify the representation of registration status by replacing the transient backendHasExecutedInitialListen flag with an amRegisteredListener flag. Per report from Greg Sabino Mullane. Back-patch to 9.0, where the problem was introduced during the LISTEN/NOTIFY rewrite.
2013-02-01  Fix typo in freeze_table_age implementation  (Alvaro Herrera)
The original code used freeze_min_age instead of freeze_table_age. The main consequence of this mistake is that lowering freeze_min_age would cause full-table scans to occur much more frequently, which causes serious issues because the number of writes required is much larger. That feature (freeze_min_age) is supposed to affect only how soon tuples are frozen; some pages should still be skipped due to the visibility map. Backpatch to 8.4, where the freeze_table_age feature was introduced. Report and patch from Andres Freund
2013-01-23  Fix performance problems with autovacuum truncation in busy workloads.  (Kevin Grittner)
In situations where there are over 8MB of empty pages at the end of a table, the truncation work for trailing empty pages takes longer than deadlock_timeout, and there is frequent access to the table by processes other than autovacuum, there was a problem with the autovacuum worker process being canceled by the deadlock checking code. The truncation work done by autovacuum up to that point was lost, and the attempt was tried again by a later autovacuum worker. The attempts could continue indefinitely without making progress, consuming resources and blocking other processes for up to deadlock_timeout each time. This patch has the autovacuum worker checking whether it is blocking any other thread at 20ms intervals. If such a condition develops, the autovacuum worker will persist the work it has done so far, release its lock on the table, and sleep in 50ms intervals for up to 5 seconds, hoping to be able to re-acquire the lock and try again. If it is unable to get the lock in that time, it moves on and a worker will try to continue later from the point this one left off. While this patch doesn't change the rules about when and what to truncate, it does cause the truncation to occur sooner, with less blocking, and with the consumption of fewer resources when there is contention for the table's lock. The only user-visible change other than improved performance is that the table size during truncation may change incrementally instead of just once. Backpatched to 9.0 from initial master commit at b19e4250b45e91c9cbdd18d35ea6391ab5961c8d -- before that the differences are too large to be clearly safe. Jan Wieck
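The schedule above reduces to a small control loop. The following standalone model uses the constants from the message (20ms checks, 50ms retries, 5 second limit); the helper functions are stubs invented for illustration, not PostgreSQL APIs:

    #include <stdbool.h>
    #include <stdio.h>

    enum { CHECK_MS = 20, RETRY_MS = 50, RETRY_LIMIT_MS = 5000 };

    static int clock_ms = 0;
    static void sleep_ms(int ms) { clock_ms += ms; }

    /* Stub: pretend another process starts waiting on our lock once. */
    static bool someone_waits_on_us(void)
    {
        static bool first = true;
        bool r = first;
        first = false;
        return r;
    }

    /* Stub: pretend the lock frees up at t=150ms. */
    static bool try_exclusive_lock(void) { return clock_ms >= 150; }

    static void persist_progress_and_release(void)
    { printf("t=%dms: flush truncation done so far, drop lock\n", clock_ms); }

    int main(void)
    {
        /* ...autovacuum truncates trailing pages, then every ~20ms:... */
        sleep_ms(CHECK_MS);
        if (someone_waits_on_us()) {
            persist_progress_and_release();
            int waited = 0;
            while (!try_exclusive_lock()) {   /* poll for the lock */
                if (waited >= RETRY_LIMIT_MS) {
                    puts("give up; a later worker resumes from this point");
                    return 0;
                }
                sleep_ms(RETRY_MS);
                waited += RETRY_MS;
            }
            printf("t=%dms: lock re-acquired, keep truncating\n", clock_ms);
        }
        return 0;
    }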
2013-01-18  Protect against SnapshotNow race conditions in pg_tablespace scans.  (Tom Lane)
Use of SnapshotNow is known to expose us to race conditions if the tuple(s) being sought could be updated by concurrently-committing transactions. CREATE DATABASE and DROP DATABASE are particularly exposed because they do heavyweight filesystem operations during their scans of pg_tablespace, so that the scans run for a very long time compared to most. Furthermore, the potential consequences of a missed or twice-visited row are nastier than average:
* createdb() could fail with a bogus "file already exists" error, or silently fail to copy one or more tablespaces' worth of files into the new database.
* remove_dbtablespaces() could miss one or more tablespaces, thus failing to free filesystem space for the dropped database.
* check_db_file_conflict() could likewise miss a tablespace, leading to an OID conflict that could result in data loss either immediately or in future operations. (This seems of very low probability, though, since a duplicate database OID would be unlikely to start with.)
Hence, it seems worth fixing these three places to use MVCC snapshots, even though this will someday be superseded by a generic solution to SnapshotNow race conditions. Back-patch to all active branches. Stephen Frost and Tom Lane
2012-12-03  Avoid holding vmbuffer pin after VACUUM.  (Simon Riggs)
During VACUUM if we pause to perform a cycle of index cleanup we drop the vmbuffer pin, so we should do the same thing when heap scan completes. This avoids holding vmbuffer pin across the main index cleanup in VACUUM, which could be minutes or hours longer than necessary for correctness. Bug report and suggested fix from Pavan Deolasee
2012-11-30  Add missing buffer lock acquisition in GetTupleForTrigger().  (Tom Lane)
If we had not been holding buffer pin continuously since the tuple was initially fetched by the UPDATE or DELETE query, it would be possible for VACUUM or a page-prune operation to move the tuple while we're trying to copy it. This would result in a garbage "old" tuple value being passed to an AFTER ROW UPDATE or AFTER ROW DELETE trigger. The preconditions for this are somewhat improbable, and the timing constraints are very tight; so it's not so surprising that this hasn't been reported from the field, even though the bug has been there a long time. Problem found by Andres Freund. Back-patch to all active branches.
2012-11-29  Fix assorted bugs in CREATE INDEX CONCURRENTLY.  (Tom Lane)
This patch changes CREATE INDEX CONCURRENTLY so that the pg_index flag changes it makes without exclusive lock on the index are made via heap_inplace_update() rather than a normal transactional update. The latter is not very safe because moving the pg_index tuple could result in concurrent SnapshotNow scans finding it twice or not at all, thus possibly resulting in index corruption. In addition, fix various places in the code that ought to check to make sure that the indexes they are manipulating are valid and/or ready as appropriate. These represent bugs that have existed since 8.2, since a failed CREATE INDEX CONCURRENTLY could leave a corrupt or invalid index behind, and we ought not try to do anything that might fail with such an index. Also fix RelationReloadIndexInfo to ensure it copies all the pg_index columns that are allowed to change after initial creation. Previously we could have been left with stale values of some fields in an index relcache entry. It's not clear whether this actually had any user-visible consequences, but it's at least a bug waiting to happen. This is a subset of a patch already applied in 9.2 and HEAD. Back-patch into all earlier supported branches. Tom Lane and Andres Freund
2012-11-05  Fix handling of inherited check constraints in ALTER COLUMN TYPE.  (Tom Lane)
This case got broken in 8.4 by the addition of an error check that complains if ALTER TABLE ONLY is used on a table that has children. We do use ONLY for this situation, but it's okay because the necessary recursion occurs at a higher level. So we need to have a separate flag to suppress recursion without making the error check. Reported and patched by Pavan Deolasee, with some editorial adjustments by me. Back-patch to 8.4, since this is a regression of functionality that worked in earlier branches.
2012-07-25  Fix longstanding crash-safety bug with newly-created-or-reset sequences.  (Tom Lane)
If a crash occurred immediately after the first nextval() call for a serial column, WAL replay would restore the sequence to a state in which it appeared that no nextval() had been done, thus allowing the first sequence value to be returned again by the next nextval() call; as reported in bug #6748 from Xiangming Mei. More generally, the problem would occur if an ALTER SEQUENCE was executed on a freshly created or reset sequence. (The manifestation with serial columns was introduced in 8.2 when we added an ALTER SEQUENCE OWNED BY step to serial column creation.) The cause is that sequence creation attempted to save one WAL entry by writing out a WAL record that made it appear that the first nextval() had already happened (viz, with is_called = true), while marking the sequence's in-database state with log_cnt = 1 to show that the first nextval() need not emit a WAL record. However, ALTER SEQUENCE would emit a new WAL entry reflecting the actual in-database state (with is_called = false). Then, nextval would allocate the first sequence value and set is_called = true, but it would trust the log_cnt value and not emit any WAL record. A crash at this point would thus restore the sequence to its post-ALTER state, causing the next nextval() call to return the first sequence value again. To fix, get rid of the idea of logging an is_called status different from reality. This means that the first nextval-driven WAL record will happen at the first nextval call not the second, but the marginal cost of that is pretty negligible. In addition, make sure that ALTER SEQUENCE resets log_cnt to zero in any case where it touches sequence parameters that affect future nextval results. This will result in some user-visible changes in the contents of a sequence's log_cnt column, as reflected in the patch's regression test changes; but no application should be depending on that anyway, since it was already true that log_cnt changes rather unpredictably depending on checkpoint timing. In addition, make some basically-cosmetic improvements to get rid of sequence.c's undesirable intimacy with page layout details. It was always really trying to WAL-log the contents of the sequence tuple, so we should have it do that directly using a HeapTuple's t_data and t_len, rather than backing into it with some magic assumptions about where the tuple would be on the sequence's page. Back-patch to all supported branches.
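The failure timeline is easier to see as a toy replay. In this standalone model the Seq struct stands in for the sequence tuple's last_value/is_called/log_cnt fields and "wal" for the last WAL image; it is a sketch of the described behavior, not the real sequence.c logic:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { long last_value; bool is_called; int log_cnt; } Seq;

    static Seq disk, wal;

    static long nextval(void)
    {
        long v = disk.is_called ? ++disk.last_value : disk.last_value;
        disk.is_called = true;
        if (disk.log_cnt > 0)
            disk.log_cnt--;          /* prefetched: no WAL record emitted */
        else
            wal = disk;              /* would emit a WAL record here */
        return v;
    }

    int main(void)
    {
        /* Old CREATE SEQUENCE: disk says "not yet called", but the WAL
         * image pretends the first nextval already happened, to save one
         * record. */
        disk = (Seq){ .last_value = 1, .is_called = false, .log_cnt = 1 };
        wal  = (Seq){ .last_value = 1, .is_called = true,  .log_cnt = 0 };

        wal = disk;                  /* ALTER SEQUENCE logs the actual state */

        printf("nextval -> %ld\n", nextval());  /* 1; trusts log_cnt, no WAL */

        disk = wal;                  /* crash + replay restores pre-call state */
        printf("nextval -> %ld\n", nextval());  /* 1 again: the bug */
        return 0;
    }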
2012-06-30  Prevent CREATE TABLE LIKE/INHERITS from (mis) copying whole-row Vars.  (Tom Lane)
If a CHECK constraint or index definition contained a whole-row Var (that is, "table.*"), an attempt to copy that definition via CREATE TABLE LIKE or table inheritance produced incorrect results: the copied Var still claimed to have the rowtype of the source table, rather than the created table. For the LIKE case, it seems reasonable to just throw error for this situation, since the point of LIKE is that the new table is not permanently coupled to the old, so there's no reason to assume its rowtype will stay compatible. In the inheritance case, we should ideally allow such constraints, but doing so will require nontrivial refactoring of CREATE TABLE processing (because we'd need to know the OID of the new table's rowtype before we adjust inherited CHECK constraints). In view of the lack of previous complaints, that doesn't seem worth the risk in a back-patched bug fix, so just make it throw error for the inheritance case as well. Along the way, replace change_varattnos_of_a_node() with a more robust function map_variable_attnos(), which is capable of being extended to handle insertion of ConvertRowtypeExpr whenever we get around to fixing the inheritance case nicely, and in the meantime it returns a failure indication to the caller so that a helpful message with some context can be thrown. Also, this code will do the right thing with subselects (if we ever allow them in CHECK or indexes), and it range-checks varattnos before using them to index into the map array. Per report from Sergey Konoplev. Back-patch to all supported branches.
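A minimal sketch of the safety properties claimed for the new mapping function: whole-row references (attno 0) are reported as unmappable instead of being silently copied, and attnos are range-checked before indexing the map. map_attno is a hypothetical simplification, not map_variable_attnos()'s real signature:

    #include <stdio.h>

    static int
    map_attno(const int *map, int maplen, int attno, int *failed)
    {
        if (attno == 0) {            /* whole-row Var: caller must cope */
            *failed = 1;
            return 0;
        }
        if (attno < 1 || attno > maplen || map[attno - 1] == 0) {
            *failed = 1;             /* range-check before indexing the map */
            return 0;
        }
        return map[attno - 1];
    }

    int main(void)
    {
        int map[] = { 2, 0, 1 };     /* old attno -> new attno; 0 = dropped */
        int failed = 0;
        printf("1 -> %d\n", map_attno(map, 3, 1, &failed));  /* 2 */
        map_attno(map, 3, 0, &failed);
        printf("whole-row var mappable? %s\n", failed ? "no" : "yes");
        return 0;
    }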
2012-06-29  Fix NOTIFY to cope with I/O problems, such as out-of-disk-space.  (Tom Lane)
The LISTEN/NOTIFY subsystem got confused if SimpleLruZeroPage failed, which would typically happen as a result of a write() failure while attempting to dump a dirty pg_notify page out of memory. Subsequently, all attempts to send more NOTIFY messages would fail with messages like "Could not read from file "pg_notify/nnnn" at offset nnnnn: Success". Only restarting the server would clear this condition. Per reports from Kevin Grittner and Christoph Berg. Back-patch to 9.0, where the problem was introduced during the LISTEN/NOTIFY rewrite.
2012-05-13  Fix DROP TABLESPACE to unlink symlink when directory is not there.  (Tom Lane)
If the tablespace directory is missing entirely, we allow DROP TABLESPACE to go through, on the grounds that it should be possible to clean up the catalog entry in such a situation. However, we forgot that the pg_tblspc symlink might still be there. We should try to remove the symlink too (but not fail if it's no longer there), since not doing so can lead to weird behavior subsequently, as per report from Michael Nolan. There was some discussion of adding dependency links to prevent DROP TABLESPACE when the catalogs still contain references to the tablespace. That might be worth doing too, but it's an orthogonal question, and in any case wouldn't be back-patchable. Back-patch to 9.0, which is as far back as the logic looks like this. We could possibly do something similar in 8.x, but given the lack of reports I'm not sure it's worth the trouble, and anyway the case could not arise in the form the logic is meant to cover (namely, a post-DROP transaction rollback having resurrected the pg_tablespace entry after some or all of the filesystem infrastructure is gone).
2012-03-25  Fix COPY FROM for null marker strings that correspond to invalid encoding.  (Tom Lane)
The COPY documentation says "COPY FROM matches the input against the null string before removing backslashes". It is therefore reasonable to presume that null markers like E'\\0' will work ... and they did, until someone put the tests in the wrong order during microoptimization-driven rewrites. Since then, we've been failing if the null marker is something that would de-escape to an invalidly-encoded string. Since null markers generally need to be something that can't appear in the data, this represents a nontrivial loss of functionality; surprising nobody noticed it earlier. Per report from Jeff Davis. Backpatch to 8.4 where this got broken.
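The order of operations the fix restores, modeled standalone; de_escape() is a stub standing in for backslash removal plus encoding validation, which is the step that fails for markers like E'\\0':

    #include <stdio.h>
    #include <string.h>

    static int de_escape(const char *in, char *out, int outlen)
    {
        /* pretend "\0" de-escapes to an invalidly-encoded byte */
        if (strcmp(in, "\\0") == 0) return -1;
        snprintf(out, outlen, "%s", in);
        return 0;
    }

    static const char *read_field(const char *raw, const char *null_marker)
    {
        static char buf[64];
        if (strcmp(raw, null_marker) == 0)   /* test first: E'\\0' works */
            return NULL;
        if (de_escape(raw, buf, sizeof buf) < 0)
            return "<error: invalid encoding>";
        return buf;
    }

    int main(void)
    {
        printf("field = %s\n", read_field("\\0", "\\0") ? "?" : "NULL");
        printf("field = %s\n", read_field("abc", "\\0"));
        return 0;
    }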
2012-02-23  Require execute permission on the trigger function for CREATE TRIGGER.  (Tom Lane)
This check was overlooked when we added function execute permissions to the system years ago. For an ordinary trigger function it's not a big deal, since trigger functions execute with the permissions of the table owner, so they couldn't do anything the user issuing the CREATE TRIGGER couldn't have done anyway. However, if a trigger function is SECURITY DEFINER, that is not the case. The lack of checking would allow another user to install it on his own table and then invoke it with, essentially, forged input data; which the trigger function is unlikely to realize, so it might do something undesirable, for instance insert false entries in an audit log table. Reported by Dinesh Kumar, patch by Robert Haas.
Security: CVE-2012-0866
2012-02-23  Remove inappropriate quotes  (Peter Eisentraut)
And adjust wording for consistency.
2012-02-22  REASSIGN OWNED: Support foreign data wrappers and servers  (Alvaro Herrera)
This was overlooked when implementing those kinds of objects, in commit cae565e503c42a0942ca1771665243b4453c5770. Per report from Pawel Casperek.
2012-02-06  Avoid throwing ERROR during WAL replay of DROP TABLESPACE.  (Tom Lane)
Although we will not even issue an XLOG_TBLSPC_DROP WAL record unless removal of the tablespace's directories succeeds, that does not guarantee that the same operation will succeed during WAL replay. Foreseeable reasons for it to fail include temp files created in the tablespace by Hot Standby backends, wrong directory permissions on a standby server, etc etc. The original coding threw ERROR if replay failed to remove the directories, but that is a serious overreaction. Throwing an error aborts recovery, and worse means that manual intervention will be needed to get the database to start again, since otherwise the same error will recur on subsequent attempts to replay the same WAL record. And the consequence of failing to remove the directories is only that some probably-small amount of disk space is wasted, so it hardly seems justified to throw an error. Accordingly, arrange to report such failures as LOG messages and keep going when a failure occurs during replay. Back-patch to 9.0 where Hot Standby was introduced. In principle such problems can occur in earlier releases, but Hot Standby increases the odds of trouble significantly. Given the lack of field reports of such issues, I'm satisfied with patching back as far as the patch applies easily.
2012-02-05  Fix transient clobbering of shared buffers during WAL replay.  (Tom Lane)
RestoreBkpBlocks was in the habit of zeroing and refilling the target buffer; which was perfectly safe when the code was written, but is unsafe during Hot Standby operation. The reason is that we have coding rules that allow backends to continue accessing a tuple in a heap relation while holding only a pin on its buffer. Such a backend could see transiently zeroed data, if WAL replay had occasion to change other data on the page. This has been shown to be the cause of bug #6425 from Duncan Rance (who deserves kudos for developing a sufficiently-reproducible test case) as well as Bridget Frey's re-report of bug #6200. It most likely explains the original report as well, though we don't yet have confirmation of that. To fix, change the code so that only bytes that are supposed to change will change, even transiently. This actually saves cycles in RestoreBkpBlocks, since it's not writing the same bytes twice. Also fix seq_redo, which has the same disease, though it has to work a bit harder to meet the requirement. So far as I can tell, no other WAL replay routines have this type of bug. In particular, the index-related replay routines, which would certainly be broken if they had to meet the same standard, are not at risk because we do not have coding rules that allow access to an index page when not holding a buffer lock on it. Back-patch to 9.0 where Hot Standby was added.
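A byte-level model of the fixed restore: each region of the page is written exactly once with its final contents, so a reader holding only a pin can never observe transiently-zeroed data. The hole layout mirrors the description above, not the actual XLOG structures:

    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 32

    static void restore_bkp_block(char *page, const char *img,
                                  int hole_offset, int hole_length)
    {
        /* Write every region once, with its final value: no transient zeros. */
        memcpy(page, img, hole_offset);                 /* before the hole */
        memset(page + hole_offset, 0, hole_length);     /* hole's final
                                                           content is zero */
        memcpy(page + hole_offset + hole_length,        /* after the hole  */
               img + hole_offset,
               PAGE_SIZE - (hole_offset + hole_length));
    }

    int main(void)
    {
        char page[PAGE_SIZE];
        char img[PAGE_SIZE - 8];              /* backup image omits the hole */

        memset(page, '?', PAGE_SIZE);         /* stale page contents */
        memcpy(img, "LIVE TUPLES.", 12);      /* 12 bytes before the hole */
        memcpy(img + 12, "MORE TUPLES.", 12); /* 12 bytes after the hole  */

        restore_bkp_block(page, img, 12, 8);
        for (int i = 0; i < PAGE_SIZE; i++)
            putchar(page[i] ? page[i] : '_');
        putchar('\n');
        return 0;
    }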
2012-01-30  Accept a non-existent value in "ALTER USER/DATABASE SET ..." command.  (Heikki Linnakangas)
When default_text_search_config, default_tablespace, or temp_tablespaces setting is set per-user or per-database, with an "ALTER USER/DATABASE SET ..." statement, don't throw an error if the text search configuration or tablespace does not exist. In case of text search configuration, even if it doesn't exist in the current database, it might exist in another database, where the setting is intended to have its effect. This behavior is now the same as search_path's. Tablespaces are cluster-wide, so the same argument doesn't hold for tablespaces, but there's a problem with pg_dumpall: it dumps "ALTER USER SET ..." statements before the "CREATE TABLESPACE" statements. Arguably that's pg_dumpall's fault - it should dump the statements in such an order that the tablespace is created first and then the "ALTER USER SET default_tablespace ..." statements after that - but it seems better to be consistent with search_path and default_text_search_config anyway. Besides, you could still create a dump that throws an error, by creating the tablespace, running "ALTER USER SET default_tablespace", then dropping the tablespace and running pg_dumpall on that. Backpatch to all supported versions.
2012-01-12  Fix CLUSTER/VACUUM FULL for toast values owned by recently-updated rows.  (Tom Lane)
In commit 7b0d0e9356963d5c3e4d329a917f5fbb82a2ef05, I made CLUSTER and VACUUM FULL try to preserve toast value OIDs from the original toast table to the new one. However, if we have to copy both live and recently-dead versions of a row that has a toasted column, those versions may well reference the same toast value with the same OID. The patch then led to duplicate-key failures as we tried to insert the toast value twice with the same OID. (The previous behavior was not very desirable either, since it would have silently inserted the same value twice with different OIDs. That wastes space, but what's worse is that the toast values inserted for already-dead heap rows would not be reclaimed by subsequent ordinary VACUUMs, since they go into the new toast table marked live not deleted.) To fix, check if the copied OID already exists in the new toast table, and if so, assume that it stores the desired value. This is reasonably safe since the only case where we will copy an OID from a previous toast pointer is when toast_insert_or_update was given that toast pointer and so we just pulled the data from the old table; if we got two different values that way then we have big problems anyway. We do have to assume that no other backend is inserting items into the new toast table concurrently, but that's surely safe for CLUSTER and VACUUM FULL. Per bug #6393 from Maxim Boguk. Back-patch to 9.0, same as the previous patch.
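The check-before-insert idea, modeled standalone; the array stands in for the new toast table, oid_exists/copy_toast_value are invented helpers, and the OID value is made up:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX 16
    static unsigned new_toast[MAX];
    static int ntoast = 0;

    static bool oid_exists(unsigned oid)
    {
        for (int i = 0; i < ntoast; i++)
            if (new_toast[i] == oid) return true;
        return false;
    }

    static void copy_toast_value(unsigned oid)
    {
        if (oid_exists(oid)) {
            /* already stored: assume it holds the desired value */
            printf("OID %u present; skip (was a duplicate-key error)\n", oid);
            return;
        }
        new_toast[ntoast++] = oid;
        printf("inserted toast value with OID %u\n", oid);
    }

    int main(void)
    {
        copy_toast_value(41001);   /* live row version */
        copy_toast_value(41001);   /* recently-dead version, same toast OID */
        return 0;
    }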
2011-12-21  Update per-column ACLs, not only per-table ACL, when changing table owner.  (Tom Lane)
We forgot to modify column ACLs, so privileges were still shown as having been granted by the old owner. This meant that neither the new owner nor a superuser could revoke the now-untraceable-to-table-owner permissions. Per bug #6350 from Marc Balmer. This has been wrong since column ACLs were added, so back-patch to 8.4.
2011-10-26  Change FK trigger creation order to better support self-referential FKs.  (Tom Lane)
When a foreign-key constraint references another column of the same table, row updates will queue both the PK's ON UPDATE action and the FK's CHECK action in the same event. The ON UPDATE action must execute first, else the CHECK will check a non-final state of the row and possibly throw an inappropriate error, as seen in bug #6268 from Roman Lytovchenko. Now, the firing order of multiple triggers for the same event is determined by the sort order of their pg_trigger.tgnames, and the auto-generated names we use for FK triggers are "RI_ConstraintTrigger_NNNN" where NNNN is the trigger OID. So most of the time the firing order is the same as creation order, and so rearranging the creation order fixes it. This patch will fail to fix the problem if the OID counter wraps around or adds a decimal digit (eg, from 99999 to 100000) while we are creating the triggers for an FK constraint. Given the small odds of that, and the low usage of self-referential FKs, we'll live with that solution in the back branches. A better fix is to change the auto-generated names for FK triggers, but it seems unwise to do that in stable branches because there may be client code that depends on the naming convention. We'll fix it that way in HEAD in a separate patch. Back-patch to all supported branches, since this bug has existed for a long time.
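The sort-order hazard is easy to demonstrate: firing order follows strcmp() of the trigger names, and "RI_ConstraintTrigger_<oid>" compares as a string, not a number:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Same digit count: creation order (lower OID) wins. */
        const char *pk_update = "RI_ConstraintTrigger_99998";
        const char *fk_check  = "RI_ConstraintTrigger_99999";
        printf("same digit count: %s fires first\n",
               strcmp(pk_update, fk_check) < 0 ? pk_update : fk_check);

        /* If the OID counter grows a digit between the two CREATEs... */
        const char *pk2 = "RI_ConstraintTrigger_99999";
        const char *fk2 = "RI_ConstraintTrigger_100000";
        printf("digit count changed: %s fires first (wrong order!)\n",
               strcmp(pk2, fk2) < 0 ? pk2 : fk2);
        return 0;
    }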
2011-09-06  Avoid possibly accessing off the end of memory in examine_attribute().  (Tom Lane)
Since the last couple of columns of pg_type are often NULL, sizeof(FormData_pg_type) can be an overestimate of the actual size of the tuple data part. Therefore memcpy'ing that much out of the catalog cache, as analyze.c was doing, poses a small risk of copying past the end of memory and incurring SIGSEGV. No such crash has been identified in the field, but we've certainly seen the equivalent happen in other code paths, so patch this one all the way back. Per valgrind testing by Noah Misch, though this is not his proposed patch. I chose to use SearchSysCacheCopy1 rather than inventing special-purpose infrastructure for copying only the minimal part of a pg_type tuple.
2011-08-30  Fix a missed case in code for "moving average" estimate of reltuples.  (Tom Lane)
It is possible for VACUUM to scan no pages at all, if the visibility map shows that all pages are all-visible. In this situation VACUUM has no new information to report about the relation's tuple density, so it wasn't changing pg_class.reltuples ... but it updated pg_class.relpages anyway. That's wrong in general, since there is no evidence to justify changing the density ratio reltuples/relpages, but it's particularly bad if the previous state was relpages=reltuples=0, which means "unknown tuple density". We just replaced "unknown" with "zero". ANALYZE would eventually recover from this, but it could take a lot of repetitions of ANALYZE to do so if the relation size is much larger than the maximum number of pages ANALYZE will scan, because of the moving-average behavior introduced by commit b4b6923e03f4d29636a94f6f4cc2f5cf6298b8c8. The only known situation where we could have relpages=reltuples=0 and yet the visibility map asserts everything's visible is immediately following a pg_upgrade. It might be advisable for pg_upgrade to try to preserve the relpages/reltuples statistics; but in any case this code is wrong on its own terms, so fix it. Per report from Sergey Koposov. Back-patch to 8.4, where the visibility map was introduced, same as the previous change.
2011-08-21  Fix trigger WHEN conditions when both BEFORE and AFTER triggers exist.  (Tom Lane)
Due to tuple-slot mismanagement, evaluation of WHEN conditions for AFTER ROW UPDATE triggers could crash if there had been a BEFORE ROW trigger fired for the same update. Fix by not trying to overload the use of estate->es_trig_tuple_slot. Per report from Yoran Heling. Back-patch to 9.0, when trigger WHEN conditions were introduced.
2011-08-16  Preserve toast value OIDs in toast-swap-by-content for CLUSTER/VACUUM FULL.  (Tom Lane)
This works around the problem that a catalog cache entry might contain a toast pointer that we try to dereference just as a VACUUM FULL completes on that catalog. We will see the sinval message on the cache entry when we acquire lock on the toast table, but by that point we've already told tuptoaster.c "here's the pointer to fetch", so it's difficult from a code structural standpoint to update the pointer before we use it. Much less painful to ensure that toast pointers are not invalidated in the first place. We have to add a bit of code to deal with the case that a value that previously wasn't toasted becomes so; but that should be a seldom-exercised corner case, so the inefficiency shouldn't be significant. Back-patch to 9.0. In prior versions, we didn't allow CLUSTER on system catalogs, and VACUUM FULL didn't result in reassignment of toast OIDs, so there was no problem.
2011-08-14  Fix unsafe order of operations in foreign-table DDL commands.  (Tom Lane)
When updating or deleting a system catalog tuple, it's necessary to acquire RowExclusiveLock on the catalog before looking up the tuple; otherwise a concurrent VACUUM FULL on the catalog might move the tuple to a different TID before we can apply the update. Coding patterns that find the tuple via a table scan aren't at risk here, but when obtaining the tuple from a catalog cache, correct ordering is important; and several routines in foreigncmds.c got it wrong. Noted while running the regression tests in parallel with VACUUM FULL of assorted system catalogs. For consistency I moved all the heap_open calls to the starts of their functions, including a couple for which there was no actual bug. Back-patch to 8.4 where foreigncmds.c was added.
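The ordering rule, sketched with stand-in types; open_catalog_with_lock and syscache_lookup are invented names, not the backend's heap_open()/SearchSysCache() API:

    #include <stdio.h>

    typedef struct { int dummy; } Relation;
    typedef struct { int tid; } Tuple;

    static Relation *open_catalog_with_lock(const char *name)
    { printf("lock %s (RowExclusiveLock)\n", name); static Relation r; return &r; }

    static Tuple *syscache_lookup(const char *key)
    { printf("look up tuple for %s\n", key); static Tuple t = { 42 }; return &t; }

    static void update_tuple(Relation *rel, Tuple *tup)
    { (void) rel; printf("update tuple at tid %d\n", tup->tid); }

    int main(void)
    {
        /* Wrong order: lookup first.  A concurrent VACUUM FULL could move
         * the tuple to a new TID before we acquire the lock, and the
         * update would hit a stale TID.  Right order: */
        Relation *rel = open_catalog_with_lock("pg_foreign_server");
        Tuple *tup = syscache_lookup("my_server");
        update_tuple(rel, tup);
        return 0;
    }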
2011-07-03  Fix EXPLAIN to handle gating Result nodes within inner-indexscan subplans.  (Tom Lane)
It is possible for a NestLoop plan node to pass an OUTER Var into an "inner indexscan" that is an Append construct (derived from an inheritance tree or UNION ALL subquery). The OUTER tuple is then passed down at runtime to the leaf indexscan node(s) where it will actually be used. EXPLAIN has to likewise pass the information about the nestloop's outer subplan down through the Append node, else it will fail to print the outer-reference Vars (with complaints like "bogus varno: 65001"). However, there was a case missed in all this: we could also have gating Result nodes that were inserted into the appendrel plan tree to deal with pseudoconstant qual conditions. So EXPLAIN has to pass down the outer plan node to a Result's subplan, too. Per example from Jon Nelson. The problem is gone in 9.1 because we replaced the nestloop outer-tuple kluge with a Param-based data transfer mechanism. Also, so far as I can tell, the case can't happen before 8.4 because of restrictions on what sorts of appendrel members could be pulled up into the parent query. So this patch is only needed for 8.4 and 9.0.
2011-06-19  Fix thinko in previous patch to always update pg_class.reltuples/relpages.  (Tom Lane)
I mis-simplified the test where ANALYZE decided if it could get away without doing anything: under the new regime, that's never allowed. Per bug #6068 from Jeff Janes. Back-patch to 8.4, just like previous patch.
2011-06-02  Clean up after erroneous SELECT FOR UPDATE/SHARE on a sequence.  (Tom Lane)
My previous commit disallowed this operation, but did nothing about cleaning up the damage if one had already been done. With the operation disallowed, it's okay to just forcibly clear xmax in a sequence's tuple, since any value seen there could not represent a live transaction's lock. So, any sequence-specific operation will repair the problem automatically, whether or not the user has already seen "could not access status of transaction" failures.
2011-05-30  Fix VACUUM so that it always updates pg_class.reltuples/relpages.  (Tom Lane)
When we added the ability for vacuum to skip heap pages by consulting the visibility map, we made it just not update the reltuples/relpages statistics if it skipped any pages. But this could leave us with extremely out-of-date stats for a table that contains any unchanging areas, especially for TOAST tables which never get processed by ANALYZE. In particular this could result in autovacuum making poor decisions about when to process the table, as in recent report from Florian Helmberger. And in general it's a bad idea to not update the stats at all. Instead, use the previous values of reltuples/relpages as an estimate of the tuple density in unvisited pages. This approach results in a "moving average" estimate of reltuples, which should converge to the correct value over multiple VACUUM and ANALYZE cycles even when individual measurements aren't very good. This new method for updating reltuples is used by both VACUUM and ANALYZE, with the result that we no longer need the grotty interconnections that caused ANALYZE to not update the stats depending on what had happened in the parent VACUUM command. Also, fix the logic for skipping all-visible pages during VACUUM so that it looks ahead rather than behind to decide what to do, as per a suggestion from Greg Stark. This eliminates useless scanning of all-visible pages at the start of the relation or just after a not-all-visible page. In particular, the first few pages of the relation will not be invariably included in the scanned pages, which seems to help in not overweighting them in the reltuples estimate. Back-patch to 8.4, where the visibility map was introduced.
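The estimator described above reduces to simple arithmetic. This sketch follows the commit's description (assume the old density for unvisited pages, count the scanned ones exactly) and omits edge-case guards the real code would need:

    #include <stdio.h>

    static double
    estimate_reltuples(double old_reltuples, double old_relpages,
                       double total_pages, double scanned_pages,
                       double scanned_tuples)
    {
        if (scanned_pages >= total_pages)        /* full scan: exact count */
            return scanned_tuples;

        /* old density stands in for the pages we skipped */
        double old_density = old_relpages > 0 ? old_reltuples / old_relpages : 0;
        double unscanned = total_pages - scanned_pages;
        return old_density * unscanned + scanned_tuples;
    }

    int main(void)
    {
        /* 1000 pages at ~100 tuples/page on record; we scanned 50 pages
         * and found 4000 tuples, so the estimate drifts toward the new
         * density rather than jumping: 100 * 950 + 4000 = 99000. */
        printf("new reltuples = %.0f\n",
               estimate_reltuples(100000, 1000, 1000, 50, 4000));
        return 0;
    }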
2011-05-27  Preserve caller's memory context in ProcessCompletedNotifies().  (Tom Lane)
This is necessary to avoid long-term memory leakage, because the main loop in PostgresMain expects to be executing in MessageContext, and hence is a bit sloppy about freeing stuff that is only needed for the duration of processing the current client message. The known case of an actual leak is when encoding conversion has to be done on the incoming command string, but there might be others. Per report from Per-Olov Esgard. Back-patch to 9.0, where the bug was introduced by the LISTEN/NOTIFY rewrite.
2011-05-01  Make CLUSTER lock the old table's toast table before copying data.  (Tom Lane)
We must lock out autovacuuming of the old toast table before computing the OldestXmin horizon we will use. Otherwise, autovacuum could start on the toast table later, compute a later OldestXmin horizon, and remove as DEAD toast tuples that we still need (because we think their parent tuples are only RECENTLY_DEAD). Per further thought about bug #5998.
2011-04-19  Avoid changing an index's indcheckxmin horizon during REINDEX.  (Tom Lane)
There can never be a need to push the indcheckxmin horizon forward, since any HOT chains that are actually broken with respect to the index must pre-date its original creation. So we can just avoid changing pg_index altogether during a REINDEX operation. This offers a cleaner solution than my previous patch for the problem found a few days ago that we mustn't try to update pg_index while we are reindexing it. System catalog indexes will always be created with indcheckxmin = false during initdb, and with this modified code we should never try to change their pg_index entries. This avoids special-casing system catalogs as the former patch did, and should provide a performance benefit for many cases where REINDEX formerly caused an index to be considered unusable for a short time. Back-patch to 8.3 to cover all versions containing HOT. Note that this patch changes the API for index_build(), but I believe it is unlikely that any add-on code is calling that directly.
2011-03-28  Prevent a rowtype from being included in itself.  (Tom Lane)
Eventually we might be able to allow that, but it's not clear how many places need to be fixed to prevent infinite recursion when there's a direct or indirect inclusion of a rowtype in itself. One such place is CheckAttributeType(), which will recurse to stack overflow in cases such as those exhibited in bug #5950 from Alex Perepelica. If we were sure it was the only such place, we could easily modify the code added by this patch to stop the recursion without a complaint ... but it probably isn't the only such place. Hence, throw error until such time as someone is excited enough about this type of usage to put work into making it safe. Back-patch as far as 8.3. 8.2 doesn't have the recursive call in CheckAttributeType in the first place, so I see no need to add code there in the absence of clear evidence of a problem elsewhere.
2011-03-08  Don't throw a warning if vacuum sees PD_ALL_VISIBLE flag set on a page that  (Heikki Linnakangas)
contains newly-inserted tuples that according to our OldestXmin are not yet visible to everyone. The value returned by GetOldestXmin() is conservative, and it can move backwards on repeated calls, so if we see that contradiction between the PD_ALL_VISIBLE flag and status of tuples on the page, we have to assume it's because an earlier vacuum calculated a higher OldestXmin value, and all the tuples really are visible to everyone. We have received several reports of this bug, with the "PD_ALL_VISIBLE flag was incorrectly set in relation ..." warning appearing in logs. We were finally able to hunt it down with David Gould's help to run extra diagnostics in an environment where this happened frequently. Also reword the warning, per Robert Haas' suggestion, to not imply that the PD_ALL_VISIBLE flag is necessarily at fault, as it might also be a symptom of corruption on a tuple header. Backpatch to 8.4, where the PD_ALL_VISIBLE flag was introduced.
2011-02-21  Fix dangling-pointer problem in before-row update trigger processing.  (Tom Lane)
ExecUpdate checked for whether ExecBRUpdateTriggers had returned a new tuple value by seeing if the returned tuple was pointer-equal to the old one. But the "old one" was in estate->es_junkFilter's result slot, which would be scribbled on if we had done an EvalPlanQual update in response to a concurrent update of the target tuple; therefore we were comparing a dangling pointer to a live one. Given the right set of circumstances we could get a false match, resulting in not forcing the tuple to be stored in the slot we thought it was stored in. In the case reported by Maxim Boguk in bug #5798, this led to "cannot extract system attribute from virtual tuple" failures when trying to do "RETURNING ctid". I believe there is a very-low-probability chance of more serious errors, such as generating incorrect index entries based on the original rather than the trigger-modified version of the row. In HEAD, change all of ExecBRInsertTriggers, ExecIRInsertTriggers, ExecBRUpdateTriggers, and ExecIRUpdateTriggers so that they continue to have similar APIs. In the back branches I just changed ExecBRUpdateTriggers, since there is no bug in the ExecBRInsertTriggers case.
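Why pointer equality was the wrong test, as a self-contained model: a matching pointer only proves the same storage was reused, not that the trigger returned the tuple unchanged:

    #include <stdio.h>
    #include <string.h>

    static char slot[32];                       /* reused result slot */

    static const char *before_row_trigger(const char *tuple)
    {
        return tuple;                           /* trigger: "no change" */
    }

    int main(void)
    {
        strcpy(slot, "original row");
        const char *oldtuple = slot;            /* remembered by address */

        strcpy(slot, "EPQ-updated row");        /* slot scribbled on by an
                                                   EvalPlanQual recheck */

        const char *ret = before_row_trigger(slot);
        if (ret == oldtuple)                    /* false match: same storage,
                                                   different tuple value */
            printf("looks unchanged, but is now: %s\n", ret);
        return 0;
    }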
2011-01-20  Make ALTER TABLE revalidate uniqueness and exclusion constraints.  (Robert Haas)
Failure to do so can lead to constraint violations. This was broken by commit 1ddc2703a936d03953657f43345460b9242bbed1 on 2010-02-07, so back-patch to 9.0. Noah Misch. Regression test by me.
2010-11-09  Repair memory leakage while ANALYZE-ing complex index expressions.  (Tom Lane)
The general design of memory management in Postgres is that intermediate results computed by an expression are not freed until the end of the tuple cycle. For expression indexes, ANALYZE has to re-evaluate each expression for each of its sample rows, and it wasn't bothering to free intermediate results until the end of processing of that index. This could lead to very substantial leakage if the intermediate results were large, as in a recent example from Jakub Ouhrabka. Fix by doing ResetExprContext for each sample row. This necessitates adding a datumCopy step to ensure that the final expression value isn't recycled too. Some quick testing suggests that this change adds at worst about 10% to the time needed to analyze a table with an expression index; which is annoying, but seems a tolerable price to pay to avoid unexpected out-of-memory problems. Back-patch to all supported branches.
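The shape of the fix, modeled with a scratch arena standing in for the expression context's per-tuple memory; eval_expression() and the arena helpers are invented for illustration:

    #include <stdio.h>

    typedef struct { char buf[4096]; size_t used; } Arena;

    static void *arena_alloc(Arena *a, size_t n)
    { void *p = a->buf + a->used; a->used += n; return p; }

    static const char *eval_expression(Arena *a, int row)
    {
        char *scratch = arena_alloc(a, 1024);   /* intermediate result */
        snprintf(scratch, 1024, "expr-value-for-row-%d", row);
        return scratch;
    }

    int main(void)
    {
        Arena a = { .used = 0 };
        char kept[8][64];                       /* survives the reset */

        for (int row = 0; row < 8; row++) {
            const char *v = eval_expression(&a, row);
            snprintf(kept[row], sizeof kept[row], "%s", v); /* the datumCopy:
                                                   copy the final value out */
            a.used = 0;                         /* the ResetExprContext: free
                                                   intermediates per row */
        }
        printf("last kept value: %s (arena high-water: one row)\n", kept[7]);
        return 0;
    }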
2010-10-02  Throw an appropriate error if ALTER COLUMN TYPE finds a dependent trigger.  (Tom Lane)
Actually making this case work, if the column is used in the trigger's WHEN condition, will take some new code that probably isn't appropriate to back-patch. For now, just throw a FEATURE_NOT_SUPPORTED error rather than allowing control to reach the "unexpected object" case. Per bug #5688 from Daniel Grace. Back-patch to 9.0 where the possibility of such a dependency was introduced.
2010-09-23  ProcessIncomingNotify *must* reset notifyInterruptOccurred when called.  (Tom Lane)
This was broken in 9.0 by careless addition of an early-exit path. Bug report and diagnosis by Jeff Davis.
2010-09-23  Prevent show_session_authorization from crashing when session_authorization  (Tom Lane)
hasn't been set. The only known case where this can happen is when show_session_authorization is invoked in an autovacuum process, which is possible if an index function calls it, as for example in bug #5669 from Andrew Geery. We could perhaps try to return a sensible value, such as the name of the cluster-owning superuser; but that seems like much more trouble than the case is worth, and in any case it could create new possible failure modes. Simply returning an empty string seems like the most appropriate fix. Back-patch to all supported versions, even those before autovacuum, just in case there's another way to provoke this crash.
2010-09-22  Fix remaining stray references to CVS.  (Tom Lane)
These are just cosmetic and don't seem worth back-patching far. I put them into 9.0 just because it was trivial to do so.
2010-08-19  Fix possible corruption of AfterTriggerEventLists in subtransaction rollback.  (Tom Lane)
afterTriggerInvokeEvents failed to adjust events->tailfree when truncating the last chunk of an event list. This could result in the data being "de-truncated" by afterTriggerRestoreEventList during a subsequent subtransaction abort. Even that wouldn't kill us, because the re-added data would just be events marked DONE --- unless the data had been partially overwritten by new events. Then we might crash, or in any case misbehave (perhaps fire triggers twice, or fire triggers with the wrong event data). Per bug #5622 from Thue Janus Kristensen. Back-patch to 8.4 where the current trigger list representation was introduced.
2010-08-18  Fix failure of "ALTER TABLE t ADD COLUMN c serial" when done by non-owner.  (Tom Lane)
The implicitly created sequence was created as owned by the current user, who could be different from the table owner, eg if current user is a superuser or some member of the table's owning role. This caused sanity checks in the SEQUENCE OWNED BY code to spit up. Although possibly we don't need those sanity checks, the safest fix seems to be to make sure the implicit sequence is assigned the same owner role as the table has. (We still do all permissions checks as the current user, however.) Per report from Josh Berkus. Back-patch to 9.0. The bug goes back to the invention of SEQUENCE OWNED BY in 8.2, but the fix requires an API change for DefineRelation(), which seems to have potential for breaking third-party code if done in a minor release. Given the lack of prior complaints, it's probably not worth fixing in the stable branches.
2010-08-03  Fix inheritance count tracking in ALTER TABLE .. ADD CONSTRAINT.  (Robert Haas)
Without this patch, constraints inherited by children of a parent table which itself has multiple inheritance parents can end up with the wrong coninhcount. After dropping the constraint, the children end up with a leftover copy of the constraint that is not dumped and cannot be dropped. There is a similar problem with ALTER TABLE .. ADD COLUMN, but that looks significantly more difficult to resolve, so I'm committing this fix separately. Back-patch to 8.4, which is the first release that has coninhcount. Report by Hank Enting.