path: root/src/backend
Age  Commit message  Author
2011-11-08  Make DatumGetInetP() unpack inet datums with a 1-byte header, and add  (Heikki Linnakangas)
a new macro, DatumGetInetPP(), that does not. This brings these macros in line with other DatumGet*P() macros. Backpatch to 8.3, where 1-byte header varlenas were introduced.
2011-11-04  Don't assume that a tuple's header size is unchanged during toasting.  (Tom Lane)
This assumption can be wrong when the toaster is passed a raw on-disk tuple, because the tuple might pre-date an ALTER TABLE ADD COLUMN operation that added columns without rewriting the table. In such a case the tuple's natts value is smaller than what we expect from the tuple descriptor, and so its t_hoff value could be smaller too. In fact, the tuple might not have a null bitmap at all, and yet our current opinion of it is that it contains some trailing nulls. In such a situation, toast_insert_or_update did the wrong thing, because to save a few lines of code it would use the old t_hoff value as the offset where heap_fill_tuple should start filling data. This did not leave enough room for the new nulls bitmap, with the result that the first few bytes of data could be overwritten with null flag bits, as in a recent report from Hubert Depesz Lubaczewski. The particular case reported requires ALTER TABLE ADD COLUMN followed by CREATE TABLE AS SELECT * FROM ... or INSERT ... SELECT * FROM ..., and further requires that there be some out-of-line toasted fields in one of the tuples to be copied; else we'll not reach the troublesome code. The problem can only manifest in this form in 8.4 and later, because before commit a77eaa6a95009a3441e0d475d1980259d45da072, CREATE TABLE AS or INSERT/SELECT wouldn't result in raw disk tuples getting passed directly to heap_insert --- there would always have been at least a junkfilter in between, and that would reconstitute the tuple header with an up-to-date t_natts and hence t_hoff. But I'm backpatching the tuptoaster change all the way anyway, because I'm not convinced there are no older code paths that present a similar risk.
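A hedged reproduction sketch of the scenario described above; the table and column names are illustrative, and the inserted value has to end up toasted out of line for the troublesome code path to be reached at all:

    CREATE TABLE t (a int, b text);
    -- a long, poorly compressible value, so it is likely stored out of line in the toast table
    INSERT INTO t SELECT 1, string_agg(md5(i::text), '') FROM generate_series(1, 1000) AS i;
    ALTER TABLE t ADD COLUMN c int;          -- adds the column without rewriting the table
    CREATE TABLE t_copy AS SELECT * FROM t;  -- raw on-disk tuples reach the toaster directly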
2011-11-03  Fix inline_set_returning_function() to allow multiple OUT parameters.  (Tom Lane)
inline_set_returning_function failed to distinguish functions returning generic RECORD (which require a column list in the RTE, as well as run-time type checking) from those with multiple OUT parameters (which do not). This prevented inlining from happening. Per complaint from Jay Levitt. Back-patch to 8.4 where this capability was introduced.
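A sketch of the kind of function this change benefits; all names here are illustrative. A SQL function with multiple OUT parameters returns a known row type, so it needs no column definition list at the call site and can now be inlined:

    CREATE TABLE t (a int);
    CREATE FUNCTION t_minmax(OUT lo int, OUT hi int) RETURNS SETOF record
        LANGUAGE sql STABLE
        AS $$ SELECT min(a), max(a) FROM t $$;
    SELECT * FROM t_minmax();   -- unlike generic RECORD, no column list is required here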
2011-11-02  Revert "Stop btree indexscans upon reaching nulls in either direction."  (Tom Lane)
This reverts commit 7357f92a3ef1faff0ce85f55f3c5a7f4dc820eaa. As pointed out by Naoya Anzai, we need to do more work to make that idea handle end-of-index cases, and it is looking like too much risk for a back-patch. So bug #6278 is only going to be fixed in HEAD.
2011-11-02  Derive oldestActiveXid at correct time for Hot Standby.  (Simon Riggs)
There was a timing window between when oldestActiveXid was derived and when it should have been derived that only shows itself under heavy load. Move code around to ensure correct timing of derivation. No change to StartupSUBTRANS() code, which is where this failed. Bug report by Chris Redekop
2011-11-02  Start Hot Standby faster when initial snapshot is incomplete.  (Simon Riggs)
If the initial snapshot had overflowed, we can now start whenever the latest snapshot is empty or not overflowed, or, as we did already, when the xmin on the primary is higher than the xmax of our starting snapshot, which proves we have full snapshot data. Bug report by Chris Redekop
2011-11-02  Fix timing of Startup CLOG and MultiXact during Hot Standby  (Simon Riggs)
Patch by me, bug report by Chris Redekop, analysis by Florian Pflug
2011-11-01  Fix race condition with toast table access from a stale syscache entry.  (Tom Lane)
If a tuple in a syscache contains an out-of-line toasted field, and we try to fetch that field shortly after some other transaction has committed an update or deletion of the tuple, there is a race condition: vacuum could come along and remove the toast tuples before we can fetch them. This leads to transient failures like "missing chunk number 0 for toast value NNNNN in pg_toast_2619", as seen in recent reports from Andrew Hammond and Tim Uckun. The design idea of syscache is that access to stale syscache entries should be prevented by relation-level locks, but that fails for at least two cases where toasted fields are possible: ANALYZE updates pg_statistic rows without locking out sessions that might want to plan queries on the same table, and CREATE OR REPLACE FUNCTION updates pg_proc rows without any meaningful lock at all. The least risky fix seems to be an idea that Heikki suggested when we were dealing with a related problem back in August: forcibly detoast any out-of-line fields before putting a tuple into syscache in the first place. This avoids the problem because at the time we fetch the parent tuple from the catalog, we should be holding an MVCC snapshot that will prevent removal of the toast tuples, even if the parent tuple is outdated immediately after we fetch it. (Note: I'm not convinced that this statement holds true at every instant where we could be fetching a syscache entry at all, but it does appear to hold true at the times where we could fetch an entry that could have a toasted field. We will need to be a bit wary of adding toast tables to low-level catalogs that don't have them already.) An additional benefit is that subsequent uses of the syscache entry should be faster, since they won't have to detoast the field. Back-patch to all supported versions. The problem is significantly harder to reproduce in pre-9.0 releases, because of their willingness to flush every entry in a syscache whenever the underlying catalog is vacuumed (cf CatalogCacheFlushRelation); but there is still a window for trouble.
2011-10-31  Stop btree indexscans upon reaching nulls in either direction.  (Tom Lane)
The existing scan-direction-sensitive tests were overly complex, and failed to stop the scan in cases where it's perfectly legitimate to do so. Per bug #6278 from Maksym Boguk. Back-patch to 8.3, which is as far back as the patch applies easily. Doesn't seem worth sweating over a relatively minor performance issue in 8.2 at this late date. (But note that this was a performance regression from 8.1 and before, so 8.2 is being left as an outlier.)
2011-10-29  Fix assorted bogosities in cash_in() and cash_out().  (Tom Lane)
cash_out failed to handle multiple-byte thousands separators, as per bug #6277 from Alexander Law. In addition, cash_in didn't handle that either, nor could it handle multiple-byte positive_sign. Both routines failed to support multiple-byte mon_decimal_point, which I did not think was worth changing, but at least now they check for the possibility and fall back to using '.' rather than emitting invalid output. Also, make cash_in handle trailing negative signs, which formerly it would reject. Since cash_out generates trailing negative signs whenever the locale tells it to, this last omission represents a fail-to-reload-dumped-data bug. IMO that justifies patching this all the way back.
2011-10-26  Change FK trigger creation order to better support self-referential FKs.  (Tom Lane)
When a foreign-key constraint references another column of the same table, row updates will queue both the PK's ON UPDATE action and the FK's CHECK action in the same event. The ON UPDATE action must execute first, else the CHECK will check a non-final state of the row and possibly throw an inappropriate error, as seen in bug #6268 from Roman Lytovchenko. Now, the firing order of multiple triggers for the same event is determined by the sort order of their pg_trigger.tgnames, and the auto-generated names we use for FK triggers are "RI_ConstraintTrigger_NNNN" where NNNN is the trigger OID. So most of the time the firing order is the same as creation order, and so rearranging the creation order fixes it. This patch will fail to fix the problem if the OID counter wraps around or adds a decimal digit (eg, from 99999 to 100000) while we are creating the triggers for an FK constraint. Given the small odds of that, and the low usage of self-referential FKs, we'll live with that solution in the back branches. A better fix is to change the auto-generated names for FK triggers, but it seems unwise to do that in stable branches because there may be client code that depends on the naming convention. We'll fix it that way in HEAD in a separate patch. Back-patch to all supported branches, since this bug has existed for a long time.
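A minimal sketch of the self-referential case in question; the table name and the cascading action are illustrative:

    CREATE TABLE node (
        id        integer PRIMARY KEY,
        parent_id integer REFERENCES node (id) ON UPDATE CASCADE
    );
    INSERT INTO node VALUES (1, 1);        -- the row references itself
    UPDATE node SET id = 2 WHERE id = 1;   -- queues both the PK's ON UPDATE action and the FK's CHECK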
2011-10-23  Don't trust deferred-unique indexes for join removal.  (Tom Lane)
The uniqueness condition might fail to hold intra-transaction, and assuming it does can give incorrect query results. Per report from Marti Raudsepp, though this is not his proposed patch. Back-patch to 9.0, where both these features were introduced. In the released branches, add the new IndexOptInfo field to the end of the struct, to try to minimize ABI breakage for third-party code that may be examining that struct.
2011-10-14  Fix bugs in information_schema.referential_constraints view.  (Tom Lane)
This view was being insufficiently careful about matching the FK constraint to the depended-on primary or unique key constraint. That could result in failure to show an FK constraint at all, or showing it multiple times, or claiming that it depended on a different constraint than the one it really does. Fix by joining via pg_depend to ensure that we find only the correct dependency. Back-patch, but don't bump catversion because we can't force initdb in back branches. The next minor-version release notes should explain that if you need to fix this in an existing installation, you can drop the information_schema schema then re-create it by sourcing $SHAREDIR/information_schema.sql in each database (as a superuser of course).
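The repair procedure mentioned above, sketched for psql; run it as a superuser in each affected database, substituting the installation's actual share directory (for example as reported by pg_config --sharedir) for $SHAREDIR:

    DROP SCHEMA information_schema CASCADE;
    \i $SHAREDIR/information_schema.sql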
2011-10-08  Don't let transform_null_equals=on affect CASE foo WHEN NULL ... constructs.  (Heikki Linnakangas)
transform_null_equals is only supposed to affect "foo = NULL" expressions given directly by the user, not the internal "foo = NULL" expression generated from CASE-WHEN. This fixes bug #6242, reported by Sergey. Backpatch to all supported branches.
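A small illustration of the distinction; the cast to int is only there to give the CASE test expression a type:

    SET transform_null_equals = on;
    -- The implicit "foo = NULL" built from CASE-WHEN is no longer rewritten to
    -- "foo IS NULL", so the WHEN NULL branch can never match:
    SELECT CASE NULL::int WHEN NULL THEN 'matched' ELSE 'not matched' END;
    -- correct result: 'not matched'; before the fix, with the setting on, this
    -- construct could take the WHEN NULL branch instead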
2011-10-04  Add sourcefile/sourceline data to EXEC_BACKEND GUC transmission files.  (Tom Lane)
This oversight meant that on Windows, the pg_settings view would not display source file or line number information for values coming from postgresql.conf, unless the backend had received a SIGHUP since starting. In passing, also make the error detection in read_nondefault_variables a tad more thorough, and fix it to not lose precision on float GUCs (these changes are already in HEAD as of my previous commit).
2011-10-03  ProcedureCreate neglected to record dependencies on default expressions.  (Tom Lane)
Thus, an object referenced in a default expression could be dropped while the function remained present. This was unaccountably missed in the original patch to add default parameters for functions. Reported by Pavel Stehule.
2011-09-22  Translation updates  (Peter Eisentraut)
2011-09-16  gistendscan() forgot to free so->giststate.  (Tom Lane)
This oversight led to a massive memory leak --- upwards of 10KB per tuple --- during creation-time verification of an exclusion constraint based on a GIST index. In most other scenarios it'd just be a leak of 10KB that would be recovered at end of query, so not too significant; though perhaps the leak would be noticeable in a situation where a GIST index was being used in a nestloop inner indexscan. In any case, it's a real leak of long standing, so patch all supported branches. Per report from Harald Fuchs.
2011-09-13  deflist_to_tuplestore dumped core on an option with no value.  (Tom Lane)
Make it return NULL for the option_value, instead. Per report from Frank van Vugt. Back-patch to 8.4 where this code was added.
2011-09-07  Fix corner case bug in numeric to_char().  (Tom Lane)
Trailing-zero stripping applied by the FM specifier could strip zeroes to the left of the decimal point, for a format with no digit positions after the decimal point (such as "FM999."). Reported and diagnosed by Marti Raudsepp, though I didn't use his patch.
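The problematic shape of format string, shown as a hedged example; exact outputs depend on the release:

    SELECT to_char(100::numeric, 'FM999.');
    -- 'FM999.' has no digit positions after the decimal point; before the fix,
    -- trailing-zero stripping could also eat the zeroes of 100 itself, whereas
    -- the integral digits should be preserved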
2011-09-06  Avoid possibly accessing off the end of memory in SJIS2004 conversion.  (Tom Lane)
The code in shift_jis_20042euc_jis_2004() would fetch two bytes even when only one remained in the string. Since conversion functions aren't supposed to assume null-terminated input, this poses a small risk of fetching past the end of memory and incurring SIGSEGV. No such crash has been identified in the field, but we've certainly seen the equivalent happen in other code paths, so patch this one all the way back. Report and patch by Noah Misch.
2011-09-06  Avoid possibly accessing off the end of memory in examine_attribute().  (Tom Lane)
Since the last couple of columns of pg_type are often NULL, sizeof(FormData_pg_type) can be an overestimate of the actual size of the tuple data part. Therefore memcpy'ing that much out of the catalog cache, as analyze.c was doing, poses a small risk of copying past the end of memory and incurring SIGSEGV. No such crash has been identified in the field, but we've certainly seen the equivalent happen in other code paths, so patch this one all the way back. Per valgrind testing by Noah Misch, though this is not his proposed patch. I chose to use SearchSysCacheCopy1 rather than inventing special-purpose infrastructure for copying only the minimal part of a pg_type tuple.
2011-09-01  Further repair of eqjoinsel ndistinct-clamping logic.  (Tom Lane)
Examination of examples provided by Mark Kirkwood and others has convinced me that actually commit 7f3eba30c9d622d1981b1368f2d79ba0999cdff2 was quite a few bricks shy of a load. The useful part of that patch was clamping ndistinct for the inner side of a semi or anti join, and the reason why that's needed is that it's the only way that restriction clauses eliminating rows from the inner relation can affect the estimated size of the join result. I had not clearly understood why the clamping was appropriate, and so mis-extrapolated to conclude that we should clamp ndistinct for the outer side too, as well as for both sides of regular joins. These latter actions were all wrong, and are reverted with this patch. In addition, the clamping logic is now made to affect the behavior of both paths in eqjoinsel_semi, with or without MCV lists to compare. When we have MCVs, we suppose that the most common values are the ones that are most likely to survive the decimation resulting from a lower restriction clause, so we think of the clamping as eliminating non-MCV values, or potentially even the least-common MCVs for the inner relation. Back-patch to 8.4, same as previous fixes in this area.
2011-08-31  Improve eqjoinsel's ndistinct clamping to work for multiple levels of join.  (Tom Lane)
This patch fixes an oversight in my commit 7f3eba30c9d622d1981b1368f2d79ba0999cdff2 of 2008-10-23. That patch accounted for baserel restriction clauses that reduced the number of rows coming out of a table (and hence the number of possibly-distinct values of a join variable), but not for join restriction clauses that might have been applied at a lower level of join. To account for the latter, look up the sizes of the min_lefthand and min_righthand inputs of the current join, and clamp with those in the same way as for the base relations. Noted while investigating a complaint from Ben Chobot, although this in itself doesn't seem to explain his report. Back-patch to 8.4; previous versions used different estimation methods for which this heuristic isn't relevant.
2011-08-30  Fix a missed case in code for "moving average" estimate of reltuples.  (Tom Lane)
It is possible for VACUUM to scan no pages at all, if the visibility map shows that all pages are all-visible. In this situation VACUUM has no new information to report about the relation's tuple density, so it wasn't changing pg_class.reltuples ... but it updated pg_class.relpages anyway. That's wrong in general, since there is no evidence to justify changing the density ratio reltuples/relpages, but it's particularly bad if the previous state was relpages=reltuples=0, which means "unknown tuple density". We just replaced "unknown" with "zero". ANALYZE would eventually recover from this, but it could take a lot of repetitions of ANALYZE to do so if the relation size is much larger than the maximum number of pages ANALYZE will scan, because of the moving-average behavior introduced by commit b4b6923e03f4d29636a94f6f4cc2f5cf6298b8c8. The only known situation where we could have relpages=reltuples=0 and yet the visibility map asserts everything's visible is immediately following a pg_upgrade. It might be advisable for pg_upgrade to try to preserve the relpages/reltuples statistics; but in any case this code is wrong on its own terms, so fix it. Per report from Sergey Koposov. Back-patch to 8.4, where the visibility map was introduced, same as the previous change.
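A way to observe the statistics in question; the table name is illustrative:

    SELECT relpages, reltuples FROM pg_class WHERE relname = 'some_table';
    -- relpages = reltuples = 0 means "tuple density unknown"; the fix keeps a
    -- VACUUM that scanned no pages from turning that into an asserted density of zero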
2011-08-26  Fix potential memory clobber in tsvector_concat().  (Tom Lane)
tsvector_concat() allocated its result workspace using the "conservative" estimate of the sum of the two input tsvectors' sizes. Unfortunately that wasn't so conservative as all that, because it supposed that the number of pad bytes required could not grow. Which it can, as per test case from Jesper Krogh, if there's a mix of lexemes with positions and lexemes without them in the input data. The fix is to assume that we might add a not-previously-present pad byte for each and every lexeme in the two inputs; which really is conservative, but it doesn't seem worthwhile to try to be more precise. This is an aboriginal bug in tsvector_concat, so back-patch to all versions containing it.
2011-08-21  Fix trigger WHEN conditions when both BEFORE and AFTER triggers exist.  (Tom Lane)
Due to tuple-slot mismanagement, evaluation of WHEN conditions for AFTER ROW UPDATE triggers could crash if there had been a BEFORE ROW trigger fired for the same update. Fix by not trying to overload the use of estate->es_trig_tuple_slot. Per report from Yoran Heling. Back-patch to 9.0, when trigger WHEN conditions were introduced.
2011-08-20  Fix performance problem when building a lossy tidbitmap.  (Tom Lane)
As pointed out by Sergey Koposov, repeated invocations of tbm_lossify can make building a large tidbitmap into an O(N^2) operation. To fix, make sure we remove more than the minimum amount of information per call, and add a fallback path to behave sanely if we're unable to fit the bitmap within the requested amount of memory. This has been wrong since the tidbitmap code was written, so back-patch to all supported branches.
2011-08-16  Forget about targeting catalog cache invalidations by tuple TID.  (Tom Lane)
The TID isn't stable enough: we might queue an sinval event before a VACUUM FULL, and then process it afterwards, when the target tuple no longer has the same TID. So we must invalidate entries on the basis of hash value only. The old coding can be shown to result in various bizarre, hard-to-reproduce errors in the presence of concurrent VACUUM FULLs on system catalogs, and could easily result in permanent catalog corruption, up to and including complete loss of tables. This commit is just a minimal fix that removes the unsafe comparison. We should remove transmission of the tuple TID from sinval messages altogether, and then arrange to suppress the extra message in the common case of a heap_update that doesn't change the key hashvalue. But that's going to be much more invasive, and will only produce a probably-marginal performance gain, so it doesn't seem like material for a back-patch. Back-patch to 9.0. Before that, VACUUM FULL refused to do any tuple moving if it found any INSERT_IN_PROGRESS or DELETE_IN_PROGRESS tuples (and CLUSTER would give up altogether), so there was no risk of moving a tuple that might be the subject of an unsent sinval message.
2011-08-16  Fix incorrect order of operations during sinval reset processing.  (Tom Lane)
We have to be sure that we have revalidated each nailed-in-cache relcache entry before we try to use it to load data for some other relcache entry. The introduction of "mapped relations" in 9.0 broke this, because although we updated the state kept in relmapper.c early enough, we failed to propagate that information into relcache entries soon enough; in particular, we could try to fetch pg_class rows out of pg_class before we'd updated its relcache entry's rd_node.relNode value from the map. This bug accounts for Dave Gould's report of failures after "vacuum full pg_class", and I believe that there is risk for other system catalogs as well. The core part of the fix is to copy relmapper data into the relcache entries during "phase 1" in RelationCacheInvalidate(), before they'll be used in "phase 2". To try to future-proof the code against other similar bugs, I also rearranged the order in which nailed relations are visited during phase 2: now it's pg_class first, then pg_class_oid_index, then other nailed relations. This should ensure that RelationClearRelation can apply RelationReloadIndexInfo to all nailed indexes without risking use of not-yet-revalidated relcache entries. Back-patch to 9.0 where the relation mapper was introduced.
2011-08-16  Preserve toast value OIDs in toast-swap-by-content for CLUSTER/VACUUM FULL.  (Tom Lane)
This works around the problem that a catalog cache entry might contain a toast pointer that we try to dereference just as a VACUUM FULL completes on that catalog. We will see the sinval message on the cache entry when we acquire lock on the toast table, but by that point we've already told tuptoaster.c "here's the pointer to fetch", so it's difficult from a code structural standpoint to update the pointer before we use it. Much less painful to ensure that toast pointers are not invalidated in the first place. We have to add a bit of code to deal with the case that a value that previously wasn't toasted becomes so; but that should be a seldom-exercised corner case, so the inefficiency shouldn't be significant. Back-patch to 9.0. In prior versions, we didn't allow CLUSTER on system catalogs, and VACUUM FULL didn't result in reassignment of toast OIDs, so there was no problem.
2011-08-16  Fix race condition in relcache init file invalidation.  (Tom Lane)
The previous code tried to synchronize by unlinking the init file twice, but that doesn't actually work: it leaves a window wherein a third process could read the already-stale init file but miss the SI messages that would tell it the data is stale. The result would be bizarre failures in catalog accesses, typically "could not read block 0 in file ..." later during startup. Instead, hold RelCacheInitLock across both the unlink and the sending of the SI messages. This is more straightforward, and might even be a bit faster since only one unlink call is needed. This has been wrong since it was put in (in 2002!), so back-patch to all supported releases.
2011-08-14  Fix unsafe order of operations in foreign-table DDL commands.  (Tom Lane)
When updating or deleting a system catalog tuple, it's necessary to acquire RowExclusiveLock on the catalog before looking up the tuple; otherwise a concurrent VACUUM FULL on the catalog might move the tuple to a different TID before we can apply the update. Coding patterns that find the tuple via a table scan aren't at risk here, but when obtaining the tuple from a catalog cache, correct ordering is important; and several routines in foreigncmds.c got it wrong. Noted while running the regression tests in parallel with VACUUM FULL of assorted system catalogs. For consistency I moved all the heap_open calls to the starts of their functions, including a couple for which there was no actual bug. Back-patch to 8.4 where foreigncmds.c was added.
2011-08-13  Fix incorrect timeout handling during initial authentication transaction.  (Tom Lane)
The statement start timestamp was not set before initiating the transaction that is used to look up client authentication information in pg_authid. In consequence, enable_sig_alarm computed a wrong value (far in the past) for statement_fin_time. That didn't have any immediate effect, because the timeout alarm was set without reference to statement_fin_time; but if we subsequently blocked on a lock for a short time, CheckStatementTimeout would consult the bogus value when we cancelled the lock timeout wait, and then conclude we'd timed out, leading to immediate failure of the connection attempt. Thus an innocent "vacuum full pg_authid" would cause failures of concurrent connection attempts. Noted while testing other, more serious consequences of vacuum full on system catalogs. We should set the statement timestamp before StartTransactionCommand(), so that the transaction start timestamp is also valid. I'm not sure if there are any non-cosmetic effects of it not being valid, but the xact timestamp is at least sent to the statistics machinery. Back-patch to 9.0. Before that, the client authentication timeout was done outside any transaction and did not depend on this state to be valid.
2011-08-09  Fix nested PlaceHolderVar expressions that appear only in targetlists.  (Tom Lane)
A PlaceHolderVar's expression might contain another, lower-level PlaceHolderVar. If the outer PlaceHolderVar is used, the inner one certainly will be also, and so we have to make sure that both of them get into the placeholder_list with correct ph_may_need values during the initial pre-scan of the query (before deconstruct_jointree starts). We did this correctly for PlaceHolderVars appearing in the query quals, but overlooked the issue for those appearing in the top-level targetlist; with the result that nested placeholders referenced only in the targetlist did not work correctly, as illustrated in bug #6154. While at it, add some error checking to find_placeholder_info to ensure that we don't try to create new placeholders after it's too late to do so; they have to all be created before deconstruct_jointree starts. Back-patch to 8.4 where the PlaceHolderVar mechanism was introduced.
2011-08-02  Move CheckRecoveryConflictDeadlock() call to a safer place.  (Tom Lane)
This kluge was inserted in a spot apparently chosen at random: the lock manager's state is not yet fully set up for the wait, and in particular LockWaitCancel hasn't been armed by setting lockAwaited, so the ProcLock will not get cleaned up if the ereport is thrown. This seems to not cause any observable problem in trivial test cases, because LockReleaseAll will silently clean up the debris; but I was able to cause failures with tests involving subtransactions. Fixes breakage induced by commit c85c941470efc44494fd7a5f426ee85fc65c268c. Back-patch to all affected branches.
2011-08-02  Fix incorrect initialization of ProcGlobal->startupBufferPinWaitBufId.  (Tom Lane)
It was initialized in the wrong place and to the wrong value. With bad luck this could result in incorrect query-cancellation failures in hot standby sessions, should a HS backend be holding pin on buffer number 1 while trying to acquire a lock.
2011-08-02  Avoid integer overflow when LIMIT + OFFSET >= 2^63.  (Heikki Linnakangas)
This fixes bug #6139 reported by Hitoshi Harada.
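An illustration of the overflow condition; the constants are chosen only so that LIMIT plus OFFSET exceeds 2^63 - 1:

    SELECT * FROM generate_series(1, 10) AS g(x)
    LIMIT 9223372036854775806 OFFSET 10;
    -- before the fix, the internal sum of LIMIT and OFFSET could overflow int64
    -- and misbehave; it is now handled cleanly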
2011-07-24  Use OpenSSL's SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER flag.  (Tom Lane)
This disables an entirely unnecessary "sanity check" that causes failures in nonblocking mode, because OpenSSL complains if we move or compact the write buffer. The only actual requirement is that we not modify pending data once we've attempted to send it, which we don't. Per testing and research by Martin Pihlak, though this fix is a lot simpler than his patch. I put the same change into the backend, although it's less clear whether it's necessary there. We do use nonblock mode in some situations in streaming replication, so seems best to keep the same behavior in the backend as in libpq. Back-patch to all supported releases.
2011-07-16  Fix SSPI login when multiple roundtrips are required  (Magnus Hagander)
This fixes SSPI login failures showing "The function requested is not supported", often showing up when connecting to localhost. The reason was not properly updating the SSPI handle when multiple roundtrips were required to complete the authentication sequence. Report and analysis by Ahmed Shinwari, patch by Magnus Hagander
2011-07-15  Fix two ancient bugs in GiST code to re-find a parent after page split:  (Heikki Linnakangas)
First, when following a right-link, we incorrectly marked the current page as the parent of the right sibling. In reality, the parent of the right page is the same as the parent of the current page (or some page to the right of it, gistFindCorrectParent() will sort that out). Secondly, when we follow a right-link, we must prepend, not append, the right page to our list of pages to visit. That's because we assume that once we hit a leaf page in the list, all the rest are leaf pages too, and give up. To hit these bugs, you need concurrent actions and several unlucky accidents. Another backend must split the root page, while you're in process of splitting a lower-level page. Furthermore, while you scan the internal nodes to re-find the parent, another backend needs to again split some more internal pages. Even then, the bugs don't necessarily manifest as user-visible errors or index corruption. While we're at it, make the error reporting a bit better if gistFindPath() fails to re-find the parent. It used to be an assertion, but an elog() seems more appropriate. Backpatch to all supported branches.
2011-07-14  In planner, don't assume that empty parent tables aren't really empty.  (Tom Lane)
There's a heuristic in estimate_rel_size() to clamp the minimum size estimate for a table to 10 pages, unless we can see that vacuum or analyze has been run (and set relpages to something nonzero, so this will always happen for a table that's actually empty). However, it would be better not to do this for inheritance parent tables, which very commonly are really empty and can be expected to stay that way. Per discussion of a recent pgsql-performance report from Anish Kejariwal. Also prevent it from happening for indexes (although this is more in the nature of documentation, since CREATE INDEX normally initializes relpages to something nonzero anyway). Back-patch to 9.0, because the ability to collect statistics across a whole inheritance tree has improved the planner's estimates to the point where this relatively small error makes a significant difference. In the referenced report, merge or hash joins were incorrectly estimated as cheaper than a nestloop with inner indexscan on the inherited table. That was less likely before 9.0 because the lack of inherited stats would have resulted in a default (and rather pessimistic) estimate of the cost of a merge or hash join.
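A typical inheritance setup in which the parent stays empty, with illustrative names:

    CREATE TABLE measurement (ts timestamptz, val float8);
    CREATE TABLE measurement_y2011 (CHECK (ts >= '2011-01-01')) INHERITS (measurement);
    -- the parent holds no rows; the planner no longer clamps its size estimate
    -- to a 10-page minimum just because it has never been vacuumed or analyzed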
2011-07-08  Fix another oversight in logging of changes in postgresql.conf settings.  (Tom Lane)
We were using GetConfigOption to collect the old value of each setting, overlooking the possibility that it didn't exist yet. This does happen in the case of adding a new entry within a custom variable class, as exhibited in bug #6097 from Maxim Boguk. To fix, add a missing_ok parameter to GetConfigOption, but only in 9.1 and HEAD --- it seems possible that some third-party code is using that function, so changing its API in a minor release would cause problems. In 9.0, create a near-duplicate function instead.
2011-07-03  Fix EXPLAIN to handle gating Result nodes within inner-indexscan subplans.  (Tom Lane)
It is possible for a NestLoop plan node to pass an OUTER Var into an "inner indexscan" that is an Append construct (derived from an inheritance tree or UNION ALL subquery). The OUTER tuple is then passed down at runtime to the leaf indexscan node(s) where it will actually be used. EXPLAIN has to likewise pass the information about the nestloop's outer subplan down through the Append node, else it will fail to print the outer-reference Vars (with complaints like "bogus varno: 65001"). However, there was a case missed in all this: we could also have gating Result nodes that were inserted into the appendrel plan tree to deal with pseudoconstant qual conditions. So EXPLAIN has to pass down the outer plan node to a Result's subplan, too. Per example from Jon Nelson. The problem is gone in 9.1 because we replaced the nestloop outer-tuple kluge with a Param-based data transfer mechanism. Also, so far as I can tell, the case can't happen before 8.4 because of restrictions on what sorts of appendrel members could be pulled up into the parent query. So this patch is only needed for 8.4 and 9.0.
2011-06-29  Restore correct btree preprocessing of "indexedcol IS NULL" conditions.  (Tom Lane)
Such a condition is unsatisfiable in combination with any other type of btree-indexable condition (since we assume btree operators are always strict). 8.3 and 8.4 had an explicit test for this, which I removed in commit 29c4ad98293e3c5cb3fcdd413a3f4904efff8762, mistakenly thinking that the case would be subsumed by the more general handling of IS (NOT) NULL added in that patch. Put it back, and improve the comments about it, and add a regression test case. Per bug #6079 from Renat Nasyrov, and analysis by Dean Rasheed.
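The kind of contradiction involved, as a self-contained sketch with illustrative names; since btree operators are strict, no row can satisfy both conditions:

    CREATE TABLE t (x int);
    CREATE INDEX t_x_idx ON t (x);
    SELECT * FROM t WHERE x IS NULL AND x > 42;   -- provably returns no rows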
2011-06-29  Protect pg_stat_reset_shared() against NULL input  (Magnus Hagander)
Per bug #6082, reported by Steve Haslam
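The guarded call, per the bug report; the exact post-fix behavior (error versus no-op) is not spelled out here, but a NULL argument no longer crashes the backend:

    SELECT pg_stat_reset_shared(NULL);   -- formerly could dereference a null pointer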
2011-06-27  Reduce impact of btree page reuse on Hot Standby by fixing off-by-1 error.  (Simon Riggs)
WAL records of type XLOG_BTREE_REUSE_PAGE were generated with a latestRemovedXid one higher than actually needed, because the xid used was the page's opaque->btpo.xact rather than an actually removed xid. Noticed on an otherwise quiet system by Noah Misch. Noah Misch and Simon Riggs
2011-06-20  Fix thinko in previous patch for optimizing EXISTS-within-EXISTS.  (Tom Lane)
When recursing after an optimization in pull_up_sublinks_qual_recurse, the available_rels value passed down must include only the relations that are in the righthand side of the new SEMI or ANTI join; it's incorrect to pull up a sub-select that refers to other relations, as seen in the added test case. Per report from BangarRaju Vadapalli. While at it, rethink the idea of recursing below a NOT EXISTS. That is essentially the same situation as pulling up ANY/EXISTS sub-selects that are in the ON clause of an outer join, and it has the same disadvantage: we'd force the two joins to be evaluated according to the syntactic nesting order, because the lower join will most likely not be able to commute with the ANTI join. That could result in having to form a rather large join product, whereas the handling of a correlated subselect is not quite that dumb. So until we can handle those cases better, #ifdef NOT_USED that case. (I think it's okay to pull up in the EXISTS/ANY cases, because SEMI joins aren't so inflexible about ordering.) Back-patch to 8.4, same as for previous patch in this area. Fortunately that patch hadn't made it into any shipped releases yet.
2011-06-20  Fixed string in German translation that caused a segfault.  (Michael Meskes)
Applied patch by Christoph Berg <cb@df7cb.de> to replace the placeholder "%s" with the correct string.
2011-06-19  Fix thinko in previous patch to always update pg_class.reltuples/relpages.  (Tom Lane)
I mis-simplified the test where ANALYZE decided if it could get away without doing anything: under the new regime, that's never allowed. Per bug #6068 from Jeff Janes. Back-patch to 8.4, just like previous patch.