path: root/src
2011-08-31  Fix pg_upgrade to preserve toast relfrozenxids for old 8.3 servers.  (Bruce Momjian)
This fixes a pg_upgrade bug that could lead to query errors when clog files are improperly removed. Backpatch to 8.4, 9.0, 9.1.
2011-08-31  Improve eqjoinsel's ndistinct clamping to work for multiple levels of join.  (Tom Lane)
This patch fixes an oversight in my commit 7f3eba30c9d622d1981b1368f2d79ba0999cdff2 of 2008-10-23. That patch accounted for baserel restriction clauses that reduced the number of rows coming out of a table (and hence the number of possibly-distinct values of a join variable), but not for join restriction clauses that might have been applied at a lower level of join. To account for the latter, look up the sizes of the min_lefthand and min_righthand inputs of the current join, and clamp with those in the same way as for the base relations. Noted while investigating a complaint from Ben Chobot, although this in itself doesn't seem to explain his report. Back-patch to 8.4; previous versions used different estimation methods for which this heuristic isn't relevant.
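As a rough sketch of the clamping rule described above (the helper name and signature are hypothetical, not the actual selfuncs.c code), the estimate of distinct join-key values is limited both by the base relation's post-restriction row count and by the row count of the lower-level join input:

    /*
     * Hypothetical illustration: clamp an ndistinct estimate by the size of
     * the base relation after restriction clauses and by the size of the
     * min_lefthand/min_righthand join input that feeds the current join.
     */
    static double
    clamp_ndistinct(double ndistinct, double baserel_rows, double join_input_rows)
    {
        if (ndistinct > baserel_rows)
            ndistinct = baserel_rows;       /* restrictions on the base rel */
        if (ndistinct > join_input_rows)
            ndistinct = join_input_rows;    /* join clauses applied below us */
        return ndistinct;
    }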
2011-08-30  Fix a missed case in code for "moving average" estimate of reltuples.  (Tom Lane)
It is possible for VACUUM to scan no pages at all, if the visibility map shows that all pages are all-visible. In this situation VACUUM has no new information to report about the relation's tuple density, so it wasn't changing pg_class.reltuples ... but it updated pg_class.relpages anyway. That's wrong in general, since there is no evidence to justify changing the density ratio reltuples/relpages, but it's particularly bad if the previous state was relpages=reltuples=0, which means "unknown tuple density". We just replaced "unknown" with "zero".

ANALYZE would eventually recover from this, but it could take a lot of repetitions of ANALYZE to do so if the relation size is much larger than the maximum number of pages ANALYZE will scan, because of the moving-average behavior introduced by commit b4b6923e03f4d29636a94f6f4cc2f5cf6298b8c8.

The only known situation where we could have relpages=reltuples=0 and yet the visibility map asserts everything's visible is immediately following a pg_upgrade. It might be advisable for pg_upgrade to try to preserve the relpages/reltuples statistics; but in any case this code is wrong on its own terms, so fix it.

Per report from Sergey Koposov. Back-patch to 8.4, where the visibility map was introduced, same as the previous change.
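The "moving average" idea can be sketched as follows (names are hypothetical, not the actual vac_estimate_reltuples code): keep the old density for unscanned pages, use the observed density for scanned pages, and leave the stored estimate alone when nothing was scanned.

    /*
     * Illustrative sketch only.  scanned_tuples is the live-tuple count on
     * the pages actually scanned; old_rel_* are the previous pg_class values.
     */
    static double
    estimate_reltuples(double total_pages, double scanned_pages,
                       double old_rel_pages, double old_rel_tuples,
                       double scanned_tuples)
    {
        double  old_density;

        if (scanned_pages == 0)
            return old_rel_tuples;      /* no new information: keep old value */
        if (old_rel_pages == 0)
            return scanned_tuples * total_pages / scanned_pages; /* no old info */

        old_density = old_rel_tuples / old_rel_pages;
        return old_density * (total_pages - scanned_pages) + scanned_tuples;
    }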
2011-08-28  Actually, all of parallel restore's limitations should be tested earlier.  (Tom Lane)
On closer inspection, whining in restore_toc_entries_parallel is really much too late for any user-facing error case. The right place to do it is at the start of RestoreArchive(), before we've done anything interesting (such as trying to DROP all the targets ...). Back-patch to 8.4, where parallel restore was introduced.
2011-08-28  Be more user-friendly about unsupported cases for parallel pg_restore.  (Tom Lane)
If we are unable to do a parallel restore because the input file is stdin or is otherwise unseekable, we should complain and fail immediately, not after having done some of the restore. Complaining once per thread isn't so cool either, and the messages should be worded to make it clear this is an unsupported case not some weird race-condition bug. Per complaint from Lonni Friedman. Back-patch to 8.4, where parallel restore was introduced.
2011-08-27  Don't assume that "E" response to NEGOTIATE_SSL_CODE means pre-7.0 server.  (Tom Lane)
These days, such a response is far more likely to signify a server-side problem, such as fork failure. Reporting "server does not support SSL" (in sslmode=require) could be quite misleading. But the results could be even worse in sslmode=prefer: if the problem was transient and the next connection attempt succeeds, we'll have silently fallen back to protocol version 2.0, possibly disabling features the user needs.

Hence, it seems best to just eliminate the assumption that backing off to non-SSL/2.0 protocol is the way to recover from an "E" response, and instead treat the server error the same as we would in non-SSL cases.

I tested this change against a pre-7.0 server, and found that there was a second logic bug in the "prefer" path: the test to decide whether to make a fallback connection attempt assumed that we must have opened conn->ssl, which in fact does not happen given an "E" response. After fixing that, the code does indeed connect successfully to pre-7.0, as long as you didn't set sslmode=require. (If you did, you get "Unsupported frontend protocol", which isn't completely off base given the server certainly doesn't support SSL.)

Since there seems no reason to believe that pre-7.0 servers exist anymore in the wild, back-patch to all supported branches.
2011-08-27  Ensure we discard unread/unsent data when abandoning a connection attempt.  (Tom Lane)
There are assorted situations wherein PQconnectPoll() will abandon a connection attempt and try again with different parameters (eg, SSL versus not SSL). However, the code forgot to discard any pending data in libpq's I/O buffers when doing this. In at least one case (server returns E message during SSL negotiation), there is unread input data which bollixes the next connection attempt. I have not checked to see whether this is possible in the other cases where we close the socket and retry, but it seems like a matter of good defensive programming to add explicit buffer-flushing code to all of them. This is one of several issues exposed by Daniel Farina's report of misbehavior after a server-side fork failure. This has been wrong since forever, so back-patch to all supported branches.
2011-08-26  Fix potential memory clobber in tsvector_concat().  (Tom Lane)
tsvector_concat() allocated its result workspace using the "conservative" estimate of the sum of the two input tsvectors' sizes. Unfortunately that wasn't so conservative as all that, because it supposed that the number of pad bytes required could not grow. Which it can, as per test case from Jesper Krogh, if there's a mix of lexemes with positions and lexemes without them in the input data. The fix is to assume that we might add a not-previously-present pad byte for each and every lexeme in the two inputs; which really is conservative, but it doesn't seem worthwhile to try to be more precise. This is an aboriginal bug in tsvector_concat, so back-patch to all versions containing it.
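The corrected sizing rule can be sketched like this (hypothetical helper, not the actual tsvector code): allow one extra alignment pad byte for every lexeme from either input, since a lexeme that had no positions, and hence no pad byte, in its input may acquire one in the concatenated result.

    #include <stddef.h>

    /* Worst-case output size for concatenating two tsvector-like buffers. */
    static size_t
    concat_worst_case_size(size_t size_a, int lexemes_a,
                           size_t size_b, int lexemes_b)
    {
        return size_a + size_b + (size_t) (lexemes_a + lexemes_b);
    }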
2011-08-25  Properly quote SQL/MED generic options in pg_dump output.  (Robert Haas)
Shigeru Hanada
2011-08-20  Fix performance problem when building a lossy tidbitmap.  (Tom Lane)
As pointed out by Sergey Koposov, repeated invocations of tbm_lossify can make building a large tidbitmap into an O(N^2) operation. To fix, make sure we remove more than the minimum amount of information per call, and add a fallback path to behave sanely if we're unable to fit the bitmap within the requested amount of memory. This has been wrong since the tidbitmap code was written, so back-patch to all supported branches.
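A generic sketch of why lossifying down to a lower target helps (the ToyBitmap type and helper are invented for illustration; this is not PostgreSQL's TIDBitmap code): stopping as soon as the count dips below the limit makes each call do a full pass while freeing only a little space, which is what degrades N insertions to O(N^2).

    /* Hypothetical, simplified bitmap state */
    typedef struct
    {
        long    nentries;       /* exact per-page entries currently stored */
        long    maxentries;     /* memory budget, expressed in entries */
    } ToyBitmap;

    /* stand-in for turning one exact page into a lossy chunk */
    static void
    lossify_one_page(ToyBitmap *bm)
    {
        bm->nentries--;
    }

    /* Keep lossifying until comfortably below the limit, not just barely. */
    static void
    lossify(ToyBitmap *bm)
    {
        long    target = bm->maxentries - bm->maxentries / 10;

        while (bm->nentries > target)
            lossify_one_page(bm);
    }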
2011-08-16  Fix race condition in relcache init file invalidation.  (Tom Lane)
The previous code tried to synchronize by unlinking the init file twice, but that doesn't actually work: it leaves a window wherein a third process could read the already-stale init file but miss the SI messages that would tell it the data is stale. The result would be bizarre failures in catalog accesses, typically "could not read block 0 in file ..." later during startup. Instead, hold RelCacheInitLock across both the unlink and the sending of the SI messages. This is more straightforward, and might even be a bit faster since only one unlink call is needed. This has been wrong since it was put in (in 2002!), so back-patch to all supported releases.
2011-08-14  Fix unsafe order of operations in foreign-table DDL commands.  (Tom Lane)
When updating or deleting a system catalog tuple, it's necessary to acquire RowExclusiveLock on the catalog before looking up the tuple; otherwise a concurrent VACUUM FULL on the catalog might move the tuple to a different TID before we can apply the update. Coding patterns that find the tuple via a table scan aren't at risk here, but when obtaining the tuple from a catalog cache, correct ordering is important; and several routines in foreigncmds.c got it wrong. Noted while running the regression tests in parallel with VACUUM FULL of assorted system catalogs. For consistency I moved all the heap_open calls to the starts of their functions, including a couple for which there was no actual bug. Back-patch to 8.4 where foreigncmds.c was added.
2011-08-09  Fix nested PlaceHolderVar expressions that appear only in targetlists.  (Tom Lane)
A PlaceHolderVar's expression might contain another, lower-level PlaceHolderVar. If the outer PlaceHolderVar is used, the inner one certainly will be also, and so we have to make sure that both of them get into the placeholder_list with correct ph_may_need values during the initial pre-scan of the query (before deconstruct_jointree starts). We did this correctly for PlaceHolderVars appearing in the query quals, but overlooked the issue for those appearing in the top-level targetlist; with the result that nested placeholders referenced only in the targetlist did not work correctly, as illustrated in bug #6154. While at it, add some error checking to find_placeholder_info to ensure that we don't try to create new placeholders after it's too late to do so; they have to all be created before deconstruct_jointree starts. Back-patch to 8.4 where the PlaceHolderVar mechanism was introduced.
2011-08-02  Avoid integer overflow when LIMIT + OFFSET >= 2^63.  (Heikki Linnakangas)
This fixes bug #6139 reported by Hitoshi Harada.
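The guard amounts to checking the 64-bit sum before computing it; this sketch uses a hypothetical helper (not the actual nodeLimit.c code) and assumes both inputs are non-negative:

    #include <stdint.h>

    /* Number of tuples the executor effectively needs: offset + limit,
     * clamped instead of being allowed to wrap around to a negative value. */
    static int64_t
    limit_plus_offset(int64_t limit, int64_t offset)
    {
        if (limit > INT64_MAX - offset)
            return INT64_MAX;
        return limit + offset;
    }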
2011-07-28  Fix pg_restore's direct-to-database mode for standard_conforming_strings.  (Tom Lane)
pg_backup_db.c contained a mini SQL lexer with which it tried to identify boundaries between SQL commands, but that code was not designed to cope with standard_conforming_strings, and would get the wrong answer if a backslash immediately precedes a closing single quote in such a string, as per report from Julian Mehnle. The bug only affects direct-to-database restores from archive files made with standard_conforming_strings = on.

Rather than complicating the code some more to try to fix that, let's just rip it all out. The only reason it was needed was to cope with COPY data embedded into ordinary archive entries, which was a layout that was used only for about the first three weeks of the archive format's existence, and never in any production release of pg_dump. Instead, just rely on the archive file layout to tell us whether we're printing COPY data or not.

This bug represents a data corruption hazard in all releases in which standard_conforming_strings can be turned on, ie 8.2 and later, so back-patch to all supported branches.
2011-07-26  Add missing newlines at end of error messages  (Peter Eisentraut)
2011-07-24  Fix previous patch so it also works if not USE_SSL (mea culpa).  (Tom Lane)
On balance, the need to cover this case changes my mind in favor of pushing all error-message generation duties into the two fe-secure.c routines. So do it that way.
2011-07-24  Improve libpq's error reporting for SSL failures.  (Tom Lane)
In many cases, pqsecure_read/pqsecure_write set up useful error messages, which were then overwritten with useless ones by their callers. Fix this by defining the responsibility to set an error message to be entirely that of the lower-level function when using SSL. Back-patch to 8.3; the code is too different in 8.2 to be worth the trouble.
2011-07-24  Use OpenSSL's SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER flag.  (Tom Lane)
This disables an entirely unnecessary "sanity check" that causes failures in nonblocking mode, because OpenSSL complains if we move or compact the write buffer. The only actual requirement is that we not modify pending data once we've attempted to send it, which we don't. Per testing and research by Martin Pihlak, though this fix is a lot simpler than his patch. I put the same change into the backend, although it's less clear whether it's necessary there. We do use nonblock mode in some situations in streaming replication, so seems best to keep the same behavior in the backend as in libpq. Back-patch to all supported releases.
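A minimal sketch of enabling the flag (context setup and error handling elided; SSL_CTX_set_mode and the flag are standard OpenSSL API):

    #include <openssl/ssl.h>

    /* Allow a retried nonblocking SSL_write() to present the same pending
     * data at a different buffer address without OpenSSL raising an error. */
    static void
    relax_write_buffer_check(SSL_CTX *ctx)
    {
        SSL_CTX_set_mode(ctx, SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);
    }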
2011-07-21  Fix PQsetvalue() to avoid possible crash when adding a new tuple.  (Tom Lane)
PQsetvalue unnecessarily duplicated the logic in pqAddTuple, and didn't duplicate it exactly either --- pqAddTuple does not care what is in the tuple-pointer array positions beyond the last valid entry, whereas the code in PQsetvalue assumed such positions would contain NULL. This led to possible crashes if PQsetvalue was applied to a PGresult that had previously been enlarged with pqAddTuple, for instance one built from a server query. Fix by relying on pqAddTuple instead of duplicating logic, and not assuming anything about the contents of res->tuples[res->ntups]. Back-patch to 8.4, where PQsetvalue was introduced. Andrew Chernow
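For context, a usage sketch of the libpq calls involved (assumes libpq 8.4 or later; error handling is trimmed and the column definition is invented for illustration): build a one-row, one-column PGresult client-side. After this fix, the same calls are also safe on a result that already contains server-returned rows.

    #include <string.h>
    #include <libpq-fe.h>

    static PGresult *
    make_single_value_result(const char *value)
    {
        PGresult    *res = PQmakeEmptyPGresult(NULL, PGRES_TUPLES_OK);
        PGresAttDesc attr = {0};

        attr.name = "col1";
        attr.typid = 25;        /* OID of text */
        attr.typlen = -1;
        attr.atttypmod = -1;

        if (!PQsetResultAttrs(res, 1, &attr) ||
            !PQsetvalue(res, 0, 0, (char *) value, (int) strlen(value)))
        {
            PQclear(res);
            return NULL;
        }
        return res;
    }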
2011-07-18  Adapted expected result for latest change to ecpglib.  (Michael Meskes)
2011-07-18  Made ecpglib write double with a precision of 15 digits.  (Michael Meskes)
Patch by Akira Kurosawa <kurosawa-akira@mxc.nes.nec.co.jp>.
2011-07-16  Fix SSPI login when multiple roundtrips are required  (Magnus Hagander)
This fixes SSPI login failures showing "The function requested is not supported", often showing up when connecting to localhost. The cause was that the SSPI handle was not properly updated when multiple roundtrips were required to complete the authentication sequence. Report and analysis by Ahmed Shinwari, patch by Magnus Hagander.
2011-07-15  Fix two ancient bugs in GiST code to re-find a parent after page split:  (Heikki Linnakangas)
First, when following a right-link, we incorrectly marked the current page as the parent of the right sibling. In reality, the parent of the right page is the same as the parent of the current page (or some page to the right of it, gistFindCorrectParent() will sort that out).

Secondly, when we follow a right-link, we must prepend, not append, the right page to our list of pages to visit. That's because we assume that once we hit a leaf page in the list, all the rest are leaf pages too, and give up.

To hit these bugs, you need concurrent actions and several unlucky accidents. Another backend must split the root page, while you're in process of splitting a lower-level page. Furthermore, while you scan the internal nodes to re-find the parent, another backend needs to again split some more internal pages. Even then, the bugs don't necessarily manifest as user-visible errors or index corruption.

While we're at it, make the error reporting a bit better if gistFindPath() fails to re-find the parent. It used to be an assertion, but an elog() seems more appropriate.

Backpatch to all supported branches.
2011-07-05  Fix psql's counting of script file line numbers during COPY.  (Tom Lane)
handleCopyIn incremented pset.lineno for each line of COPY data read from a file. This is correct when reading from the current script file (i.e., we are doing COPY FROM STDIN followed by in-line data), but it's wrong if the data is coming from some other file. Per bug #6083 from Steve Haslam. Back-patch to all supported versions.
2011-07-04  Back-patch "Fix bat file quoting of %ENV" from commit 19b7fac8.  (Andrew Dunstan)
2011-07-03  Back-patch creation of tar.bz2 tarball during "make dist".  (Tom Lane)
Since commit a4d03bbcdaf7739d7e9073ee76bb186f68ddc163, "make dist" has built both gzip- and bzip2-compressed tarballs. However, this was pretty useless, because our tarball build script didn't know about it and proceeded to overwrite the bz2 file with new data. Back-patch the change to all active branches, so that creation of the tar.bz2 file can be removed from the build script.
2011-07-03  Fix EXPLAIN to handle gating Result nodes within inner-indexscan subplans.  (Tom Lane)
It is possible for a NestLoop plan node to pass an OUTER Var into an "inner indexscan" that is an Append construct (derived from an inheritance tree or UNION ALL subquery). The OUTER tuple is then passed down at runtime to the leaf indexscan node(s) where it will actually be used. EXPLAIN has to likewise pass the information about the nestloop's outer subplan down through the Append node, else it will fail to print the outer-reference Vars (with complaints like "bogus varno: 65001"). However, there was a case missed in all this: we could also have gating Result nodes that were inserted into the appendrel plan tree to deal with pseudoconstant qual conditions. So EXPLAIN has to pass down the outer plan node to a Result's subplan, too. Per example from Jon Nelson. The problem is gone in 9.1 because we replaced the nestloop outer-tuple kluge with a Param-based data transfer mechanism. Also, so far as I can tell, the case can't happen before 8.4 because of restrictions on what sorts of appendrel members could be pulled up into the parent query. So this patch is only needed for 8.4 and 9.0.
2011-06-20  Fix thinko in previous patch for optimizing EXISTS-within-EXISTS.  (Tom Lane)
When recursing after an optimization in pull_up_sublinks_qual_recurse, the available_rels value passed down must include only the relations that are in the righthand side of the new SEMI or ANTI join; it's incorrect to pull up a sub-select that refers to other relations, as seen in the added test case. Per report from BangarRaju Vadapalli.

While at it, rethink the idea of recursing below a NOT EXISTS. That is essentially the same situation as pulling up ANY/EXISTS sub-selects that are in the ON clause of an outer join, and it has the same disadvantage: we'd force the two joins to be evaluated according to the syntactic nesting order, because the lower join will most likely not be able to commute with the ANTI join. That could result in having to form a rather large join product, whereas the handling of a correlated subselect is not quite that dumb. So until we can handle those cases better, #ifdef NOT_USED that case. (I think it's okay to pull up in the EXISTS/ANY cases, because SEMI joins aren't so inflexible about ordering.)

Back-patch to 8.4, same as for previous patch in this area. Fortunately that patch hadn't made it into any shipped releases yet.
2011-06-20  Fixed string in German translation that caused a segfault.  (Michael Meskes)
Applied patch by Christoph Berg <cb@df7cb.de> to replace placeholder "%s" by correct string.
2011-06-19  Fix thinko in previous patch to always update pg_class.reltuples/relpages.  (Tom Lane)
I mis-simplified the test where ANALYZE decided if it could get away without doing anything: under the new regime, that's never allowed. Per bug #6068 from Jeff Janes. Back-patch to 8.4, just like previous patch.
2011-06-17  Obtain table locks as soon as practical during pg_dump.  (Tom Lane)
For some reason, when we (I) added table lock acquisition to pg_dump, we didn't think about making it happen as soon as possible after the start of the transaction. What with subsequent additions, there was actually quite a lot going on before we got around to that; which sort of defeats the purpose. Rearrange the order of calls in dumpSchema() to close the risk window as much as we easily can. Back-patch to all supported branches.
2011-06-17  Add overflow checks to int4 and int8 versions of generate_series().  (Robert Haas)
The previous code went into an infinite loop after overflow. In fact, an overflow is not really an error; it just means that the current value is the last one we need to return. So, just arrange to stop immediately when overflow is detected. Back-patch all the way.
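The stop condition can be sketched as follows (hypothetical helper, not the actual int4 generate_series code; assumes a nonzero step): if computing current + step would overflow, the mathematical next value lies outside the int32 range and therefore past the finish value, so the series simply ends there.

    #include <stdbool.h>
    #include <stdint.h>

    /* Decide, after emitting 'current', whether another value follows. */
    static bool
    series_has_next(int32_t current, int32_t finish, int32_t step)
    {
        if (step > 0)
            return current <= INT32_MAX - step && current + step <= finish;
        else
            return current >= INT32_MIN - step && current + step >= finish;
    }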
2011-06-15  Fix failure to account for memory used by tuplestore_putvalues().  (Tom Lane)
This oversight could result in a tuplestore using much more than the intended amount of memory. It would only happen in a code path that loaded a tuplestore via tuplestore_putvalues(), and many of those won't emit huge amounts of data; but cases such as holdable cursors and plpgsql's RETURN NEXT command could have the problem. The fix ensures that the tuplestore will switch to write-to-disk mode when it overruns work_mem. The potential overrun was finite, because we would still count the space used by the tuple pointer array, so the tuplestore code would eventually flip into write-to-disk mode anyway. When storing wide tuples we would go far past the expected work_mem usage before that happened; but this may account for the lack of prior reports. Back-patch to 8.4, where tuplestore_putvalues was introduced. Per bug #6061 from Yann Delorme.
2011-06-14  Fix assorted issues with build and install paths containing spaces.  (Tom Lane)
Apparently there is no buildfarm critter exercising this case after all, because it fails in several places. With this patch, build, install, check-world, and installcheck-world pass for me on OS X.
2011-06-13  Fix aboriginal copy-paste mistake in error message  (Alvaro Herrera)
Spotted by Jaime Casanova
2011-06-10  Work around gcc 4.6.0 bug that breaks WAL replay.  (Tom Lane)
ReadRecord's habit of using both direct references to tmpRecPtr and references to *RecPtr (which is pointing at tmpRecPtr) triggers an optimization bug in gcc 4.6.0, which apparently has forgotten about aliasing rules. Avoid the compiler bug, and make the code more readable to boot, by getting rid of the direct references. Improve the comments while at it. Back-patch to all supported versions, in case they get built with 4.6.0. Tom Lane, with some cosmetic suggestions from Alex Hunsaker
2011-06-09  Use the correct eventlog severity for error  (Magnus Hagander)
2011-06-09  Support silent mode for service registrations on win32  (Magnus Hagander)
Using -s when registering a service will now suppress the application eventlog entries stating that the service is starting and started. MauMau
2011-06-04  Allow building with perl 5.14.  (Andrew Dunstan)
Patch from Alex Hunsaker.
2011-06-04  Expose the "*VALUES*" alias that we generate for a stand-alone VALUES list.  (Tom Lane)
We were trying to make that strictly an internal implementation detail, but it turns out that it's exposed anyway when dumping a view defined like

    CREATE VIEW test_view AS VALUES (1), (2), (3) ORDER BY 1;

This comes out as

    CREATE VIEW ... ORDER BY "*VALUES*".column1;

which fails to parse when reloading the dump.

Hacking ruleutils.c to suppress the column qualification looks like it'd be a risky business, so instead promote the RTE alias to full-fledged usability.

Per bug #6049 from Dylan Adams. Back-patch to all supported branches.
2011-06-02  Clean up after erroneous SELECT FOR UPDATE/SHARE on a sequence.  (Tom Lane)
My previous commit disallowed this operation, but did nothing about cleaning up the damage if one had already been done. With the operation disallowed, it's okay to just forcibly clear xmax in a sequence's tuple, since any value seen there could not represent a live transaction's lock. So, any sequence-specific operation will repair the problem automatically, whether or not the user has already seen "could not access status of transaction" failures.
2011-06-02  Disallow SELECT FOR UPDATE/SHARE on sequences.  (Tom Lane)
We can't allow this because such an operation stores its transaction XID into the sequence tuple's xmax. Because VACUUM doesn't process sequences (and we don't want it to start doing so), such an xmax value won't get frozen, meaning it will eventually refer to nonexistent pg_clog storage, and even wrap around completely. Since the row lock is ignored by nextval and setval, the usefulness of the operation is highly debatable anyway. Per reports of trouble with pgpool 3.0, which had ill-advisedly started using such commands as a form of locking. In HEAD, also disallow SELECT FOR UPDATE/SHARE on toast tables. Although this does work safely given the current implementation, there seems no good reason to allow it. I refrained from changing that behavior in back branches, however.
2011-05-31  Protect GIST logic that assumes penalty values can't be negative.  (Tom Lane)
Apparently sane-looking penalty code might return small negative values, for example because of roundoff error. This will confuse places like gistchoose(). Prevent problems by clamping negative penalty values to zero. (Just to be really sure, I also made it force NaNs to zero.) Back-patch to all supported branches. Alexander Korotkov
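The defensive clamp amounts to something like this (hypothetical helper name; the real change applies the clamp where penalty results are consumed):

    #include <math.h>

    /* Force negative or NaN penalty results to zero before using them. */
    static float
    clamp_penalty(float penalty)
    {
        if (isnan(penalty) || penalty < 0.0f)
            return 0.0f;
        return penalty;
    }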
2011-05-30  Fix portability bugs in use of credentials control messages for peer auth.  (Tom Lane)
Even though our existing code for handling credentials control messages has been basically unchanged since 2001, it was fundamentally wrong: it did not ensure proper alignment of the supplied buffer, and it was calculating buffer sizes and message sizes incorrectly. This led to failures on platforms where alignment padding is relevant, for instance FreeBSD on 64-bit platforms, as seen in a recent Debian bug report passed on by Martin Pitt (http://bugs.debian.org//cgi-bin/bugreport.cgi?bug=612888). Rewrite to do the message-whacking using the macros specified in RFC 2292, following a suggestion from Theo de Raadt in that thread. Tested by me on Debian/kFreeBSD-amd64; since OpenBSD and NetBSD document the identical CMSG API, it should work there too. Back-patch to all supported branches.
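A hedged sketch of the RFC 2292-style pattern (this uses the FreeBSD SCM_CREDS/struct cmsgcred variant; header locations, struct names, and the exact checks differ by platform and from the actual fe-connect/auth code): let CMSG_SPACE() size the control buffer inside a union that guarantees cmsghdr alignment, and use CMSG_FIRSTHDR()/CMSG_DATA() rather than hand-computed offsets.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <string.h>

    static int
    recv_peer_creds(int sock, struct cmsgcred *creds)
    {
        char            dummy;
        struct iovec    iov = { &dummy, 1 };
        union
        {
            struct cmsghdr  hdr;                /* forces proper alignment */
            char            buf[CMSG_SPACE(sizeof(struct cmsgcred))];
        }               cmsgbuf;
        struct msghdr   msg;
        struct cmsghdr *cmsg;

        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cmsgbuf.buf;
        msg.msg_controllen = sizeof(cmsgbuf.buf);

        if (recvmsg(sock, &msg, 0) < 0)
            return -1;

        cmsg = CMSG_FIRSTHDR(&msg);
        if (cmsg == NULL ||
            cmsg->cmsg_len < CMSG_LEN(sizeof(struct cmsgcred)) ||
            cmsg->cmsg_level != SOL_SOCKET ||
            cmsg->cmsg_type != SCM_CREDS)
            return -1;

        memcpy(creds, CMSG_DATA(cmsg), sizeof(struct cmsgcred));
        return 0;
    }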
2011-05-30  Fix VACUUM so that it always updates pg_class.reltuples/relpages.  (Tom Lane)
When we added the ability for vacuum to skip heap pages by consulting the visibility map, we made it just not update the reltuples/relpages statistics if it skipped any pages. But this could leave us with extremely out-of-date stats for a table that contains any unchanging areas, especially for TOAST tables which never get processed by ANALYZE. In particular this could result in autovacuum making poor decisions about when to process the table, as in recent report from Florian Helmberger. And in general it's a bad idea to not update the stats at all.

Instead, use the previous values of reltuples/relpages as an estimate of the tuple density in unvisited pages. This approach results in a "moving average" estimate of reltuples, which should converge to the correct value over multiple VACUUM and ANALYZE cycles even when individual measurements aren't very good.

This new method for updating reltuples is used by both VACUUM and ANALYZE, with the result that we no longer need the grotty interconnections that caused ANALYZE to not update the stats depending on what had happened in the parent VACUUM command.

Also, fix the logic for skipping all-visible pages during VACUUM so that it looks ahead rather than behind to decide what to do, as per a suggestion from Greg Stark. This eliminates useless scanning of all-visible pages at the start of the relation or just after a not-all-visible page. In particular, the first few pages of the relation will not be invariably included in the scanned pages, which seems to help in not overweighting them in the reltuples estimate.

Back-patch to 8.4, where the visibility map was introduced.
2011-05-28  Fix null-dereference crash in parse_xml_decl().  (Tom Lane)
parse_xml_decl's header comment says you can pass NULL for any unwanted output parameter, but it failed to honor this contract for the "standalone" flag. The only currently-affected caller is xml_recv, so the net effect is that sending a binary XML value containing a standalone parameter in its xml declaration would crash the backend. Per bug #6044 from Christopher Dillard. In passing, remove useless initializations of parse_xml_decl's output parameters in xml_parse. Back-patch to 8.3, where this code was introduced.
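The contract being repaired can be sketched like this (hypothetical signature, not the actual xml.c code): every output parameter, including the standalone flag, must be checked for NULL before being written, so callers such as xml_recv can pass NULL for values they do not need.

    #include <stdbool.h>
    #include <stddef.h>

    static void
    report_xml_decl(const char *version_found, bool standalone_found,
                    const char **version_out, bool *standalone_out)
    {
        if (version_out != NULL)
            *version_out = version_found;
        if (standalone_out != NULL)         /* previously dereferenced blindly */
            *standalone_out = standalone_found;
    }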
2011-05-26  Make decompilation of optimized CASE constructs more robust.  (Tom Lane)
We had some hacks in ruleutils.c to cope with various odd transformations that the optimizer could do on a CASE foo WHEN "CaseTestExpr = RHS" clause. However, the fundamental impossibility of covering all cases was exposed by Heikki, who pointed out that the "=" operator could get replaced by an inlined SQL function, which could contain nearly anything at all. So give up on the hacks and just print the expression as-is if we fail to recognize it as "CaseTestExpr = RHS". (We must cover that case so that decompiled rules print correctly; but we are not under any obligation to make EXPLAIN output be 100% valid SQL in all cases, and already could not do so in some other cases.) This approach requires that we have some printable representation of the CaseTestExpr node type; I used "CASE_TEST_EXPR". Back-patch to all supported branches, since the problem case fails in all.
2011-05-24  Avoid uninitialized bits in the result of QTN2QT().  (Tom Lane)
Found with additional valgrind testing. Noah Misch
2011-05-23  Lobotomize typmod check in convert_tuples_by_position, back branches only.  (Tom Lane)
convert_tuples_by_position was rejecting attempts to coerce a record field with -1 typmod to the same type with a non-default typmod. This is in fact the "correct" thing to do (since we're just going to do a type relabeling, not invoke any length-conversion cast function); but it results in rejecting valid cases like bug #6020, because the source record's tupdesc is built from Params that don't have typmod assigned. Since that's a regression from previous versions, which accepted this code, we have to do something about it.

In HEAD, I've fixed the problem properly by causing the Params to receive the correct typmods; but the potential for incidental behavioral changes seems high enough to make it unattractive to make the same change in released branches. (And it couldn't be fixed that way in 8.4 anyway...) Hence this patch just modifies convert_tuples_by_position to not complain if either the input or the output tupdesc has typmod -1.

This is still a shade tighter checking than we did before 9.0, since before that plpgsql failed to consider typmods at all when checking record compatibility. (convert_tuples_by_position is currently used only by plpgsql, so we're not affecting other behavior.)

Back-patch to 8.4, since we recently back-ported convert_tuples_by_position into that branch.
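The relaxed per-column test can be sketched as follows (hypothetical helper and types, not the actual tupconvert.c code): columns are considered convertible when the types match and either side uses the default typmod; only two specific, conflicting typmods are rejected.

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    column_is_compatible(uint32_t intype, int32_t intypmod,
                         uint32_t outtype, int32_t outtypmod)
    {
        if (intype != outtype)
            return false;
        if (intypmod == -1 || outtypmod == -1)
            return true;        /* default typmod on either side: accept */
        return intypmod == outtypmod;
    }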