path: root/src
2016-11-25  Check for pending trigger events on far end when dropping an FK constraint.  (Tom Lane)
When dropping a foreign key constraint with ALTER TABLE DROP CONSTRAINT, we refuse the drop if there are any pending trigger events on the named table; this ensures that we won't remove the pg_trigger row that will be consulted by those events. But we should make the same check for the referenced relation, else we might remove a due-to-be-referenced pg_trigger row for that relation too, resulting in "could not find trigger NNN" or "relation NNN has no triggers" errors at commit.

Per bug #14431 from Benjie Gillam. Back-patch to all supported branches.

Report: <20161124114911.6530.31200@wrigleys.postgresql.org>
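
To make the new check concrete, here is a minimal sketch of its shape, assuming the existing AfterTriggerPendingOnRel() helper and the usual ereport() machinery; the variable names (rel, refRelId) and the message wording are illustrative, not the literal patch:

    /* Refuse the drop if either end of the FK has queued trigger events. */
    if (AfterTriggerPendingOnRel(RelationGetRelid(rel)) ||
        AfterTriggerPendingOnRel(refRelId))
        ereport(ERROR,
                (errcode(ERRCODE_OBJECT_IN_USE),
                 errmsg("cannot ALTER TABLE \"%s\" because it has pending trigger events",
                        RelationGetRelationName(rel))));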
2016-11-25  Fix typo in comment  (Magnus Hagander)
Thomas Munro
2016-11-24  Check that default_tablespace affects ALTER TABLE ADD UNIQUE/PRIMARY KEY.  (Tom Lane)
Seems like a good thing to test, considering that we nearly broke it yesterday.

Michael Paquier
2016-11-24  Fix commit_ts for FrozenXid and BootstrapXid  (Alvaro Herrera)
Previously, requesting commit timestamp for transactions FrozenTransactionId and BootstrapTransactionId resulted in an error. But since those values can validly appear in committed tuples' Xmin, this behavior is unhelpful and error prone: each caller would have to special-case those values before requesting timestamp data for an Xid. We already have a perfectly good interface for returning "the Xid you requested is too old for us to have commit TS data for it", so let's use that instead.

Backpatch to 9.5, where commit timestamps appeared.

Author: Craig Ringer
Discussion: https://www.postgresql.org/message-id/CAMsr+YFM5Q=+ry3mKvWEqRTxrB0iU3qUSRnS28nz6FJYtBwhJg@mail.gmail.com
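
A hypothetical caller, sketching the interface described above; the signature of TransactionIdGetCommitTsData() is assumed from commit_ts.c:

    TimestampTz ts;

    if (!TransactionIdGetCommitTsData(xmin, &ts, NULL))
    {
        /*
         * FrozenTransactionId, BootstrapTransactionId, or an xid older than
         * what we keep commit-TS data for: simply "no data", not an error.
         */
    }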
2016-11-23  Avoid masking a function parameter name with a local variable name.  (Tom Lane)
No actual bug here, but it might confuse readers, so change the name of the local variable.

Ashutosh Bapat
2016-11-23  Make sure ALTER TABLE preserves index tablespaces.  (Tom Lane)
When rebuilding an existing index, ALTER TABLE correctly kept the physical file in the same tablespace, but it messed up the pg_class entry if the index had been in the database's default tablespace and "default_tablespace" was set to some non-default tablespace. This led to an inaccessible index.

Fix by fixing pg_get_indexdef_string() to always include a tablespace clause, whether or not the index is in the default tablespace. The previous behavior was installed in commit 537e92e41, and I think it just wasn't thought through very clearly; certainly the possible effect of default_tablespace wasn't considered. There's some risk in changing the behavior of this function, but there are no other call sites in the core code. Even if it's being used by some third party extension, it's fairly hard to envision a usage that is okay with a tablespace clause being appended some of the time but can't handle it being appended all the time.

Back-patch to all supported versions.

Code fix by me, investigation and test cases by Michael Paquier.

Discussion: <1479294998857-5930602.post@n3.nabble.com>
2016-11-22  Remove barrier.h  (Robert Haas)
A new thing also called a "barrier" is proposed, but whether we decide to take that patch or not, this file seems to have outlived its usefulness.

Thomas Munro
2016-11-22  Code review for commit 274bb2b3857cc987cfa21d14775cae9b0dababa5.  (Robert Haas)
Avoid memory leak in conninfo_uri_parse_options. Use the current host rather than the comma-separated list of host names when the host name is needed for GSS, SSPI, or SSL authentication. Document the way connect_timeout interacts with multiple host specifications.

Takayuki Tsunakawa
2016-11-22  Improve handling of "UPDATE ... SET (column_list) = row_constructor".  (Tom Lane)
Previously, the right-hand side of a multiple-column assignment, if it wasn't a sub-SELECT, had to be a simple parenthesized expression list, because gram.y was responsible for "bursting" the construct into independent column assignments. This had the minor defect that you couldn't write ROW (though you should be able to, since the standard says this is a row constructor), and the rather larger defect that unlike other uses of row constructors, we would not expand a "foo.*" item into multiple columns.

Fix that by changing the RHS to be just "a_expr" in the grammar, leaving it to transformMultiAssignRef to separate the elements of a RowExpr; which it will do only after performing standard transformation of the RowExpr, so that "foo.*" behaves as expected.

The key reason we didn't do that before was the hard-wired handling of DEFAULT tokens (SetToDefault nodes). This patch deals with that issue by allowing DEFAULT in any a_expr and having parse analysis throw an error if SetToDefault is found in an unexpected place. That's an improvement anyway since the error can be more specific than just "syntax error".

The SQL standard suggests that the RHS could be any a_expr yielding a suitable row value. This patch doesn't really move the goal posts in that respect --- you're still limited to RowExpr or a sub-SELECT --- but it does fix the grammar restriction, so it provides some tangible progress towards a full implementation. And the limitation is now documented by an explicit error message rather than an unhelpful "syntax error".

Discussion: <8542.1479742008@sss.pgh.pa.us>
2016-11-22  Support condition variables.  (Robert Haas)
Condition variables provide a flexible way to sleep until a cooperating process causes an arbitrary condition to become true. In simple cases, this can be accomplished with a WaitLatch/ResetLatch loop; the cooperating process can call SetLatch after performing work that might cause the condition to be satisfied, and the waiting process can recheck the condition each time. However, if the process performing the work doesn't have an easy way to identify which processes might be waiting, this doesn't work, because it can't identify which latches to set. Condition variables solve that problem by internally maintaining a list of waiters; a process that may have caused some waiter's condition to be satisfied must "signal" or "broadcast" on the condition variable.

Robert Haas and Thomas Munro
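
The intended usage pattern, reconstructed from the description above (a sketch: the function names follow condition_variable.h, while the wait-event tag and the predicate are illustrative):

    /* Waiter: sleep until some other process makes the condition true. */
    ConditionVariablePrepareToSleep(&cv);
    while (!condition_is_satisfied())       /* hypothetical predicate */
        ConditionVariableSleep(&cv, WAIT_EVENT_MQ_INTERNAL);
    ConditionVariableCancelSleep();

    /* Worker: after doing work that might satisfy someone's condition. */
    ConditionVariableBroadcast(&cv);        /* or ConditionVariableSignal() */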
2016-11-21  Fix uninitialized variable.  (Tom Lane)
Oversight in a734fd5d1.

Michael Paquier
2016-11-21  Fix PGLC_localeconv() to handle errors better.  (Tom Lane)
The code was intentionally not very careful about leaking strdup'd strings in case of an error. That was forgivable probably, but it also failed to notice strdup() failures, which could lead to subsequent null-pointer-dereference crashes, since many callers unsurprisingly didn't check for null pointers in the struct lconv fields. An even worse problem is that it could throw error while we were setlocale'd to a non-C locale, causing unwanted behavior in subsequent libc calls.

Rewrite to ensure that we cannot throw elog(ERROR) until after we've restored the previous locale settings, or at least attempted to. (I'm sorely tempted to make restore failure be a FATAL error, but will refrain for the moment.) Having done that, it's not much more work to ensure that we clean up strdup'd storage on the way out, too.

This code is substantially the same in all supported branches, so back-patch all the way.

Michael Paquier and Tom Lane

Discussion: <CAB7nPqRMbGqa_mesopcn4MPyTs34eqtVEK7ELYxvvV=oqS00YA@mail.gmail.com>
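
The ordering discipline the rewrite enforces looks roughly like this (a condensed sketch, not the actual function; locale_monetary is the GUC string from pg_locale.c, and the failure handling is simplified):

    char   *save;
    bool    failed = false;

    save = strdup(setlocale(LC_MONETARY, NULL));    /* remember current locale */
    setlocale(LC_MONETARY, locale_monetary);        /* switch temporarily */

    /*
     * ... copy the struct lconv fields with strdup(), merely recording any
     * strdup() failure in "failed" rather than elog'ing here ...
     */

    setlocale(LC_MONETARY, save);                   /* restore FIRST */
    free(save);

    if (failed)
        ereport(ERROR,
                (errcode(ERRCODE_OUT_OF_MEMORY),
                 errmsg("out of memory")));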
2016-11-21  Fix optimization for skipping searches for parallel-query hazards.  (Tom Lane)
Fix thinko in commit da1c91631: even if the original query was free of parallel hazards, we might introduce such a hazard by adding PARAM_EXEC Param nodes. Adjust is_parallel_safe() so that it will scan the given expression whenever any such nodes have been created.

Per report from Andreas Seltenreich.

Discussion: <878tse6yvf.fsf@credativ.de>
2016-11-21  autovacuum: Drop orphan temp tables more quickly but with more caution.  (Robert Haas)
Previously, we only dropped an orphan temp table when it became old enough to threaten wraparound; now we drop it immediately. The only value of waiting is that someone might be able to examine the contents of the orphan temp table for forensic purposes, but it's pretty difficult to actually do that and few users will wish to do so. On the flip side, not performing the drop immediately generates log spam and bloats pg_class.

In addition, per a report from Grigory Smolkin, if a temporary schema contains a very large number of temporary tables, a backend attempting to clear the temporary schema might fail due to lock table exhaustion. It's helpful for autovacuum to clean up after such cases, and we don't want it to wait for wraparound to threaten before doing so. To prevent autovacuum from failing in the same manner as a backend trying to drop an entire temp schema, remove orphan temp tables in batches of 50, committing after each batch, so that we don't accumulate an unbounded number of locks. If a drop fails, retry other orphan tables that need to be dropped up to 10 times before giving up. With this system, if a backend does fail to clean a temporary schema due to lock table exhaustion, autovacuum should hopefully put things right the next time it processes the database.

Discussion: CAB7nPqSbYT6dRwsXVgiKmBdL_ARemfDZMPA+RPeC_ge0GK70hA@mail.gmail.com

Michael Paquier, with a bunch of comment changes by me.
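
In outline, the batching might look like this (an illustrative sketch, not the actual do_autovacuum() code; orphan_oids and the failure/retry handling are assumptions and elisions):

    ListCell   *cell;
    int         in_batch = 0;

    StartTransactionCommand();
    foreach(cell, orphan_oids)              /* OIDs of orphan temp tables */
    {
        ObjectAddress object;

        object.classId = RelationRelationId;
        object.objectId = lfirst_oid(cell);
        object.objectSubId = 0;
        performDeletion(&object, DROP_CASCADE, PERFORM_DELETION_INTERNAL);

        /* Commit every 50 drops so locks don't accumulate without bound. */
        if (++in_batch >= 50)
        {
            CommitTransactionCommand();
            StartTransactionCommand();
            in_batch = 0;
        }
    }
    CommitTransactionCommand();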
2016-11-21  Fix test for subplans in force-parallel mode.  (Tom Lane)
We mustn't force parallel mode if the query has any subplans, since ExecSerializePlan doesn't transmit them to workers. Testing top_plan->initPlan is inadequate because (1) there might be initPlans attached to lower plan nodes, and (2) non-initPlan subplans don't work either. There's certainly room for improvement in those restrictions, but for the moment that's what we've got.

Amit Kapila, per report from Andreas Seltenreich

Discussion: <8737im6pmh.fsf@credativ.de>
2016-11-20  Prevent multicolumn expansion of "foo.*" in an UPDATE source expression.  (Tom Lane)
Because we use transformTargetList() for UPDATE as well as SELECT tlists, the code accidentally tried to expand a "*" reference into several columns. This is nonsensical, because the UPDATE syntax provides exactly one target column to put the value into. The immediate result was that transformUpdateTargetList() got confused and reported "UPDATE target count mismatch --- internal error". It seems better to treat such a reference as a plain whole-row variable, as it would be in other contexts. (This could produce useful results when the target column is of composite type.)

Fix by tweaking transformTargetList() to perform *-expansion only conditionally, depending on its exprKind parameter.

Back-patch to 9.3. The problem exists further back, but a fix would be much more invasive before that, because transformTargetList() wasn't told what kind of list it was working on. Doesn't seem worth the trouble given the lack of field reports. (I only noticed it because I was checking the code while trying to improve the documentation about how we handle "foo.*".)

Discussion: <4308.1479595330@sss.pgh.pa.us>
2016-11-19  Fix latent costing error in create_merge_append_path.  (Tom Lane)
create_merge_append_path should use the path rowcount it just computed, not rel->tuples, for costing purposes. Those numbers should always be the same at present, but if we ever support parameterized MergeAppend paths (a case this function is otherwise prepared for), the former would be right and the latter wrong.

No need for back-patch since the problem is only latent.

Ashutosh Bapat

Discussion: <CAFjFpRek+cLCnTo24youuGtsq4zRphEB8EUUPjDxZjnL4n4HYQ@mail.gmail.com>
2016-11-19  Code review for GUC serialization/deserialization code.  (Tom Lane)
The serialization code dumped core for a string-valued GUC whose value is NULL, which is a legal state. The infrastructure isn't capable of transmitting that state exactly, but fortunately, transmitting an empty string instead should be close enough (compare, eg, commit e45e990e4).

The code potentially underestimated the space required to format a real-valued variable, both because it made an unwarranted assumption that %g output would never be longer than %e output, and because it didn't count right even for %e format. In practice this would pretty much always be masked by overestimates for other variables, but it's still wrong.

Also fix boundary-case error in read_gucstate, incorrect handling of the case where guc_sourcefile is non-NULL but zero length (not clear that can happen, but if it did, this code would get totally confused), and confusingly useless check for a NULL result from read_gucstate.

Andreas Seltenreich discovered the core dump; other issues noted while reading nearby code. Back-patch to 9.5 where this code was introduced.

Michael Paquier and Tom Lane

Discussion: <871sy78wno.fsf@credativ.de>
2016-11-18  Add pg_sequences view  (Peter Eisentraut)
Like pg_tables, pg_views, and others, this view contains information about sequences in a way that is independent of the system catalog layout but more comprehensive than the information schema.

To help implement the view, add a new internal function pg_sequence_last_value() to return the last value of a sequence. This is kept separate from pg_sequence_parameters() to separate querying run-time state from catalog-like information.

Reviewed-by: Andreas Karlsson <andreas@proxel.se>
2016-11-18  Clean up pg_dump tests, re-enable BLOB testing  (Stephen Frost)
Add a loop to check that each test covers all of the pg_dump runs. We (I) had been a bit sloppy when adding new runs and not making sure to mark if they should be under like or unlike for each test; this loop makes sure that the test system will complain if any are forgotten in the future. The loop also correctly handles the 'catch all' cases, which are used to avoid running unnecessary specific checks when a single catch-all can be done (eg: a no-acl run should not have any GRANT commands).

Also, re-enable the testing of blobs, but use lo_from_bytea() instead of trying to be cute and writing out to a file and then reading it back in with psql, which proved to be difficult for some buildfarm members. This allows us to add support for testing the --no-blobs option which will be getting added shortly, provided the buildfarm doesn't blow up on this.
2016-11-17  Remove or reduce verbosity of some debug messages.  (Robert Haas)
The debug messages that merely print StartTransactionCommand, CommitTransactionCommand, ProcessUtility, or ProcessQuery with no additional details seem to be useless. Get rid of them.

The transaction status messages produced by ShowTransactionState are occasionally useful, but they are extremely verbose, producing multiple lines of log output every time they fire, which can happen multiple times per transaction. So, reduce the level to DEBUG5; avoid emitting an extra line just to explain which debug point is at issue; and tighten up the rest of the message so it doesn't use quite so much horizontal space.

With these changes, it's possible to run a somewhat busy system with a log level even as high as DEBUG4, whereas previously anything above DEBUG2 would flood the log with output that probably wasn't really all that useful.
2016-11-17  Fix pg_dump's handling of circular dependencies in views.  (Tom Lane)
pg_dump's traditional solution for breaking a circular dependency involving a view was to create the view with CREATE TABLE and then later issue CREATE RULE "_RETURN" ... to convert the table to a view, relying on the backend's very very ancient code that supports making views that way. We've wanted to get rid of that kluge for a long time, but the thing that finally motivates doing something about it is the recognition that this method fails with the --clean option, because it leads to issuing DROP RULE "_RETURN" followed by DROP TABLE --- and the backend won't let you drop a view's _RETURN rule.

Instead, let's break circular dependencies by initially creating the view using CREATE VIEW AS SELECT NULL::columntype AS columnname, ... (so that it has the right column names and types to support external references, but no dependencies beyond the column data types), and then later dumping the ON SELECT rule using the spelling CREATE OR REPLACE VIEW. This method wasn't available when this code was originally written, but it's been possible since PG 7.3, so it seems fine to start relying on it now.

To solve the --clean problem, make the dropStmt for an ON SELECT rule be CREATE OR REPLACE VIEW with the same dummy target list as above. In this way, during the DROP phase, we first reduce the view to have no extra dependencies, and then we can drop it entirely when we've gotten rid of whatever had a circular dependency on it. (Note: this should work adequately well with the --if-exists option, since the CREATE OR REPLACE VIEW will go through whether the view exists or not. It could fail if the view exists with a conflicting column set, but we don't really support --clean against a non-matching database anyway.)

This allows cleaning up some other kluges inside pg_dump, notably that we don't need a notion of reloptions attached to a rule anymore.

Although this is a bug fix, commit to HEAD only for now. The problem's existed for a long time and we've had relatively few complaints, so it doesn't really seem worth taking risks to fix it in the back branches. We might revisit that choice if no problems emerge.

Discussion: <19092.1479325184@sss.pgh.pa.us>
2016-11-17  Improve pg_dump/pg_restore --create --if-exists logic.  (Tom Lane)
Teach it not to complain if the dropStmt attached to an archive entry is actually spelled CREATE OR REPLACE VIEW, since that will happen due to an upcoming bug fix. Also, if it doesn't recognize a dropStmt, have it print a WARNING and then emit the dropStmt unmodified. That seems like a much saner behavior than Assert'ing or dumping core due to a null-pointer dereference, which is what would happen before :-(.

Back-patch to 9.4 where this option was introduced.

Discussion: <19092.1479325184@sss.pgh.pa.us>
2016-11-17  Re-pgindent src/bin/pg_dump/*  (Tom Lane)
Cleanup for recent patches --- it's not much change, but I got annoyed while re-indenting the view-rule fix I'm working on.
2016-11-17  Avoid pin scan for replay of XLOG_BTREE_VACUUM in all cases  (Alvaro Herrera)
Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to require complex interlocking that matched the requirements on the master. This required an O(N) operation that became a significant problem with large indexes, causing replication delays of seconds or in some cases minutes while the XLOG_BTREE_VACUUM was replayed.

This commit skips the "pin scan" that was previously required, by observing in detail when and how it is safe to do so, with full documentation. The pin scan is skipped only in replay; the VACUUM code path on master is not touched here.

No tests included. Manual tests using an additional patch to view WAL records and their timing have shown the change in WAL records and their handling has successfully reduced replication delay.

This is a back-patch of commits 687f2cd7a015, 3e4b7d87988f, b60284261375 by Simon Riggs, to branches 9.4 and 9.5. No further backpatch is possible because this depends on catalog scans being MVCC. I (Álvaro) additionally fixed a slight problem in the README, which explains why this touches the 9.6 and master branches.
2016-11-15  Check that result tupdesc has exactly 1 column in return_next scalar case.  (Tom Lane)
This should always be true, but since we're relying on a tuple descriptor passed from outside pltcl itself, let's check. Per a gripe from Coverity.
2016-11-15  Reserve zero as an invalid DSM handle.  (Robert Haas)
Previously, the handle for the control segment could not be zero, but some other DSM segment could potentially have a handle value of zero. However, that means that if someone wanted to store a dsm_handle that might or might not be valid, they would need a separate boolean to keep track of whether the associated value is legal. That's annoying, so change things so that no DSM segment can ever have a handle of 0 - or as we call it here, DSM_HANDLE_INVALID.

Thomas Munro. This was submitted as part of a much larger patch to add a malloc-like allocator for dynamic shared memory, but this part seems like a good idea independently of the rest of the patch.
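
What the reservation buys callers, sketched with a hypothetical struct; dsm_handle, DSM_HANDLE_INVALID, and dsm_attach() are the real names, the rest is illustrative:

    typedef struct SharedThing
    {
        dsm_handle  handle;     /* DSM_HANDLE_INVALID if no segment yet */
    } SharedThing;              /* no separate "bool handle_valid" needed */

    if (thing->handle != DSM_HANDLE_INVALID)
        seg = dsm_attach(thing->handle);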
2016-11-15  Allow DOS-style line endings in ~/.pgpass files.  (Tom Lane)
On Windows, libc will mask \r\n line endings for us, since we read the password file in text mode. But that doesn't happen on Unix. People who share password files across both systems might have \r\n line endings in a file they use on Unix, so as a convenience, ignore trailing \r. Per gripe from Josh Berkus.

In passing, put the existing check for empty line somewhere where it's actually useful, ie after stripping the newline not before.

Vik Fearing, adjusted a bit by me

Discussion: <0de37763-5843-b2cc-855e-5d0e5df25807@agliodbs.com>
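
The gist of the fix in plain C (illustrative, not libpq's exact code): strip the newline and any trailing carriage return first, and only then test for an empty line:

    int     len = strlen(buf);

    /* strip trailing newline and any trailing carriage return */
    while (len > 0 && (buf[len - 1] == '\n' || buf[len - 1] == '\r'))
        buf[--len] = '\0';

    if (len == 0)
        continue;               /* the empty-line check is now useful */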
2016-11-15  Account for catalog snapshot in PGXACT->xmin updates.  (Tom Lane)
The CatalogSnapshot was not plugged into SnapshotResetXmin()'s accounting for whether MyPgXact->xmin could be cleared or advanced. In normal transactions this was masked by the fact that the transaction snapshot would be older, but during backend startup and certain utility commands it was possible to re-use the CatalogSnapshot after MyPgXact->xmin had been cleared, meaning that recently-deleted rows could be pruned even though this snapshot could still see them, causing unexpected catalog lookup failures. This effect appears to be the explanation for a recent failure on buildfarm member piculet.

To fix, add the CatalogSnapshot to the RegisteredSnapshots heap whenever it is valid.

In the previous logic, it was possible for the CatalogSnapshot to remain valid across waits for client input, but with this change that would mean it delays advance of global xmin in cases where it did not before. To avoid possibly causing new table-bloat problems with clients that sit idle for long intervals, add code to invalidate the CatalogSnapshot before waiting for client input. (When the backend is busy, it's unlikely that the CatalogSnapshot would be the oldest snap for very long, so we don't worry about forcing early invalidation of it otherwise.)

In passing, remove the CatalogSnapshotStale flag in favor of using "CatalogSnapshot != NULL" to represent validity, as we do for the other special snapshots in snapmgr.c. And improve some obsolete comments.

No regression test because I don't know a deterministic way to cause this failure. But the stress test shown in the original discussion provokes "cache lookup failed for relation 1255" within a few dozen seconds for me.

Back-patch to 9.4 where MVCC catalog scans were introduced. (Note: it's quite easy to produce similar failures with the same test case in branches before 9.4. But MVCC catalog scans were supposed to fix that.)

Discussion: <16447.1478818294@sss.pgh.pa.us>
2016-11-15  Limit the number of tapes used for a sort to 501.  (Robert Haas)
Gigantic numbers of tapes don't work out well. Original patch by Peter Geoghegan; comments entirely rewritten by me.
2016-11-15  Fix broken statement in UCS_to_most.pl.  (Robert Haas)
This has been wrong for a very long time, and it's puzzling to me how it ever worked for anyone.

Kyotaro Horiguchi
2016-11-15  pgbench: Increase maximum size of log filename from 64 to MAXPGPATH.  (Robert Haas)
Commit 41124a91e61fc6d9681c1e8b15ba30494e84d643 allowed the transaction log file prefix to be changed but left in place the existing 64-character limit on the total length of a log file name. It's possible that could be inconvenient for somebody, so increase the limit to MAXPGPATH, which ought to be enough for anybody. Per a suggestion from Tom Lane.
2016-11-14  Provide NO_INSTALLCHECK option for pgxs.  (Andres Freund)
This provides a less ugly way to avoid running the regression tests in contrib modules like pg_stat_statements.

Discussion: <22432.1478968242@sss.pgh.pa.us>
2016-11-14  Fix typo in comment  (Magnus Hagander)
The function was renamed in 908e23473, but the comment never learned about it.
2016-11-14  Allow individual TAP tests to be run via PROVE_TESTS  (Peter Eisentraut)
Add a new optional Makefile variable PROVE_TESTS that, if passed as a space-separated list of paths relative to the Makefile invoking $(prove_check) or $(prove_installcheck), runs just those tests instead of t/*.pl.

From: Craig Ringer <craig@2ndquadrant.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-13  pg_upgrade: Upgrade sequence data via pg_dump  (Peter Eisentraut)
Previously, pg_upgrade migrated sequence data like tables by copying the on-disk file. This does not allow any changes in the on-disk format for sequences. It's simpler to just have pg_dump set the new sequence values as it normally does. To do that, create a hidden submode in pg_dump that dumps sequence data even when a schema-only dump is requested, and trigger that submode in binary upgrade mode. (This new submode could easily be exposed as a command-line option, but it has limited use outside of pg_dump and would probably cause some confusion, so we don't do that at this time.)

Reviewed-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-13  pg_dump: Separate table and sequence data object types  (Peter Eisentraut)
Instead of handling both sequence data and table data internally as "table data", handle sequences separately under a "sequence set" type. We already handled materialized view data differently, so it makes the code somewhat cleaner to handle each relation kind separately at the top level.

This does not change the output format, since there already was a separate "SEQUENCE SET" archive entry type. A noticeable difference is that SEQUENCE SET entries now always appear after TABLE DATA entries. And in parallel mode there is less sorting to do, because the sequence data entries are no longer considered table data.

Reviewed-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-10  Cleanup of rewriter and planner handling of Query.hasRowSecurity flag.  (Tom Lane)
Be sure to pull up the subquery's hasRowSecurity flag when flattening a subquery in pull_up_simple_subquery(). This isn't a bug today because we don't look at the hasRowSecurity flag during planning, but it could easily be a bug tomorrow.

Likewise, make rewriteRuleAction() pull up the hasRowSecurity flag when absorbing RTEs from a rule action. This isn't a bug either, for the opposite reason: the flag should never be set yet. But again, it seems like good future proofing.

Add a comment explaining why rewriteTargetView() should *not* set hasRowSecurity when adding stuff to securityQuals.

Improve some nearby comments about securityQuals processing, and document that field more completely in parsenodes.h.

Patch by me, analysis by Dean Rasheed.

Discussion: <CAEZATCXZ8tb2DV6f=bkhsMV6u_gRcZ0CZBw2J-qU84RxSukZog@mail.gmail.com>
2016-11-10  Re-allow user_catalog_table option for materialized views.  (Tom Lane)
The reloptions stuff allows this option to be set on a matview. While it's questionable whether that is useful or was really intended, it does work, and we shouldn't change that in minor releases. Commit e3e66d8a9 disabled the option since I didn't realize that it was possible for it to be set on a matview. Tweak the test to re-allow it.

Discussion: <19749.1478711862@sss.pgh.pa.us>
2016-11-10  Support "COPY view FROM" for views with INSTEAD OF INSERT triggers.  (Tom Lane)
We just pass the data to the INSTEAD trigger. Haribabu Kommi, reviewed by Dilip Kumar Patch: <CAJrrPGcSQkrNkO+4PhLm4B8UQQQmU9YVUuqmtgM=pmzMfxWaWQ@mail.gmail.com>
2016-11-10  Fix partial aggregation for the case of a degenerate GROUP BY clause.  (Tom Lane)
The plan generated for sorted partial aggregation with "GROUP BY constant" included a Sort node with no sort keys, which the executor does not like. Per report from Steve Randall. I'd add a regression test case if I could think of a compact one, but it doesn't seem worth expending lots of cycles on.

Report: <CABVd52UAdGXpg_rCk46egpNKYdXOzCjuJ1zG26E2xBe_8bj+Fg@mail.gmail.com>
2016-11-09  pgbench: Allow the transaction log file prefix to be changed.  (Robert Haas)
Masahiko Sawada, reviewed by Fabien Coelho and Beena Emerson, with a bit of wordsmithing and cosmetic adjustment by me.
2016-11-08  Simplify code by getting rid of SPI_push, SPI_pop, SPI_restore_connection.  (Tom Lane)
The idea behind SPI_push was to allow transitioning back into an "unconnected" state when a SPI-using procedure calls unrelated code that might or might not invoke SPI. That sounds good, but in practice the only thing it does for us is to catch cases where a called SPI-using function forgets to call SPI_connect --- which is a highly improbable failure mode, since it would be exposed immediately by direct testing of said function. As against that, we've had multiple bugs induced by forgetting to call SPI_push/SPI_pop around code that might invoke SPI-using functions; these are much harder to catch and indeed have gone undetected for years in some cases. And we've had to band-aid around some problems of this ilk by introducing conditional push/pop pairs in some places, which really kind of defeats the purpose altogether; if we can't draw bright lines between connected and unconnected code, what's the point?

Hence, get rid of SPI_push[_conditional], SPI_pop[_conditional], and the underlying state variable _SPI_curid. It turns out SPI_restore_connection can go away too, which is a nice side benefit since it was never more than a kluge. Provide no-op macros for the deleted functions so as to avoid an API break for external modules.

A side effect of this removal is that SPI_palloc and allied functions no longer permit being called when unconnected; they'll throw an error instead. The apparent usefulness of the previous behavior was a mirage as well, because it was depended on by only a few places (which I fixed in preceding commits), and it posed a risk of allocations being unexpectedly long-lived if someone forgot a SPI_push call.

Discussion: <20808.1478481403@sss.pgh.pa.us>
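
The compatibility shims would look something like this (a sketch of the idea; consult spi.h for the real definitions):

    /* These used to be functions; now no-ops kept for source compatibility. */
    #define SPI_push()                      ((void) 0)
    #define SPI_pop()                       ((void) 0)
    #define SPI_push_conditional()          false
    #define SPI_pop_conditional(pushed)     ((void) 0)
    #define SPI_restore_connection()        ((void) 0)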
2016-11-08  psql: Tab completion for renaming enum values.  (Robert Haas)
For ALTER TYPE .. RENAME, add "VALUE" to the list of possible completions. Complete ALTER TYPE .. RENAME VALUE with possible enum values. After that, complete with "TO".

Dagfinn Ilmari Mannsåker, reviewed by Artur Zakirov.
2016-11-08  Replace uses of SPI_modifytuple that intend to allocate in current context.  (Tom Lane)
Invent a new function heap_modify_tuple_by_cols() that is functionally equivalent to SPI_modifytuple except that it always allocates its result by simple palloc. I chose however to make the API details a bit more like heap_modify_tuple: pass a tupdesc rather than a Relation, and use bool convention for the isnull array.

Use this function in place of SPI_modifytuple at all call sites where the intended behavior is to allocate in current context. (There actually are only two call sites left that depend on the old behavior, which makes me wonder if we should just drop this function rather than keep it.)

This new function is easier to use than heap_modify_tuple() for purposes of replacing a single column (or, really, any fixed number of columns). There are a number of places where it would simplify the code to change over, but I resisted that temptation for the moment ... everywhere except in plpgsql's exec_assign_value(); changing that might offer some small performance benefit, so I did it.

This is on the way to removing SPI_push/SPI_pop, but it seems like good code cleanup in its own right.

Discussion: <9633.1478552022@sss.pgh.pa.us>
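
A single-column replacement with the new helper might read as follows (a sketch based on the API description above; attnum, newval, oldtup, and tupdesc are placeholders):

    int         replCol = attnum;       /* 1-based column number to replace */
    Datum       replValue = newval;
    bool        replIsnull = false;
    HeapTuple   newtup;

    /* result is palloc'd in the current memory context */
    newtup = heap_modify_tuple_by_cols(oldtup, tupdesc,
                                       1, &replCol, &replValue, &replIsnull);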
2016-11-08  Fix typo.  (Robert Haas)
Michael Paquier
2016-11-08  Make SPI_fnumber() reject dropped columns.  (Tom Lane)
There's basically no scenario where it's sensible for this to match dropped columns, so put a test for dropped-ness into SPI_fnumber() itself, and excise the test from the small number of callers that were paying attention to the case. (Most weren't :-(.)

In passing, normalize tests at call sites: always reject attnum <= 0 if we're disallowing system columns. Previously there was a mixture of "< 0" and "<= 0" tests. This makes no practical difference since SPI_fnumber() never returns 0, but I'm feeling pedantic today.

Also, in the places that are actually live user-facing code and not legacy cruft, distinguish "column not found" from "can't handle system column".

Per discussion with Jim Nasby; this supersedes his original patch that just changed the behavior at one call site.

Discussion: <b2de8258-c4c0-1cb8-7b97-e8538e5c975c@BlueTreble.com>
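
The normalized call-site pattern is roughly this (a sketch; SPI_fnumber() and SPI_ERROR_NOATTRIBUTE are the real names, colname and tupdesc are placeholders):

    int     attnum = SPI_fnumber(tupdesc, colname);

    if (attnum == SPI_ERROR_NOATTRIBUTE)
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_COLUMN),
                 errmsg("column \"%s\" does not exist", colname)));
    if (attnum <= 0)
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("cannot set system column \"%s\"", colname)));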
2016-11-08  Fix mistake in XLOG_SEG_SIZE test.  (Robert Haas)
The intent of the test is to check whether XLOG_SEG_SIZE is in a particular range, but actually in one case it compares XLOG_BLCKSZ by mistake. Repair. Commit 88e982302684246e8af785e78a467ac37c76dee9 introduced this faulty test.

Kuntal Ghosh, reviewed by Michael Paquier.
2016-11-08  Use heap_modify_tuple not SPI_modifytuple in pl/python triggers.  (Tom Lane)
The code here would need some change anyway given planned change in SPI_modifytuple semantics, since this executes after we've exited the SPI environment. But really it's better to just use heap_modify_tuple.

While at it, normalize use of SPI_fnumber: make error messages distinguish no-such-column from can't-set-system-column, and remove test for deleted column which is going to migrate into SPI_fnumber. The lack of a check for system column names is actually a pre-existing bug here, and might even qualify as a security bug except that we don't have any trusted version of plpython.
2016-11-08  Use heap_modify_tuple not SPI_modifytuple in pl/perl triggers.  (Tom Lane)
The code here would need some change anyway given planned change in SPI_modifytuple semantics, since this executes after we've exited the SPI environment. But really it's better to just use heap_modify_tuple. The code's actually shorter this way, and this avoids depending on some rather indirect reasoning about why the temporary arrays can't be overrun. (I think the old code is safe, as long as Perl hashes can't contain duplicate keys; but with this way we don't need that assumption, only the assumption that SPI_fnumber doesn't return an out-of-range attnum.)

While at it, normalize use of SPI_fnumber: make error messages distinguish no-such-column from can't-set-system-column, and remove test for deleted column which is going to migrate into SPI_fnumber.