path: root/src/include
Age | Commit message | Author
2018-10-09 | Add pg_ls_archive_statusdir function | Michael Paquier
This function lists the contents of the WAL archive status directory, and is intended to be used by monitoring tools. Unlike pg_ls_dir(), access to it can be granted to non-superusers so that those monitoring tools can observe the principle of least privilege. Access is also given by default to members of pg_monitor. Author: Christoph Moench-Tegeder Reviewed-by: Aya Iwata Discussion: https://postgr.es/m/20180930205920.GA64534@elch.exwg.net
2018-10-09 | Relax transactional restrictions on ALTER TYPE ... ADD VALUE (redux). | Thomas Munro
Originally committed as 15bc038f (plus some follow-ups), this was reverted in 28e07270 due to a problem discovered in parallel workers. This new version corrects that problem by sending the list of uncommitted enum values to parallel workers. Here follows the original commit message describing the change: To prevent possibly breaking indexes on enum columns, we must keep uncommitted enum values from getting stored in tables, unless we can be sure that any such column is new in the current transaction. Formerly, we enforced this by disallowing ALTER TYPE ... ADD VALUE from being executed at all in a transaction block, unless the target enum type had been created in the current transaction. This patch removes that restriction, and instead insists that an uncommitted enum value can't be referenced unless it belongs to an enum type created in the same transaction as the value. Per discussion, this should be a bit less onerous. It does require each function that could possibly return a new enum value to SQL operations to check this restriction, but there aren't so many of those that this seems unmaintainable. Author: Andrew Dunstan and Tom Lane, with parallel query fix by Thomas Munro Reviewed-by: Tom Lane Discussion: https://postgr.es/m/CAEepm%3D0Ei7g6PaNTbcmAh9tCRahQrk%3Dr5ZWLD-jr7hXweYX3yg%40mail.gmail.com Discussion: https://postgr.es/m/4075.1459088427%40sss.pgh.pa.us
2018-10-08 | Fix omissions in snprintf.c's coverage of standard *printf functions. | Tom Lane
A warning on a NetBSD box revealed to me that pg_waldump/compat.c is using vprintf(), which snprintf.c did not provide coverage for. This is not good if we want to have uniform *printf behavior, and it's pretty silly to omit when it's a one-line function. I also noted that snprintf.c has pg_vsprintf() but for some reason it was not exposed to the outside world, creating another way in which code might accidentally invoke the platform *printf family. Let's just make sure that we replace all eight of the POSIX-standard printf family. Also, upgrade plperl.h and plpython.h to make sure that they do their undefine/redefine rain dance for all eight, not some random maybe-sufficient subset thereof.
2018-10-08 | Advance transaction timestamp for intra-procedure transactions. | Tom Lane
Per discussion, this behavior seems less astonishing than not doing so. Peter Eisentraut and Tom Lane Discussion: https://postgr.es/m/20180920234040.GC29981@momjian.us
2018-10-08 | Improve snprintf.c's handling of NaN, Infinity, and minus zero. | Tom Lane
Up to now, float4out/float8out handled NaN and Infinity cases explicitly, and invoked psprintf only for ordinary float values. This was done because platform implementations of snprintf produce varying representations of these special cases. But now that we use snprintf.c always, it's better to give it the responsibility to produce a uniform representation of these cases, so that we have uniformity across the board not only in float4out/float8out. Hence, move that work into fmtfloat(). Also, teach fmtfloat() to recognize IEEE minus zero and handle it correctly. The previous coding worked only accidentally, and would fail for e.g. "%+f" format (it'd print "+-0.00000"). Now that we're using snprintf.c everywhere, it's not acceptable for it to do weird things in corner cases. (This incidentally avoids a portability problem we've seen on some really ancient platforms, that native sprintf does the wrong thing with minus zero.) Also, introduce a new entry point in snprintf.c to allow float[48]out to bypass the work of interpreting a well-known format spec, as well as bypassing the overhead of the psprintf layer. I modeled this API loosely on strfromd(). In my testing, this brings float[48]out back to approximately the same speed they had when using native snprintf, fixing one of the main performance issues caused by using snprintf.c. (There is some talk of more aggressive work to improve the speed of floating-point output conversion, but these changes seem to provide a better starting point for such work anyway.) Getting rid of the previous ad-hoc hack for Infinity/NaN in fmtfloat() allows removing <ctype.h> from snprintf.c's #includes. I also removed a few other #includes that I think are historical, though the buildfarm may expose that as wrong. Discussion: https://postgr.es/m/13178.1538794717@sss.pgh.pa.us
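As an illustration of the special cases discussed above, here is a minimal, self-contained C sketch of detecting IEEE minus zero, NaN, and Infinity before formatting; it is not the fmtfloat() code, just the standard-C checks involved.

    #include <math.h>
    #include <stdio.h>

    /* Sketch only: uniform spellings for the special cases that fmtfloat() now
     * takes responsibility for; the real function also handles sign, width,
     * and precision. */
    static void
    print_float_sketch(double value)
    {
        if (isnan(value))
            printf("NaN\n");
        else if (isinf(value))
            printf(value > 0 ? "Infinity\n" : "-Infinity\n");
        else if (value == 0.0 && signbit(value))
            printf("-0\n");                 /* IEEE minus zero, kept distinct */
        else
            printf("%g\n", value);
    }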
2018-10-08 | Avoid O(N^2) cost in ExecFindRowMark(). | Tom Lane
If there are many ExecRowMark structs, we spent O(N^2) time in ExecFindRowMark during executor startup. Once upon a time this was not of great concern, but the addition of native partitioning has squeezed out enough other costs that this can become the dominant overhead in some use-cases for tables with many partitions. To fix, simply replace that List data structure with an array. This adds a little bit of cost to execCurrentOf(), but not much, and anyway that code path is neither of large importance nor very efficient now. If we ever decide it is a bottleneck, constructing a hash table for lookup-by-tableoid would likely be the thing to do. Per complaint from Amit Langote, though this is different from his fix proposal. Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
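The data-structure change can be pictured with a short sketch; the type and field names below are illustrative only, not the actual ExecRowMark code.

    /* Sketch: index row marks by range-table index (rti) so lookup is O(1),
     * instead of walking a List on every ExecFindRowMark() call. */
    typedef struct RowMarkSketch
    {
        int         rti;            /* range-table index this mark belongs to */
        /* ... locking details would live here ... */
    } RowMarkSketch;

    /* marks[] has one entry per RTE, NULL where that RTE has no row mark */
    static RowMarkSketch *
    find_rowmark_sketch(RowMarkSketch **marks, int rti)
    {
        return marks[rti];
    }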
2018-10-07 | Remove some unnecessary fields from Plan trees. | Tom Lane
In the wake of commit f2343653f, we no longer need some fields that were used before to control executor lock acquisitions:

* PlannedStmt.nonleafResultRelations can go away entirely.
* partitioned_rels can go away from Append, MergeAppend, and ModifyTable. However, ModifyTable still needs to know the RT index of the partition root table if any, which was formerly kept in the first entry of that list. Add a new field "rootRelation" to remember that.

rootRelation is partly redundant with nominalRelation, in that if it's set it will have the same value as nominalRelation. However, the latter field has a different purpose so it seems best to keep them distinct.

Amit Langote, reviewed by David Rowley and Jesper Pedersen, and whacked around a bit more by me
Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-10-06 | Fix event triggers for partitioned tables | Alvaro Herrera
Index DDL cascading on partitioned tables introduced a way for ALTER TABLE to be called reentrantly. This caused an important deficiency in event trigger support to be exposed: on exiting the reentrant call, the alter table state object was clobbered, causing a crash when the outer alter table tries to finalize its processing. Fix the crash by creating a stack of event trigger state objects. There are still ways to cause things to misbehave (and probably other crashers) with more elaborate tricks, but at least it now doesn't crash in the obvious scenario. Backpatch to 9.5, where DDL deparsing of event triggers was introduced. Reported-by: Marco Slot Authors: Michaël Paquier, Álvaro Herrera Discussion: https://postgr.es/m/CANNhMLCpi+HQ7M36uPfGbJZEQLyTy7XvX=5EFkpR-b1bo0uJew@mail.gmail.com
2018-10-06 | Remove more redundant relation locking during executor startup. | Tom Lane
We already have appropriate locks on every relation listed in the query's rangetable before we reach the executor. Take the next step in exploiting that knowledge by removing code that worries about taking locks on non-leaf result relations in a partitioned table. In particular, get rid of ExecLockNonLeafAppendTables and a stanza in InitPlan that asserts we already have locks on certain such tables. In passing, clean up some now-obsolete comments in InitPlan. Amit Langote, reviewed by David Rowley and Jesper Pedersen, and whacked around a bit more by me Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-10-06 | Propagate xactStartTimestamp and stmtStartTimestamp to parallel workers. | Tom Lane
Previously, a worker process would establish values for these based on its own start time. In v10 and up, this can trivially be shown to cause misbehavior of transaction_timestamp(), timestamp_in(), and related functions which are (perhaps unwisely?) marked parallel-safe. It seems likely that other behaviors might diverge from what happens in the parent as well. It's not as trivial to demonstrate problems in 9.6 or 9.5, but I'm sure it's still possible, so back-patch to all branches containing parallel worker infrastructure. In HEAD only, mark now() and statement_timestamp() as parallel-safe (other affected functions already were). While in theory we could still squeeze that change into v11, it doesn't seem important enough to force a last-minute catversion bump. Konstantin Knizhnik, whacked around a bit by me Discussion: https://postgr.es/m/6406dbd2-5d37-4cb6-6eb2-9c44172c7e7c@postgrespro.ru
2018-10-05 | Allow btree comparison functions to return INT_MIN. | Tom Lane
Historically we forbade datatype-specific comparison functions from returning INT_MIN, so that it would be safe to invert the sort order just by negating the comparison result. However, this was never really safe for comparison functions that directly return the result of memcmp(), strcmp(), etc, as POSIX doesn't place any such restriction on those library functions. Buildfarm results show that at least on recent Linux on s390x, memcmp() actually does return INT_MIN sometimes, causing sort failures. The agreed-on answer is to remove this restriction and fix relevant call sites to not make such an assumption; code such as "res = -res" should be replaced by "INVERT_COMPARE_RESULT(res)". The same is needed in a few places that just directly negated the result of memcmp or strcmp. To help find places having this problem, I've also added a compile option to nbtcompare.c that causes some of the commonly used comparators to return INT_MIN/INT_MAX instead of their usual -1/+1. It'd likely be a good idea to have at least one buildfarm member running with "-DSTRESS_SORT_INT_MIN". That's far from a complete test of course, but it should help to prevent fresh introductions of such bugs. This is a longstanding portability hazard, so back-patch to all supported branches. Discussion: https://postgr.es/m/20180928185215.ffoq2xrq5d3pafna@alap3.anarazel.de
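To make the hazard concrete: negating a comparator result that happens to be INT_MIN overflows, so "res = -res" is unsafe on a raw memcmp()/strcmp() value. A self-contained sketch of the replacement pattern follows; the macro here mirrors the idea described above but may not match the in-tree definition exactly.

    #include <limits.h>
    #include <string.h>

    /* Map any negative result to 1 and any positive result to a negative one,
     * without ever computing -INT_MIN. */
    #define INVERT_COMPARE_RESULT_SKETCH(var) \
        ((var) = ((var) < 0) ? 1 : -(var))

    static int
    compare_reversed(const char *a, const char *b)
    {
        int         res = strcmp(a, b);     /* POSIX allows this to be INT_MIN */

        INVERT_COMPARE_RESULT_SKETCH(res);  /* instead of res = -res */
        return res;
    }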
2018-10-05 | Add pg_ls_tmpdir function | Michael Paquier
This lists the contents of a temporary directory associated with a given tablespace, useful to get information about on-disk consumption caused by temporary files used by a session query. By default, pg_default is scanned, and a tablespace can be specified as an argument. This function is intended to be used by monitoring tools, and, unlike pg_ls_dir(), access to it can be granted to non-superusers so that those monitoring tools can observe the principle of least privilege. Access is also given by default to members of pg_monitor. Author: Nathan Bossart Reviewed-by: Laurenz Albe Discussion: https://postgr.es/m/92F458A2-6459-44B8-A7F2-2ADD3225046A@amazon.com
2018-10-04 | In the executor, use an array of pointers to access the rangetable. | Tom Lane
Instead of doing a lot of list_nth() accesses to es_range_table, create a flattened pointer array during executor startup and index into that to get at individual RangeTblEntrys. This eliminates one source of O(N^2) behavior with lots of partitions. (I'm not exactly convinced that it's the most important source, but it's an easy one to fix.) Amit Langote and David Rowley Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-10-04 | Centralize executor's opening/closing of Relations for rangetable entries. | Tom Lane
Create an array estate->es_relations[] paralleling the es_range_table, and store references to Relations (relcache entries) there, so that any given RT entry is opened and closed just once per executor run. Scan nodes typically still call ExecOpenScanRelation, but ExecCloseScanRelation is no more; relation closing is now done centrally in ExecEndPlan. This is slightly more complex than one would expect because of the interactions with relcache references held in ResultRelInfo nodes. The general convention is now that ResultRelInfo->ri_RelationDesc does not represent a separate relcache reference and so does not need to be explicitly closed; but there is an exception for ResultRelInfos in the es_trig_target_relations list, which are manufactured by ExecGetTriggerResultRel and have to be cleaned up by ExecCleanUpTriggerState. (That much was true all along, but these ResultRelInfos are now more different from others than they used to be.) To allow the partition pruning logic to make use of es_relations[] rather than having its own relcache references, adjust PartitionedRelPruneInfo to store an RT index rather than a relation OID. Amit Langote, reviewed by David Rowley and Jesper Pedersen, some mods by me Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-10-04 | Add option SKIP_LOCKED to VACUUM and ANALYZE | Michael Paquier
When specified, this option allows VACUUM to skip the work on a relation if there is a conflicting lock on it when trying to open it at the beginning of its processing. As with autovacuum, this comes with a couple of limitations while the relation is processed which can still cause the process to block:

- when opening the relation indexes;
- when acquiring row samples for table inheritance trees, partition trees or certain types of foreign tables, where a lock is taken on some leaves of such trees.

Author: Nathan Bossart
Reviewed-by: Michael Paquier, Andres Freund, Masahiko Sawada
Discussion: https://postgr.es/m/9EF7EBE4-720D-4CF1-9D0E-4403D7E92990@amazon.com
Discussion: https://postgr.es/m/20171201160907.27110.74730@wrigleys.postgresql.org
2018-10-03 | Fix issues around EXPLAIN with JIT. | Andres Freund
I (Andres) was more than a bit hasty in committing 33001fd7a7072d48327 after last-minute changes, leading to a number of problems (JIT output was only shown for JIT in parallel workers, and just EXPLAIN without ANALYZE didn't work). Lukas luckily found these issues quickly. Instead of combining instrumentation in standard_ExecutorEnd(), do so on demand in the new ExplainPrintJITSummary(). Also update a documentation example of the JIT output, changed in 52050ad8ebec8d831. Author: Lukas Fittl, with minor changes by me Discussion: https://postgr.es/m/CAP53PkxmgJht69pabxBXJBM+0oc6kf3KHMborLP7H2ouJ0CCtQ@mail.gmail.com Backpatch: 11, where JIT compilation was introduced
2018-10-03 | Make assorted performance improvements in snprintf.c. | Tom Lane
In combination, these changes make our version of snprintf as fast or faster than most platforms' native snprintf, except for cases involving floating-point conversion (which we still delegate to the native sprintf). The speed penalty for a float conversion is down to around 10% though, much better than before. Notable changes:

* Rather than always parsing the format twice to see if it contains instances of %n$, do the extra scan only if we actually find a $. This obviously wins for non-localized formats, and even when there is use of %n$, we can avoid scanning text before the first % twice.
* Use strchrnul() if available to find the next %, and emit the literal text between % escapes as strings rather than char-by-char (a sketch follows this entry).
* Create a bespoke function (dopr_outchmulti) for the common case of emitting N copies of the same character, in place of writing loops around dopr_outch.
* Simplify construction of the format string for invocations of sprintf for floats.
* Const-ify some internal functions, and avoid unnecessary use of pass-by-reference arguments.

Patch by me, reviewed by Andres Freund
Discussion: https://postgr.es/m/11787.1534530779@sss.pgh.pa.us
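Here is a hedged sketch of the strchrnul()-based literal-run emission mentioned in the second bullet. strchrnul() is a GNU extension, and dostr_sketch stands in for snprintf.c's internal output helper; the conversion-spec handling itself is elided.

    #define _GNU_SOURCE                 /* for strchrnul() on glibc */
    #include <string.h>

    static void
    emit_literal_runs(const char *format,
                      void (*dostr_sketch) (const char *str, int len))
    {
        while (*format)
        {
            const char *next = strchrnul(format, '%');

            if (next > format)
                dostr_sketch(format, (int) (next - format));    /* whole run at once */
            if (*next == '\0')
                break;
            format = next + 1;          /* real code parses the conversion spec here */
        }
    }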
2018-10-02 | Use slots more widely in tuple mapping code and make naming more consistent. | Andres Freund
It's inefficient to use a single slot for mapping between tuple descriptors for multiple tuples, as previously done when using ConvertPartitionTupleSlot(), because that means the slot's tuple descriptor changes for every tuple. Previously we also, via ConvertPartitionTupleSlot(), built new tuples after the mapping even in cases where we, immediately afterwards, accessed individual columns again. Refactor the code so one slot, on demand, is used for each partition. That avoids having to change the descriptor (and allows use of the more efficient "fixed" tuple slots). Then use slot->slot mapping, to avoid unnecessarily forming a tuple. As the naming between the tuple and slot mapping functions wasn't consistent, rename them to execute_attr_map_{tuple,slot}. It's likely that we'll also rename convert_tuples_by_* to denote that these functions "only" build a map, but that's left for later. Author: Amit Khandekar and Amit Langote, editorialized by me Reviewed-By: Amit Langote, Amit Khandekar, Andres Freund Discussion: https://postgr.es/m/CAJ3gD9fR0wRNeAE8VqffNTyONS_UfFPRpqxhnD9Q42vZB+Jvpg@mail.gmail.com https://postgr.es/m/e4f9d743-cd4b-efb0-7574-da21d86a7f36%40lab.ntt.co.jp Backpatch: -
2018-10-02 | Refactor relation opening for VACUUM and ANALYZE | Michael Paquier
VACUUM and ANALYZE share similar logic when it comes to opening a relation to work on: how the relation is opened, in which order locks are tried, and how logs should be generated when something does not work as expected. This commit refactors things so that both use the same code path to handle the way a relation is opened, making the integration of new options easier. Author: Michael Paquier Reviewed-by: Nathan Bossart Discussion: https://postgr.es/m/20180927075152.GT1659@paquier.xyz
2018-10-01 | Add assertions that we hold some relevant lock during relation open. | Tom Lane
Opening a relation with no lock at all is unsafe; there's no guarantee that we'll see a consistent state of the relevant catalog entries. While use of MVCC scans to read the catalogs partially addresses that complaint, it's still possible to switch to a new catalog snapshot partway through loading the relcache entry. Moreover, whether or not you trust the reasoning behind sometimes using less than AccessExclusiveLock for ALTER TABLE, that reasoning is certainly not valid if concurrent users of the table don't hold a lock corresponding to the operation they want to perform. Hence, add some assertion-build-only checks that require any caller of relation_open(x, NoLock) to hold at least AccessShareLock. This isn't a full solution, since we can't verify that the lock level is semantically appropriate for the action --- but it's definitely of some use, because it's already caught two bugs. We can also assert that callers of addRangeTableEntryForRelation() hold at least the lock level specified for the new RTE. Amit Langote and Tom Lane Discussion: https://postgr.es/m/16565.1538327894@sss.pgh.pa.us
2018-09-30 | Create an RTE field to record the query's lock mode for each relation. | Tom Lane
Add RangeTblEntry.rellockmode, which records the appropriate lock mode for each RTE_RELATION rangetable entry (either AccessShareLock, RowShareLock, or RowExclusiveLock depending on the RTE's role in the query). This patch creates the field and makes all creators of RTE nodes fill it in reasonably, but for the moment nothing much is done with it. The plan is to replace assorted post-parser logic that re-determines the right lockmode to use with simple uses of rte->rellockmode. For now, just add Asserts in each of those places that the rellockmode matches what they are computing today. (In some cases the match isn't perfect, so the Asserts are weaker than you might expect; but this seems OK, as per discussion.) This passes check-world for me, but it seems worth pushing in this state to see if the buildfarm finds any problems in cases I failed to test. catversion bump due to change of stored rules. Amit Langote, reviewed by David Rowley and Jesper Pedersen, and whacked around a bit more by me Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-09-28 | Add application_name to connection authorized msg | Stephen Frost
The connection authorized message has quite a bit of useful information in it, but didn't include the application_name (when provided), so let's add that as it can be very useful. Note that at the point where we're emitting the connection authorized message, we haven't processed GUCs, so it's not possible to get this by using log_line_prefix (which pulls from the GUC). There's also something to be said for having this included in the connection authorized message and then not needing to repeat it for every line, as having it in log_line_prefix would do. The GUC cleans the application name to pure ASCII, so do that here too, but pull out the logic for cleaning up a string into its own function in common and re-use it from those places, as well as from check_cluster_name, which was doing the same thing. Author: Don Seiler <don@seiler.us> Discussion: https://postgr.es/m/CAHJZqBB_Pxv8HRfoh%2BAB4KxSQQuPVvtYCzMg7woNR3r7dfmopw%40mail.gmail.com
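A minimal sketch of the pure-ASCII cleanup described above, assuming the simple replace-with-'?' behavior; the shared helper's actual name and details in src/common may differ.

    /* Replace control characters and non-ASCII bytes with '?'. */
    static void
    clean_ascii_sketch(char *str)
    {
        for (char *p = str; *p; p++)
        {
            if (*p < 32 || *p > 126)
                *p = '?';
        }
    }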
2018-09-28 | Improve error reporting for unsupported effective_io_concurrency setting. | Tom Lane
Give a specific error complaining about lack of posix_fadvise() when someone tries to set effective_io_concurrency > 0 on platforms without that. This probably isn't worth extensive back-patching, but I (tgl) felt cramming it into v11 was reasonable. James Robinson Discussion: https://postgr.es/m/153771876450.14994.560017943128223619@wrigleys.postgresql.org Discussion: https://postgr.es/m/A3942987-5BC7-4F05-B54D-2A0EC2914B33@jlr-photo.com
2018-09-27 | Fix assorted bugs in pg_get_partition_constraintdef(). | Tom Lane
It failed if passed a nonexistent relation OID, or one that was a non-heap relation, because of blindly applying heap_open to a user-supplied OID. This is not OK behavior for a SQL-exposed function; we have a project policy that we should return NULL in such cases. Moreover, since pg_get_partition_constraintdef ought now to work on indexes, restricting it to heaps is flat wrong anyway. The underlying function generate_partition_qual() wasn't on board with indexes having partition quals either, nor for that matter with rels having relispartition set but yet null relpartbound. (One wonders whether the person who wrote the function comment blocks claiming that these functions allow a missing relpartbound had ever tested it.) Fix by testing relispartition before opening the rel, and by using relation_open not heap_open. (If any other relkinds ever grow the ability to have relispartition set, the code will work with them automatically.) Also, don't reject null relpartbound in generate_partition_qual. Back-patch to v11, and all but the null-relpartbound change to v10. (It's not really necessary to change generate_partition_qual at all in v10, but I thought s/heap_open/relation_open/ would be a good idea anyway just to keep the code in sync with later branches.) Per report from Justin Pryzby. Discussion: https://postgr.es/m/20180927200020.GJ776@telsasoft.com
2018-09-27 | Clean up in the wake of TupleDescGetSlot() removal / 10763358c3f. | Andres Freund
The previous commit wasn't careful enough to remove all traces of TupleDescGetSlot(). Besides fixing the oversight of not removing TupleDescGetSlot()'s declaration, this also removes FuncCallContext->slot. That was documented to be for use in combination with TupleDescGetSlot(), a cursory search over extensions finds no users, and there doesn't seem to be convincing reasons to keep it around. If we later in the v12 release cycle find users, we can re-consider this part of the commit. Reported-By: Michael Paquier Discussion: https://postgr.es/m/20180926000413.GC1659@paquier.xyz
2018-09-27 | Switch flags tracking pending interrupts to sig_atomic_t | Michael Paquier
These previously used bool, which should be safe on any modern platform; however, the C standard is clear that it is better to use sig_atomic_t for variables manipulated in signal handlers. This commit adds at the same time PGDLLIMPORT to ClientConnectionLost. Author: Michael Paquier Reviewed-by: Tom Lane, Chris Travers, Andres Freund Discussion: https://postgr.es/m/20180925011311.GD1354@paquier.xyz
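The pattern in question, as a minimal self-contained C example; the flag and handler names are illustrative, not the actual PostgreSQL variables.

    #include <signal.h>

    static volatile sig_atomic_t shutdown_requested = 0;

    static void
    handle_sigterm(int signo)
    {
        (void) signo;
        shutdown_requested = 1;     /* the only async-signal-safe work done here */
    }

    /* Registered elsewhere, e.g. signal(SIGTERM, handle_sigterm); the main
     * loop then polls shutdown_requested at safe points. */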
2018-09-26 | Try another way to detect the result type of strerror_r(). | Tom Lane
The method we've traditionally used, of redeclaring strerror_r() to see if the compiler complains of inconsistent declarations, turns out not to work reliably because some compilers only report a warning, not an error. Amazingly, this has gone undetected for years, even though it certainly breaks our detection of whether strerror_r succeeded. Let's instead test whether the compiler will take the result of strerror_r() as a switch() argument. It's possible this won't work universally either, but it's the best idea I could come up with on the spur of the moment. We should probably back-patch this once the dust settles, but first let's see what the buildfarm thinks of it. Discussion: https://postgr.es/m/10877.1537993279@sss.pgh.pa.us
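A self-contained sketch of that kind of configure probe: if strerror_r() follows the XSI convention and returns int, the switch() compiles; if it follows the GNU convention and returns char *, compilation fails. The real test program may differ in detail.

    #include <string.h>

    int
    main(void)
    {
        char        buf[256];

        switch (strerror_r(0, buf, sizeof(buf)))    /* only legal for an int result */
        {
            case 0:
                break;
        }
        return 0;
    }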
2018-09-26 | Clean up *printf macros to avoid conflict with format archetypes. | Tom Lane
We must define the macro "printf" with arguments, else it can mess up format archetype attributes in builds where PG_PRINTF_ATTRIBUTE is just "printf". Fortunately, that's easy to do now that we're requiring C99; we can use __VA_ARGS__. On the other hand, it's better not to use __VA_ARGS__ for the rest of the *printf crew, so that one can take the addresses of those functions without surprises. I'd proposed doing this some time ago, but forgot to make it happen; buildfarm failures subsequent to 96bf88d52 reminded me. Discussion: https://postgr.es/m/22709.1535135640@sss.pgh.pa.us Discussion: https://postgr.es/m/20180926190934.ea4xvzhkayuw7gkx@alap3.anarazel.de
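A hedged sketch of the macro arrangement: "printf" becomes a function-like macro using __VA_ARGS__, while the others stay object-like so their addresses can still be taken. The declarations below are simplified stand-ins; the real headers also attach format-checking attributes.

    #include <stdio.h>

    extern int  pg_printf(const char *fmt, ...);
    extern int  pg_fprintf(FILE *stream, const char *fmt, ...);

    #define printf(...)  pg_printf(__VA_ARGS__)     /* must take arguments */
    #define fprintf      pg_fprintf                 /* &fprintf still works */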
2018-09-26 | Always use our own versions of *printf(). | Tom Lane
We've spent an awful lot of effort over the years in coping with platform-specific vagaries of the *printf family of functions. Let's just forget all that mess and standardize on always using src/port/snprintf.c. This gets rid of a lot of configure logic, and it will allow a saner approach to dealing with %m (though actually changing that is left for a follow-on patch). Preliminary performance testing suggests that as it stands, snprintf.c is faster than the native printf functions for some tasks on some platforms, and slower for other cases. A pending patch will improve that, though cases with floating-point conversions will doubtless remain slower unless we want to put a *lot* of effort into that. Still, we've not observed that *printf is really a performance bottleneck for most workloads, so I doubt this matters much. Patch by me, reviewed by Michael Paquier Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
2018-09-26 | Incorporate strerror_r() into src/port/snprintf.c, too. | Tom Lane
This provides the features that used to exist in useful_strerror() for users of strerror_r(), too. Also, standardize on the GNU convention that strerror_r returns a char pointer that may not be NULL. I notice that libpq's win32.c contains a variant version of strerror_r that probably ought to be folded into strerror.c. But lacking a Windows environment, I should leave that to somebody else. Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
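Roughly, normalizing the two strerror_r() variants to the GNU "always returns a usable char *" convention looks like the sketch below; STRERROR_R_CHAR_P mirrors the configure symbol, and the in-tree code differs in detail.

    #include <stdio.h>
    #include <string.h>

    static char *
    strerror_r_sketch(int errnum, char *buf, size_t buflen)
    {
    #ifdef STRERROR_R_CHAR_P
        char       *msg = strerror_r(errnum, buf, buflen);  /* GNU: may not use buf */

        return msg ? msg : buf;
    #else
        if (strerror_r(errnum, buf, buflen) != 0)           /* XSI: fills buf */
            snprintf(buf, buflen, "unrecognized error %d", errnum);
        return buf;
    #endif
    }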
2018-09-26 | Convert elog.c's useful_strerror() into a globally-used strerror wrapper. | Tom Lane
elog.c has long had a private strerror wrapper that handles assorted possible failures or deficiencies of the platform's strerror. On Windows, it also knows how to translate Winsock error codes, which the native strerror does not. Move all this code into src/port/strerror.c and define strerror() as a macro that invokes it, so that both our frontend and backend code will have all of this behavior. I believe this constitutes an actual bug fix on Windows, since AFAICS our frontend code did not report Winsock error codes properly before this. However, the main point is to lay the groundwork for implementing %m in src/port/snprintf.c: the behavior we want %m to have is this one, not the native strerror's. Note that this throws away the prior use of src/port/strerror.c, which was to implement strerror() on platforms lacking it. That's been dead code for nigh twenty years now, since strerror() was already required by C89. We should likewise cause strerror_r to use this behavior, but I'll tackle that separately. Patch by me, reviewed by Michael Paquier Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
2018-09-25 | Split ExecStoreTuple into ExecStoreHeapTuple and ExecStoreBufferHeapTuple. | Andres Freund
Upcoming changes introduce further types of tuple table slots, in preparation for making table storage pluggable. New storage methods will have different representations of tuples, so the slot accessor should refer explicitly to heap tuples. Instead of just renaming the functions, split it into one function that accepts heap tuples not residing in buffers, and one accepting ones in buffers. Previously one function was used for both, but that was a bit awkward already, and splitting will allow us to represent slot types for tuples in buffers and normal memory separately. This is split out from the patch introducing abstract slots, as this largely consists of mechanical changes. Author: Ashutosh Bapat Reviewed-By: Andres Freund Discussion: https://postgr.es/m/20180220224318.gw4oe5jadhpmcdnm@alap3.anarazel.de
2018-09-25 | Change TupleTableSlot->tts_nvalid to type AttrNumber. | Andres Freund
Previously it was an int / 4 bytes. The maximum number of attributes in a tuple is restricted by the maximum value Var->varattno, which is an AttrNumber/int16. Hence use the same data type for TupleTableSlot->tts_nvalid. Author: Ashutosh Bapat Discussion: https://postgr.es/m/20180220224318.gw4oe5jadhpmcdnm@alap3.anarazel.de
2018-09-25 | Remove fmgr.h inclusion from partition.h | Alvaro Herrera
It's not needed anymore.
2018-09-25 | Collect JIT instrumentation from workers. | Andres Freund
Previously, when using parallel query, EXPLAIN (ANALYZE)'s JIT compilation timings did not include the overhead from doing so on the workers. Fix that. We do so by simply aggregating the cost of doing JIT compilation on workers and the leader together. Arguably that's not quite accurate, because the total time spent doing so is spent in parallel - but it's hard to do much better. For additional detail, when VERBOSE is specified, the stats for workers are displayed separately. Author: Amit Khandekar and Andres Freund Discussion: https://postgr.es/m/CAJ3gD9eLrz51RK_gTkod+71iDcjpB_N8eC6vU2AW-VicsAERpQ@mail.gmail.com Backpatch: 11-
2018-09-24 | Sync our Snowball stemmer dictionaries with current upstream. | Tom Lane
We haven't touched these since text search functionality landed in core in 2007 :-(. While the upstream project isn't a beehive of activity, they do make additions and bug fixes from time to time. Update our copies of these files. Also update our documentation about how to keep things in sync, since they're not making distribution tarballs these days. Fortunately, their source code turns out to be a breeze to build. Notable changes:

* The non-UTF8 version of the hungarian stemmer now works in LATIN2 not LATIN1.
* New stemmers have appeared for arabic, indonesian, irish, lithuanian, nepali, and tamil. These all work in UTF8, and the indonesian and irish ones also work in LATIN1. (There are some new stemmers that I did not incorporate, mainly because their names don't match the underlying languages, suggesting that they're not to be considered mainstream.)

Worth noting: the upstream Nepali dictionary was contributed by Arthur Zakirov. initdb forced because the contents of snowball_create.sql have changed. Still TODO: see about updating the stopword lists.

Arthur Zakirov, minor mods and doc work by me
Discussion: https://postgr.es/m/20180626122025.GA12647@zakirov.localdomain
Discussion: https://postgr.es/m/20180219140849.GA9050@zakirov.localdomain
2018-09-24 | Use ppoll(2), if available, to wait for input in pgbench. | Tom Lane
Previously, pgbench always used select(2) for this purpose, but that's problematic for very high client counts, because select() can't deal with file descriptor numbers larger than FD_SETSIZE. It's pretty common for that to be only 1024 or so, whereas modern OSes can allow many more open files than that. Using poll(2) would surmount that problem, but it creates another one: poll()'s timeout resolution is only 1ms, which is poor enough to cause problems with --rate specifications approaching or exceeding 1K TPS. On platforms that have ppoll(2), which includes Linux and recent FreeBSD, we can use that to avoid the FD_SETSIZE problem without any loss of timeout resolution. Hence, add configure logic to test for ppoll(), and use it if available. This patch introduces an abstraction layer into pgbench that could be extended to support other kernel event-wait APIs such as kevents. But actually adding such support is a matter for some future patch. Doug Rady, reviewed by Robert Haas and Fabien Coelho, and whacked around a good bit more by me Discussion: https://postgr.es/m/23D017C9-81B7-484D-8490-FD94DEC4DF59@amazon.com
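For illustration, a minimal ppoll()-based wait of the kind described: no FD_SETSIZE ceiling, and the timeout has nanosecond rather than millisecond resolution. ppoll() is Linux/FreeBSD-specific, and the descriptor and timeout values here are just examples.

    #define _GNU_SOURCE
    #include <poll.h>
    #include <time.h>

    static int
    wait_for_input_sketch(int sock_fd)
    {
        struct pollfd   pfd = {.fd = sock_fd, .events = POLLIN};
        struct timespec timeout = {.tv_sec = 0, .tv_nsec = 250 * 1000L};    /* 250 us */

        return ppoll(&pfd, 1, &timeout, NULL);      /* >0 ready, 0 timeout, -1 error */
    }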
2018-09-24 | Document aclitem functions and operators | Joe Conway
aclitem functions and operators have been heretofore undocumented. Fix that. While at it, ensure the non-operator aclitem functions have pg_description strings. Does not seem worthwhile to back-patch. Author: Fabien Coelho, with pg_description from John Naylor, and significant refactoring and editorialization by me. Reviewed by: Tom Lane Discussion: https://postgr.es/m/flat/alpine.DEB.2.21.1808010825490.18204%40lancre
2018-09-21 | Get rid of explicit argument-count markings in tab-complete.c. | Tom Lane
This replaces the "TailMatchesN" macros with just "TailMatches", and likewise "HeadMatchesN" becomes "HeadMatches" and "MatchesN" becomes "Matches". The various COMPLETE_WITH_LISTn macros are reduced to COMPLETE_WITH, and the single-item COMPLETE_WITH_CONST also gets folded into that. This eliminates a lot of minor annoyance in writing tab-completion rules. Usefully, the compiled code also gets a bit smaller (10% or so, on my machine). The implementation depends on variadic macros, so we couldn't have done this before we required C99. Andres Freund and Thomas Munro; some cosmetic cleanup by me. Discussion: https://postgr.es/m/d8jo9djvm7h.fsf@dalvik.ping.uio.no
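A rough, self-contained sketch of the variadic-macro technique: count the arguments at preprocessing time and hand the count to a single checker. The checker and its interface are hypothetical; the real tab-complete.c macros work on internal state and also handle case-insensitivity and alternation.

    #include <stdarg.h>
    #include <stdbool.h>
    #include <string.h>

    /* Hypothetical checker: do the last npatterns of nwords words match? */
    static bool
    check_tail_sketch(int nwords, char **words, int npatterns, ...)
    {
        va_list     args;
        bool        result = (nwords >= npatterns);

        va_start(args, npatterns);
        for (int i = 0; i < npatterns; i++)
        {
            const char *pattern = va_arg(args, const char *);

            if (result && strcmp(words[nwords - npatterns + i], pattern) != 0)
                result = false;
        }
        va_end(args);
        return result;
    }

    #define VA_NARGS(...)  VA_NARGS_IMPL(__VA_ARGS__, 8, 7, 6, 5, 4, 3, 2, 1)
    #define VA_NARGS_IMPL(_1, _2, _3, _4, _5, _6, _7, _8, N, ...)  N

    /* e.g. TailMatches(n, words, "CREATE", "TABLE") */
    #define TailMatches(nwords, words, ...) \
        check_tail_sketch(nwords, words, VA_NARGS(__VA_ARGS__), __VA_ARGS__)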
2018-09-22 | Use size_t consistently in dsa.{ch}. | Thomas Munro
Takeshi Ideriha complained that there is a mixture of Size and size_t in dsa.c and corresponding header. Let's use size_t. Back-patch to 10 where dsa.c landed, to make future back-patching easy. Discussion: https://postgr.es/m/4E72940DA2BF16479384A86D54D0988A6F19ABD9%40G01JPEXMBKW04
2018-09-20 | Add missing pg_description strings for pg_type entries. | Tom Lane
I noticed that all non-composite, non-array entries in pg_type.dat had descr strings, except for "json" and the pseudo-types. The lack for json seems certainly an oversight, and there's surely little reason to not have entries for the pseudo-types either. So add some. "make reformat-dat-files" turned up some formatting issues in pg_amop.dat, too, so fix those in passing. No catversion bump since the backend doesn't care too much what is in pg_description.
2018-09-20 | Teach genbki.pl to auto-generate pg_type entries for array types. | Tom Lane
This eliminates some more tedium in adding new catalog entries, specifically the need to set up an array type when adding a new built-in data type. Now it's sufficient to assign an OID for the array type and write it in an "array_type_oid" metadata field. You don't have to fill the base type's typarray link explicitly, either. No catversion bump since the contents of pg_type aren't changed. (Well, their order might be different, but that doesn't matter.) John Naylor, reviewed and whacked around a bit by Dagfinn Ilmari Mannsåker, and some more by me. Discussion: https://postgr.es/m/CAJVSVGVTb6m9pJF49b3SuA8J+T-THO9c0hxOmoyv-yGKh-FbNg@mail.gmail.com
2018-09-19 | Add support for nearest-neighbor (KNN) searches to SP-GiST | Alexander Korotkov
Previously, KNN searches were supported only by GiST; SP-GiST is also capable of supporting them, and this commit implements that support. The SP-GiST scan stack is replaced with a queue, which serves as a stack if no ordering is specified. KNN support is provided for three SP-GiST opclasses: quad_point_ops, kd_point_ops and poly_ops (catversion is bumped). Some common parts between the GiST and SP-GiST KNN implementations are extracted into separate functions. Discussion: https://postgr.es/m/570825e8-47d0-4732-2bf6-88d67d2d51c8%40postgrespro.ru Author: Nikita Glukhov, Alexander Korotkov based on GSoC work by Vlad Sterzhanov Review: Andrey Borodin, Alexander Korotkov
2018-09-18 | Add a debugging option to stress-test outfuncs.c and readfuncs.c. | Tom Lane
In the normal course of operation, query trees will be serialized only if they are stored as views or rules; and plan trees will be serialized only if they get passed to parallel-query workers. This leaves an awful lot of opportunity for bugs/oversights to not get detected, as indeed we've just been reminded of the hard way. To improve matters, this patch adds a new compile option WRITE_READ_PARSE_PLAN_TREES, which is modeled on the longstanding option COPY_PARSE_PLAN_TREES; but instead of passing all parse and plan trees through copyObject, it passes them through nodeToString + stringToNode. Enabling this option in a buildfarm animal or two will catch problems at least for cases that are exercised by the regression tests. A small problem with this idea is that readfuncs.c historically has discarded location fields, on the reasonable grounds that parse locations in a retrieved view are not relevant to the current query. But doing that in WRITE_READ_PARSE_PLAN_TREES breaks pg_stat_statements, and it could cause problems for future improvements that might try to report error locations at runtime. To fix that, provide a variant behavior in readfuncs.c that makes it restore location fields when told to. In passing, const-ify the string arguments of stringToNode and its subsidiary functions, just because it annoyed me that they weren't const already. Discussion: https://postgr.es/m/17114.1537138992@sss.pgh.pa.us
2018-09-18 | Fix some minor issues exposed by outfuncs/readfuncs testing. | Tom Lane
A test patch to pass parse and plan trees through outfuncs + readfuncs exposed several issues that need to be fixed to get clean matches: Query.withCheckOptions failed to get copied; it's intentionally ignored by outfuncs/readfuncs on the grounds that it'd always be NIL anyway in stored rules. This seems less than future-proof, and it's not even saving very much, so just undo the decision and treat the field like all others. Several places that convert a view RTE into a subquery RTE, or similar manipulations, failed to clear out fields that were specific to the original RTE type and should be zero in a subquery RTE. Since readfuncs.c will leave such fields as zero, equalfuncs.c thinks the nodes are different leading to a reported mismatch. It seems like a good idea to clear out the no-longer-needed fields, even though in principle nothing should look at them; the node ought to be indistinguishable from how it would look if we'd built a new node instead of scribbling on the old one. BuildOnConflictExcludedTargetlist randomly set the resname of some TargetEntries to "" not NULL. outfuncs/readfuncs don't distinguish those cases, and so the string will read back in as NULL ... but equalfuncs.c does distinguish. Perhaps we ought to try to make things more consistent in this area --- but it's just useless extra code space for BuildOnConflictExcludedTargetlist to not use NULL here, so I fixed it for now by making it do that. catversion bumped because the change in handling of Query.withCheckOptions affects stored rules. Discussion: https://postgr.es/m/17114.1537138992@sss.pgh.pa.us
2018-09-18 | Fix some probably-minor oversights in readfuncs.c. | Tom Lane
The system expects TABLEFUNC RTEs to have coltypes, coltypmods, and colcollations lists, but outfuncs doesn't dump them and readfuncs doesn't restore them. This doesn't cause obvious failures, because the only things that look at those fields are expandRTE() and get_rte_attribute_type(), which are mostly used during parse analysis, before anything would've passed the parsetree through outfuncs/readfuncs. But expandRTE() is used in build_physical_tlist(), which means that that function will return a wrong answer for a TABLEFUNC RTE that came from a view. Very accidentally, this doesn't cause serious problems, because what it will return is NIL which callers will interpret as "couldn't build a physical tlist because of dropped columns". So you still get a plan that works, though it's marginally less efficient than it could be. There are also some other expandRTE() calls associated with transformation of whole-row Vars in the planner. I have been unable to exhibit misbehavior from that, and it may be unreachable in any case that anyone would care about ... but I'm not entirely convinced, so this seems like something we should back-patch a fix for. Fortunately, we can fix it without forcing a change of stored rules and a catversion bump, because we can just copy these lists from the subsidiary TableFunc object. readfuncs.c was also missing support for NamedTuplestoreScan plan nodes. This accidentally fails to break parallel query because a query using a named tuplestore would never be considered parallel-safe anyway. However, project policy since parallel query came in is that all plan node types should have outfuncs/readfuncs support, so this is clearly an oversight that should be repaired. Noted while fooling around with a patch to test outfuncs/readfuncs more thoroughly. That exposed some other issues too, but these are the only ones that seem worth back-patching. Back-patch to v10 where both of these features came in. Discussion: https://postgr.es/m/17114.1537138992@sss.pgh.pa.us
2018-09-18 | Refactor routines for subscription and publication lookups | Michael Paquier
Those routines gain a missing_ok argument, allowing a caller to get a NULL result instead of an error if set to true. This is part of a larger refactoring effort for objectaddress.c where trying to check for non-existing objects does not result in cache lookup failures. Author: Michael Paquier Reviewed-by: Aleksander Alekseev, Álvaro Herrera Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
2018-09-17 | Fix parsetree representation of XMLTABLE(XMLNAMESPACES(DEFAULT ...)). | Tom Lane
The original coding for XMLTABLE thought it could represent a default namespace by a T_String Value node with a null string pointer. That's not okay, though; in particular outfuncs.c/readfuncs.c are not on board with such a representation, meaning you'll get a null pointer crash if you try to store a view or rule containing this construct. To fix, change the parsetree representation so that we have a NULL list element, instead of a bogus Value node. This isn't really a functional limitation since default XML namespaces aren't yet implemented in the executor; you'd just get "DEFAULT namespace is not supported" anyway. But crashes are not nice, so back-patch to v10 where this syntax was added. Ordinarily we'd consider a parsetree representation change to be un-backpatchable; but since existing releases would crash on the way to storing such constructs, there can't be any existing views/rules to be incompatible with. Per report from Andrey Lepikhov. Discussion: https://postgr.es/m/3690074f-abd2-56a9-144a-aa5545d7a291@postgrespro.ru
2018-09-15 | Fix failure with initplans used conditionally during EvalPlanQual rechecks. | Tom Lane
The EvalPlanQual machinery assumes that any initplans (that is, uncorrelated sub-selects) used during an EPQ recheck would have already been evaluated during the main query; this is implicit in the fact that execPlan pointers are not copied into the EPQ estate's es_param_exec_vals. But it's possible for that assumption to fail, if the initplan is only reached conditionally. For example, a sub-select inside a CASE expression could be reached during a recheck when it had not been previously, if the CASE test depends on a column that was just updated. This bug is old, appearing to date back to my rewrite of EvalPlanQual in commit 9f2ee8f28, but was not detected until Kyle Samson reported a case. To fix, force all not-yet-evaluated initplans used within the EPQ plan subtree to be evaluated at the start of the recheck, before entering the EPQ environment. This could be inefficient, if such an initplan is expensive and goes unused again during the recheck --- but that's piling one layer of improbability atop another. It doesn't seem worth adding more complexity to prevent that, at least not in the back branches. It was convenient to use the new-in-v11 ExecEvalParamExecParams function to implement this, but I didn't like either its name or the specifics of its API, so revise that. Back-patch all the way. Rather than rewrite the patch to avoid depending on bms_next_member() in the oldest branches, I chose to back-patch that function into 9.4 and 9.3. (This isn't the first time back-patches have needed that, and it exhausted my patience.) I also chose to back-patch some test cases added by commits 71404af2a and 342a1ffa2 into 9.4 and 9.3, so that the 9.x versions of eval-plan-qual.spec are all the same. Andrew Gierth diagnosed the problem and contributed the added test cases, though the actual code changes are by me. Discussion: https://postgr.es/m/A033A40A-B234-4324-BE37-272279F7B627@tripadvisor.com
2018-09-14 | Move PartitionDispatchData struct definition to execPartition.c | Alvaro Herrera
There's no reason to expose the struct definition, so don't. Author: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> Discussion: https://postgr.es/m/d3fa24c1-bc65-7133-81df-6474387ccc4f@lab.ntt.co.jp