path: root/src/backend/utils
2018-12-14  Make pg_statistic and related code account more honestly for collations.  (Tom Lane)
When we first put in collations support, we basically punted on teaching pg_statistic, ANALYZE, and the planner selectivity functions about that. They've just used DEFAULT_COLLATION_OID independently of the actual collation of the data. It's time to improve that, so:

* Add columns to pg_statistic that record the specific collation associated with each statistics slot.

* Teach ANALYZE to use the column's actual collation when comparing values for statistical purposes, and record this in the appropriate slot. (Note that type-specific typanalyze functions are now expected to fill stats->stacoll with the appropriate collation, too.)

* Teach assorted selectivity functions to use the actual collation of the stats they are looking at, instead of just assuming it's DEFAULT_COLLATION_OID.

This should give noticeably better results in selectivity estimates for columns with nondefault collations, at least for query clauses that use that same collation (which would be the default behavior in most cases). It's still true that comparisons with explicit COLLATE clauses different from the stored data's collation won't be well-estimated, but that's no worse than before. Also, this patch does make the first step towards doing better with that, which is that it's now theoretically possible to collect stats for a collation other than the column's own collation.

Patch by me; thanks to Peter Eisentraut for review.

Discussion: https://postgr.es/m/14706.1544630227@sss.pgh.pa.us
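For illustration (not part of the commit message; table, column, and collation names are hypothetical, and the collation must exist in your installation):

    -- stats for "name" are now collected and applied under the column's own collation
    CREATE TABLE people (name text COLLATE "de_DE");
    ANALYZE people;
    -- range-style clauses on the column can now use collation-aware statistics
    EXPLAIN SELECT * FROM people WHERE name < 'M';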
2018-12-13  Create a separate oid range for oids assigned by genbki.pl.  (Andres Freund)
The changes I made in 578b229718e assigned oids below FirstBootstrapObjectId to objects in include/catalog/*.dat files that did not have an oid assigned, starting at the max oid explicitly assigned. Tom criticized that for mainly two reasons:

1) It's not clear which values are manually and which automatically assigned.

2) The space below FirstBootstrapObjectId gets pretty crowded, and some PostgreSQL forks have used oids >= 9000 for their own objects, to avoid conflicting.

Thus create a new range for objects not assigned explicit oids, but assigned by genbki.pl. For now 1-9999 is for explicitly assigned oids, FirstGenbkiObjectId (10000) to FirstBootstrapObjectId (12000) - 1 is for genbki.pl assigned oids, and < FirstNormalObjectId (16384) is for oids assigned during bootstrap. It's possible that we'll have to adjust these boundaries, but there's some headroom for now.

Add a note suggesting that oids in forks should be assigned in the 9000-9999 range.

Catversion bump for obvious reasons.

Per complaint from Tom Lane.

Author: Andres Freund
Discussion: https://postgr.es/m/16845.1544393682@sss.pgh.pa.us
2018-12-13  Drop no-op CoerceToDomain nodes from expressions at planning time.  (Tom Lane)
If a domain has no constraints, then CoerceToDomain doesn't really do anything and can be simplified to a RelabelType. This not only eliminates cycles at execution time, but allows the planner to optimize better (for instance, match the coerced expression to an index on the underlying column). However, we do have to support invalidating the plan later if a constraint gets added to the domain. That's comparable to the case of a change to a SQL function that had been inlined into a plan, so all the necessary logic already exists for plans depending on functions. We need only duplicate or share that logic for domains. ALTER DOMAIN ADD/DROP CONSTRAINT need to be taught to send out sinval messages for the domain's pg_type entry, since those operations don't update that row. (ALTER DOMAIN SET/DROP NOT NULL do update that row, so no code change is needed for them.)

Testing this revealed what's really a pre-existing bug in plpgsql: it caches the SQL-expression-tree expansion of type coercions and had no provision for invalidating entries in that cache. Up to now that was only a problem if such an expression had inlined a SQL function that got changed, which is unlikely though not impossible. But failing to track changes of domain constraints breaks an existing regression test case and would likely cause practical problems too. We could fix that locally in plpgsql, but what seems like a better idea is to build some generic infrastructure in plancache.c to store standalone expressions and track invalidation events for them. (It's tempting to wonder whether plpgsql's "simple expression" stuff could use this code with lower overhead than its current use of the heavyweight plancache APIs. But I've left that idea for later.)

Other stuff fixed in passing:

* Allow estimate_expression_value() to drop CoerceToDomain unconditionally, effectively assuming that the coercion will succeed. This will improve planner selectivity estimates for cases involving estimatable expressions that are coerced to domains. We could have done this independently of everything else here, but there wasn't previously any need for eval_const_expressions_mutator to know about CoerceToDomain at all.

* Use a dlist for plancache.c's list of cached plans, rather than a manually threaded singly-linked list. That eliminates a potential performance problem in DropCachedPlan.

* Fix a couple of inconsistencies in typecmds.c about whether operations on domains drop RowExclusiveLock on pg_type. Our common practice is that DDL operations do drop catalog locks, so standardize on that choice.

Discussion: https://postgr.es/m/19958.1544122124@sss.pgh.pa.us
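For illustration (not part of the commit message; object names are hypothetical):

    CREATE DOMAIN order_id AS int;                  -- no constraints
    CREATE TABLE orders (id int PRIMARY KEY);
    -- with the no-op CoerceToDomain reduced to a RelabelType, the coerced
    -- expression can still match the index on the underlying column
    EXPLAIN SELECT * FROM orders WHERE id::order_id = 42;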
2018-12-12  Tweak pg_partition_tree for undefined relations and unsupported relkinds  (Michael Paquier)
This fixes a crash which happened when calling the function directly with a relation OID referring to a non-existing object, and changes the behavior so that NULL is returned for unsupported relkinds instead of generating an error. This puts the new function in line with many other system functions, and eases actions like full scans of pg_class. Author: Michael Paquier Reviewed-by: Amit Langote, Stephen Frost Discussion: https://postgr.es/m/20181207010406.GO2407@paquier.xyz
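Illustrative calls under the new behavior (not part of the commit message; object names are hypothetical):

    SELECT * FROM pg_partition_tree(0);               -- nonexistent relation: no crash, no error
    CREATE SEQUENCE seq;
    SELECT * FROM pg_partition_tree('seq'::regclass); -- unsupported relkind: NULL instead of an error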
2018-12-07  Fix some errhint and errdetail strings missing a period  (Michael Paquier)
As per the error message style guide of the documentation, those should be full sentences. Author: Daniel Gustafsson Reviewed-by: Michael Paquier, Álvaro Herrera Discussion: https://postgr.es/m/1E8D49B4-16BC-4420-B4ED-58501D9E076B@yesql.se
2018-12-06  Improve our response to invalid format strings, and detect more cases.  (Tom Lane)
Places that are testing for *printf failure ought to include the format string in their error reports, since bad-format-string is one of the more likely causes of such failure. This both makes it easier to find and repair the mistake, and provides at least some useful info to the user who stumbles across such a problem. Also, tighten snprintf.c to report EINVAL for an invalid flag or final character in a format %-spec (including the case where the %-spec is missing a final character altogether). This seems like better project policy, and it also allows removing an instruction or two from the hot code path. Back-patch the error reporting change in pvsnprintf, since it should be harmless and may be helpful; but not the snprintf.c change. Per discussion of bug #15511 from Ertuğrul Kahveci, which reported an invalid translated format string. These changes don't fix that error, but they should improve matters next time we make such a mistake. Discussion: https://postgr.es/m/15511-1d8b6a0bc874112f@postgresql.org
2018-11-29  Add log_statement_sample_rate parameter  (Alvaro Herrera)
This allows setting a lower log_min_duration_statement value without incurring excessive log traffic (which reduces performance). This can be useful to analyze workloads with lots of short queries. Author: Adrien Nayrat Reviewed-by: David Rowley, Vik Fearing Discussion: https://postgr.es/m/c30ee535-ee1e-db9f-fa97-146b9f62caed@anayrat.info
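A hedged configuration sketch (not from the commit; threshold and rate values are illustrative):

    -- log roughly 10% of statements that run longer than 100ms
    ALTER SYSTEM SET log_min_duration_statement = '100ms';
    ALTER SYSTEM SET log_statement_sample_rate = 0.1;
    SELECT pg_reload_conf();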
2018-11-29  Fix minor typo in dsa.c.  (Thomas Munro)
Author: Takeshi Ideriha Discussion: https://postgr.es/m/4E72940DA2BF16479384A86D54D0988A6F3BF22D%40G01JPEXMBKW04
2018-11-28  Have BufFileSize() ereport() on FileSize() failure.  (Peter Geoghegan)
Move the responsibility for checking for and reporting a failure from the only current BufFileSize() caller, logtape.c, to BufFileSize() itself. Code within buffile.c is generally responsible for interfacing with fd.c to report irrecoverable failures. This seems like a convention that's worth sticking to. Reorganizing things this way makes it easy to make the error message raised in the event of BufFileSize() failure descriptive of the underlying problem. We're now clear on the distinction between temporary file name and BufFile name, and can show errno, confident that its value actually relates to the error being reported. In passing, an existing, similar buffile.c ereport() + errcode_for_file_access() site is changed to follow the same conventions. The API of the function BufFileSize() is changed by this commit, despite already being in a stable release (Postgres 11). This seems acceptable, since the BufFileSize() ABI was changed by commit aa551830421, which hasn't made it into a point release yet. Besides, it's difficult to imagine a third party BufFileSize() caller not just raising an error anyway, since BufFile state should be considered corrupt when BufFileSize() fails. Per complaint from Tom Lane. Discussion: https://postgr.es/m/26974.1540826748@sss.pgh.pa.us Backpatch: 11-, where shared BufFiles were introduced.
2018-11-28  Only allow one recovery target setting  (Peter Eisentraut)
The previous recovery.conf regime accepted multiple recovery_target* settings and used the last one. This does not translate well to the general GUC system. Specifically, under EXEC_BACKEND, the settings are written out not in any particular order, so the order in which they were originally set is not available to new processes. Rather than redesign the GUC system, it was decided to abandon the old behavior and only allow one recovery target setting. A second setting will cause an error. However, it is allowed to set the same parameter multiple times or unset a parameter and set a different one. Discussion: https://www.postgresql.org/message-id/flat/27802171543235530%40iva2-6ec8f0a6115e.qloud-c.yandex.net#701a59c837ad0bf8c244344aaf3ef5a4
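A hedged sketch of the new restriction (not from the commit; target values are illustrative):

    -- writing two different recovery targets into the configuration is now rejected
    ALTER SYSTEM SET recovery_target_time = '2018-11-26 00:00:00+00';
    ALTER SYSTEM SET recovery_target_xid  = '1234';
    -- at the next startup, having both targets set causes an error instead of
    -- silently using whichever one was set last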
2018-11-25  Integrate recovery.conf into postgresql.conf  (Peter Eisentraut)
recovery.conf settings are now set in postgresql.conf (or other GUC sources). Currently, all the affected settings are PGC_POSTMASTER; this could be refined in the future case by case. Recovery is now initiated by a file recovery.signal. Standby mode is initiated by a file standby.signal. The standby_mode setting is gone. If a recovery.conf file is found, an error is issued. The trigger_file setting has been renamed to promote_trigger_file as part of the move. The documentation chapter "Recovery Configuration" has been integrated into "Server Configuration". pg_basebackup -R now appends settings to postgresql.auto.conf and creates a standby.signal file. Author: Fujii Masao <masao.fujii@gmail.com> Author: Simon Riggs <simon@2ndquadrant.com> Author: Abhijit Menon-Sen <ams@2ndquadrant.com> Author: Sergei Kornilov <sk@zsrv.org> Discussion: https://www.postgresql.org/message-id/flat/607741529606767@web3g.yandex.ru/
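A hedged sketch of the new standby setup (not from the commit; connection string and path are illustrative):

    -- formerly recovery.conf settings, now ordinary GUCs
    ALTER SYSTEM SET primary_conninfo = 'host=primary.example.com user=replicator';
    ALTER SYSTEM SET promote_trigger_file = '/tmp/promote_me';   -- renamed from trigger_file
    -- standby mode itself is requested by creating an empty standby.signal file
    -- in the data directory (recovery.signal for targeted recovery)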
2018-11-23  Fix float-to-integer coercions to handle edge cases correctly.  (Tom Lane)
ftoi4 and its sibling coercion functions did their overflow checks in a way that looked superficially plausible, but actually depended on an assumption that the MIN and MAX comparison constants can be represented exactly in the float4 or float8 domain. That fails in ftoi4, ftoi8, and dtoi8, resulting in a possibility that values near the MAX limit will be wrongly converted (to negative values) when they need to be rejected. Also, because we compared before rounding off the fractional part, the other three functions threw errors for values that really ought to get rounded to the min or max integer value. Fix by doing rint() first (requiring an assumption that it handles NaN and Inf correctly; but dtoi8 and ftoi8 were assuming that already), and by comparing to values that should coerce to float exactly, namely INTxx_MIN and -INTxx_MIN. Also remove some random cosmetic discrepancies between these six functions. Per bug #15519 from Victor Petrovykh. This should get back-patched, but first let's see what the buildfarm thinks of it --- I'm not too sure about portability of some of the regression test cases. Patch by me; thanks to Andrew Gierth for analysis and discussion. Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
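Illustrative edge cases near the int4 limits (not from the commit; values chosen to be exactly representable in float8, behavior as described above):

    SELECT (-2147483648.4)::float8::int4;  -- rounds to -2147483648: now accepted instead of erroring
    SELECT 2147483647.5::float8::int4;     -- rounds past the int4 maximum: now raises "integer out of range"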
2018-11-23  Clamp semijoin selectivity to be not more than inner-join selectivity.  (Tom Lane)
We should never estimate the output of a semijoin to be more rows than we estimate for an inner join with the same input rels and join condition; it's obviously impossible for that to happen. However, given the relatively poor quality of our semijoin selectivity estimates --- particularly, but not only, in cases where we punt and return a default estimate --- we did often deliver such estimates. To improve matters, calculate both estimates inside eqjoinsel() and take the smaller one. The bulk of this patch is just mechanical refactoring to avoid repetitive information lookup when we call both eqjoinsel_semi and eqjoinsel_inner. The actual new behavior is just selec = Min(selec, inner_rel->rows * selec_inner); which looks a bit odd but is correct because of our different definitions for inner and semi join selectivity. There is one ensuing plan change in the regression tests, but it looks reasonable enough (and checking the actual row counts shows that the estimate moved closer to reality, not further away). Per bug #15160 from Alexey Ermakov. Although this is arguably a bug fix, I won't risk destabilizing plan choices in stable branches by back-patching. Tom Lane, reviewed by Melanie Plageman Discussion: https://postgr.es/m/152395805004.19366.3107109716821067806@wrigleys.postgresql.org
2018-11-23  Add WL_EXIT_ON_PM_DEATH pseudo-event.  (Thomas Munro)
Users of the WaitEventSet and WaitLatch() APIs can now choose between asking for WL_POSTMASTER_DEATH and then handling it explicitly, or asking for WL_EXIT_ON_PM_DEATH to trigger immediate exit on postmaster death. This reduces code duplication, since almost all callers want the latter. Repair all code that was previously ignoring postmaster death completely, or requesting the event but ignoring it, or requesting the event but then doing an unconditional PostmasterIsAlive() call every time through its event loop (which is an expensive syscall on platforms for which we don't have USE_POSTMASTER_DEATH_SIGNAL support). Assert that callers of WaitLatchXXX() under the postmaster remember to ask for either WL_POSTMASTER_DEATH or WL_EXIT_ON_PM_DEATH, to prevent future bugs. The only process that doesn't handle postmaster death is syslogger. It waits until all backends holding the write end of the syslog pipe (including the postmaster) have closed it by exiting, to be sure to capture any parting messages. By using the WaitEventSet API directly it avoids the new assertion, and as a by-product it may be slightly more efficient on platforms that have epoll(). Author: Thomas Munro Reviewed-by: Kyotaro Horiguchi, Heikki Linnakangas, Tom Lane Discussion: https://postgr.es/m/CAEepm%3D1TCviRykkUb69ppWLr_V697rzd1j3eZsRMmbXvETfqbQ%40mail.gmail.com, https://postgr.es/m/CAEepm=2LqHzizbe7muD7-2yHUbTOoF7Q+qkSD5Q41kuhttRTwA@mail.gmail.com
2018-11-22  Fix another crash in json{b}_populate_recordset and json{b}_to_recordset.  (Tom Lane)
populate_recordset_worker() failed to consider the possibility that the supplied JSON data contains no rows, so that update_cached_tupdesc never got called. This led to a null-pointer dereference since commit 9a5e8ed28; before that it led to a bogus "set-valued function called in context that cannot accept a set" error. Fix by forcing the update to happen. Per bug #15514. Back-patch to v11 as 9a5e8ed28 was. (If we were excited about the bogus error, we could perhaps go back further, but it'd take more work to figure out how to fix it in older branches. Given the lack of field complaints about that aspect, I'm not excited.) Discussion: https://postgr.es/m/15514-59d5b4c4065b178b@postgresql.org
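An illustrative case that previously failed (not part of the commit message; column definitions are hypothetical):

    -- JSON input with no rows now simply returns zero rows
    SELECT * FROM json_to_recordset('[]')  AS x(a int, b text);
    SELECT * FROM jsonb_to_recordset('[]') AS x(a int, b text);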
2018-11-20  Remove WITH OIDS support, change oid catalog column visibility.  (Andres Freund)
Previously tables declared WITH OIDS, including a significant fraction of the catalog tables, stored the oid column not as a normal column, but as part of the tuple header. This special column was not shown by default, which was somewhat odd, as it's often (consider e.g. pg_class.oid) one of the more important parts of a row. Neither pg_dump nor COPY included the contents of the oid column by default.

The fact that the oid column was not an ordinary column necessitated a significant amount of special case code to support oid columns. That was already painful for the existing code, but upcoming work aiming to make table storage pluggable would have required expanding and duplicating that "specialness" significantly.

WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0). Remove it. Removing includes:

- CREATE TABLE and ALTER TABLE syntax for declaring the table to be WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH OIDS; they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.

The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false) for CREATE TABLE) is still supported. While that requires a bit of support code, it seems unnecessary to break applications / dumps that do not use oids, and are explicit about not using them.

The biggest user of WITH OID columns was postgres' catalog. This commit changes all 'magic' oid columns to be columns that are normally declared and stored. To reduce unnecessary query breakage all the newly added columns are still named 'oid', even if a table's column naming scheme would indicate 'reloid' or such.

This obviously requires adapting a lot of code, mostly replacing oid access via HeapTupleGetOid() with access to the underlying Form_pg_*->oid column. The bootstrap process now assigns oids for all oid columns in genbki.pl that do not have an explicit value (starting at the largest oid previously used), only oids assigned later by oids will be above FirstBootstrapObjectId. As the oid column now is a normal column the special bootstrap syntax for oids has been removed.

Oids are not automatically assigned during insertion anymore; all backend code explicitly assigns oids with GetNewOidWithIndex(). For the rare case that insertions into the catalog via SQL are called for, the new pg_nextoid() function can be used (which only works on catalog tables).

The fact that oid columns on system tables are now normal columns means that they will be included in the set of columns expanded by * (i.e. SELECT * FROM pg_class will now include the table's oid, previously it did not). It'd not technically be hard to hide the oid column by default, but that'd mean confusing behavior would either have to be carried forward forever, or it'd cause breakage down the line.

While it's not unlikely that further adjustments are needed, the scope/invasiveness of the patch makes it worthwhile to merge this now. It's painful to maintain externally, too complicated to commit after the code freeze, and a dependency of a number of other patches.

Catversion bump, for obvious reasons.

Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
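Illustrative consequences (not part of the commit message; the table name is hypothetical):

    CREATE TABLE t (a int) WITH (oids = true);     -- now errors out
    CREATE TABLE t (a int) WITH (oids = false);    -- still accepted
    SELECT oid, relname FROM pg_class LIMIT 1;     -- catalog oid is now an ordinary column,
                                                   -- and is included in SELECT * expansion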
2018-11-20  Add settings to control SSL/TLS protocol version  (Peter Eisentraut)
For example:

    ssl_min_protocol_version = 'TLSv1.1'
    ssl_max_protocol_version = 'TLSv1.2'

Reviewed-by: Steve Singer <steve@ssinger.info> Discussion: https://www.postgresql.org/message-id/flat/1822da87-b862-041a-9fc2-d0310c3da173@2ndquadrant.com
2018-11-20  Refine some guc.c help texts  (Peter Eisentraut)
These settings apply to communication with the sending server, which is not necessarily a primary. Author: Sergei Kornilov <sk@zsrv.org>
2018-11-19  Reduce unnecessary list construction in RelationBuildPartitionDesc.  (Robert Haas)
The 'partoids' list which was constructed by the previous version of this code was necessarily identical to 'inhoids'. There's no point to duplicating the list, so avoid that. Instead, construct the array representation directly from the original 'inhoids' list. Also, use an array rather than a list for 'boundspecs'. We know exactly how many items we need to store, so there's really no reason to use a list. Using an array instead reduces the number of memory allocations we perform. Patch by me, reviewed by Michael Paquier and Amit Langote, the latter of whom also helped with rebasing.
2018-11-19  PANIC on fsync() failure.  (Thomas Munro)
On some operating systems, it doesn't make sense to retry fsync(), because dirty data cached by the kernel may have been dropped on write-back failure. In that case the only remaining copy of the data is in the WAL. A subsequent fsync() could appear to succeed, but not have flushed the data. That means that a future checkpoint could apparently complete successfully but have lost data. Therefore, violently prevent any future checkpoint attempts by panicking on the first fsync() failure. Note that we already did the same for WAL data; this change extends that behavior to non-temporary data files. Provide a GUC data_sync_retry to control this new behavior, for users of operating systems that don't eject dirty data, and possibly forensic/testing uses. If it is set to on and the write-back error was transient, a later checkpoint might genuinely succeed (on a system that does not throw away buffers on failure); if the error is permanent, later checkpoints will continue to fail. The GUC defaults to off, meaning that we panic. Back-patch to all supported releases. There is still a narrow window for error-loss on some operating systems: if the file is closed and later reopened and a write-back error occurs in the intervening time, but the inode has the bad luck to be evicted due to memory pressure before we reopen, we could miss the error. A later patch will address that with a scheme for keeping files with dirty data open at all times, but we judge that to be too complicated to back-patch. Author: Craig Ringer, with some adjustments by Thomas Munro Reported-by: Craig Ringer Reviewed-by: Robert Haas, Thomas Munro, Andres Freund Discussion: https://postgr.es/m/20180427222842.in2e4mibx45zdth5%40alap3.anarazel.de
2018-11-16  Redesign initialization of partition routing structures  (Alvaro Herrera)
This speeds up write operations (INSERT, UPDATE, DELETE, COPY, as well as the future MERGE) on partitioned tables.

This changes the setup for tuple routing so that it does far less work during the initial setup and pushes more work out to when partitions receive tuples. PartitionDispatchData structs for sub-partitioned tables are only created when a tuple gets routed through it. The possibly large arrays in the PartitionTupleRouting struct have largely been removed. The partitions[] array remains but now never contains any NULL gaps. Previously the NULLs had to be skipped during ExecCleanupTupleRouting(), which could add a large overhead to the cleanup when the number of partitions was large. The partitions[] array is allocated small to start with and only enlarged when we route tuples to enough partitions that it runs out of space. This allows us to keep simple single-row partition INSERTs running quickly.

The arrays in PartitionTupleRouting which stored the tuple translation maps have now been removed. These have been moved out into a PartitionRoutingInfo struct which is an additional field in ResultRelInfo.

The find_all_inheritors() call still remains by far the slowest part of ExecSetupPartitionTupleRouting(). This commit just removes the other slow parts.

In passing also rename the tuple translation maps from being ParentToChild and ChildToParent to being RootToPartition and PartitionToRoot. The old names mislead you into thinking that a partition of some sub-partitioned table would translate to the rowtype of the sub-partitioned table rather than the root partitioned table.

Authors: David Rowley and Amit Langote, heavily revised by Álvaro Herrera
Testing help from Jesper Pedersen and Kato Sho.
Discussion: https://postgr.es/m/CAKJS1f_1RJyFquuCKRFHTdcXqoPX-PYqAd7nz=GVBwvGh4a6xA@mail.gmail.com
2018-11-15  Introduce notion of different types of slots (without implementing them).  (Andres Freund)
Upcoming work intends to allow pluggable ways to introduce new ways of storing table data. Accessing those table access methods from the executor requires TupleTableSlots to carry tuples in the native format of such storage methods; otherwise there'll be a significant conversion overhead.

Different access methods will require different data to store tuples efficiently (just like virtual, minimal, heap already require fields in TupleTableSlot). To allow that without requiring additional pointer indirections, we want to have different structs (embedding TupleTableSlot) for different types of slots. Thus different types of slots are needed, which requires adapting creators of slots.

The slot that most efficiently can represent a type of tuple in an executor node will often depend on the type of slot a child node uses. Therefore we need to track which type of slot is returned by nodes, so parent slots can be created based on that. Relatedly, JIT compilation of tuple deforming needs to know which type of slot a certain expression refers to, so it can create an appropriate deforming function for the type of tuple in the slot.

But not all nodes will only return one type of slot, e.g. an append node will potentially return different types of slots for each of its subplans. Therefore add a function that allows querying the type of a node's result slot, and whether it'll always be the same type (whether it's fixed). This can be queried using ExecGetResultSlotOps().

The scan, result, inner, outer type of slots are automatically inferred from ExecInitScanTupleSlot(), ExecInitResultSlot(), left/right subtrees respectively. If that's not correct for a node, that can be overwritten using new fields in PlanState.

This commit does not introduce the actually abstracted implementation of different kinds of TupleTableSlots; that will be left for a followup commit. The different types of slots introduced will, for now, still use the same backing implementation. While this already partially invalidates the big comment in tuptable.h, it seems to make more sense to update it later, when the different TupleTableSlot implementations actually exist.

Author: Ashutosh Bapat and Andres Freund, with changes by Amit Khandekar
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
2018-11-15  geo_ops.c: Clarify comments and function arguments  (Alvaro Herrera)
These functions were not crystal clear about what their respective APIs are. Make an effort to improve that. Emre's patch was correct AFAICT, but I (Álvaro) felt the need to improve a few comments a bit more. Any resulting errors are my own. Per complaint from Coverity, Ning Yu, and Tom Lane. Author: Emre Hasegeli, Álvaro Herrera Reviewed-by: Tomas Vondra, Álvaro Herrera Discussion: https://postgr.es/m/26769.1533090136@sss.pgh.pa.us
2018-11-15  Use 64 bit type for BufFileSize().  (Thomas Munro)
BufFileSize() can't use off_t, because it's only 32 bits wide on some systems. BufFile objects can have many 1GB segments so the total size can exceed 2^31. The only known client of the function is parallel CREATE INDEX, which was reported to fail when building large indexes on Windows. Though this is technically an ABI break on platforms with a 32 bit off_t and we might normally avoid back-patching it, the function is brand new and thus unlikely to have been discovered by extension authors yet, and it's fairly thoroughly broken on those platforms anyway, so just fix it. Defect in 9da0cc35. Bug #15460. Back-patch to 11, where this function landed. Author: Thomas Munro Reported-by: Paul van der Linden, Pavel Oskin Reviewed-by: Peter Geoghegan Discussion: https://postgr.es/m/15460-b6db80de822fa0ad%40postgresql.org Discussion: https://postgr.es/m/CAHDGBJP_GsESbTt4P3FZA8kMUKuYxjg57XHF7NRBoKnR%3DCAR-g%40mail.gmail.com
2018-11-14  Add a timezone-specific variant of date_trunc().  (Tom Lane)
date_trunc(field, timestamptz, zone_name) performs truncation using the named time zone as reference, rather than working in the session time zone as is the default behavior. It's equivalent to date_trunc(field, timestamptz at time zone zone_name) at time zone zone_name but it's faster, easier to type, and arguably easier to understand. Vik Fearing and Tom Lane Discussion: https://postgr.es/m/6249ffc4-2b22-4c1b-4e7d-7af84fedd7c6@2ndquadrant.com
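The equivalence stated above, spelled out with an illustrative timestamp and zone (not part of the commit message):

    SELECT date_trunc('day', timestamptz '2018-11-14 12:34:56+00', 'Europe/Paris');
    -- same result as the longer spelling:
    SELECT date_trunc('day', timestamptz '2018-11-14 12:34:56+00' AT TIME ZONE 'Europe/Paris')
           AT TIME ZONE 'Europe/Paris';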
2018-11-14  Refactor code creating PartitionBoundInfo  (Michael Paquier)
The code building PartitionBoundInfo based on the constituent partition data read from catalogs has been located in partcache.c, with a specific set of routines dedicated to bound types, like sorting or bound data creation. All this is moved to partbounds.c, relocating all the bound-specific logic into it, with partition_bounds_create() as the principal entry point. Author: Amit Langote Reviewed-by: Michael Paquier, Álvaro Herrera Discussion: https://postgr.es/m/3f289da8-6d10-75fe-814a-635e8b191d43@lab.ntt.co.jp
2018-11-10  Apply RI trigger skipping tests also for DELETE  (Peter Eisentraut)
The tests added in cfa0f4255bb0f5550d37a01c4d8fe2966d20040c to skip firing an RI trigger if any old key value is NULL can also be applied for DELETE. This should give a performance gain in those cases, and it also saves a lot of duplicate code in the actual RI triggers. (That code was already dead code for the UPDATE cases.) Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2018-11-10  Remove dead foreign key optimization code  (Peter Eisentraut)
The ri_KeysEqual() calls in the foreign-key trigger functions to optimize away some updates are useless because since adfeef55cbcc5dc72a772777f88c1be05a70dfee those triggers are not enqueued at all. (It's also not useful to keep these checks as some kind of backstop, since it's also semantically correct to just run the full check even with equal keys.) Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2018-11-08  Disallow setting client_min_messages higher than ERROR.  (Tom Lane)
Previously it was possible to set client_min_messages to FATAL or PANIC, which had the effect of suppressing transmission of regular ERROR messages to the client. Perhaps that seemed like a useful option in the past, but the trouble with it is that it breaks guarantees that are explicitly made in our FE/BE protocol spec about how a query cycle can end. While libpq and psql manage to cope with the omission, that's mostly because they are not very bright; client libraries that have more semantic knowledge are likely to get confused. Notably, pgODBC doesn't behave very sanely. Let's fix this by getting rid of the ability to set client_min_messages above ERROR. In HEAD, just remove the FATAL and PANIC options from the set of allowed enum values for client_min_messages. (This change also affects trace_recovery_messages, but that's OK since these aren't useful values for that variable either.) In the back branches, there was concern that rejecting these values might break applications that are explicitly setting things that way. I'm pretty skeptical of that argument, but accommodate it by accepting these values and then internally setting the variable to ERROR anyway. In all branches, this allows a couple of tiny simplifications in the logic in elog.c, so do that. Also respond to the point that was made that client_min_messages has exactly nothing to do with the server's logging behavior, and therefore does not belong in the "When To Log" subsection of the documentation. The "Statement Behavior" subsection is a better match, so move it there. Jonah Harris and Tom Lane Discussion: https://postgr.es/m/7809.1541521180@sss.pgh.pa.us Discussion: https://postgr.es/m/15479-ef0f4cc2fd995ca2@postgresql.org
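Illustrative behavior in HEAD after this change (not part of the commit message):

    SET client_min_messages = 'error';   -- highest value still accepted
    SET client_min_messages = 'fatal';   -- now rejected as an invalid value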
2018-11-06  Disable recheck_on_update optimization to avoid crashes.  (Tom Lane)
The code added by commit c203d6cf8 causes a crash in at least one case, where a potentially-optimizable expression index has a storage type different from the input data type. A cursory code review turned up numerous other problems that seem impractical to fix on short notice. Andres argued for revert of that patch some time ago, and if additional senior committers had been paying attention, that's likely what would have happened, but we were not :-( At this point we can't just revert, at least not in v11, because that would mean an ABI break for code touching relcache entries. And we should not remove the (also buggy) support for the recheck_on_update index reloption, since it might already be used in some databases in the field. So this patch just does the as-little-invasive-as-possible measure of disabling the feature as though recheck_on_update were forced off for all indexes. I also removed the related regression tests (which would otherwise fail) and the user-facing documentation of the reloption. We should undertake a more thorough code cleanup if the patch can't be fixed, but not under the extreme time pressure of being already overdue for 11.1 release. Per report from Ondřej Bouda and subsequent private discussion among pgsql-release. Discussion: https://postgr.es/m/20181106185255.776mstcyehnc63ty@alvherre.pgsql
2018-11-06  GUC: adjust effective_cache_size SQL descriptions  (Bruce Momjian)
Follow on patch for commit 3e0f1a4741f564c1a2fa6e944729d6967355d8c7. Reported-by: Peter Eisentraut Discussion: https://postgr.es/m/369ec766-b947-51bd-4dad-6fb9e026439f@2ndquadrant.com Backpatch-through: 9.4
2018-11-02  GUC: adjust effective_cache_size docs and SQL description  (Bruce Momjian)
Clarify that effective_cache_size is both kernel buffers and shared buffers. Reported-by: nat@makarevitch.org Discussion: https://postgr.es/m/153685164808.22334.15432535018443165207@wrigleys.postgresql.org Backpatch-through: 9.3
2018-11-02  Fix spelling errors and typos in comments  (Magnus Hagander)
Author: Daniel Gustafsson <daniel@yesql.se>
2018-11-01  Adjust trace_sort log messages.  (Peter Geoghegan)
The project message style guide dictates: "When citing the name of an object, state what kind of object it is". The parallel CREATE INDEX patch added a worker number to most of the trace_sort messages within tuplesort.c without specifying the object type. Bring these messages into compliance with the style guide. We're still treating a leader or serial Tuplesortstate as having worker number -1. trace_sort is a developer option, and these two cases are highly comparable, so this seems appropriate. Per complaint from Tom Lane. Discussion: https://postgr.es/m/8330.1540831863@sss.pgh.pa.us Backpatch: 11-, where parallel CREATE INDEX was introduced.
2018-10-30  Add pg_partition_tree to display information about partitions  (Michael Paquier)
This new function is useful to display a full tree of partitions for a given partitioned table, and avoids the need for any complex WITH RECURSIVE query when looking at partition trees which are multiple levels deep. It returns a set of records, one for each partition, containing the partition's name, its immediate parent's name, a boolean value telling if the relation is a leaf in the tree, and an integer telling its level in the partition tree with the given table considered as root, beginning at zero for the root, and incrementing by one each time the scan goes one level down. Author: Amit Langote Reviewed-by: Jesper Pedersen, Michael Paquier, Robert Haas Discussion: https://postgr.es/m/8d00e51a-9a51-ad02-d53e-ba6bf50b2e52@lab.ntt.co.jp
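A minimal usage sketch with a hypothetical partitioned table (not part of the commit message):

    CREATE TABLE measurements (logdate date) PARTITION BY RANGE (logdate);
    CREATE TABLE measurements_2018 PARTITION OF measurements
        FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');
    -- one row per member of the tree: name, parent, leaf flag, and level (0 = root)
    SELECT * FROM pg_partition_tree('measurements');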
2018-10-25  Apply unconstify() in more places  (Peter Eisentraut)
Discussion: https://www.postgresql.org/message-id/08adbe4e-38f8-2c73-55f0-591392371687%402ndquadrant.com
2018-10-23  Remove get_attidentity()  (Peter Eisentraut)
All existing uses can get this information more easily from the relation descriptor, so the detour through the syscache is not necessary. Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-10-23  Remove get_atttypmod()  (Peter Eisentraut)
This has been unused since 2004. get_atttypetypmodcoll() is often a better alternative. Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-10-23  Drop const cast from dlsym() calls  (Peter Eisentraut)
This workaround might be obsolete. We'll see if those "older platforms" mentioned in the comment are still around. Discussion: https://www.postgresql.org/message-id/08adbe4e-38f8-2c73-55f0-591392371687%402ndquadrant.com
2018-10-19  Add missing quote_identifier calls for CREATE TRIGGER ... REFERENCING.  (Tom Lane)
Mixed-case names for transition tables weren't dumped correctly. Oversight in commit 8c48375e5, per bug #15440 from Karl Czajkowski. In passing, I couldn't resist a bit of code beautification. Back-patch to v10 where this was introduced. Discussion: https://postgr.es/m/15440-02d1468e94d63d76@postgresql.org
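An illustrative trigger whose mixed-case transition table names must be quoted correctly by pg_dump (not part of the commit message; all object names are hypothetical):

    CREATE TABLE t (a int);
    CREATE FUNCTION audit_fn() RETURNS trigger LANGUAGE plpgsql AS
        'BEGIN RETURN NULL; END';
    CREATE TRIGGER audit_trg AFTER UPDATE ON t
        REFERENCING OLD TABLE AS "OldRows" NEW TABLE AS "NewRows"
        FOR EACH STATEMENT EXECUTE PROCEDURE audit_fn();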
2018-10-19  Refactor pid, random seed and start time initialization.  (Thomas Munro)
Background workers, including parallel workers, were generating the same sequence of numbers in random(). This showed up as DSM handle collisions when Parallel Hash created multiple segments, but any code that calls random() in background workers could be affected if it cares about different backends generating different numbers. Repair by making sure that all new processes initialize the seed at the same time as they set MyProcPid and MyStartTime in a new function InitProcessGlobals(), called by the postmaster, its children and also standalone processes. Also add a new high resolution MyStartTimestamp as a potentially useful by-product, and remove SessionStartTime from struct Port as it is now redundant. No back-patch for now, as the known consequences so far are just a bunch of harmless shm_open(O_EXCL) collisions. Author: Thomas Munro Reviewed-by: Tom Lane Discussion: https://postgr.es/m/CAEepm%3D2eJj_6%3DB%2B2tEpGu2nf1BjthCf9nXXUouYvJJ4C5WSwhg%40mail.gmail.com
2018-10-17  Const-ify a few more large static tables.  (Tom Lane)
Per research by Andres. Discussion: https://postgr.es/m/20181015200754.7y7zfuzsoux2c4ya@alap3.anarazel.de
2018-10-16  Reorder FmgrBuiltin members, saving 25% in size.  (Andres Freund)
That's worth it, as fmgr_builtins is frequently accessed, and as fmgr_builtins is one of the biggest constant variables in a backend. On most 64-bit systems this will change the size of the struct from 32 bytes to 24 bytes. While that could make indexing into the array marginally more expensive, the higher cache hit ratio is worth more, especially because these days fmgr_builtins isn't searched with a binary search anymore (c.f. 212e6f34d5). Discussion: https://postgr.es/m/20181016201145.aa2dfeq54rhqzron@alap3.anarazel.de
2018-10-16  Be smarter about age-counter overflow in formatting.c caches.  (Tom Lane)
The previous code here simply threw away whatever it knew about cache entry ages whenever a counter overflow occurred. Since the counter is int width and will be bumped once per format function execution, overflows are not really so rare as to not be worth thinking about. Instead, let's deal with the situation by halving all the age values, essentially rescaling the age metric. In that way, we retain a pretty accurate (if not quite perfect) idea of which entries are oldest.
2018-10-16  Avoid statically allocating formatting.c's format string caches.  (Tom Lane)
This eliminates circa 120KB of static data from Postgres' memory footprint. In some usage patterns that space will get allocated anyway, but in many processes it never will be allocated. We can improve matters further by allocating only as many cache entries as we actually use, rather than allocating the whole array on first use. However, to avoid wasting lots of space due to palloc's habit of rounding requests up to power-of-2 sizes, tweak the maximum cacheable format string length to make the struct sizes be powers of 2 or just less. The sizes I chose make the maximums a little bit less than they were before, but I doubt it matters much. While at it, rearrange struct FormatNode to avoid wasting quite so much padding space. This change actually halves the size of that struct on 64-bit machines. Discussion: https://postgr.es/m/20181015200754.7y7zfuzsoux2c4ya@alap3.anarazel.de
2018-10-16  Correct constness of system attributes in heap.c & prerequisites.  (Andres Freund)
This allows the compiler / linker to mark affected pages as read-only. There's a fair number of prerequisite changes, to allow the const to be properly propagated. Most of the consts were already required for correctness anyway, just not represented on the type level. Arguably we could be more aggressive in using consts in related code, but.. This requires using a few of the types underlying typedefs that remove pointers (e.g. const NameData *), as declaring the typedefed type constant doesn't have the same meaning (it makes the variable const, not what it points to). Discussion: https://postgr.es/m/20181015200754.7y7zfuzsoux2c4ya@alap3.anarazel.de
2018-10-16  Avoid statically allocating gmtsub()'s timezone workspace.  (Tom Lane)
localtime.c's "struct state" is a rather large object, ~23KB. We were statically allocating one for gmtsub() to use to represent the GMT timezone, even though that function is not at all heavily used and is never reached in most backends. Let's malloc it on-demand, instead. This does pose the question of how to handle a malloc failure, but there's already a well-defined error report convention here, ie set errno and return NULL. We have but one caller of pg_gmtime in HEAD, and two in back branches, neither of which were troubling to check for error. Make them do so. The possible errors are sufficiently unlikely (out-of-range timestamp, and now malloc failure) that I think elog() is adequate. Back-patch to all supported branches to keep our copies of the IANA timezone code in sync. This particular change is in a stanza that already differs from upstream, so it's a wash for maintenance purposes --- but only as long as we keep the branches the same. Discussion: https://postgr.es/m/20181015200754.7y7zfuzsoux2c4ya@alap3.anarazel.de
2018-10-15  Correct constness of a few variables.  (Andres Freund)
This allows the compiler / linker to mark affected pages as read-only. There are other cases, but they're a bit more invasive and should go through some review. These are easy. They were found with

    objdump -j .data -t src/backend/postgres|awk '{print $4, $5, $6}'|sort -r|less

Discussion: https://postgr.es/m/20181015200754.7y7zfuzsoux2c4ya@alap3.anarazel.de
2018-10-12  Simplify use of AllocSetContextCreate() wrapper macro.  (Tom Lane)
We can allow this macro to accept either abbreviated or non-abbreviated allocation parameters by making use of __VA_ARGS__. As noted by Andres Freund, it's unlikely that any compiler would have __builtin_constant_p but not __VA_ARGS__, so this gives up little or no error checking, and it avoids a minor but annoying API break for extensions. With this change, there is no reason for anybody to call AllocSetContextCreateExtended directly, so in HEAD I renamed it to AllocSetContextCreateInternal. It's probably too late for an ABI break like that in 11, though. Discussion: https://postgr.es/m/20181012170355.bhxi273skjt6sag4@alap3.anarazel.de
2018-10-12  Correct attach/detach logic for FKs in partitions  (Alvaro Herrera)
There was no code to handle foreign key constraints on partitioned tables in the case of ALTER TABLE DETACH; and if you happened to ATTACH a partition that already had an equivalent constraint, that one was ignored and a new constraint was created. Adding this to the fact that foreign key cloning reuses the constraint name on the partition instead of generating a new name (as it probably should, to cater to SQL standard rules about constraint naming within schemas), the result was a pretty poor user experience -- the most visible failure was that just detaching a partition and re-attaching it failed with an error such as

    ERROR: duplicate key value violates unique constraint "pg_constraint_conrelid_contypid_conname_index"
    DETAIL: Key (conrelid, contypid, conname)=(26702, 0, test_result_asset_id_fkey) already exists.

because it would try to create an identically-named constraint in the partition. To make matters worse, if you tried to drop the constraint in the now-independent partition, that would fail because the constraint was still seen as dependent on the constraint in its former parent partitioned table:

    ERROR: cannot drop inherited constraint "test_result_asset_id_fkey" of relation "test_result_cbsystem_0001_0050_monthly_2018_09"

This fix attacks the problem from two angles: first, when the partition is detached, the constraint is also marked as independent, so the drop now works. Second, when the partition is re-attached, we scan existing constraints searching for one matching the FK in the parent, and if one exists, we link that one to the parent constraint. So we don't end up with a duplicate -- and better yet, we don't need to scan the referenced table to verify that the constraint holds.

To implement this I made a small change to previously planner-only struct ForeignKeyCacheInfo to contain the constraint OID; also relcache now maintains the list of FKs for partitioned tables too.

Backpatch to 11.

Reported-by: Michael Vitale (bug #15425)
Discussion: https://postgr.es/m/15425-2dbc9d2aa999f816@postgresql.org
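A hedged sketch of the detach/re-attach cycle that now works (not part of the commit message; all names are hypothetical, loosely modeled on the report above):

    CREATE TABLE assets (id int PRIMARY KEY);
    CREATE TABLE results (asset_id int REFERENCES assets, logdate date)
        PARTITION BY RANGE (logdate);
    CREATE TABLE results_2018_09 PARTITION OF results
        FOR VALUES FROM ('2018-09-01') TO ('2018-10-01');
    -- previously the ATTACH below failed with a duplicate pg_constraint entry;
    -- the matching cloned foreign key on the partition is now reused instead
    ALTER TABLE results DETACH PARTITION results_2018_09;
    ALTER TABLE results ATTACH PARTITION results_2018_09
        FOR VALUES FROM ('2018-09-01') TO ('2018-10-01');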