Commit log: root/src/tools

2019-09-19  Improve handling of NULLs in KNN-GiST and KNN-SP-GiST  (Alexander Korotkov)

This commit improves the subject in two ways:

* It removes the ugliness of 02f90879e7, which stored distance values and
  null flags in two separate arrays after the GISTSearchItem struct.
  Instead we pack both the distance value and the null flag into the
  IndexOrderByDistance struct.  Alignment overhead should be negligible,
  because we typically deal with at most a few "col op const" expressions
  in the ORDER BY clause.

* It fixes handling of "col op NULL" expressions in KNN-SP-GiST.  Now,
  these expressions are not passed to support functions, which can't deal
  with them.  Instead, a NULL result is implicitly assumed.  In the future
  we may decide to teach support functions to deal with NULL arguments,
  but the current solution is a bugfix suitable for backpatch.

Reported-by: Nikita Glukhov
Discussion: https://postgr.es/m/826f57ee-afc7-8977-c44c-6111d18b02ec%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Alexander Korotkov
Backpatch-through: 9.4

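A minimal sketch of the packing described above, with illustrative C names
(the real IndexOrderByDistance differs in detail); it orders NULL
distances after all non-NULL ones, matching the implicit-NULL behavior:

    #include <stdbool.h>
    #include <stdio.h>

    /* one struct instead of two parallel arrays: distance plus null flag */
    typedef struct
    {
        double      value;
        bool        isnull;
    } OrderByDistanceSketch;

    /* NULL distances compare greater than every non-NULL distance */
    static int
    distance_cmp(const OrderByDistanceSketch *a, const OrderByDistanceSketch *b)
    {
        if (a->isnull || b->isnull)
            return (a->isnull ? 1 : 0) - (b->isnull ? 1 : 0);
        return (a->value > b->value) - (a->value < b->value);
    }

    int
    main(void)
    {
        OrderByDistanceSketch x = {1.5, false};
        OrderByDistanceSketch y = {0.0, true};

        printf("%d\n", distance_cmp(&x, &y));   /* -1: non-NULL sorts first */
        return 0;
    }
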
2019-09-11  Properly expand the list of TAP tests used for prove in vcregress.pl  (Michael Paquier)

Depending on the system used, t/*.pl may not be expanded into a list of
tests that prove can consume when attempting to run TAP tests on a given
path.  Fix that by using glob() directly in the script, to make sure that
a complete list of tests is provided.  This has not proved to be an issue
with MSVC, where the list was properly expanded, but it is one on Linux
with Perl's system().  This is extracted from a larger patch.

Author: Tom Lane
Discussion: https://postgr.es/m/6628.1567958876@sss.pgh.pa.us
Backpatch-through: 9.4

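The change itself is Perl (vcregress.pl uses Perl's glob()), but the
principle carries over: expand the wildcard in-process rather than
trusting whatever runs the command to do it.  A hedged C analogue using
POSIX glob(3):

    #include <glob.h>
    #include <stdio.h>

    int
    main(void)
    {
        glob_t      g;

        /* expand the pattern ourselves instead of relying on a shell */
        if (glob("t/*.pl", 0, NULL, &g) == 0)
        {
            for (size_t i = 0; i < g.gl_pathc; i++)
                printf("%s\n", g.gl_pathv[i]);
            globfree(&g);
        }
        return 0;
    }
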
2019-09-07  Avoid using INFO elevel for what are fundamentally debug messages.  (Tom Lane)

Commit 6f6b99d13 stuck an INFO message into the fast path for checking
partition constraints, for no very good reason except that it made it
easy for the regression tests to verify that that path was taken.
Assorted later patches did likewise, increasing the unsuppressable-chatter
level from ALTER TABLE even more.  This isn't good for the user
experience, so let's drop these messages down to DEBUG1 where they belong.

So as not to have a loss of test coverage, create a TAP test that runs the
relevant queries with client_min_messages = DEBUG1 and greps for the
expected messages.  This testing method is a bit brute-force --- in
particular, it duplicates the execution of a fair amount of the core
create_table and alter_table tests.  We experimented with other solutions,
but running any significant amount of standard testing with
client_min_messages = DEBUG1 seems to have a lot of output-stability
pitfalls, cf commits bbb96c370 and 5655565c0.  Possibly at some point
we'll look into whether we can reduce the amount of test duplication.

Backpatch into v12, because some of these messages are new in v12 and we
don't really want to ship it that way.

Sergei Kornilov
Discussion: https://postgr.es/m/81911511895540@web58j.yandex.ru
Discussion: https://postgr.es/m/4859321552643736@myt5-02b80404fd9e.qloud-c.yandex.net

2019-07-25  Fix syntax error in commit 20e99cddd.  (Tom Lane)

Per buildfarm.

2019-07-25  Honor MSVC WindowsSDKVersion if set  (Andrew Dunstan)

Add a line to the project file setting the target SDK.  Otherwise, in
VS2017 for example, if the default but optional 8.1 SDK is not installed,
the build will fail.

Patch from Peifeng Qiu, slightly edited by me.

Discussion: https://postgr.es/m/CABmtVJhw1boP_bd4=b3Qv5YnqEdL696NtHFi2ruiyQ6mFHkeQQ@mail.gmail.com

Backpatch to all live branches.

2019-07-03  Add support for Visual Studio 2019 in build scripts  (Michael Paquier)

This fixes at the same time a set of inconsistencies in the documentation
and the scripts related to the versions of Windows SDK supported.

Author: Haribabu Kommi
Reviewed-by: Andrew Dunstan, Juan José Santamaría Flecha, Michael Paquier
Discussion: https://postgr.es/m/CAJrrPGcfqXhfPyMrny9apoDU7M1t59dzVAvoJ9AeAh5BJi+UzA@mail.gmail.com
Backpatch-through: 9.4

2019-07-01  pgindent run prior to branching v12.  (Tom Lane)

pgperltidy and reformat-dat-files too, though the latter didn't find
anything to change.

2019-06-30  Exclude new src/test/modules/unsafe_tests directory from MSVC build.  (Tom Lane)

There's nothing to build here, and that was confusing AddContrib().
Per buildfarm.

2019-06-26  Add support for OpenSSL 1.1.0 and newer versions in MSVC scripts  (Michael Paquier)

Up to now, the MSVC build scripts have been able to support only one
fixed version of OpenSSL, and they lacked logic to detect the version of
OpenSSL a given compilation of Postgres is linking to (currently 1.0.2,
the latest LTS of upstream, which will be EOL'd at the end of 2019).

This commit adds more logic to detect the version of OpenSSL used by a
build and makes use of it to add support for compilation with OpenSSL
1.1.0, which requires a new set of compilation flags to work properly.

The supported OpenSSL installers have changed their library layout with
various library renames in the upgrade to 1.1.0, making the logic a bit
more complicated.  The scripts are now able to adapt to the new world
order.

Reported-by: Sergey Pashkov
Author: Juan José Santamaría Flecha, Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/15789-8fc75dea3c5a17c8@postgresql.org

2019-06-19  Remove last references to WAL segment size in MSVC scripts  (Michael Paquier)

fc49e24 removed the last use of this compile-time variable, as the WAL
segment size is now something that can be set at initdb time; still, that
commit forgot some references to it.

Discussion: https://postgr.es/m/20190617073228.GE18917@paquier.xyz

2019-06-17  Fix more typos and inconsistencies in the tree  (Michael Paquier)

Author: Alexander Lakhin
Discussion: https://postgr.es/m/0a5419ea-1452-a4e6-72ff-545b1a5a8076@gmail.com

2019-06-16  Rework the pg_statistic_ext catalog  (Tomas Vondra)

Since extended statistics were introduced in PostgreSQL 10, there has been
a single catalog, pg_statistic_ext, storing both the definitions and the
built statistics.  That is problematic when a user is supposed to have
access only to the definitions, but not to the user data.

Consider for example pg_dump on a database with RLS enabled - if the
pg_statistic_ext catalog respects RLS (which it should, if it contains
user data), pg_dump would not see any records and the result would not
define any extended statistics.  That would be a surprising behavior.

Until now this was not a pressing issue, because the existing types of
extended statistics (functional dependencies and ndistinct coefficients)
do not include any user data directly.  This changed with the introduction
of MCV lists, which do include the most common combinations of values.

The easiest way to fix this is to split the pg_statistic_ext catalog into
two - one for definitions, one for the built statistics values.  The new
catalog is called pg_statistic_ext_data, and we're maintaining a 1:1
relationship with the old catalog - either there are matching records in
both catalogs, or in neither of them.

Bumped CATVERSION due to changing system catalog definitions.

Author: Dean Rasheed, with improvements by me
Reviewed-by: Dean Rasheed, John Naylor
Discussion: https://postgr.es/m/CAEZATCUhT9rt7Ui%3DVdx4N%3D%3DVV5XOK5dsXfnGgVOz_JhAicB%3DZA%40mail.gmail.com

2019-06-09  MSVC: Reconcile clean.bat with PostgreSQL 12 work.  (Noah Misch)

2019-06-02  Make cpluspluscheck more portable.  (Tom Lane)

Teach it to scrape -I and -D switches from CPPFLAGS in Makefile.global.
This is useful for testing on, eg, FreeBSD, where you won't get far
without "-I/usr/local/include".

Also, expand the set of blacklisted-for-unportability atomics headers,
based on noting that arch-x86.h fails to compile on an ARM box.  The
other ones I'd omitted seem to compile all right on architectures they
don't belong to, but that's surely too shaky to rely on.  Let's do like
we did for the src/include/port/ headers, and ignore all except the
variant that's pulled in by the arch-independent header.

2019-06-02  Un-break ecpg tests for Windows.  (Tom Lane)

Declaring a function "inline" still doesn't work with Windows compilers
(C99?  what's that?), unless the macro provided by pg_config.h is
in-scope, which it is not in our ECPG test programs.  So the workaround I
tried to use in commit 7640f9312 doesn't work for Windows.  Revert the
change in printf_hack.h, and instead just blacklist that file in
cpluspluscheck --- since it's a not-installed test file, we don't really
need to verify its C++ cleanliness anyway.

2019-05-31  Improve coverage of cpluspluscheck.  (Tom Lane)

Formerly, cpluspluscheck was only meant to examine headers that we
thought of as exported --- but its notion of what we export was well
behind the times.  Let's just make it check *all* .h files, except for a
well-defined blacklist, instead.

While at it, improve its ability to use a C++ compiler other than g++, by
scraping the CXX setting from Makefile.global and making it possible to
override the warning options used (per suggestion from Andres Freund).

Discussion: https://postgr.es/m/b517ec3918d645eb950505eac8dd434e@gaz-is.ru

2019-05-31  Integrate cpluspluscheck into build system.  (Andres Freund)

Previously cpluspluscheck wouldn't work in vpath builds; this commit
fixes that.  To make it easier to invoke, there's a top-level
cpluspluscheck target.

Discussion: https://postgr.es/20190530220244.kiputcbl4gkl2oo6@alap3.anarazel.de

2019-05-31  Make our perfect hash functions be valid C++.  (Tom Lane)

While C is happy to cast "const void *" to "const unsigned char *"
silently, C++ insists on an explicit cast.  Since we put these functions
into header files, cpluspluscheck whines about that.  Add the cast to
pacify it.

Discussion: https://postgr.es/m/b517ec3918d645eb950505eac8dd434e@gaz-is.ru

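A small self-contained illustration (a toy hash, not the generated
perfect-hash code): the explicit cast is what lets the same function
compile under both a C and a C++ compiler.

    #include <stdio.h>

    static inline unsigned int
    toy_hash(const void *key, int len)
    {
        /* explicit cast: C++ rejects the implicit conversion from
         * "const void *" that C performs silently */
        const unsigned char *k = (const unsigned char *) key;
        unsigned int h = 5381;

        while (len-- > 0)
            h = (h << 5) + h + *k++;
        return h;
    }

    int
    main(void)
    {
        printf("%u\n", toy_hash("gist", 4));
        return 0;
    }
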
2019-05-28  In the pg_upgrade test suite, don't write to src/test/regress.  (Noah Misch)

When this suite runs installcheck, redirect file creations from
src/test/regress to src/bin/pg_upgrade/tmp_check/regress.  This closes a
race condition in "make -j check-world".  If the pg_upgrade suite wrote
to a given src/test/regress/results file in parallel with the regular
src/test/regress invocation writing it, a test failed spuriously.  Even
without parallelism, in "make -k check-world", the suite finishing second
overwrote the other's regression.diffs.  This revealed test "largeobject"
assuming @abs_builddir@ is getcwd(), so fix that, too.

Buildfarm client REL_10, released fifty-four days ago, supports saving
regression.diffs from its new location.  When an older client reports a
pg_upgradeCheck failure, it will no longer include regression.diffs.
Back-patch to 9.5, where pg_upgrade moved to src/bin.

Reviewed (in earlier versions) by Andrew Dunstan.

Discussion: https://postgr.es/m/20181224034411.GA3224776@rfd.leadboat.com

2019-05-28  In the pg_upgrade test suite, remove and recreate "tmp_check".  (Noah Misch)

This allows "vcregress upgradecheck" to pass twice in immediate
succession, and it's more like how $(prove_check) works.  Back-patch to
9.5, where pg_upgrade moved to src/bin.

Discussion: https://postgr.es/m/20190520012436.GA1480421@rfd.leadboat.com

2019-05-22  Initial pgperltidy run for v12.  (Tom Lane)

Make all the perl code look nice, too (for some value of "nice").

2019-05-22  Phase 2 pgindent run for v12.  (Tom Lane)

Switch to 2.1 version of pg_bsd_indent.  This formats multiline function
declarations "correctly", that is with additional lines of parameter
declarations indented to match where the first line's left parenthesis
is.

Discussion: https://postgr.es/m/CAEepm=0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug@mail.gmail.com

2019-05-22  Initial pgindent run for v12.  (Tom Lane)

This is still using the 2.0 version of pg_bsd_indent.  I thought it
would be good to commit this separately, so as to document the
differences between 2.0 and 2.1 behavior.

Discussion: https://postgr.es/m/16296.1558103386@sss.pgh.pa.us

2019-05-20  Remove bug.template file  (Peter Eisentraut)

It's outdated and not really in use anymore.

Discussion: https://www.postgresql.org/message-id/flat/cf7ed2b1-1ebe-83cf-e05e-d5943f67af2d%402ndquadrant.com

2019-05-19  Revert "In the pg_upgrade test suite, don't write to src/test/regress."  (Noah Misch)

This reverts commit bd1592e8570282b1650af6b8eede0016496daecd.  It had
multiple defects.

Discussion: https://postgr.es/m/12717.1558304356@sss.pgh.pa.us

2019-05-19  In the pg_upgrade test suite, don't write to src/test/regress.  (Noah Misch)

When this suite runs installcheck, redirect file creations from
src/test/regress to src/bin/pg_upgrade/tmp_check/regress.  This closes a
race condition in "make -j check-world".  If the pg_upgrade suite wrote
to a given src/test/regress/results file in parallel with the regular
src/test/regress invocation writing it, a test failed spuriously.  Even
without parallelism, in "make -k check-world", the suite finishing second
overwrote the other's regression.diffs.  This revealed test "largeobject"
assuming @abs_builddir@ is getcwd(), so fix that, too.

Buildfarm client REL_10, released forty-five days ago, supports saving
regression.diffs from its new location.  When an older client reports a
pg_upgradeCheck failure, it will no longer include regression.diffs.
Back-patch to 9.5, where pg_upgrade moved to src/bin.

Reviewed by Andrew Dunstan.

Discussion: https://postgr.es/m/20181224034411.GA3224776@rfd.leadboat.com

2019-05-14  Move logging.h and logging.c from src/fe_utils/ to src/common/.  (Tom Lane)

The original placement of this module in src/fe_utils/ was ill-considered,
because several src/common/ modules have dependencies on it, meaning that
libpgcommon and libpgfeutils now have mutual dependencies.  That makes it
pointless to have distinct libraries at all.  The intended design is that
libpgcommon is lower-level than libpgfeutils, so only dependencies from
the latter to the former are acceptable.

We already have the precedent that fe_memutils and a couple of other
modules in src/common/ are frontend-only, so it's not stretching anything
out of whack to treat logging.c as a frontend-only module in src/common/.
To the extent that such modules help provide a common frontend/backend
environment for the rest of common/ to use, it's a reasonable design.
(logging.c does not yet provide an ereport() emulation, but one can
dream.)

Hence, move these files over, and revert basically all of the
build-system changes made by commit cc8d41511.  There are no places that
need to grow new dependencies on libpgcommon, further reinforcing the
idea that this is the right solution.

Discussion: https://postgr.es/m/a912ffff-f6e4-778a-c86a-cf5c47a12933@2ndquadrant.com

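Frontend-only modules in src/common/ typically enforce their scope with a
preprocessor guard; a sketch of that pattern (guard text is illustrative,
not PostgreSQL's exact wording - compile with -DFRONTEND to see it pass):

    /* refuse inclusion from backend code, where FRONTEND is not defined */
    #ifndef FRONTEND
    #error "this module is for frontend code only"
    #endif

    int
    main(void)
    {
        return 0;               /* builds only when -DFRONTEND is given */
    }
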
2019-05-03  MSVC: Build ~35% faster by calling dumpbin just once per directory.  (Noah Misch)

Peifeng Qiu

Discussion: https://postgr.es/m/CABmtVJiKXQjast0dQD-8KAtfm8XmyYxo-4Dc7+M+fBr8JRTqkw@mail.gmail.com

2019-04-12  Consistently test for in-use shared memory.  (Noah Misch)

postmaster startup scrutinizes any shared memory segment recorded in
postmaster.pid, exiting if that segment matches the current data
directory and has an attached process.  When the postmaster.pid file was
missing, a starting postmaster used weaker checks.  Change to use the
same checks in both scenarios.  This increases the chance of a startup
failure, in lieu of data corruption, if the DBA does "kill -9 `head -n1
postmaster.pid` && rm postmaster.pid && pg_ctl -w start".  A postmaster
will no longer stop if shmat() of an old segment fails with EACCES.  A
postmaster will no longer recycle segments pertaining to other data
directories.  That's good for production, but it's bad for integration
tests that crash a postmaster and immediately delete its data directory.
Such a test now leaks a segment indefinitely.  No "make check-world" test
does that.  win32_shmem.c already avoided all these problems.  In 9.6 and
later, enhance PostgresNode to facilitate testing.  Back-patch to 9.4
(all supported versions).

Reviewed (in earlier versions) by Daniel Gustafsson and Kyotaro HORIGUCHI.

Discussion: https://postgr.es/m/20190408064141.GA2016666@rfd.leadboat.com

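The core of such a check on SysV shared memory is shmctl(IPC_STAT) plus
the segment's attach count; a hedged sketch (simplified - the real code
also matches the segment against the data directory and handles EACCES):

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    /* is any process still attached to this segment? */
    static int
    segment_in_use(int shmid)
    {
        struct shmid_ds ds;

        if (shmctl(shmid, IPC_STAT, &ds) < 0)
            return 0;           /* segment is gone: not in use */
        return ds.shm_nattch > 0;
    }

    int
    main(void)
    {
        int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

        printf("in use: %d\n", segment_in_use(shmid));  /* 0: no attachers */
        shmctl(shmid, IPC_RMID, NULL);
        return 0;
    }
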
2019-04-05  Revert "Consistently test for in-use shared memory."  (Noah Misch)

This reverts commits 2f932f71d9f2963bbd201129d7b971c8f5f077fd,
16ee6eaf80a40007a138b60bb5661660058d0422 and
6f0e190056fe441f7cf788ff19b62b13c94f68f3.  The buildfarm has revealed
several bugs.  Back-patch like the original commits.

Discussion: https://postgr.es/m/20190404145319.GA1720877@rfd.leadboat.com

2019-04-04  tableam: Add table_multi_insert() and revamp/speed-up COPY FROM buffering.  (Andres Freund)

This adds table_multi_insert(), and converts COPY FROM, the only user of
heap_multi_insert, to it.

A simple conversion of COPY FROM to use slots would have yielded a
slowdown when inserting into a partitioned table for some workloads.
Different partitions might need different slots (both slot types and
their descriptors), and dropping / creating slots when there are constant
partition changes is measurable.

Thus instead revamp the COPY FROM buffering for partitioned tables to
allow buffering inserts into multiple tables, flushing only when limits
are reached across all partition buffers.  By only dropping slots when
there have been inserts into too many different partitions, the
aforementioned overhead is gone.  By allowing larger batches, even when
there are frequent partition changes, we actually speed such cases up
significantly.

By using slots, COPY of very narrow rows into unlogged / temporary tables
might slow down very slightly (due to the indirect function calls).

Author: David Rowley, Andres Freund, Haribabu Kommi
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20190327054923.t3epfuewxfqdt22e@alap3.anarazel.de

2019-04-04  Refactor the fsync queue for wider use.  (Thomas Munro)

Previously, md.c and checkpointer.c were tightly integrated so that fsync
calls could be handed off and processed in the background.  Introduce a
system of callbacks and file tags, so that other modules can hand off
fsync work in the same way.

For now only md.c uses the new interface, but other users are being
proposed.  Since there may be use cases that are not strictly SMGR
implementations, use a new function table for sync handlers rather than
extending the traditional SMGR one.

Instead of using a bitmapset of segment numbers for each RelFileNode in
the checkpointer's hash table, make the segment number part of the key.
This requires sending explicit "forget" requests for every segment
individually when relations are dropped, but suits the file layout
schemes of proposed future users better (ie sparse or high segment
numbers).

Author: Shawn Debnath and Thomas Munro
Reviewed-by: Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/CAEepm=2gTANm=e3ARnJT=n0h8hf88wqmaZxk0JYkxw+b21fNrw@mail.gmail.com

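The commit's central idea - a table of per-handler callbacks keyed by a
file tag that now includes the segment number - might be sketched like
this (names are illustrative, not the actual sync.c definitions):

    #include <stdio.h>

    /* a tag identifies one file; the segment number is part of the key,
     * so "forget" requests are sent per segment */
    typedef struct
    {
        int         handler;    /* which sync handler owns the file */
        unsigned    rnode;      /* relation identifier (simplified) */
        unsigned    segno;      /* segment number */
    } FileTagSketch;

    typedef int (*SyncFileFn) (const FileTagSketch *tag);

    static int
    md_syncfiletag(const FileTagSketch *tag)
    {
        printf("fsync rel %u seg %u\n", tag->rnode, tag->segno);
        return 0;
    }

    /* a function table separate from the SMGR one, since future users
     * may not be SMGR implementations at all */
    static const SyncFileFn sync_handlers[] = {md_syncfiletag};

    int
    main(void)
    {
        FileTagSketch tag = {0, 1234, 2};

        return sync_handlers[tag.handler](&tag);
    }
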
2019-04-03  Consistently test for in-use shared memory.  (Noah Misch)

postmaster startup scrutinizes any shared memory segment recorded in
postmaster.pid, exiting if that segment matches the current data
directory and has an attached process.  When the postmaster.pid file was
missing, a starting postmaster used weaker checks.  Change to use the
same checks in both scenarios.  This increases the chance of a startup
failure, in lieu of data corruption, if the DBA does "kill -9 `head -n1
postmaster.pid` && rm postmaster.pid && pg_ctl -w start".  A postmaster
will no longer recycle segments pertaining to other data directories.
That's good for production, but it's bad for integration tests that crash
a postmaster and immediately delete its data directory.  Such a test now
leaks a segment indefinitely.  No "make check-world" test does that.
win32_shmem.c already avoided all these problems.  In 9.6 and later,
enhance PostgresNode to facilitate testing.  Back-patch to 9.4 (all
supported versions).

Reviewed by Daniel Gustafsson and Kyotaro HORIGUCHI.

Discussion: https://postgr.es/m/20130911033341.GD225735@tornado.leadboat.com

2019-04-03  GSSAPI encryption support  (Stephen Frost)

On both the frontend and backend, prepare for GSSAPI encryption support
by moving common code for error handling into a separate file.  Fix a
TODO for handling multiple status messages in the process.  Eliminate the
OIDs, which have not been needed for some time.

Add frontend and backend encryption support functions.  Keep the context
initiation for authentication-only separate on both the frontend and
backend in order to avoid concerns about changing the requested flags to
include encryption support.

In postmaster, pull GSSAPI authorization checking into a shared function.
Also share the initiator name between the encryption and non-encryption
codepaths.

For HBA, add "hostgssenc" and "hostnogssenc" entries that behave
similarly to their SSL counterparts.  "hostgssenc" requires either "gss",
"trust", or "reject" for its authentication.

Similarly, add a "gssencmode" parameter to libpq.  Supported values are
"disable", "require", and "prefer".  Notably, negotiation will only be
attempted if credentials can be acquired.  Move credential acquisition
into its own function to support this behavior.

Add a simple pg_stat_gssapi view similar to pg_stat_ssl, for monitoring
if GSSAPI authentication was used, what principal was used, and if
encryption is being used on the connection.

Finally, add documentation for everything new, and update existing
documentation on connection security.

Thanks to Michael Paquier for the Windows fixes.

Author: Robbie Harwood, with changes to the read/write functions by me.
Reviewed in various forms and at different times by: Michael Paquier,
Andres Freund, David Steele.
Discussion: https://www.postgresql.org/message-id/flat/jlg1tgq1ktm.fsf@thriss.redhat.com

2019-04-01  Unified logging system for command-line programs  (Peter Eisentraut)

This unifies the various ad hoc logging (message printing, error
printing) systems used throughout the command-line programs.

Features:

- Program name is automatically prefixed.

- Message string does not end with newline.  This removes a common source
  of inconsistencies and omissions.

- Additionally, a final newline is automatically stripped, simplifying
  use of PQerrorMessage() etc., another common source of mistakes.

- I converted error message strings to use %m where possible.

- As a result of the above several points, more translatable message
  strings can be shared between different components and between
  frontends and backend, without gratuitous punctuation or whitespace
  differences.

- There is support for setting a "log level".  This is not meant to be
  user-facing, but can be used internally to implement debug or verbose
  modes.

- Lazy argument evaluation, so no significant overhead if logging at some
  level is disabled.

- Some color in the messages, similar to gcc and clang.  Set
  PG_COLOR=auto to try it out.  Some colors are predefined, but can be
  customized by setting PG_COLORS.

- Common files (common/, fe_utils/, etc.) can handle logging much more
  simply by just using one API without worrying too much about the
  context of the calling program, requiring callbacks, or having to pass
  "progname" around everywhere.

- Some programs called setvbuf() to make sure that stderr is unbuffered,
  even on Windows.  But not all programs did that.  This is now done
  centrally.

Soft goals:

- Reduces vertical space use and visual complexity of error reporting in
  the source code.

- Encourages more deliberate classification of messages.  For example, in
  some cases it wasn't clear without analyzing the surrounding code
  whether a message was meant as an error or just an info.

- Concepts and terms are vaguely aligned with popular logging frameworks
  such as log4j and Python logging.

This is all just about printing stuff out.  Nothing affects program flow
(e.g., fatal exits).  The uses are just too varied to do that.  Some
existing code had wrappers that do some kind of print-and-exit, and I
adapted those.

I tried to keep the output mostly the same, but there is a lot of
historical baggage to unwind and special cases to consider, and I might
not always have succeeded.  One significant change is that pg_rewind
used to write all error messages to stdout.  That is now changed to
stderr.

Reviewed-by: Donald Dong <xdong@csumb.edu>
Reviewed-by: Arthur Zakirov <a.zakirov@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/flat/6a609b43-4f57-7348-6480-bd022f924310@2ndquadrant.com

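A hedged sketch of the API shape described above - program-name prefix,
no trailing newline in the message string, and level filtering - with
invented names (the real API is pg_log_error() and friends):

    #include <stdarg.h>
    #include <stdio.h>

    enum log_level {LOG_SKETCH_DEBUG, LOG_SKETCH_INFO, LOG_SKETCH_ERROR};

    static enum log_level min_level = LOG_SKETCH_INFO;
    static const char *progname = "demo";

    static void
    log_at(enum log_level lvl, const char *fmt, ...)
    {
        va_list     ap;

        if (lvl < min_level)
            return;             /* early exit: nothing is formatted */
        fprintf(stderr, "%s: %s", progname,
                lvl == LOG_SKETCH_ERROR ? "error: " : "");
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fputc('\n', stderr);    /* the newline is added centrally */
    }

    int
    main(void)
    {
        log_at(LOG_SKETCH_DEBUG, "not shown at the default level");
        log_at(LOG_SKETCH_ERROR, "could not open file \"%s\"", "x.conf");
        return 0;
    }
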
2019-04-01  GIN support for @@ and @? jsonpath operators  (Alexander Korotkov)

This commit makes the existing GIN operator classes jsonb_ops and
jsonb_path_ops support the "jsonb @@ jsonpath" and "jsonb @? jsonpath"
operators.  The basic idea is to extract statements of the following form
out of a jsonpath:

    key1.key2. ... .keyN = const

The rest of the jsonpath is rechecked from heap.

Catversion is bumped.

Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Nikita Glukhov, Alexander Korotkov
Reviewed-by: Jonathan Katz, Pavel Stehule

2019-03-28  Use FullTransactionId for the transaction stack.  (Thomas Munro)

Provide GetTopFullTransactionId() and GetCurrentFullTransactionId().
The intended users of these interfaces are access methods that use xids
for visibility checks but don't want to have to go back and "freeze"
existing references some time later before the 32 bit xid counter wraps
around.

Use a new struct to serialize the transaction state for parallel query,
because FullTransactionId doesn't fit into the previous serialization
scheme very well.

Author: Thomas Munro
Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/CAA4eK1%2BMv%2Bmb0HFfWM9Srtc6MVe160WFurXV68iAFMcagRZ0dQ%40mail.gmail.com

2019-03-28  Add basic infrastructure for 64 bit transaction IDs.  (Thomas Munro)

Instead of inferring epoch progress from xids and checkpoints, introduce
a 64 bit FullTransactionId type and use it to track xid generation.  This
fixes an unlikely bug where the epoch is reported incorrectly if the
range of active xids wraps around more than once between checkpoints.

The only user-visible effect of this commit is to correct the epoch used
by txid_current() and txid_status(), also visible with pg_controldata, in
those rare circumstances.  It also creates some basic infrastructure so
that later patches can use 64 bit transaction IDs in more places.

The new type is a struct that we pass by value, as a form of strong
typedef.  This prevents the sort of accidental confusion between
TransactionId and FullTransactionId that would be possible if we were to
use a plain old uint64.

Author: Thomas Munro
Reported-by: Amit Kapila
Reviewed-by: Andres Freund, Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/CAA4eK1%2BMv%2Bmb0HFfWM9Srtc6MVe160WFurXV68iAFMcagRZ0dQ%40mail.gmail.com

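A sketch of the "strong typedef" idea: wrapping the 64-bit value in a
one-member struct passed by value, so it cannot be mixed up with a plain
uint64 or a 32-bit xid (names here are illustrative, not the actual
definitions):

    #include <stdint.h>
    #include <stdio.h>

    typedef struct
    {
        uint64_t    value;      /* epoch in the high bits, xid in the low */
    } FullXidSketch;

    static inline uint32_t
    xid_of(FullXidSketch fxid)
    {
        return (uint32_t) fxid.value;
    }

    static inline uint32_t
    epoch_of(FullXidSketch fxid)
    {
        return (uint32_t) (fxid.value >> 32);
    }

    int
    main(void)
    {
        FullXidSketch fxid = {((uint64_t) 2 << 32) | 12345};

        printf("epoch %u, xid %u\n", epoch_of(fxid), xid_of(fxid));
        return 0;
    }
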
2019-03-27  tableam: Support for an index build's initial table scan(s).  (Andres Freund)

To support building indexes over tables of different AMs, the scans to do
so need to be routed through the table AM.  While moving a fair amount of
code, nearly all the changes are just moving code to below a callback.

Currently the range-based interface wouldn't make much sense for non
block based table AMs.  But that seems acceptable for now.

Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de

2019-03-26  Compute XID horizon for page level index vacuum on primary.  (Andres Freund)

Previously the xid horizon was only computed during WAL replay.  That had
two major problems:

1) It relied on knowing what the table pointed to looks like.  That was
   easy enough before the introduction of tableam (we knew it had to be
   heap, although some trickery around logging the heap relfilenodes was
   required).  But to properly handle table AMs we need per-database
   catalog access to look up the AM handler, which recovery doesn't
   allow.

2) Not knowing the xid horizon also makes it hard to support logical
   decoding on standbys.  When on a catalog table, we need to be able to
   conflict with slots that have an xid horizon that's too old.  But
   computing the horizon by visiting the heap only works once consistency
   is reached, while we always need to be able to detect conflicts.

There's also a secondary problem, in that the current method performs
redundant work on every standby.  But that's counterbalanced by
potentially computing the value when not necessary (either because
there's no standby, or because there are no connected backends).

Solve 1) and 2) by moving computation of the xid horizon to the primary
and by involving tableam in the computation of the horizon.

To address the potentially increased overhead, increase the efficiency of
the xid horizon computation for heap by sorting the tids, and eliminating
redundant buffer accesses.  When prefetching is available, additionally
perform prefetching of buffers.  As this is more of a maintenance task,
rather than something routinely done in every read-only query, we add an
arbitrary 10 to the effective concurrency - thereby using IO concurrency
when not globally enabled.  That's possibly not the perfect formula, but
it seems good enough for now.

Bumps WAL format, as latestRemovedXid is now part of the records, and the
heap's relfilenode isn't anymore.

Author: Andres Freund, Amit Khandekar, Robert Haas
Reviewed-By: Robert Haas
Discussion:
    https://postgr.es/m/20181212204154.nsxf3gzqv3gesl32@alap3.anarazel.de
    https://postgr.es/m/20181214014235.dal5ogljs3bmlq44@alap3.anarazel.de
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de

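The "sort the tids, eliminate redundant buffer accesses" part reduces to
a classic pattern; a toy version (block/offset fields are simplified
stand-ins for ItemPointerData):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct
    {
        uint32_t    block;
        uint16_t    offset;
    } TidSketch;

    static int
    tid_cmp(const void *a, const void *b)
    {
        const TidSketch *x = a;
        const TidSketch *y = b;

        if (x->block != y->block)
            return x->block < y->block ? -1 : 1;
        return (x->offset > y->offset) - (x->offset < y->offset);
    }

    int
    main(void)
    {
        TidSketch   tids[] = {{7, 2}, {3, 1}, {7, 1}, {3, 4}};
        uint32_t    last = UINT32_MAX;

        qsort(tids, 4, sizeof(TidSketch), tid_cmp);
        for (int i = 0; i < 4; i++)
        {
            if (tids[i].block != last)  /* one buffer access per block */
            {
                printf("read block %u\n", tids[i].block);
                last = tids[i].block;
            }
        }
        return 0;
    }
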
2019-03-23  tableam: Add tuple_{insert, delete, update, lock} and use.  (Andres Freund)

This adds new, required, table AM callbacks for insert/delete/update and
lock_tuple.  To be able to reasonably use those, the EvalPlanQual
mechanism had to be adapted, moving more logic into the AM.

Previously both delete/update/lock call-sites and the EPQ mechanism had
to have awareness of the specific tuple format to be able to fetch the
latest version of a tuple.  Obviously that needs to be abstracted away.
To do so, move the logic that finds the latest row version into the AM.
lock_tuple has a new flag argument, TUPLE_LOCK_FLAG_FIND_LAST_VERSION,
that forces it to lock the last version, rather than the current one.
It'd have been possible to do so via a separate callback as well, but
finding the last version usually also necessitates locking the newest
version, making it sensible to combine the two.  This replaces the
previous use of EvalPlanQualFetch().  Additionally HeapTupleUpdated,
which previously signaled either a concurrent update or delete, is now
split into two, to avoid callers needing AM-specific knowledge to
differentiate.

The move of finding the latest row version into tuple_lock means that
encountering a row concurrently moved into another partition will now
raise an error about "tuple to be locked" rather than "tuple to be
updated/deleted" - which is accurate, as that always happens when locking
rows.  While possibly slightly less helpful for users, it seems like an
acceptable trade-off.

As part of this commit HTSU_Result has been renamed to TM_Result, and its
members been expanded to differentiate between updating and deleting.
HeapUpdateFailureData has been renamed to TM_FailureData.

The interface to speculative insertion is changed so nodeModifyTable.c
does not have to set the speculative token itself anymore.  Instead
there's a version of tuple_insert, tuple_insert_speculative, that
performs the speculative insertion (without requiring a flag to signal
that fact), and the speculative insertion is either made permanent with
table_complete_speculative(succeeded = true) or aborted with
table_complete_speculative(succeeded = false).

Note that multi_insert is not yet routed through tableam, nor is COPY.
Changing multi_insert requires changes to copy.c that are large enough to
better be done separately.  Similarly, although simpler, CREATE TABLE AS
and CREATE MATERIALIZED VIEW are also only going to be adjusted in a
later commit.

Author: Andres Freund and Haribabu Kommi
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20190313003903.nwvrxi7rw3ywhdel@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql

2019-03-23  Add options to enable and disable checksums in pg_checksums  (Michael Paquier)

An offline cluster can now work with more modes in pg_checksums:

- --enable enables checksums in a cluster, updating all blocks with a
  correct checksum, and updating the control file at the end.
- --disable disables checksums in a cluster, updating only the control
  file.
- --check is an extra option able to verify checksums for a cluster, and
  the default used if no mode is specified.

When running --enable or --disable, the data folder gets fsync'd for
durability, and then it is followed by a control file update and flush to
keep the operation consistent should the tool be interrupted, killed or
the host unplugged.  If no mode is specified in the options, then --check
is used for compatibility with older versions of pg_checksums (named
pg_verify_checksums in v11 where it was introduced).

Author: Michael Banck, Michael Paquier
Reviewed-by: Fabien Coelho, Magnus Hagander, Sergei Kornilov
Discussion: https://postgr.es/m/20181221201616.GD4974@nighthawk.caipicrew.dd-dns.de

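The ordering that makes --enable safe against interruption is: rewrite
blocks, fsync, and only then update the control file.  A toy sketch of
that flow on an in-memory page (the checksum function here is a
placeholder, not PostgreSQL's FNV-based pg_checksum_page()):

    #include <stdint.h>
    #include <stdio.h>

    #define BLCKSZ 8192

    static uint16_t
    toy_page_checksum(const uint8_t *page, uint32_t blkno)
    {
        uint32_t    h = blkno + 1;

        for (int i = 2; i < BLCKSZ; i++)    /* skip the checksum bytes */
            h = h * 31 + page[i];
        return (uint16_t) (h ^ (h >> 16));
    }

    int
    main(void)
    {
        static uint8_t page[BLCKSZ];
        uint16_t    sum = toy_page_checksum(page, 0);

        /* 1. update every block with a correct checksum */
        page[0] = (uint8_t) sum;
        page[1] = (uint8_t) (sum >> 8);
        /* 2. fsync the data folder for durability */
        /* 3. only then update and flush the control file, so an
         *    interrupted run leaves a consistent state */
        printf("stored checksum 0x%04x\n", sum);
        return 0;
    }
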
2019-03-20  Get rid of jsonpath_gram.h and jsonpath_scanner.h  (Alexander Korotkov)

The jsonpath grammar and scanner are both quite small.  It isn't worth
the complexity to compile them separately.  This commit makes the grammar
and scanner be compiled at once.  Therefore, jsonpath_gram.h and
jsonpath_scanner.h are no longer needed.  This commit also does some
reorganization of code in jsonpath_gram.y.

Discussion: https://postgr.es/m/d47b2023-3ecb-5f04-d253-d557547cf74f%402ndQuadrant.com

2019-03-16  Partial implementation of SQL/JSON path language  (Alexander Korotkov)

The SQL:2016 standard, among other things, contains a set of SQL/JSON
features for JSON processing inside a relational database.  The core of
SQL/JSON is the JSON path language, which allows access to parts of JSON
documents and computations over them.  This commit implements partial
support for the JSON path language as a separate datatype called
"jsonpath".  The implementation is partial because it lacks datetime
support and suppression of numeric errors.  Missing features will be
added later by separate commits.

Support of SQL/JSON features requires implementation of separate nodes,
and it will be considered in subsequent patches.  This commit includes
the following set of plain functions, allowing execution of a jsonpath
over jsonb values:

* jsonb_path_exists(jsonb, jsonpath[, jsonb, bool])
* jsonb_path_match(jsonb, jsonpath[, jsonb, bool])
* jsonb_path_query(jsonb, jsonpath[, jsonb, bool])
* jsonb_path_query_array(jsonb, jsonpath[, jsonb, bool])
* jsonb_path_query_first(jsonb, jsonpath[, jsonb, bool])

This commit also implements the "jsonb @? jsonpath" and "jsonb @@
jsonpath" operators, which are wrappers over jsonpath_exists(jsonb,
jsonpath) and jsonpath_predicate(jsonb, jsonpath) respectively.  These
operators will have index support (implemented in subsequent patches).

Catversion bumped, to add new functions and operators.

Code was written by Nikita Glukhov and Teodor Sigaev, revised by me.
Documentation was written by Oleg Bartunov and Liudmila Mantrova.  The
work was inspired by Oleg Bartunov.

Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Nikita Glukhov, Teodor Sigaev, Alexander Korotkov, Oleg Bartunov, Liudmila Mantrova
Reviewed-by: Tomas Vondra, Andrew Dunstan, Pavel Stehule, Alexander Korotkov

2019-03-12  Create a script that can renumber manually-assigned OIDs.  (Tom Lane)

This commit adds a Perl script renumber_oids.pl, which can reassign a
range of manually-assigned OIDs to someplace else by modifying OID fields
of the catalog *.dat files and OID-assigning macros in the catalog *.h
files.

Up to now, we've encouraged new patches that need manually-assigned OIDs
to use OIDs just above the range of existing OIDs.  Predictably, this
leads to patches stepping on each others' toes, as whichever one gets
committed first creates an OID conflict that other patch author(s) have
to resolve manually.  With the availability of renumber_oids.pl, we can
eliminate a lot of this hassle.  The new project policy, therefore, is:

* Encourage new patches to use high OIDs (the documentation suggests
  choosing a block of OIDs at random in 8000..9999).

* After feature freeze in each development cycle, run renumber_oids.pl
  to move all such OIDs down to lower numbers, thus freeing the high OID
  range for the next development cycle.

This plan should greatly reduce the risk of OID collisions between
concurrently-developed patches.  Also, if such a collision happens
anyway, we have the option to resolve it without much effort by doing an
off-schedule OID renumbering to get the first-committed patch out of the
way.  Or a patch author could use renumber_oids.pl to change their
patch's assignments without much pain.

This approach does put a premium on not hard-wiring any OID values in
places where renumber_oids.pl and genbki.pl can't fix them.  Project
practice in that respect seems to be pretty good already, but a follow-on
patch will sand down some rough edges.

John Naylor and Tom Lane, per an idea of Peter Geoghegan's
Discussion: https://postgr.es/m/CAH2-WzmMTGMcPuph4OvsO7Ykut0AOCF_i-=eaochT0dd2BN9CQ@mail.gmail.com

2019-03-11  tableam: Add and use scan APIs.  (Andres Freund)

To allow table accesses to be not directly dependent on heap, several new
abstractions are needed.  Specifically:

1) Heap scans need to be generalized into table scans.  Do this by
   introducing TableScanDesc, which will be the "base class" for
   individual AMs.  This contains the AM independent fields from
   HeapScanDesc.

   The previous heap_{beginscan,rescan,endscan} et al. have been replaced
   with a table_ version.

   There's no direct replacement for heap_getnext(), as that returned a
   HeapTuple, which is undesirable for other AMs.  Instead there's
   table_scan_getnextslot().  But note that heap_getnext() lives on; it's
   still used widely to access catalog tables.

   This is achieved by new scan_begin, scan_end, scan_rescan,
   scan_getnextslot callbacks.

2) The portion of parallel scans that's shared between backends needs to
   be able to do so without the user doing per-AM work.  To achieve
   that, new parallelscan_{estimate, initialize, reinitialize} callbacks
   are introduced, which operate on a new ParallelTableScanDesc, which
   again can be subclassed by AMs.

   As it is likely that several AMs are going to be block oriented,
   block-oriented callbacks that can be shared between such AMs are
   provided and used by heap: table_block_parallelscan_{estimate,
   initialize, reinitialize} as callbacks, and
   table_block_parallelscan_{nextpage, init} for use in AMs.  These
   operate on a ParallelBlockTableScanDesc.

3) Index scans need to be able to access tables to return a tuple, and
   there needs to be state across individual accesses to the heap to
   store state like buffers.  That's now handled by introducing a
   sort-of-scan IndexFetchTable, which again is intended to be
   subclassed by individual AMs (for heap, IndexFetchHeap).

   The relevant callbacks for an AM are index_fetch_{end, begin, reset}
   to create the necessary state, and index_fetch_tuple to retrieve an
   indexed tuple.  Note that index_fetch_tuple implementations need to be
   smarter than just blindly fetching the tuples for AMs that have
   optimizations similar to heap's HOT - the currently alive tuple in the
   update chain needs to be fetched if appropriate.

   Similar to table_scan_getnextslot(), it's undesirable to continue to
   return HeapTuples.  Thus index_fetch_heap (might want to rename that
   later) now accepts a slot as an argument.  Core code doesn't have a
   lot of call sites performing index scans without going through the
   systable_* API (in contrast to loads of heap_getnext calls and working
   directly with HeapTuples).

   Index scans now store the result of a search in
   IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self.  As the target
   is not generally a HeapTuple anymore, that seems cleaner.

To be able to sensibly adapt code to use the above, two further callbacks
have been introduced:

a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
   slots capable of holding a tuple of the AM's type.
   table_slot_callbacks() and table_slot_create() are based upon that,
   but have additional logic to deal with views, foreign tables, etc.

   While this change could have been done separately, nearly all the
   call sites that needed to be adapted for the rest of this commit also
   would have needed to be adapted for table_slot_callbacks(), making
   separation not worthwhile.

b) tuple_satisfies_snapshot checks whether the tuple in a slot is
   currently visible according to a snapshot.  That's required as a few
   places now don't have a buffer + HeapTuple around, but a slot (which
   in heap's case internally has that information).

Additionally a few infrastructure changes were needed:

I) SysScanDesc, as used by systable_{beginscan, getnext} et al., now
   internally uses a slot to keep track of tuples.  While
   systable_getnext() still returns HeapTuples, and will do so for the
   foreseeable future, the index API (see 1) above) now only deals with
   slots.

The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.

Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql

2019-03-09  Simplify release-note links to back branches.  (Tom Lane)

Now that https://www.postgresql.org/docs/release/ is populated, replace
the stopgap text we had under "Prior Releases" with a pointer to that
archive.

Discussion: https://postgr.es/m/e0f09c9a-bd2b-862a-d379-601dfabc8969@postgresql.org

2019-03-06  tableam: introduce table AM infrastructure.  (Andres Freund)

This introduces the concept of table access methods, i.e. CREATE ACCESS
METHOD ... TYPE TABLE and CREATE TABLE ... USING (storage-engine).

No table access functionality is delegated to table AMs as of this
commit; that'll be done in following commits.

Subsequent commits will incrementally abstract table access
functionality to be routed through table access methods.  That change is
too large to be reviewed & committed at once, so it'll be done
incrementally.

Docs will be updated at the end, as adding them incrementally would
likely make them less coherent, and definitely is a lot more work,
without a lot of benefit.

Table access methods are specified similar to index access methods, i.e.
pg_am.amhandler returns, as INTERNAL, a pointer to a struct with
callbacks.  In contrast to index AMs that struct needs to live as long as
a backend; typically that's achieved by just returning a pointer to a
constant struct.

psql's \d+ now displays a table's access method.  That can be disabled
with HIDE_TABLEAM=true, which is mainly useful so regression tests can be
run against different AMs.  It's quite possible that this behaviour still
needs to be fine-tuned.

For now it's not allowed to set a table AM for a partitioned table, as
we've not resolved how partitions would inherit that.  Disallowing allows
us to introduce, if we decide that's the way forward, such a behaviour
without a compatibility break.

Catversion bumped, to add the heap table AM and references to it.

Author: Haribabu Kommi, Andres Freund, Alvaro Herrera, Dmitry Dolgov and
others
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
    https://postgr.es/m/20190107235616.6lur25ph22u5u5av@alap3.anarazel.de
    https://postgr.es/m/20190304234700.w5tmhducs5wxgzls@alap3.anarazel.de

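The pg_am.amhandler convention - return a pointer to a long-lived struct
of callbacks, typically a constant - can be sketched as follows (names
are illustrative; the real struct is TableAmRoutine):

    #include <stdio.h>

    typedef struct
    {
        const char *name;
        void        (*scan_begin) (void);
        void        (*scan_end) (void);
    } TableAmRoutineSketch;

    static void heap_scan_begin(void) { puts("begin heap scan"); }
    static void heap_scan_end(void)   { puts("end heap scan"); }

    /* a constant struct outlives any backend, so the handler can simply
     * return its address */
    static const TableAmRoutineSketch heap_routine = {
        "heap", heap_scan_begin, heap_scan_end
    };

    static const TableAmRoutineSketch *
    heap_handler(void)
    {
        return &heap_routine;
    }

    int
    main(void)
    {
        const TableAmRoutineSketch *am = heap_handler();

        am->scan_begin();
        am->scan_end();
        return 0;
    }
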
2019-02-15  Make use of compiler builtins and/or assembly for CLZ, CTZ, POPCNT.  (Tom Lane)

Test for the compiler builtins __builtin_clz, __builtin_ctz, and
__builtin_popcount, and make use of these in preference to handwritten C
code if they're available.  Create src/port infrastructure for "leftmost
one", "rightmost one", and "popcount" so as to centralize these
decisions.

On x86_64, __builtin_popcount generally won't make use of the POPCNT
opcode because that's not universally supported yet.  Provide code that
checks CPUID and then calls POPCNT via asm() if available.  This requires
indirecting through a function pointer, which is an annoying amount of
overhead for a one-instruction operation, but it's probably not worth
working harder than this for our current use-cases.

I'm not sure we've found all the existing places that could profit from
this new infrastructure; but we at least touched all the ones that used
copied-and-pasted versions of the bitmapset.c code, and got rid of
multiple copies of the associated constant arrays.

While at it, replace c-compiler.m4's one-per-builtin-function macros with
a single one that can handle all the cases we need to worry about so far.
Also, because I'm paranoid, make those checks into AC_LINK checks rather
than just AC_COMPILE; the former coding failed to verify that libgcc has
support for the builtin, in cases where it's not inline code.

David Rowley, Thomas Munro, Alvaro Herrera, Tom Lane
Discussion: https://postgr.es/m/CAKJS1f9WTAGG1tPeJnD18hiQW5gAk59fQ6WK-vfdAKEHyRg2RA@mail.gmail.com

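The function-pointer indirection works by installing a "choose" function
as the initial value; the first call resolves the best implementation and
repoints the pointer.  A sketch (the CPUID probe is elided - in the real
code the fast path is only selected once POPCNT support is confirmed):

    #include <stdint.h>
    #include <stdio.h>

    static int  popcount32_slow(uint32_t w);
    static int  popcount32_choose(uint32_t w);

    /* starts out pointing at the chooser; the first call swaps in the
     * best available implementation */
    static int  (*popcount32) (uint32_t) = popcount32_choose;

    static int
    popcount32_slow(uint32_t w)
    {
        int         n = 0;

        while (w)
        {
            n += w & 1;
            w >>= 1;
        }
        return n;
    }

    #if defined(__GNUC__) || defined(__clang__)
    static int
    popcount32_builtin(uint32_t w)
    {
        return __builtin_popcount(w);
    }
    #endif

    static int
    popcount32_choose(uint32_t w)
    {
    #if defined(__GNUC__) || defined(__clang__)
        popcount32 = popcount32_builtin;    /* the real code also checks
                                             * CPUID before using POPCNT */
    #else
        popcount32 = popcount32_slow;
    #endif
        return popcount32(w);
    }

    int
    main(void)
    {
        /* both paths agree: 8 one-bits in 0xF0F0 */
        printf("%d %d\n", popcount32(0xF0F0), popcount32_slow(0xF0F0));
        return 0;
    }
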
2019-02-15  Revert attempts to use POPCNT etc instructions  (Alvaro Herrera)

This reverts commits fc6c72747ae6, 109de05cbb03, d0b4663c23b7 and
711bab1e4d19.

Somebody will have to try harder before submitting this patch again.
I've spent entirely too much time on it already, and the #ifdef maze yet
to be written in order for it to build at all got on my nerves.  The
amount of work needed to get a platform-specific performance improvement
that's barely above the noise level is not worth it.
