path: root/src/fe_utils
Age          Commit message          Author
2021-04-29  psql: Fix line continuation prompts for unbalanced parentheses  (Peter Eisentraut)
This was broken by a silly mistake in e717a9a18b2e34c9c40e5259ad4d31cd7e420750. Reported-by: Jeff Janes <jeff.janes@gmail.com> Author: Justin Pryzby <pryzby@telsasoft.com> Discussion: https://www.postgresql.org/message-id/CAMkU=1zKGWEJdBbYKw7Tn7cJmYR_UjgdcXTPDqJj=dNwCETBCQ@mail.gmail.com
2021-04-16  psql: Refine lexing of BEGIN...END blocks in CREATE FUNCTION statements  (Peter Eisentraut)
Only track BEGIN...END blocks if they are in a CREATE [OR REPLACE] {FUNCTION|PROCEDURE} statement. Ignore if in parentheses. Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at> Discussion: https://www.postgresql.org/message-id/cee01d26fe55bc086b3bcf10bfe4e8d450e2f608.camel@cybertec.at
2021-04-07  Allow psql's \df and \do commands to specify argument types.  (Tom Lane)
When dealing with overloaded function or operator names, having to look through a long list of matches is tedious. Let's extend these commands to allow specification of (input) argument types to let such results be trimmed down. Each additional argument is treated the same as the pattern argument of \dT and matched against the appropriate argument's type name. While at it, fix \dT (and these new options) to recognize the usual notation of "foo[]" for "the array type over foo", and to handle the special abbreviations allowed by the backend grammar, such as "int" for "integer". Greg Sabino Mullane, revised rather significantly by me Discussion: https://postgr.es/m/CAKAnmmLF9Hhu02N+s7uAyLc5J1xZReg72HQUoiKhNiJV3_jACQ@mail.gmail.com
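A brief illustration of the new syntax (examples chosen for illustration, not taken from the commit; argument patterns are space-separated):

    -- list only the variants of an overloaded function that take these inputs
    \df sum int8
    \df generate_series timestamptz
    -- \dT (and the new argument patterns) now understand array notation and
    -- grammar abbreviations such as "int" for "integer"
    \dT int[]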
2021-04-07  SQL-standard function body  (Peter Eisentraut)
This adds support for writing CREATE FUNCTION and CREATE PROCEDURE statements for language SQL with a function body that conforms to the SQL standard and is portable to other implementations.

Instead of the PostgreSQL-specific AS $$ string literal $$ syntax, this allows writing out the SQL statements making up the body unquoted, either as a single statement:

    CREATE FUNCTION add(a integer, b integer) RETURNS integer
        LANGUAGE SQL
        RETURN a + b;

or as a block

    CREATE PROCEDURE insert_data(a integer, b integer)
    LANGUAGE SQL
    BEGIN ATOMIC
      INSERT INTO tbl VALUES (a);
      INSERT INTO tbl VALUES (b);
    END;

The function body is parsed at function definition time and stored as expression nodes in a new pg_proc column prosqlbody.  So at run time, no further parsing is required.

However, this form does not support polymorphic arguments, because there is no more parse analysis done at call time.

Dependencies between the function and the objects it uses are fully tracked.

A new RETURN statement is introduced.  This can only be used inside function bodies.  Internally, it is treated much like a SELECT statement.

psql needs some new intelligence to keep track of function body boundaries so that it doesn't send off statements when it sees semicolons that are inside a function body.

Tested-by: Jaime Casanova <jcasanov@systemguards.com.ec>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/1c11f1eb-f00c-43b7-799d-2d44132c02d7@2ndquadrant.com
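For reference, the functions from the examples above could then be used like this (a usage sketch, not part of the commit message):

    SELECT add(1, 2);            -- returns 3
    CALL insert_data(3, 4);      -- inserts two rows into tbl
    -- the stored, unquoted body is shown by psql's \sf:
    \sf add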
2021-03-29  psql: call clearerr() just before printing  (Alvaro Herrera)
We were never doing clearerr() on the output stream, which results in a message being printed after each result once an EOF is seen:

    could not print result table: Success

This message was added by commit b03436994bcc (in the pg13 era); before that, the error indicator would never be examined.  So backpatch only that far back, even though the actual bug (to wit: the fact that the error indicator is never cleared) is older.
2021-03-11  Refactor and generalize the ParallelSlot machinery.  (Robert Haas)
Create a wrapper object, ParallelSlotArray, to encapsulate the number of slots and the slot array itself, plus some other relevant bits of information. This reduces the number of parameters we have to pass around all over the place. Allow for a ParallelSlotArray to contain slots connected to different databases within a single cluster. The current clients of this mechanism don't need this, but it is expected to be used by future patches. Defer connecting to databases until we actually need the connection for something. This is a slight behavior change for vacuumdb and reindexdb. If you specify a number of jobs that is larger than the number of objects, the extra connections will now not be used. But, on the other hand, if you specify a number of jobs that is so large that it's going to fail, the failure would previously have happened before any operations were actually started, and now it won't. Mark Dilger, reviewed by me. Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com Discussion: http://postgr.es/m/BA592F2D-F928-46FF-9516-2B827F067F57@enterprisedb.com
2021-02-05  Generalize parallel slot result handling.  (Robert Haas)
Instead of having a hard-coded behavior that we ignore missing tables and report all other errors, let the caller decide what to do by setting a callback. Mark Dilger, reviewed and somewhat revised by me. The larger patch series of which this is a part has also had review from Peter Geoghegan, Andres Freund, Álvaro Herrera, Michael Paquier, and Amul Sul, but I don't know whether any of them have reviewed this bit specifically. Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com Discussion: http://postgr.es/m/5F743835-3399-419C-8324-2D424237E999@enterprisedb.com Discussion: http://postgr.es/m/70655DF3-33CE-4527-9A4D-DDEB582B6BA0@enterprisedb.com
2021-02-05  Move some code from src/bin/scripts to src/fe_utils to permit reuse.  (Robert Haas)
The parallel slots infrastructure (which implements client-side multiplexing of server connections doing similar things, not threading or multiple processes or anything like that) is moved from src/bin/scripts/scripts_parallel.c to src/fe_utils/parallel_slot.c. The functions consumeQueryResult() and processQueryResult() which were previously part of src/bin/scripts/common.c are now moved into that file as well, becoming static helper functions. This might need to be changed in the future, but currently they're not used for anything else. Some other functions from src/bin/scripts/common.c are moved to src/fe_utils and are split up among several files. connectDatabase(), connectMaintenanceDatabase(), and disconnectDatabase() are moved to connect_utils.c. executeQuery(), executeCommand(), and executeMaintenanceCommand() are moved to query_utils.c. handle_help_version_opts() is moved to option_utils.c. Mark Dilger, reviewed by me. The larger patch series of which this is a part has also had review from Peter Geoghegan, Andres Freund, Álvaro Herrera, Michael Paquier, and Amul Sul, but I don't know whether any of them have reviewed this bit specifically. Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com Discussion: http://postgr.es/m/5F743835-3399-419C-8324-2D424237E999@enterprisedb.com Discussion: http://postgr.es/m/70655DF3-33CE-4527-9A4D-DDEB582B6BA0@enterprisedb.com
2021-02-03  Factor pattern-construction logic out of processSQLNamePattern.  (Robert Haas)
The logic for converting the shell-glob-like syntax supported by utilities like psql and pg_dump to regular expressions is extracted into a new function patternToSQLRegex. The existing function processSQLNamePattern now uses this function as a subroutine. patternToSQLRegex is a little more general than what is required by processSQLNamePattern. That function is only interested in patterns that can have up to 2 parts, a schema and a relation; but patternToSQLRegex can limit the maximum number of parts to between 1 and 3, so that patterns can look like either "database.schema.relation", "schema.relation", or "relation" depending on how it's invoked and what the user specifies. processSQLNamePattern only passes two buffers, so it works exactly the same as before, always interpreting the pattern as either a "schema.relation" pattern or a "relation" pattern. But, future callers can use this function in other ways. Mark Dilger, reviewed by me. The larger patch series of which this is a part has also had review from Peter Geoghegan, Andres Freund, Álvaro Herrera, Michael Paquier, and Amul Sul, but I don't know whether any of them have reviewed this bit specifically. Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com Discussion: http://postgr.es/m/5F743835-3399-419C-8324-2D424237E999@enterprisedb.com Discussion: http://postgr.es/m/70655DF3-33CE-4527-9A4D-DDEB582B6BA0@enterprisedb.com
2021-01-02  Update copyright for 2021  (Bruce Momjian)
Backpatch-through: 9.5
2020-11-09  Remove incorrect %s in string  (Magnus Hagander)
Appears to have been a copy/paste error in the original commit that moved the messages to fe_utils/. Author: Tang, Haiying <tanghy.fnst@cn.fujitsu.com> Backpatch-through: 13 Discussion: https://postgr.es/m/3321cbcea76d4d2c8320a05c19b9304a@G08CNEXMBPEKD05.g08.fujitsu.local
2020-10-29  Don't use custom OID symbols in pg_type.dat, either.  (Tom Lane)
On the same reasoning as in commit 36b931214, forbid using custom oid_symbol macros in pg_type as well as pg_proc, so that we always rely on the predictable macro names generated by genbki.pl. We do continue to grant grandfather status to the names CASHOID and LSNOID, although those are now considered deprecated aliases for the preferred names MONEYOID and PG_LSNOID. This is because there's likely to be client-side code using the old names, and this bout of neatnik-ism doesn't quite seem worth breaking client code. There might be a case for grandfathering EVTTRIGGEROID, too, since externally-maintained PLs may reference that symbol. But renaming such references to EVENT_TRIGGEROID doesn't seem like a particularly heavy lift --- we make far more significant backend API changes in every major release. For now I didn't add that, but we could reconsider if there's pushback. The other names changed here seem pretty unlikely to have any outside uses. Again, we could add alias macros if there are complaints, but for now I didn't. As before, no need for a catversion bump. John Naylor Discussion: https://postgr.es/m/CAFBsxsHpCbjfoddNGpnnnY5pHwckWfiYkMYSF74PmP1su0+ZOw@mail.gmail.com
2020-09-24  Standardize the printf format for st_size  (Peter Eisentraut)
Existing code used various inconsistent ways to printf struct stat's st_size member. The type of that is off_t, which is in most cases a signed 64-bit integer, so use the long long int format for it.
2020-09-14  Message fixes and style improvements  (Peter Eisentraut)
2020-08-10  Move connect.h from fe_utils to src/include/common.  (Noah Misch)
Any libpq client can use the header. Clients include backend components postgres_fdw, dblink, and logical replication apply worker. Back-patch to v10, because another fix needs this. In released branches, just copy the header and keep the original.
2020-06-29  Mop up some no-longer-necessary hacks around printf %.*s format.  (Tom Lane)
Commit 54cd4f045 added some kluges to work around an old glibc bug, namely that %.*s could misbehave if glibc thought any characters in the supplied string were incorrectly encoded. Now that we use our own snprintf.c implementation, we need not worry about that bug (even if it still exists in the wild). Revert a couple of particularly ugly hacks, and remove or improve assorted comments. Note that there can still be encoding-related hazards here: blindly clipping at a fixed length risks producing wrongly-encoded output if the clip splits a multibyte character. However, code that's doing correct multibyte-aware clipping doesn't really need a comment about that, while code that isn't needs an explanation why not, rather than a red-herring comment about an obsolete bug. Discussion: https://postgr.es/m/279428.1593373684@sss.pgh.pa.us
2020-06-11  Move frontend-side archive APIs from src/common/ to src/fe_utils/  (Michael Paquier)
fe_archive.c was compiled only for the frontend in src/common/, but as it will never share anything with the backend, it makes most sense to move this file to src/fe_utils/. Reported-by: Peter Eisentraut Discussion: https://postgr.es/m/e9766d71-8655-ac86-bdf6-77e0e7169977@2ndquadrant.com Backpatch-through: 13
2020-06-07  Rethink definition of cancel.c's CancelRequested flag.  (Tom Lane)
As it stands, this flag is only set when we've successfully sent a cancel request, not if we get SIGINT and then fail to send a cancel. However, for almost all callers, that's the Wrong Thing: we'd prefer to abort processing after control-C even if no cancel could be sent. As an example, since commit 1d468b9ad "pgbench -i" fails to give up sending COPY data even after control-C, if the postmaster has been stopped, which is clearly not what the code intends and not what anyone would want. (The fact that it keeps going at all is the fault of a separate bug in libpq, but not letting CancelRequested become set is clearly not what we want here.) The sole exception, as far as I can find, is that scripts_parallel.c's ParallelSlotsGetIdle tries to consume a query result after issuing a cancel, which of course might not terminate quickly if no cancel happened. But that behavior was poorly thought out too. No user of ParallelSlotsGetIdle tries to continue processing after a cancel, so there is really no point in trying to clear the connection's state. Moreover this has the same defect as for other users of cancel.c, that if the cancel request fails for some reason then we end up with control-C being completely ignored. (On top of that, select_loop failed to distinguish clearly between SIGINT and other reasons for select(2) failing, which means that it's possible that the existing code would think that a cancel has been sent when it hasn't.) Hence, redefine CancelRequested as simply meaning that SIGINT was received. We could add a second flag with the other meaning, but in the absence of any compelling argument why such a flag is needed, I think it would just offer an opportunity for future callers to get it wrong. Also remove the consumeQueryResult call in ParallelSlotsGetIdle's failure exit. In passing, simplify the API of select_loop. It would now be possible to re-unify psql's cancel_pressed with CancelRequested, partly undoing 5d43c3c54. But I'm not really convinced that that's worth the trouble, so I left psql alone, other than fixing a misleading comment. This code is new in v13 (cf a4fd3aa71), so no need for back-patch. Per investigation of a complaint from Andres Freund. Discussion: https://postgr.es/m/20200603201242.ofvm4jztpqytwfye@alap3.anarazel.de
2020-04-07  Add SQL type xid8 to expose FullTransactionId to users.  (Thomas Munro)
Similar to xid, but 64 bits wide. This new type is suitable for use in various system views and administration functions. Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com> Reviewed-by: Takao Fujii <btfujiitkp@oss.nttdata.com> Reviewed-by: Yoshikazu Imai <imai.yoshikazu@fujitsu.com> Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com> Discussion: https://postgr.es/m/20190725000636.666m5mad25wfbrri%40alap3.anarazel.de
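A small usage sketch (assuming the companion xid8-returning functions such as pg_current_xact_id(), which belong to the same line of work rather than to this specific commit):

    SELECT pg_current_xact_id();   -- full 64-bit transaction id, e.g. 738
    SELECT '738'::xid8;            -- xid8 also accepts ordinary textual input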
2020-02-12  Fix bug in pg_basebackup -F plain -R.  (Fujii Masao)
Commit caba97a9d9 changed pg_basebackup -F plain -R so that it overwrote postgresql.auto.conf in the backup with the new connection settings. This could cause the existing postgresql.auto.conf settings in the server to be lost unexpectedly. This is a bug. This commit fixes it by making pg_basebackup -F plain -R append the connection settings to postgresql.auto.conf in the backup. Author: Fujii Masao Reviewed-by: Sergei Kornilov Discussion: https://postgr.es/m/250dcf2a-94e7-c05e-824a-73cfb38a48a4@oss.nttdata.com
2020-01-13  Reduce size of backend scanner's tables.  (Tom Lane)
Previously, the core scanner's yy_transition[] array had 37045 elements. Since that number is larger than INT16_MAX, Flex generated the array to contain 32-bit integers. By reimplementing some of the bulkier scanner rules, this patch reduces the array to 20495 elements. The much smaller total length, combined with the consequent use of 16-bit integers for the array elements, reduces the binary size by over 200kB.

This was accomplished in two ways:

1. Consolidate handling of quote continuations into a new start condition, rather than duplicating that logic for five different string types.

2. Treat Unicode strings and identifiers followed by a UESCAPE sequence as three separate tokens, rather than one.  The logic to de-escape Unicode strings is moved to the filter code in parser.c, which already had the ability to provide special processing for token sequences.  While we could have implemented the conversion in the grammar, that approach was rejected for performance and maintainability reasons.

Performance in microbenchmarks of raw parsing seems equal or slightly faster in most cases, and it's reasonable to expect that in real-world usage (with more competition for the CPU cache) there will be a larger win.  The exception is UESCAPE sequences; lexing those is about 10% slower, primarily because the scanner now has to be called three times rather than one.  This seems acceptable since that feature is very rarely used.

The psql and ecpg lexers are likewise modified, primarily because we want to keep them all in sync.  Since those lexers don't use the space-hogging -CF option, the space savings is much less, but it's still good for perhaps 10kB apiece.

While at it, merge the ecpg lexer's handling of C-style comments used in SQL and in C.  Those have different rules regarding nested comments, but since we already have the ability to keep track of the previous start condition, we can use that to handle both cases within a single start condition.  This matches the core scanner more closely.

John Naylor

Discussion: https://postgr.es/m/CACPNZCvaoa3EgVWm5yZhcSTX6RAtaLgniCPcBVOCwm8h3xpWkw@mail.gmail.com
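An illustration of the UESCAPE construct affected by point 2 (example adapted from the PostgreSQL documentation, not from the commit):

    -- the Unicode string, the UESCAPE keyword, and the escape-character
    -- literal are now lexed as three separate tokens and de-escaped in
    -- parser.c; the result is still the string 'data'
    SELECT U&'d!0061t!+000061' UESCAPE '!';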
2020-01-01  Update copyrights for 2020  (Bruce Momjian)
Backpatch-through: update all files in master, backpatch legal files through 9.4
2019-12-17  Fix query cancellation handling in psql  (Michael Paquier)
The refactoring done in a4fd3aa for query cancellation messed up the logic in psql by conflating CancelRequested and cancel_pressed, breaking \watch, for example. The former is switched to true only if a cancellation request has been attempted and actually succeeded, while the latter tracks whether a cancellation attempt has been made at all. This commit brings the psql code back to a state consistent with what it was before a4fd3aa, without giving up the refactoring pieces introduced. It should actually be possible to merge the two flags further, as their concepts are close enough; however, note that psql's --single-step mode relies on cancel_pressed always being set, so this requires more careful analysis left for later. While at it, fix the declarations of CancelRequested (in cancel.c) and cancel_pressed (in psql) to be volatile sig_atomic_t. Previously, both were declared as booleans, which should be fine on modern platforms, but the C standard recommends the use of sig_atomic_t for variables used in signal handlers. Note that since its introduction in a1792320, the CancelRequested declaration was not volatile. Reported-by: Jeff Janes Author: Michael Paquier Discussion: https://postgr.es/m/CAMkU=1zpoUDGKqWKuMWkj7t-bOCaJDx0r=5te_-d0B2HVLABXg@mail.gmail.com
2019-12-02  Refactor query cancellation code into src/fe_utils/  (Michael Paquier)
Originally, this code was duplicated in src/bin/psql/ and src/bin/scripts/, but it can be useful for other frontend applications, like pgbench. This refactoring offers the possibility to set up a custom callback, which gets called from the signal handler for SIGINT, or when console interrupt events happen on Windows. Author: Fabien Coelho, with contributions from Michael Paquier Reviewed-by: Álvaro Herrera, Ibrar Ahmed Discussion: https://postgr.es/m/alpine.DEB.2.21.1910311939430.27369@lancre
2019-11-28  Remove useless "return;" lines  (Alvaro Herrera)
Discussion: https://postgr.es/m/20191128144653.GA27883@alvherre.pgsql
2019-11-05  Split all OBJS style lines in makefiles into one-line-per-entry style.  (Andres Freund)
When maintaining or merging patches, one of the most common sources for conflicts are the list of objects in makefiles. Especially when the split across lines has been changed on both sides, which is somewhat common due to attempting to stay below 80 columns, those conflicts are unnecessarily laborious to resolve. By splitting, and alphabetically sorting, OBJS style lines into one object per line, conflicts should be less frequent, and easier to resolve when they still occur. Author: Andres Freund Discussion: https://postgr.es/m/20191029200901.vww4idgcxv74cwes@alap3.anarazel.de
2019-10-25  Make the order of the header file includes consistent in non-backend modules.  (Amit Kapila)
Similar to commit 7e735035f2, this commit makes the order of header file inclusion consistent for non-backend modules. In passing, fix the case where we were using angle brackets (<>) for the local module includes instead of quotes (""). Author: Vignesh C Reviewed-by: Amit Kapila Discussion: https://postgr.es/m/CALDaNm2Sznv8RR6Ex-iJO6xAdsxgWhCoETkaYX=+9DW3q0QCfA@mail.gmail.com
2019-10-17  Fix parallel restore of FKs to partitioned tables  (Alvaro Herrera)
When an FK constraint is created, it needs the index on the referenced table to exist and be valid. When doing parallel pg_restore and the referenced table was partitioned, this condition can sometimes not be met, because pg_dump didn't emit sufficient object dependencies to ensure so; this means that parallel pg_restore would fail in certain conditions. Fix by having pg_dump make the FK constraint object dependent on the partition attachment objects for the constraint's referenced index. This has been broken since f56f8f8da6af, so backpatch to Postgres 12. Discussion: https://postgr.es/m/20191005224333.GA9738@alvherre.pgsql
2019-09-25  Split out recovery conf-writing code from pg_basebackup  (Alvaro Herrera)
... into a new file, fe_utils/recovery_gen.c. This can later be used by pg_rewind. Authors: Paul Guo, Jimmy Yih, Ashwin Agrawal. A few tweaks by Álvaro Herrera Reviewed-by: Michaël Paquier Discussion: https://postgr.es/m/CAEET0ZEffUkXc48pg2iqARQgGRYDiiVxDu+yYek_bTwJF+q=Uw@mail.gmail.com
2019-07-30  Fix memory leak coming from simple lists built in reindexdb  (Michael Paquier)
When building a list of relations for a parallel processing of a schema or a database (or just a single-entry list for the non-parallel case with the database name), the list is allocated and built on-the-fly for each database processed, leaking after one database-level reindex is done. This accumulates leaks when processing all databases, and could become a visible issue with thousands of relations. This is fixed by introducing a new routine in simple_list.c to free all the elements in a simple list made of strings or OIDs. The header of the list may be using a variable declaration or an allocated pointer, so we don't have a routine to free this part to keep the interface simple. Per report from coverity for an issue introduced by 5ab892c, and valgrind complains about the leak as well. The idea to introduce a new routine in simple_list.c is from Tom Lane. Author: Michael Paquier Reviewed-by: Tom Lane
2019-07-04  Use appendStringInfoString and appendPQExpBufferStr where possible  (David Rowley)
This changes various places where appendPQExpBuffer was used in places where it was possible to use appendPQExpBufferStr, and likewise for appendStringInfo and appendStringInfoString. This is really just a stylistic improvement, but there are also small performance gains to be had from doing this. Discussion: http://postgr.es/m/CAKJS1f9P=M-3ULmPvr8iCno8yvfDViHibJjpriHU8+SXUgeZ=w@mail.gmail.com
2019-06-08  Update stale comments, and fix comment typos.  (Noah Misch)
2019-05-22  Phase 2 pgindent run for v12.  (Tom Lane)
Switch to 2.1 version of pg_bsd_indent. This formats multiline function declarations "correctly", that is with additional lines of parameter declarations indented to match where the first line's left parenthesis is. Discussion: https://postgr.es/m/CAEepm=0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug@mail.gmail.com
2019-05-14  Move logging.h and logging.c from src/fe_utils/ to src/common/.  (Tom Lane)
The original placement of this module in src/fe_utils/ is ill-considered, because several src/common/ modules have dependencies on it, meaning that libpgcommon and libpgfeutils now have mutual dependencies. That makes it pointless to have distinct libraries at all. The intended design is that libpgcommon is lower-level than libpgfeutils, so only dependencies from the latter to the former are acceptable. We already have the precedent that fe_memutils and a couple of other modules in src/common/ are frontend-only, so it's not stretching anything out of whack to treat logging.c as a frontend-only module in src/common/. To the extent that such modules help provide a common frontend/backend environment for the rest of common/ to use, it's a reasonable design. (logging.c does not yet provide an ereport() emulation, but one can dream.) Hence, move these files over, and revert basically all of the build-system changes made by commit cc8d41511. There are no places that need to grow new dependencies on libpgcommon, further reinforcing the idea that this is the right solution. Discussion: https://postgr.es/m/a912ffff-f6e4-778a-c86a-cf5c47a12933@2ndquadrant.com
2019-04-05  Ensure consistent name matching behavior in processSQLNamePattern().  (Tom Lane)
Prior to v12, if you used a collation-sensitive regex feature in a pattern handled by processSQLNamePattern() (for instance, \d '\\w+' in psql), the behavior you got matched the database's default collation. Since commit 586b98fdf you'd usually get C-collation behavior, because the catalog "name"-type columns are now marked as COLLATE "C". Add explicit COLLATE specifications to restore the prior behavior. (Note for whoever writes the v12 release notes: the need for this shows that while 586b98fdf preserved pre-v12 behavior of "name" columns for simple comparison operators, it changed the behavior of regex operators on those columns. Although this patch fixes it for pattern matches generated by our own tools, user-written queries will still be affected. So we'd better mention this issue as a compatibility item.) Daniel Vérité Discussion: https://postgr.es/m/701e51f0-0ec0-4e70-a365-1958d66dd8d2@manitou-mail.org
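Roughly, the effect is that a pattern such as \d '\\w+' in psql now produces a comparison along these lines (a simplified sketch of the generated SQL, not its exact text):

    SELECT c.relname
    FROM pg_catalog.pg_class c
    WHERE c.relname OPERATOR(pg_catalog.~) '^(\w+)$' COLLATE pg_catalog.default;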
2019-04-01  Unified logging system for command-line programs  (Peter Eisentraut)
This unifies the various ad hoc logging (message printing, error printing) systems used throughout the command-line programs.

Features:

- Program name is automatically prefixed.

- Message string does not end with newline.  This removes a common source of inconsistencies and omissions.

- Additionally, a final newline is automatically stripped, simplifying use of PQerrorMessage() etc., another common source of mistakes.

- I converted error message strings to use %m where possible.

- As a result of the above several points, more translatable message strings can be shared between different components and between frontends and backend, without gratuitous punctuation or whitespace differences.

- There is support for setting a "log level".  This is not meant to be user-facing, but can be used internally to implement debug or verbose modes.

- Lazy argument evaluation, so no significant overhead if logging at some level is disabled.

- Some color in the messages, similar to gcc and clang.  Set PG_COLOR=auto to try it out.  Some colors are predefined, but can be customized by setting PG_COLORS.

- Common files (common/, fe_utils/, etc.) can handle logging much more simply by just using one API without worrying too much about the context of the calling program, requiring callbacks, or having to pass "progname" around everywhere.

- Some programs called setvbuf() to make sure that stderr is unbuffered, even on Windows.  But not all programs did that.  This is now done centrally.

Soft goals:

- Reduces vertical space use and visual complexity of error reporting in the source code.

- Encourages more deliberate classification of messages.  For example, in some cases it wasn't clear without analyzing the surrounding code whether a message was meant as an error or just an info.

- Concepts and terms are vaguely aligned with popular logging frameworks such as log4j and Python logging.

This is all just about printing stuff out.  Nothing affects program flow (e.g., fatal exits).  The uses are just too varied to do that.  Some existing code had wrappers that do some kind of print-and-exit, and I adapted those.

I tried to keep the output mostly the same, but there is a lot of historical baggage to unwind and special cases to consider, and I might not always have succeeded.  One significant change is that pg_rewind used to write all error messages to stdout.  That is now changed to stderr.

Reviewed-by: Donald Dong <xdong@csumb.edu>
Reviewed-by: Arthur Zakirov <a.zakirov@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/flat/6a609b43-4f57-7348-6480-bd022f924310@2ndquadrant.com
2019-03-25  pgbench: Remove \cset  (Alvaro Herrera)
Partial revert of commit 6260cc550b0e, "pgbench: add \cset and \gset commands". While \gset is widely considered a useful and necessary tool for user-defined benchmarks, \cset does not have as much value, and its implementation was considered "not to be up to project standards" (though I, Álvaro, can't quite understand exactly how). Therefore, remove \cset. Author: Fabien Coelho Discussion: https://postgr.es/m/alpine.DEB.2.21.1903230716030.18811@lancre Discussion: https://postgr.es/m/201901101900.mv7zduch6sad@alvherre.pgsql
2019-02-13  More unconstify use  (Peter Eisentraut)
Replace casts whose only purpose is to cast away const with the unconstify() macro. Discussion: https://www.postgresql.org/message-id/flat/53a28052-f9f3-1808-fed9-460fd43035ab%402ndquadrant.com
2019-01-10  pgbench: add \cset and \gset commands  (Alvaro Herrera)
These commands allow assignment of values produced by queries to pgbench variables, where they can be used by further commands. \gset terminates a command sequence (just like a bare semicolon); \cset separates multiple queries in a compound command, like an escaped semicolon (\;). A prefix can be provided to the \-command and is prepended to the name of each output column to produce the final variable name. This feature allows pgbench scripts to react meaningfully to the actual database contents, allowing more powerful benchmarks to be written. Authors: Fabien Coelho, Álvaro Herrera Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> Reviewed-by: Stephen Frost <sfrost@snowman.net> Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Tatsuo Ishii <ishii@sraoss.co.jp> Reviewed-by: Rafia Sabih <rafia.sabih@enterprisedb.com> Discussion: https://postgr.es/m/alpine.DEB.2.20.1607091005330.3412@sto
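A minimal pgbench script using \gset might look like this (illustrative only; note that \cset was later removed again, per the 2019-03-25 entry above):

    -- capture the result columns of a query into pgbench variables
    SELECT max(aid) AS maxaid FROM pgbench_accounts \gset
    \set aid random(1, :maxaid)
    SELECT abalance FROM pgbench_accounts WHERE aid = :aid;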
2019-01-06  Replace the data structure used for keyword lookup.  (Tom Lane)
Previously, ScanKeywordLookup was passed an array of string pointers. This had some performance deficiencies: the strings themselves might be scattered all over the place depending on the compiler (and some quick checking shows that at least with gcc-on-Linux, they indeed weren't reliably close together). That led to very cache-unfriendly behavior as the binary search touched strings in many different pages. Also, depending on the platform, the string pointers might need to be adjusted at program start, so that they couldn't be simple constant data. And the ScanKeyword struct had been designed with an eye to 32-bit machines originally; on 64-bit it requires 16 bytes per keyword, making it even more cache-unfriendly. Redesign so that the keyword strings themselves are allocated consecutively (as part of one big char-string constant), thereby eliminating the touch-lots-of-unrelated-pages syndrome. And get rid of the ScanKeyword array in favor of three separate arrays: uint16 offsets into the keyword array, uint16 token codes, and uint8 keyword categories. That reduces the overhead per keyword to 5 bytes instead of 16 (even less in programs that only need one of the token codes and categories); moreover, the binary search only touches the offsets array, further reducing its cache footprint. This also lets us put the token codes somewhere else than the keyword strings are, which avoids some unpleasant build dependencies. While we're at it, wrap the data used by ScanKeywordLookup into a struct that can be treated as an opaque type by most callers. That doesn't change things much right now, but it will make it less painful to switch to a hash-based lookup method, as is being discussed in the mailing list thread. Most of the change here is associated with adding a generator script that can build the new data structure from the same list-of-PG_KEYWORD header representation we used before. The PG_KEYWORD lists that plpgsql and ecpg used to embed in their scanner .c files have to be moved into headers, and the Makefiles have to be taught to invoke the generator script. This work is also necessary if we're to consider hash-based lookup, since the generator script is what would be responsible for constructing a hash table. Aside from saving a few kilobytes in each program that includes the keyword table, this seems to speed up raw parsing (flex+bison) by a few percent. So it's worth doing even as it stands, though we think we can gain even more with a follow-on patch to switch to hash-based lookup. John Naylor, with further hacking by me Discussion: https://postgr.es/m/CAJVSVGXdFVU2sgym89XPL=Lv1zOS5=EHHQ8XWNzFL=mTXkKMLw@mail.gmail.com
2019-01-02  Update copyright for 2019  (Bruce Momjian)
Backpatch-through: certain files through 9.4
2018-11-26  Fix translation of special characters in psql's LaTeX output modes.  (Tom Lane)
latex_escaped_print() mistranslated \ and failed to provide any translation for # ^ and ~, all of which would typically lead to LaTeX document syntax errors. In addition it didn't translate < > and |, which would typically render as unexpected characters. To some extent this represents shortcomings in ancient versions of LaTeX, which if memory serves had no easy way to render these control characters as ASCII text. But that's been fixed for, um, decades. In any case there is no value in emitting guaranteed-to-fail output for these characters. Noted while fooling with test cases added by commit 9a98984f4. Back-patch the code change to all supported versions.
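A quick way to observe the fix is to print the affected characters in LaTeX output mode (a minimal illustration, not taken from the commit):

    \pset format latex
    SELECT '\ # ^ ~ < > |' AS specials;   -- now escaped to valid LaTeX text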
2018-11-26  Add CSV table output mode in psql.  (Tom Lane)
"\pset format csv", or --csv, selects comma-separated values table format. This is compliant with RFC 4180, except that we aren't too picky about whether the record separator is LF or CRLF; also, the user may choose a field separator other than comma. This output format is directly compatible with the server's COPY CSV format, and will also be useful as input to other programs. It's considerably safer for that purpose than the old recommendation to use "unaligned" format, since the latter couldn't handle data containing the field separator character. Daniel Vérité, reviewed by Fabien Coelho and David Fetter, some tweaking by me Discussion: https://postgr.es/m/a8de371e-006f-4f92-ab72-2bbe3ee78f03@manitou-mail.org
2018-11-13  Fix realfailN lexer rules to not make assumptions about input format.  (Tom Lane)
The realfail1 and realfail2 backup-prevention rules always returned token type FCONST, ignoring the possibility that what we've scanned is more appropriately described as ICONST. I think that at the time that code was added, it might actually have been safe to not distinguish; but since we started allowing AS-less aliases in SELECT target lists, it's definitely legal to have a number immediately followed by an identifier. In the SELECT case, it seems there's no visible consequence because make_const() will change the type back to integer anyway. But I'm worried that there are other contexts, or will be in future, where it's more important to get the constant's type right. Hence, use process_integer_literal to correctly determine which token type to return. Arguably this is a bug fix, but given the lack of evidence of user-visible problems, I'll refrain from back-patching. Discussion: https://postgr.es/m/21364.1542136808@sss.pgh.pa.us
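An illustrative edge case (not from the commit message): an integer immediately followed by an identifier is legal because of AS-less aliases, and the constant should be lexed as an integer rather than a float:

    SELECT 42e;   -- "42e" fails the float rules, backs up to "42",
                  -- and the trailing "e" becomes an AS-less column alias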
2018-11-13  Align ECPG lexer more closely with the core and psql lexers.  (Tom Lane)
Make a bunch of basically-cosmetic changes to reduce the diffs between the flex rules in scan.l, psqlscan.l, and pgc.l.  Reorder some code, adjust a lot of whitespace, sync some comments, make use of flex start condition scopes to do that.

There are a few non-cosmetic changes in the ECPG lexer:

* Bring over the decimalfail rule (and support function process_integer_literal) so that ECPG will lex "1..10" into the same tokens as the backend would.  I'm not sure this makes any visible difference to users, but I'm not sure it doesn't, either.

* <xdc><<EOF>> gets its own rule so as to produce a more on-point error message.

* Remove duplicate <SQL>{xdstart} rule.

John Naylor, with a few additional changes by me

Discussion: https://postgr.es/m/CAJVSVGWGqY9YBs2EwtRUkbNv=hXkN8yRPOoD1wxE6COgvvrz5g@mail.gmail.com
2018-08-23  Fix lexing of standard multi-character operators in edge cases.  (Andrew Gierth)
Commits c6b3c939b (which fixed the precedence of >=, <=, <> operators) and 865f14a2d (which added support for the standard => notation for named arguments) created a class of lexer tokens which look like multi-character operators but which have their own token IDs distinct from Op.  However, longest-match rules meant that following any of these tokens with another operator character, as in (1<>-1), would cause them to be incorrectly returned as Op.

The error here isn't immediately obvious, because the parser would usually still find the correct operator via the Op token, but there were more subtle problems:

1. If immediately followed by a comment or +-, >= <= <> would be given the old precedence of Op rather than the correct new precedence;

2. If followed by a comment, != would be returned as Op rather than as NOT_EQUAL, causing it not to be found at all;

3. If followed by a comment or +-, the => token for named arguments would be lexed as Op, causing the argument to be mis-parsed as a simple expression, usually causing an error.

Fix by explicitly checking for the operators in the {operator} code block in addition to all the existing special cases there.

Backpatch to 9.5 where the problem was introduced.

Analysis and patch by me; review by Tom Lane.

Discussion: https://postgr.es/m/87va851ppl.fsf@news-spur.riddles.org.uk
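Some concrete statements exercising the cases above (illustrative, not part of the commit message):

    SELECT (1<>-1);                 -- <> followed by another operator character
    SELECT 1 >=/* comment */ 0;     -- >= followed by a comment: wrong precedence before the fix
    SELECT 1 !=/* comment */ 2;     -- != followed by a comment: operator not found before the fix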
2018-08-23  Reduce an unnecessary O(N^3) loop in lexer.  (Andrew Gierth)
The lexer's handling of operators contained an O(N^3) hazard when dealing with long strings of + or - characters; it seems hard to prevent this case from being O(N^2), but the additional N multiplier was not needed. Backpatch all the way since this has been there since 7.x, and it presents at least a mild hazard in that trying to do Bind, PREPARE or EXPLAIN on a hostile query could take excessive time (without honouring cancels or timeouts) even if the query was never executed.
2018-08-17  Ensure schema qualification in pg_restore DISABLE/ENABLE TRIGGER commands.  (Tom Lane)
Previously, this code blindly followed the common coding pattern of passing PQserverVersion(AH->connection) as the server-version parameter of fmtQualifiedId. That works as long as we have a connection; but in pg_restore with text output, we don't. Instead we got a zero from PQserverVersion, which fmtQualifiedId interpreted as "server is too old to have schemas", and so the name went unqualified. That still accidentally managed to work in many cases, which is probably why this ancient bug went undetected for so long. It only became obvious in the wake of the changes to force dump/restore to execute with restricted search_path. In HEAD/v11, let's deal with this by ripping out fmtQualifiedId's server- version behavioral dependency, and just making it schema-qualify all the time. We no longer support pg_dump from servers old enough to need the ability to omit schema name, let alone restoring to them. (Also, the few callers outside pg_dump already didn't work with pre-schema servers.) In older branches, that's not an acceptable solution, so instead just tweak the DISABLE/ENABLE TRIGGER logic to ensure it will schema-qualify its output regardless of server version. Per bug #15338 from Oleg somebody. Back-patch to all supported branches. Discussion: https://postgr.es/m/153452458706.1316.5328079417086507743@wrigleys.postgresql.org
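The kind of statement involved looks like this (hypothetical table name; with the fix it is emitted schema-qualified even when pg_restore has no server connection):

    ALTER TABLE public.orders DISABLE TRIGGER ALL;
    -- ... restore of the table's data ...
    ALTER TABLE public.orders ENABLE TRIGGER ALL;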
2018-04-26  Post-feature-freeze pgindent run.  (Tom Lane)
Discussion: https://postgr.es/m/15719.1523984266@sss.pgh.pa.us
2018-04-08  Switch client-side code to include catalog/pg_foo_d.h not pg_foo.h.  (Tom Lane)
Everything of use to frontend code should now appear in the _d.h files, and making this change frees us from needing to worry about whether the catalog header files proper are frontend-safe. Remove src/interfaces/ecpg/ecpglib/pg_type.h entirely, as the previous commit reduced it to a confusingly-named wrapper around pg_type_d.h. In passing, make test_rls_hooks.c follow project convention of including our own files with #include "" not <>. Discussion: https://postgr.es/m/23690.1523031777@sss.pgh.pa.us