path: root/src/backend/utils
2016-01-05  Make the to_reg*() functions accept text not cstring.  (Tom Lane)
Using cstring as the input type was a poor decision, because that's not really a full-fledged type. In particular, it lacks implicit coercions from text or varchar, meaning that usages like to_regproc('foo'||'bar') wouldn't work; basically the only case that did work without explicit casting was a simple literal constant argument. The lack of field complaints about this suggests that hardly anyone is using these functions, so hopefully fixing it won't cause much of a compatibility problem. They've only been there since 9.4, anyway. Petr Korobeinikov
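For illustration, the kind of usage this enables (the particular functions looked up here are arbitrary examples, not taken from the commit):

    -- previously failed for lack of an implicit coercion to cstring;
    -- works now that the to_reg*() functions take text arguments
    SELECT to_regproc('pg_' || 'sleep');
    SELECT to_regclass('pg_catalog.' || 'pg_class');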
2016-01-05  Make pg_shseclabel available in early backend startup  (Alvaro Herrera)
While the in-core authentication mechanism doesn't need to access pg_shseclabel at all, it's reasonable to think that an authentication hook will want to look at the label for the role logging in, or for rows in other catalogs used during the authentication phase of startup. Catalog version bumped, because this changes the "is nailed" status for pg_shseclabel. Author: Adam Brightwell
2016-01-04  Adjust behavior of row_security GUC to match the docs.  (Tom Lane)
Some time back we agreed that row_security=off should not be a way to bypass RLS entirely, but only a way to get an error if it was being applied. However, the code failed to act that way for table owners. Per discussion, this is a must-fix bug for 9.5.0. Adjust the logic in rls.c to behave as expected; also, modify the error message to be more consistent with the new interpretation. The regression tests need minor corrections as well. Also update the comments about row_security in ddl.sgml to be correct. (The official description of the GUC in config.sgml is already correct.) I failed to resist the temptation to do some other very minor cleanup as well, such as getting rid of a duplicate extern declaration.
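A sketch of the intended behavior (table name hypothetical):

    -- as a user to whom the table's row-level security policies would apply:
    SET row_security = off;
    SELECT * FROM protected_tbl;
    -- now fails with an error along the lines of "query would be affected by
    -- row-level security policy", rather than silently bypassing the policies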
2016-01-04  Fix regrole and regnamespace output functions to do quoting, too.  (Tom Lane)
We discussed this but somehow failed to implement it...
2016-01-04  Fix regrole and regnamespace types to honor quoting like other reg* types.  (Tom Lane)
Aside from any consistency arguments, this is logically necessary because the I/O functions for these types also handle numeric OID values. Without a quoting rule it is impossible to distinguish numeric OIDs from role or namespace names that happen to contain only digits. Also change the to_regrole and to_regnamespace functions to dequote their arguments. While not logically essential, this seems like a good idea since the other to_reg* functions do it. Anyone who really wants raw lookup of an uninterpreted name can fall back on the time-honored solution of (SELECT oid FROM pg_namespace WHERE nspname = whatever). Report and patch by Jim Nasby, reviewed by Michael Paquier
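To illustrate the quoting rule:

    SELECT '10'::regrole;       -- unquoted digits are taken as a numeric OID
    SELECT '"10"'::regrole;     -- quoted, so looked up as a role literally named "10"
    SELECT to_regrole('"10"');  -- to_regrole now dequotes its argument the same way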
2016-01-03  Guard against null arguments in binary_upgrade_create_empty_extension().  (Tom Lane)
The CHECK_IS_BINARY_UPGRADE macro is not sufficient security protection if we're going to dereference pass-by-reference arguments before it. But in any case we really need to explicitly check PG_ARGISNULL for all the arguments of a non-strict function, not only the ones we expect null values for. Oversight in commits 30982be4e5019684e1772dd9170aaa53f5a8e894 and f92fc4c95ddcc25978354a8248d3df22269201bc. Found by Andreas Seltenreich. (The other usages in pg_upgrade_support.c seem safe.)
2016-01-02  Update copyright for 2016  (Bruce Momjian)
Backpatch certain files through 9.1
2016-01-01  Teach flatten_reloptions() to quote option values safely.  (Tom Lane)
flatten_reloptions() supposed that it didn't really need to do anything beyond inserting commas between reloption array elements. However, in principle the value of a reloption could be nearly anything, since the grammar allows a quoted string there. Any restrictions on it would come from validity checking appropriate to the particular option, if any. A reloption value that isn't a simple identifier or number could thus lead to dump/reload failures due to syntax errors in CREATE statements issued by pg_dump. We've gotten away with not worrying about this so far with the core-supported reloptions, but extensions might allow reloption values that cause trouble, as in bug #13840 from Kouhei Sutou. To fix, split the reloption array elements explicitly, and then convert any value that doesn't look like a safe identifier to a string literal. (The details of the quoting rule could be debated, but this way is safe and requires little code.) While we're at it, also quote reloption names if they're not safe identifiers; that may not be a likely problem in the field, but we might as well try to be bulletproof here. It's been like this for a long time, so back-patch to all supported branches. Kouhei Sutou, adjusted some by me
2016-01-01  Add some more defenses against silly estimates to gincostestimate().  (Tom Lane)

A report from Andy Colson showed that gincostestimate() was not being nearly paranoid enough about whether to believe the statistics it finds in the index metapage. The problem is that the metapage stats (other than the pending-pages count) are only updated by VACUUM, and in the worst case could still reflect the index's original empty state even when it has grown to many entries. We attempted to deal with that by scaling up the stats to match the current index size, but if nEntries is zero then scaling it up still gives zero. Moreover, the proportion of pages that are entry pages vs. data pages vs. pending pages is unlikely to be estimated very well by scaling if the index is now orders of magnitude larger than before.

We can improve matters by expanding the use of the rule-of-thumb estimates I introduced in commit 7fb008c5ee59b040: if the index has grown by more than a cutoff amount (here set at 4X growth) since VACUUM, then use the rule-of-thumb numbers instead of scaling. This might not be exactly right but it seems much less likely to produce insane estimates.

I also improved both the scaling estimate and the rule-of-thumb estimate to account for numPendingPages, since it's reasonable to expect that that is accurate in any case, and certainly pages that are in the pending list are not either entry or data pages.

As a somewhat separate issue, adjust the estimation equations that are concerned with extra fetches for partial-match searches. These equations suppose that a fraction partialEntries / numEntries of the entry and data pages will be visited as a consequence of a partial-match search. Now, it's physically impossible for that fraction to exceed one, but our estimate of partialEntries is mostly bunk, and our estimate of numEntries isn't exactly gospel either, so we could arrive at a silly value. In the example presented by Andy we were coming out with a value of 100, leading to insane cost estimates. Clamp the fraction to one to avoid that.

Like the previous patch, back-patch to all supported branches; this problem can be demonstrated in one form or another in all of them.
2015-12-22  Remove unnecessary escaping in C character literals  (Peter Eisentraut)
'\"' is more commonly written simply as '"'.
2015-12-22  Allow omitting one or both boundaries in an array slice specifier.  (Tom Lane)
Omitted boundaries represent the upper or lower limit of the corresponding array subscript. This allows simpler specification of many common use-cases. (Revised version of commit 9246af6799819847faa33baf441251003acbb8fe) YUriy Zhuravlev
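A quick illustration of the new syntax:

    SELECT arr[2:], arr[:3], arr[:]
      FROM (VALUES (ARRAY[1,2,3,4,5])) AS t(arr);
    -- yields {2,3,4,5}, {1,2,3} and {1,2,3,4,5} respectively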
2015-12-22  Comment improvements for abbreviated keys.  (Robert Haas)
Peter Geoghegan and Robert Haas
2015-12-17  Use just one standalone-backend session for initdb's post-bootstrap steps.  (Tom Lane)

Previously, each subroutine in initdb fired up its own standalone backend session. Over time we'd grown as many as fifteen of these sessions, and the cumulative startup and shutdown work for them was getting pretty noticeable. Combining things so that all these steps share a single backend session cuts a good 10% off the total runtime of initdb, more if you're not fsync'ing.

The main stumbling block to doing this before was that some of the sessions were run with -j and some not. The improved definition of -j mode implemented by my previous commit makes it possible to fix that by running all the post-bootstrap steps with -j; we just have to use double instead of single newlines to end command strings. (This is only absolutely necessary around the VACUUM and CREATE DATABASE steps, since those can't be run in a transaction block. But it seems best to make them all use double newlines so that the commands remain separate for error-reporting purposes.)

A minor disadvantage is that since initdb can't tell how much of its output the backend has executed, we can no longer have the per-step progress reporting initdb used to print. But things are fast enough nowadays that that's not really all that useful anyway.

In passing, add more const decoration to some of the static arrays in initdb.c.
2015-12-14  Add missing CHECK_FOR_INTERRUPTS in lseg_inside_poly  (Alvaro Herrera)
Apparently, there are bugs in this code that cause it to loop endlessly. That bug still needs more research, but in the meantime it's clear that the loop is missing a check for interrupts so that it can be cancelled timely. Backpatch to 9.1 -- this has been missing since 49475aab8d0d.
2015-12-10  Improve some messages  (Peter Eisentraut)
2015-12-08  Improve performance in freeing memory contexts  (Kevin Grittner)
The single linked list of memory contexts could result in O(N^2) performance to free a set of contexts if they were not freed in reverse order of creation. In many cases the reverse order was used, but there were some significant exceptions that caused real-world performance problems. Rather than requiring all callers to care about the order in which contexts were freed, and hunting down and changing all existing cases where the wrong order was used, we add one pointer per memory context so that the implementation details are not so visible. Jan Wieck
2015-12-01  Make gincostestimate() cope with hypothetical GIN indexes.  (Tom Lane)
We tried to fetch statistics data from the index metapage, which does not work if the index isn't actually present. If the index is hypothetical, instead extrapolate some plausible internal statistics based on the index page count provided by the index-advisor plugin. There was already some code in gincostestimate() to invent internal stats in this way, but since it was only meant as a stopgap for pre-9.1 GIN indexes that hadn't been vacuumed since upgrading, it was pretty crude. If we want it to support index advisors, we should try a little harder. A small amount of testing says that it's better to estimate the entry pages as 90% of the index, not 100%. Also, estimating the number of entries (keys) as equal to the heap tuple count could be wildly wrong in either direction. Instead, let's estimate 100 entries per entry page. Perhaps someday somebody will want the index advisor to be able to provide these numbers more directly, but for the moment this should serve. Problem report and initial patch by Julien Rouhaud; modified by me to invent less-bogus internal statistics. Back-patch to all supported branches, since we've supported index advisors since 9.0.
2015-11-29  Avoid caching expression state trees for domain constraints across queries.  (Tom Lane)
In commit 8abb3cda0ddc00a0ab98977a1633a95b97068d4e I attempted to cache the expression state trees constructed for domain CHECK constraints for the life of the backend (assuming the domain's constraints don't get redefined). However, this turns out not to work very well, because execQual.c will run those state trees with ecxt_per_query_memory pointing to a query-lifespan context, and in some situations we'll end up with pointers into that context getting stored into the state trees. This happens in particular with SQL-language functions, as reported by Emre Hasegeli, but there are many other cases. To fix, keep only the expression plan trees for domain CHECK constraints in the typcache's data structure, and revert to performing ExecInitExpr (at least) once per query to set up expression state trees in the query's context. Eventually it'd be nice to undo this, but that will require some careful thought about memory management for expression state trees, and it seems far too late for any such redesign in 9.5. This way is still much more efficient than what happened before 8abb3cda0.
2015-11-28  Avoid doing encoding conversions by double-conversion via MULE_INTERNAL.  (Tom Lane)
Previously, we did many conversions for Cyrillic and Central European single-byte encodings by converting to a related MULE_INTERNAL coding scheme before converting to the destination. This seems unnecessarily inefficient. Moreover, if the conversion encounters an untranslatable character, the error message will confusingly complain about failure to convert to or from MULE_INTERNAL, rather than the user-visible encodings. Worse still, this approach results in some completely unnecessary conversion failures; there are cases where the chosen MULE subset lacks characters that exist in both of the user-visible encodings, causing a conversion failure that need not occur. This patch fixes the first two of those deficiencies by introducing a new local2local() conversion support subroutine for direct conversion between any two single-byte character sets, and adding new conversion tables where needed. However, I generated the new conversion tables by testing PG 9.5's behavior, so that the actual conversion behavior is bug-compatible with previous releases; the only user-visible behavior change is that the error messages for conversion failures are saner. Changes in the conversion behavior will probably ensue after discussion. Interestingly, although this approach requires more tables, the .so files actually end up smaller (at least on my x86_64 machine); the tables are smaller than the management code needed for double conversion. Per a complaint from Albe Laurenz.
2015-11-27  Update UCS_to_GB18030.pl with info about origin of the reference file.  (Tom Lane)
2015-11-27  Auto-generate file header comments in Unicode mapping files.  (Tom Lane)
Some of the Unicode/*.map files had identification comments added to them, evidently by hand. Others did not. Modify the generating scripts to produce these comments automatically, and update the generated files that lacked them. This is just minor cleanup as a by-product of trying to verify that the *.map files can indeed be reproduced from authoritative data. There are a depressingly large number that fail to reproduce from the claimed sources. I have not touched those in this commit, except for the JIS 2004-related files which required only a single comment update to match. Since this only affects comments, no need to consider a back-patch.
2015-11-26  Fix failure to consider failure cases in GetComboCommandId().  (Tom Lane)
Failure to initially palloc the comboCids array, or to realloc it bigger when needed, left combocid's data structures in an inconsistent state that would cause trouble if the top transaction continues to execute. Noted while examining a user complaint about the amount of memory used for this. (There's not much we can do about that, but it does point up that repalloc failure has a non-negligible chance of occurring here.) In HEAD/9.5, also avoid possible invocation of memcpy() with a null pointer in SerializeComboCIDState; cf commit 13bba0227.
2015-11-25  Improve div_var_fast(), mostly by making comments better.  (Tom Lane)
The integer overflow situation in div_var_fast() is a great deal more complicated than the pre-existing comments would suggest. Moreover, the comments were also flat out incorrect as to the precise statement of the maxdiv loop invariant. Upon clarifying that, it becomes apparent that the way in which we updated maxdiv after a carry propagation pass was overly slow, complex, and conservative: we can just reset it to one, which is much easier and also reduces the number of times carry propagation occurs. Fix that and improve the relevant comments. Since this is mostly a comment fix, with only a rather marginal performance boost, no need for back-patch. Tom Lane and Dean Rasheed
2015-11-20  Fix handling of inherited check constraints in ALTER COLUMN TYPE (again).  (Tom Lane)

The previous way of reconstructing check constraints was to do a separate "ALTER TABLE ONLY tab ADD CONSTRAINT" for each table in an inheritance hierarchy. However, that way has no hope of reconstructing the check constraints' own inheritance properties correctly, as pointed out in bug #13779 from Jan Dirk Zijlstra. What we should do instead is to do a regular "ALTER TABLE", allowing recursion, at the topmost table that has a particular constraint, and then suppress the work queue entries for inherited instances of the constraint.

Annoyingly, we'd tried to fix this behavior before, in commit 5ed6546cf, but we failed to notice that it wasn't reconstructing the pg_constraint field values correctly.

As long as I'm touching pg_get_constraintdef_worker anyway, tweak it to always schema-qualify the target table name; this seems like useful backup to the protections installed by commit 5f173040.

In HEAD/9.5, get rid of get_constraint_relation_oids, which is now unused. (I could alternatively have modified it to also return conislocal, but that seemed like a pretty single-purpose API, so let's not pretend it has some other use.) It's unused in the back branches as well, but I left it in place just in case some third-party code has decided to use it.

In HEAD/9.5, also rename pg_get_constraintdef_string to pg_get_constraintdef_command, as the previous name did nothing to explain what that entry point did differently from others (and its comment was equally useless). Again, that change doesn't seem like material for back-patching.

I did a bit of re-pgindenting in tablecmds.c in HEAD/9.5, as well.

Otherwise, back-patch to all supported branches.
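A minimal reproduction in the spirit of bug #13779 (table names are made up here):

    CREATE TABLE parent (f1 int CHECK (f1 > 0));
    CREATE TABLE child () INHERITS (parent);
    ALTER TABLE parent ALTER COLUMN f1 TYPE bigint;
    -- after the rebuild, the child's copy of the constraint should again be
    -- marked as inherited rather than as an independent local constraint:
    SELECT conrelid::regclass, conislocal, coninhcount
      FROM pg_constraint WHERE conname = 'parent_f1_check';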
2015-11-17  Fix possible internal overflow in numeric division.  (Tom Lane)
div_var_fast() postpones propagating carries in the same way as mul_var(), so it has the same corner-case overflow risk we fixed in 246693e5ae8a36f0, namely that the size of the carries has to be accounted for when setting the threshold for executing a carry propagation step. We've not devised a test case illustrating the brokenness, but the required fix seems clear enough. Like the previous fix, back-patch to all active branches. Dean Rasheed
2015-11-16  Message improvements  (Peter Eisentraut)
2015-11-16  Speed up ruleutils' name de-duplication code, and fix overlength-name case.  (Tom Lane)

Since commit 11e131854f8231a21613f834c40fe9d046926387, ruleutils.c has attempted to ensure that each RTE in a query or plan tree has a unique alias name. However, the code that was added for this could be quite slow, even as bad as O(N^3) if N identical RTE names must be replaced, as noted by Jeff Janes. Improve matters by building a transient hash table within set_rtable_names. The hash table in itself reduces the cost of detecting a duplicate from O(N) to O(1), and we can save another factor of N by storing the number of de-duplicated names already created for each entry, so that we don't have to re-try names already created. This way is probably a bit slower overall for small range tables, but almost by definition, such cases should not be a performance problem.

In principle the same problem applies to the column-name-de-duplication code; but in practice that seems to be less of a problem, first because N is limited since we don't support extremely wide tables, and second because duplicate column names within an RTE are fairly rare, so that in practice the cost is more like O(N^2) not O(N^3). It would be very much messier to fix the column-name code, so for now I've left that alone.

An independent problem in the same area was that the de-duplication code paid no attention to the identifier length limit, and would happily produce identifiers that were longer than NAMEDATALEN and wouldn't be unique after truncation to NAMEDATALEN. This could result in dump/reload failures, or perhaps even views that silently behaved differently than before. We can fix that by shortening the base name as needed. Fix it for both the relation and column name cases.

In passing, check for interrupts in set_rtable_names, just in case it's still slow enough to be an issue.

Back-patch to 9.3 where this code was introduced.
2015-11-15  Fix ruleutils.c's dumping of whole-row Vars in ROW() and VALUES() contexts.  (Tom Lane)
Normally ruleutils prints a whole-row Var as "foo.*". We already knew that that doesn't work at top level of a SELECT list, because the parser would treat the "*" as a directive to expand the reference into separate columns, not a whole-row Var. However, Joshua Yanovski points out in bug #13776 that the same thing happens at top level of a ROW() construct; and some nosing around in the parser shows that the same is true in VALUES(). Hence, apply the same workaround already devised for the SELECT-list case, namely to add a forced cast to the appropriate rowtype in these cases. (The alternative of just printing "foo" was rejected because it is difficult to avoid ambiguity against plain columns named "foo".) Back-patch to all supported branches.
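One way to hit the case described above (names hypothetical); the deparsed text now carries an explicit rowtype cast:

    CREATE TABLE t (a int, b text);
    CREATE VIEW v AS SELECT ROW(t) AS r FROM t;
    SELECT pg_get_viewdef('v'::regclass);
    -- the whole-row reference now deparses with a forced cast, something like
    -- ROW(t.*::t), so reloading the definition does not expand "t.*" into
    -- separate columns and change the view's meaning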
2015-11-14  Improve type numeric's calculations for ln(), log(), exp(), pow().  (Tom Lane)

Set the "rscales" for intermediate-result calculations to ensure that suitable numbers of significant digits are maintained throughout. The previous coding hadn't thought this through in any detail, and as a result could deliver results with many inaccurate digits, or in the worst cases even fail with divide-by-zero errors as a result of losing all nonzero digits of intermediate results.

In exp_var(), get rid entirely of the logic that separated the calculation into integer and fractional parts: that was neither accurate nor particularly fast. The existing range-reduction method of dividing by 2^n can be applied across the full input range instead of only 0..1, as long as we are careful to set an appropriate rscale for each step.

Also fix the logic in mul_var() for shortening the calculation when the caller asks for fewer output digits than an exact calculation would require. This bug doesn't affect simple multiplications since that code path asks for an exact result, but it does contribute to accuracy issues in the transcendental math functions.

In passing, improve performance of mul_var() a bit by forcing the shorter input to be on the left, thus reducing the number of iterations of the outer loop and probably also reducing the number of carry-propagation steps needed.

This is arguably a bug fix, but in view of the lack of field complaints, it does not seem worth the risk of back-patching.

Dean Rasheed
2015-11-14  Fix spelling error in postgresql.conf  (Bruce Momjian)
Report by Greg Clough
2015-11-11  Be more noisy about "wrong number of nailed relations" initfile problems.  (Tom Lane)
In commit 5d1ff6bd559ea8df1b7302e245e690b01b9a4fa4 I added some logic to relcache.c to try to ensure that the regression tests would fail if we made a mistake about which relations belong in the relcache init files. I'm quite sure I tested that, but I must have done so only for the non-shared-catalog case, because a report from Adam Brightwell showed that the regression tests still pass just fine if we bollix the shared-catalog init file in the way this code was supposed to catch. The reason is that that file gets loaded before we do client authentication, so the WARNING is not sent to the client, only to the postmaster log, where it's far too easily missed. The least Rube Goldbergian answer to this is to put an Assert(false) after the elog(WARNING). That will certainly get developers' attention, while not breaking production builds' ability to recover from corner cases with similar symptoms. Since this is only of interest to developers, there seems no need for a back-patch, even though the previous commit went into all branches.
2015-11-11  Generate parallel sequential scan plans in simple cases.  (Robert Haas)

Add a new flag, consider_parallel, to each RelOptInfo, indicating whether a plan for that relation could conceivably be run inside of a parallel worker. Right now, we're pretty conservative: for example, it might be possible to defer applying a parallel-restricted qual in a worker, and later do it in the leader, but right now we just don't try to parallelize access to that relation. That's probably the right decision in most cases, anyway.

Using the new flag, generate parallel sequential scan plans for plain baserels, meaning that we now have parallel sequential scan in PostgreSQL. The logic here is pretty unsophisticated right now: the costing model probably isn't right in detail, and we can't push joins beneath Gather nodes, so the number of plans that can actually benefit from this is pretty limited right now. Lots more work is needed. Nevertheless, it seems time to enable this functionality so that all this code can actually be tested easily by users and developers.

Note that, if you wish to test this functionality, it will be necessary to set max_parallel_degree to a value greater than the default of 0. Once a few more loose ends have been tidied up here, we might want to consider changing the default value of this GUC, but I'm leaving it alone for now.

Along the way, fix a bug in cost_gather: the previous coding thought that a Gather node's transfer overhead should be costed on the basis of the relation size rather than the number of tuples that actually need to be passed off to the leader.

Patch by me, reviewed in earlier versions by Amit Kapila.
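To experiment with it, something along these lines should do (table and function names hypothetical):

    SET max_parallel_degree = 2;   -- the default of 0 disables parallel plans
    EXPLAIN SELECT * FROM big_tbl WHERE slow_func(payload);
    -- given a large enough table, the plan should show a Gather node
    -- above a Parallel Seq Scan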
2015-11-07  Add "xid <> xid" and "xid <> int4" operators.  (Tom Lane)
The corresponding "=" operators have been there a long time, and not having their negators is a bit of a nuisance. Michael Paquier
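For instance, the new negators allow comparisons such as (catalog chosen arbitrarily):

    SELECT relname, xmin FROM pg_class WHERE xmin <> '1'::xid;  -- xid <> xid
    SELECT relname, xmin FROM pg_class WHERE xmin <> 1;         -- xid <> int4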
2015-11-06  Remove unnecessary cast in previous commit.  (Robert Haas)
Noted by Kyotaro Horiguchi, who also reviewed the previous patch, but I failed to notice his review before committing.
2015-11-06  Add sort support routine for the UUID data type.  (Robert Haas)
This introduces a simple encoding scheme to produce abbreviated keys: pack as many bytes of each UUID as will fit into a Datum. On little-endian machines, a byteswap is also performed; the abbreviated comparator can therefore just consist of a simple 3-way unsigned integer comparison. The purpose of this change is to speed up sorting data on a column of type UUID. Peter Geoghegan
2015-11-06  pg_size_pretty: Format negative values similar to positive ones.  (Robert Haas)
Previously, negative values were always displayed in bytes, regardless of how large they were. Adrian Vondendriesch, reviewed by Julien Rouhaud and myself
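For instance:

    SELECT pg_size_pretty(-10485760::bigint);
    -- previously "-10485760 bytes"; now "-10 MB", mirroring the positive case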
2015-11-05  Fix erroneous hash calculations in gin_extract_jsonb_path().  (Tom Lane)
The jsonb_path_ops code calculated hash values inconsistently in some cases involving nested arrays and objects. This would result in queries possibly not finding entries that they should find, when using a jsonb_path_ops GIN index for the search. The problem cases involve JSONB values that contain both scalars and sub-objects at the same nesting level, for example an array containing both scalars and sub-arrays. To fix, reset the current stack->hash after processing each value or sub-object, not before; and don't try to be cute about the outermost level's initial hash. Correcting this means that existing jsonb_path_ops indexes may now be inconsistent with the new hash calculation code. The symptom is the same --- searches not finding entries they should find --- but the specific rows affected are likely to be different. Users will need to REINDEX jsonb_path_ops indexes to make sure that all searches work as expected. Per bug #13756 from Daniel Cheng. Back-patch to 9.4 where the faulty logic was introduced.
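Per the note above, affected indexes need to be rebuilt after updating; for example (index name hypothetical):

    REINDEX INDEX my_jsonb_path_ops_idx;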
2015-11-03  Allow postgres_fdw to ship extension funcs/operators for remote execution.  (Tom Lane)
The user can whitelist specified extension(s) in the foreign server's options, whereupon we will treat immutable functions and operators of those extensions as candidates to be sent for remote execution. Whitelisting an extension in this way basically promises that the extension exists on the remote server and behaves compatibly with the local instance. We have no way to prove that formally, so we have to rely on the user to get it right. But this seems like something that people can usually get right in practice. We might in future allow functions and operators to be whitelisted individually, but extension granularity is a very convenient special case, so it got done first. The patch as-committed lacks any regression tests, which is unfortunate, but introducing dependencies on other extensions for testing purposes would break "make installcheck" scenarios, which is worse. I have some ideas about klugy ways around that, but it seems like material for a separate patch. For the moment, leave the problem open. Paul Ramsey, hacked up a bit more by me
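Declaring the trusted extensions looks roughly like this (server name hypothetical; cube and seg stand in for whichever extensions exist on both ends):

    ALTER SERVER pgserver OPTIONS (ADD extensions 'cube,seg');
    -- immutable functions and operators belonging to cube and seg are now
    -- candidates for execution on the remote server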
2015-11-03  Improve comments about abbreviation abort.  (Robert Haas)
Peter Geoghegan
2015-11-03  Correct tiny inaccuracy in strxfrm cache comment.  (Robert Haas)
Peter Geoghegan
2015-11-02  Remove some more dead Alpha-specific code.  (Tom Lane)
2015-11-02  Fix problems with ParamListInfo serialization mechanism.  (Robert Haas)
Commit d1b7c1ffe72e86932b5395f29e006c3f503bc53d introduced a mechanism for serializing a ParamListInfo structure to be passed to a parallel worker. However, this mechanism failed to handle external expanded values, as pointed out by Noah Misch. Repair. Moreover, plpgsql_param_fetch requires adjustment because the serialization mechanism needs it to skip evaluating unused parameters just as we would do when it is called from copyParamList, but params == estate->paramLI in that case. To fix, make the bms_is_member test in that function unconditional. Finally, have setup_param_list set a new ParamListInfo field, paramMask, to the parameters actually used in the expression, so that we don't try to fetch those that are not needed when serializing a parameter list. This isn't necessary for correctness, but it makes the performance of the parallel executor code comparable to what we do for cases involving cursors. Design suggestions and extensive review by Noah Misch. Patch by me.
2015-10-28  Message style improvements  (Peter Eisentraut)
Message style, plurals, quoting, spelling, consistency with similar messages
2015-10-22  Remove redundant CREATEUSER/NOCREATEUSER options in CREATE ROLE et al.  (Tom Lane)
Once upon a time we did not have a separate CREATEROLE privilege, and CREATEUSER effectively meant SUPERUSER. When we invented CREATEROLE (in 8.1) we also added SUPERUSER so as to have a less confusing keyword for this role property. However, we left CREATEUSER in place as a deprecated synonym for SUPERUSER, because of backwards-compatibility concerns. It's still there and is still confusing people, as for example in bug #13694 from Justin Catterson. 9.6 will be ten years or so later, which surely ought to be long enough to end the deprecation and just remove these old keywords. Hence, do so.
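In other words (role name hypothetical):

    -- no longer accepted:
    --   CREATE ROLE admin CREATEUSER;
    -- say what is actually meant instead:
    CREATE ROLE admin SUPERUSER;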
2015-10-20  Fix incorrect translation of minus-infinity datetimes for json/jsonb.  (Tom Lane)
Commit bda76c1c8cfb1d11751ba6be88f0242850481733 caused both plus and minus infinity to be rendered as "infinity", which is not only wrong but inconsistent with the pre-9.4 behavior of to_json(). Fix that by duplicating the coding in date_out/timestamp_out/timestamptz_out more closely. Per bug #13687 from Stepan Perlov. Back-patch to 9.4, like the previous commit. In passing, also re-pgindent json.c, since it had gotten a bit messed up by recent patches (and I was already annoyed by indentation-related problems in back-patching this fix ...)
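The case in question, roughly:

    SELECT to_json('-infinity'::timestamptz);
    -- now renders as "-infinity"; previously both infinities came out as "infinity"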
2015-10-20  Put back ssl_renegotiation_limit parameter, but only allow 0.  (Robert Haas)
Per a report from Shay Rojansky, Npgsql sends ssl_renegotiation_limit=0 in the startup packet because it does not support renegotiation; other clients which have not attempted to support renegotiation might well behave similarly. The recent removal of this parameter forces them to break compatibility with either current PostgreSQL versions, or previous ones. Per discussion, the best solution is to accept the parameter but only allow a value of 0. Shay Rojansky, edited a little by me.
2015-10-20  Be a bit more rigorous about how we cache strcoll and strxfrm results.  (Robert Haas)
Commit 0e57b4d8bd9674adaf5747421b3255b85e385534 contained some clever logic that attempted to make sure that we couldn't get confused about whether the last thing we cached was a strcoll() result or a strxfrm() result, but it wasn't quite clever enough, because we can perform further abbreviations after having already performed some comparisons. Introduce an explicit flag in the hopes of making this watertight. Peter Geoghegan, reviewed by me.
2015-10-20  Remove obsolete comment.  (Robert Haas)
Peter Geoghegan
2015-10-16  Remove volatile qualifiers from dynahash.c, shmem.c, and sinvaladt.c  (Robert Haas)
Prior to commit 0709b7ee72e4bc71ad07b7120acd117265ab51d0, access to variables within a spinlock-protected critical section had to be done through a volatile pointer, but that should no longer be necessary. Thomas Munro
2015-10-15  Fix NULL handling in datum_to_jsonb().  (Tom Lane)
The function failed to adhere to its specification that the "tcategory" argument should not be examined when the input value is NULL. This resulted in a crash in some cases. Per bug #13680 from Boyko Yordanov. In passing, re-pgindent some recent changes in jsonb.c, and fix a rather ungrammatical comment. Diagnosis and patch by Michael Paquier, cosmetic changes by me