path: root/src/backend/utils/adt
2014-12-02  Improve error messages for malformed array input strings.  (Tom Lane)
Make the error messages issued by array_in() uniformly follow the style

    ERROR: malformed array literal: "actual input string"
    DETAIL: specific complaint here

and rewrite many of the specific complaints to be clearer.

The immediate motivation for doing this is a complaint from Josh Berkus that json_to_record() produced an unintelligible error message when dealing with an array item, because it tries to feed the JSON-format array value to array_in(). Really it ought to be smart enough to perform JSON-to-Postgres array conversion, but that's a future feature, not a bug fix. In the meantime, this change is something we agreed we could back-patch into 9.4, and it should help de-confuse things a bit.
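For illustration, a malformed literal now reports along these lines (the DETAIL wording here is a sketch of one of the rewritten complaints, not quoted from the patch):

    SELECT '{1,2'::int[];
    -- ERROR:  malformed array literal: "{1,2"
    -- DETAIL:  Unexpected end of input.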
2014-12-02  Fix JSON aggregates to work properly when final function is re-executed.  (Tom Lane)
Davide S. reported that json_agg() sometimes produced multiple trailing right brackets. This turns out to be because json_agg_finalfn() attaches the final right bracket, and was doing so by modifying the aggregate state in-place. That's verboten, though unfortunately it seems there's no way for nodeAgg.c to check for such mistakes. Fix that back to 9.3 where the broken code was introduced. In 9.4 and HEAD, likewise fix json_object_agg(), which had copied the erroneous logic. Make some cosmetic cleanups as well.
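Aggregates invoked as window functions apply the final function repeatedly to the same transition state, which is one way the in-place modification could bite; a hypothetical reproducer along those lines (not taken from the original report):

    -- each row re-runs json_agg_finalfn on the shared state; with the bug,
    -- every call appended another ']' in place:
    SELECT json_agg(i) OVER (ORDER BY i) FROM generate_series(1, 3) AS i;
    -- each row's result should end in exactly one ']'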
2014-12-01  Guard against bad "dscale" values in numeric_recv().  (Tom Lane)
We were not checking to see if the supplied dscale was valid for the given digit array when receiving binary-format numeric values. While dscale can validly be more than the number of nonzero fractional digits, it shouldn't be less; that case causes fractional digits to be hidden on display even though they're there and participate in arithmetic.

Bug #12053 from Tommaso Sala indicates that there's at least one broken client library out there that sometimes supplies an incorrect dscale value, leading to strange behavior. This suggests that simply throwing an error might not be the best response; it would lead to failures in applications that might seem to be working fine today. What seems the least risky fix is to truncate away any digits that would be hidden by dscale. This preserves the existing behavior in terms of what will be printed for the transmitted value, while preventing subsequent arithmetic from producing results inconsistent with that.

In passing, throw a specific error for the case of dscale being outside the range that will fit into a numeric's header. Before you got "value overflows numeric format", which is a bit misleading.

Back-patch to all supported branches.
2014-12-01  Fix hstore_to_json_loose's detection of valid JSON number values.  (Andrew Dunstan)
We expose a function IsValidJsonNumber that internally calls the lexer for json numbers. That allows us to use the same test everywhere, instead of inventing a broken test for hstore conversions. The new function is also used in datum_to_json, whose inline validation code has been moved into it.

Backpatch to 9.3 where hstore_to_json_loose was introduced.
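A sketch of the user-visible effect (requires the hstore extension; key order in the output is not significant):

    CREATE EXTENSION IF NOT EXISTS hstore;
    -- values that pass the JSON number lexer become JSON numbers;
    -- near-misses such as "0x1" stay quoted strings:
    SELECT hstore_to_json_loose('a=>1, b=>2.5, c=>0x1'::hstore);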
2014-11-25  Support arrays as input to array_agg() and ARRAY(SELECT ...).  (Tom Lane)
These cases formerly failed with errors about "could not find array type for data type". Now they yield arrays of the same element type and one higher dimension.

The implementation involves creating functions with API similar to the existing accumArrayResult() family. I (tgl) also extended the base family by adding an initArrayResult() function, which allows callers to avoid special-casing the zero-inputs case if they just want an empty array as result. (Not all do, so the previous calling convention remains valid.) This allowed simplifying some existing code in xml.c and plperl.c.

Ali Akbar, reviewed by Pavel Stehule, significantly modified by me
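A minimal sketch of the new behavior:

    -- aggregating arrays now produces an array of one higher dimension:
    SELECT array_agg(a) FROM (VALUES (ARRAY[1,2]), (ARRAY[3,4])) t(a);
    -- {{1,2},{3,4}}
    SELECT ARRAY(SELECT ARRAY[i, i * 2] FROM generate_series(1, 2) i);
    -- {{1,2},{2,4}}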
2014-11-24  Add infrastructure to save and restore GUC values.  (Robert Haas)
This is further infrastructure for parallelism.

Amit Khandekar, Noah Misch, Robert Haas
2014-11-21  Rearrange CustomScan API.  (Tom Lane)
Make it work more like FDW plans do: instead of assuming that there are expressions in a CustomScan plan node that the core code doesn't know about, insist that all subexpressions that need planner attention be in a "custom_exprs" list in the Plan representation. (Of course, the custom plugin can break the list apart again at executor initialization.) This lets us revert the parts of the patch that exposed setrefs.c and subselect.c processing to the outside world. Also revert the GetSpecialCustomVar stuff in ruleutils.c; that concept may work in future, but it's far from fully baked right now.
2014-11-13  Move the guts of our Levenshtein implementation into core.  (Robert Haas)
The hope is that we can use this to produce better diagnostics in some cases.

Peter Geoghegan, reviewed by Michael Paquier, with some further changes by me.
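The SQL-callable wrapper still lives in the fuzzystrmatch extension; a quick sketch of exercising the moved code from SQL:

    CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;
    SELECT levenshtein('kitten', 'sitting');   -- 3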
2014-11-11  Add generate_series(numeric, numeric).  (Fujii Masao)
Платон Малюгин

Reviewed by Michael Paquier, Ali Akbar and Marti Raudsepp
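A minimal sketch of the new variants (the three-argument step form is assumed to be part of the same addition):

    SELECT generate_series(1.5, 4.7);        -- 1.5, 2.5, 3.5, 4.5
    SELECT generate_series(0.1, 0.5, 0.2);   -- 0.1, 0.3, 0.5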
2014-11-07  Introduce custom path and scan providers.  (Robert Haas)
This allows extension modules to define their own methods for scanning a relation, and get the core code to use them. It's unclear as yet how much use this capability will find, but we won't find out if we never commit it.

KaiGai Kohei, reviewed at various times and in various levels of detail by Shigeru Hanada, Tom Lane, Andres Freund, Álvaro Herrera, and myself.
2014-11-07  BRIN: Block Range Indexes  (Alvaro Herrera)
BRIN is a new index access method intended to accelerate scans of very large tables, without the maintenance overhead of btrees or other traditional indexes. They work by maintaining "summary" data about block ranges.

Bitmap index scans work by reading each summary tuple and comparing them with the query quals; all pages in the range are returned in a lossy TID bitmap if the quals are consistent with the values in the summary tuple, otherwise not. Normal index scans are not supported because these indexes do not store TIDs.

As new tuples are added into the index, the summary information is updated (if the block range in which the tuple is added is already summarized) or not; in the latter case, a subsequent pass of VACUUM or the brin_summarize_new_values() function will create the summary information.

For data types with natural 1-D sort orders, the summary info consists of the maximum and the minimum values of each indexed column within each page range. This type of operator class we call "Minmax", and we supply a bunch of them for most data types with B-tree opclasses. Since the BRIN code is generalized, other approaches are possible for things such as arrays, geometric types, ranges, etc; even for things such as enum types we could do something different than minmax with better results. In this commit I only include minmax.

Catalog version bumped due to new builtin catalog entries.

There's more that could be done here, but this is a good step forwards.

Loosely based on ideas from Simon Riggs; code mostly by Álvaro Herrera, with contribution by Heikki Linnakangas.

Patch reviewed by: Amit Kapila, Heikki Linnakangas, Robert Haas.
Testing help from Jeff Janes, Erik Rijkers, Emanuel Calvo.

PS: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 318633.
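A minimal usage sketch (table and index names are illustrative):

    CREATE TABLE measurements (tstamp timestamptz, reading numeric);
    CREATE INDEX meas_brin ON measurements USING brin (tstamp);
    -- after bulk loads, summarize block ranges not yet covered:
    SELECT brin_summarize_new_values('meas_brin');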
2014-11-06  Fix normalization of numeric values in JSONB GIN indexes.  (Tom Lane)
The default JSONB GIN opclass (jsonb_ops) converts numeric data values to strings for storage in the index. It must ensure that numeric values that would compare equal (such as 12 and 12.00) produce identical strings, else index searches would have behavior different from regular JSONB comparisons. Unfortunately the function charged with doing this was completely wrong: it could reduce distinct numeric values to the same string, or reduce equivalent numeric values to different strings. The former type of error would only lead to search inefficiency, but the latter type of error would cause index entries that should be found by a search to not be found.

Repairing this bug therefore means that it will be necessary for 9.4 beta testers to reindex GIN jsonb_ops indexes, if they care about getting correct results from index searches involving numeric data values within the comparison JSONB object.

Per report from Thomas Fanghaenel.
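For beta testers, the repair step is just (index name illustrative):

    REINDEX INDEX my_jsonb_gin_idx;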
2014-11-06  Move the backup-block logic from XLogInsert to a new file, xloginsert.c.  (Heikki Linnakangas)
xlog.c is huge; this makes it a little bit smaller, which is nice. Functions related to putting together the WAL record are in xloginsert.c, and the lower level stuff for managing WAL buffers and such is in xlog.c. Also move the definition of XLogRecord to a separate header file. This causes churn in the #includes of all the files that write WAL records, and redo routines, but it avoids pulling in xlog.h into most places.

Reviewed by Michael Paquier, Alvaro Herrera, Andres Freund and Amit Kapila.
2014-11-04  Switch to CRC-32C in WAL and other places.  (Heikki Linnakangas)
The old algorithm was found not to be the usual CRC-32 algorithm, used by Ethernet et al. We were using a non-reflected lookup table with code meant for a reflected lookup table. That's a strange combination that AFAICS does not correspond to any bit-wise CRC calculation, which makes it difficult to reason about its properties. Although it has worked well in practice, it seems safer to use a well-known algorithm.

Since we're changing the algorithm anyway, we might as well choose a different polynomial. The Castagnoli polynomial has better error-correcting properties than the traditional CRC-32 polynomial, even if we had implemented it correctly. Another reason for picking that is that some new CPUs have hardware support for calculating CRC-32C, but not CRC-32, let alone our strange variant of it. This patch doesn't add any support for such hardware, but a future patch could now do that.

The old algorithm is kept around for tsquery and pg_trgm, which use the values in indexes that need to remain compatible so that pg_upgrade works. While we're at it, share the old lookup table for CRC-32 calculation between hstore, ltree and core. They all use the same table, so might as well.
2014-10-31  Support frontend-backend protocol communication using a shm_mq.  (Robert Haas)
A background worker can use pq_redirect_to_shm_mq() to direct protocol messages that would normally be sent to the frontend to a shm_mq so that another process may read them. The receiving process may use pq_parse_errornotice() to parse an ErrorResponse or NoticeResponse from the background worker and, if it wishes, ThrowErrorData() to propagate the error (with or without further modification).

Patch by me. Review by Andres Freund.
2014-10-27  Fix two bugs in tsquery @> operator.  (Heikki Linnakangas)
1. The comparison for matching terms used only the CRC to decide if there's a match. Two different terms with the same CRC gave a match.

2. It assumed that if the second operand has more terms than the first, it's never a match. That assumption is bogus, because there can be duplicate terms in either operand.

Rewrite the implementation in a way that doesn't have those bugs. Backpatch to all supported versions.
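A sketch of the second bug class (duplicate terms in the right operand):

    SELECT 'cat & rat'::tsquery @> 'rat'::tsquery;               -- true
    -- more terms on the right no longer auto-fails when they are duplicates:
    SELECT 'cat & rat'::tsquery @> 'cat & rat & rat'::tsquery;   -- true after the fix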
2014-10-21  Allow input format xxxx-xxxx-xxxx for macaddr type  (Peter Eisentraut)
Author: Herwin Weststrate <herwin@quarantainenet.nl>
Reviewed-by: Ali Akbar <the.apaan@gmail.com>
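For example:

    SELECT '0800-2b01-0203'::macaddr;   -- newly accepted spelling
    -- parses to the same address as the existing forms:
    SELECT '08:00:2b:01:02:03'::macaddr, '08002b:010203'::macaddr;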
2014-10-20  Fix typos.  (Robert Haas)
Etsuro Fujita
2014-10-16  Support timezone abbreviations that sometimes change.  (Tom Lane)
Up to now, PG has assumed that any given timezone abbreviation (such as "EDT") represents a constant GMT offset in the usage of any particular region; we had a way to configure what that offset was, but not for it to be changeable over time. But, as with most things horological, this view of the world is too simplistic: there are numerous regions that have at one time or another switched to a different GMT offset but kept using the same timezone abbreviation. Almost the entire Russian Federation did that a few years ago, and later this month they're going to do it again. And there are similar examples all over the world.

To cope with this, invent the notion of a "dynamic timezone abbreviation", which is one that is referenced to a particular underlying timezone (as defined in the IANA timezone database) and means whatever it currently means in that zone. For zones that use or have used daylight-savings time, the standard and DST abbreviations continue to have the property that you can specify standard or DST time and get that time offset whether or not DST was theoretically in effect at the time. However, the abbreviations mean what they meant at the time in question (or most recently before that time) rather than being absolutely fixed.

The standard abbreviation-list files have been changed to use this behavior for abbreviations that have actually varied in meaning since 1970. The old simple-numeric definitions are kept for abbreviations that have not changed, since they are a bit faster to resolve.

While this is clearly a new feature, it seems necessary to back-patch it into all active branches, because otherwise use of Russian zone abbreviations is going to become even more problematic than it already was. This change supersedes the changes in commit 513d06ded et al to modify the fixed meanings of the Russian abbreviations; since we've not shipped that yet, this will avoid an undesirably incompatible (not to mention incorrect) change in behavior for timestamps between 2011 and 2014.

This patch makes some cosmetic changes in ecpglib to keep its usage of datetime lookup tables as similar as possible to the backend code, but doesn't do anything about the increasingly obsolete set of timezone abbreviation definitions that are hard-wired into ecpglib. Whatever we do about that will likely not be appropriate material for back-patching. Also, a potential free() of a garbage pointer after an out-of-memory failure in ecpglib has been fixed.

This patch also fixes pre-existing bugs in DetermineTimeZoneOffset() that caused it to produce unexpected results near a timezone transition, if both the "before" and "after" states are marked as standard time. We'd only ever thought about or tested transitions between standard and DST time, but that's not what's happening when a zone simply redefines their base GMT offset.

In passing, update the SGML documentation to refer to the Olson/zoneinfo/zic timezone database as the "IANA" database, since it's now being maintained under the auspices of IANA.
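A sketch of the visible effect, assuming the shipped Default abbreviation list (which this commit converts MSK, among others, to a dynamic abbreviation):

    SET timezone_abbreviations = 'Default';
    -- MSK meant UTC+4 in mid-2014 but UTC+3 after 2014-10-26:
    SELECT '2014-07-01 12:00 MSK'::timestamptz AT TIME ZONE 'UTC';  -- 08:00
    SELECT '2014-12-01 12:00 MSK'::timestamptz AT TIME ZONE 'UTC';  -- 09:00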
2014-10-11  Fix bogus optimization in JSONB containment tests.  (Tom Lane)
When determining whether one JSONB object contains another, it's okay to make a quick exit if the first object has fewer pairs than the second: because we de-duplicate keys within objects, it is impossible that the first object has all the keys the second does. However, the code was applying this rule to JSONB arrays as well, where it does *not* hold because arrays can contain duplicate entries.

The test was really in the wrong place anyway; we should do it within JsonbDeepContains, where it can be applied to nested objects, not only top-level ones.

Report and test cases by Alexander Korotkov; fix by Peter Geoghegan and Tom Lane.
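For instance:

    -- duplicates mean a shorter array can still contain a longer one:
    SELECT '[1, 3]'::jsonb @> '[1, 1, 3]'::jsonb;             -- true (was wrongly false)
    -- for objects the shortcut remains valid, since keys are de-duplicated:
    SELECT '{"a": 1}'::jsonb @> '{"a": 1, "b": 2}'::jsonb;    -- false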
2014-10-08  Split builtins.h to a new header ruleutils.h  (Alvaro Herrera)
The new header contains many prototypes for functions in ruleutils.c that are not exposed to the SQL level.

Reviewed by Andres Freund and Michael Paquier.
2014-10-07  Implement SKIP LOCKED for row-level locks  (Alvaro Herrera)
This clause changes the behavior of SELECT locking clauses in the presence of locked rows: instead of causing a process to block waiting for the locks held by other processes (or raise an error, with NOWAIT), SKIP LOCKED makes the new reader skip over such rows. While this is not appropriate behavior for general purposes, there are some cases in which it is useful, such as queue-like tables.

Catalog version bumped because this patch changes the representation of stored rules.

Reviewed by Craig Ringer (based on a previous attempt at an implementation by Simon Riggs, who also provided input on the syntax used in the current patch), David Rowley, and Álvaro Herrera.

Author: Thomas Munro
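The queue pattern this enables, as a minimal sketch (table name illustrative):

    -- grab one pending job, skipping rows other workers have locked:
    SELECT id FROM jobs
     WHERE status = 'pending'
     ORDER BY id
     LIMIT 1
       FOR UPDATE SKIP LOCKED;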
2014-09-29  Revert 95d737ff to add 'ignore_nulls'  (Stephen Frost)
Per discussion, revert the commit which added 'ignore_nulls' to row_to_json. This capability would be better added as an independent function rather than being bolted on to row_to_json. Additionally, the implementation didn't address complex JSON objects, and so was incomplete anyway.

Pointed out by Tom and discussed with Andrew and Robert.
2014-09-29  Change JSONB's on-disk format for improved performance.  (Tom Lane)
The original design used an array of offsets into the variable-length portion of a JSONB container. However, such an array is basically uncompressible by simple compression techniques such as TOAST's LZ compressor. That's bad enough, but because the offset array is at the front, it tended to trigger the give-up-after-1KB heuristic in the TOAST code, so that the entire JSONB object was stored uncompressed; which was the root cause of bug #11109 from Larry White.

To fix without losing the ability to extract a random array element in O(1) time, change this scheme so that most of the JEntry array elements hold lengths rather than offsets. With data that's compressible at all, there tend to be fewer distinct element lengths, so that there is scope for compression of the JEntry array. Every N'th entry is still an offset. To determine the length or offset of any specific element, we might have to examine up to N preceding JEntrys, but that's still O(1) so far as the total container size is concerned. Testing shows that this cost is negligible compared to other costs of accessing a JSONB field, and that the method does largely fix the incompressible-data problem.

While at it, rearrange the order of elements in a JSONB object so that it's "all the keys, then all the values" not alternating keys and values. This doesn't really make much difference right at the moment, but it will allow providing a fast path for extracting individual object fields from large JSONB values stored EXTERNAL (ie, uncompressed), analogously to the existing optimization for substring extraction from large EXTERNAL text values.

Bump catversion to denote the incompatibility in on-disk format. We will need to fix pg_upgrade to disallow upgrading jsonb data stored with 9.4 betas 1 and 2.

Heikki Linnakangas and Tom Lane
2014-09-25  Remove ill-conceived ban on zero length json object keys.  (Andrew Dunstan)
We removed a similar ban on this in json_object recently, but the ban in datum_to_json was left, which generated spurious errors in other JSON generators, notably json_build_object.

Along the way, add an assertion that datum_to_json is not passed a null key. All current callers comply with this rule, but the assertion will catch any possible future misbehaviour.
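For example:

    SELECT json_build_object('', 1);   -- {"" : 1}; an empty key is legal JSON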
2014-09-25  Return NULL from json_object_agg if it gets no rows.  (Andrew Dunstan)
This makes it consistent with the docs and with all other builtin aggregates apart from count().
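For instance:

    SELECT json_object_agg(k, v)
      FROM (SELECT 'a'::text, 1) t(k, v)
     WHERE false;                 -- NULL, like other aggregates except count()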
2014-09-24  Code review for row security.  (Stephen Frost)
Buildfarm member tick identified an issue where the policies in the relcache for a relation were being replaced underneath a running query, leading to segfaults while processing the policies to be added to a query. Similar to how TupleDesc RuleLocks are handled, add an equalRSDesc() function to check if the policies have actually changed and, if not, swap back the rsdesc field (using the original instead of the temporarily built one; the whole structure is swapped and then specific fields swapped back). This now passes CLOBBER_CACHE_ALWAYS for me and should resolve the buildfarm error.

In addition to addressing this, add a new chapter in Data Definition under Privileges which explains row security and provides examples of its usage, change \d to always list policies (even if row security is disabled, but note that it is disabled, or enabled with no policies), rework check_role_for_policy (it really didn't need the entire policy, but it did need to be using has_privs_of_role()), and change the field in pg_class to relrowsecurity from relhasrowsecurity, based on Heikki's suggestion.

Also from Heikki, only issue SET ROW_SECURITY in pg_restore when talking to a 9.5+ server, list Bypass RLS in \du, and document --enable-row-security options for pg_dump and pg_restore.

Lastly, fix a number of minor whitespace and typo issues from Heikki, Dimitri, add a missing #include, per Peter E, fix a few minor variable-assigned-but-not-used and resource leak issues from Coverity, and add tab completion for role attribute bypassrls as well.
2014-09-19  Add a fast pre-check for equality of equal-length strings.  (Robert Haas)
Testing reveals that doing a memcmp() before the strcoll() costs practically nothing, at least on the systems we tested, and it speeds up sorts containing many equal strings significantly.

Peter Geoghegan. Review by myself and Heikki Linnakangas. Comments rewritten by me.
2014-09-19  Row-Level Security Policies (RLS)  (Stephen Frost)
Building on the updatable security-barrier views work, add the ability to define policies on tables to limit the set of rows which are returned from a query and which are allowed to be added to a table. Expressions defined by the policy for filtering are added to the security barrier quals of the query, while expressions defined to check records being added to a table are added to the with-check options of the query.

New top-level commands are CREATE/ALTER/DROP POLICY and are controlled by the table owner. Row Security is able to be enabled and disabled by the owner on a per-table basis using ALTER TABLE .. ENABLE/DISABLE ROW SECURITY.

Per discussion, ROW SECURITY is disabled on tables by default and must be enabled for policies on the table to be used. If no policies exist on a table with ROW SECURITY enabled, a default-deny policy is used and no records will be visible.

By default, row security is applied at all times except for the table owner and the superuser. A new GUC, row_security, is added which can be set to ON, OFF, or FORCE. When set to FORCE, row security will be applied even for the table owner and superusers. When set to OFF, row security will be disabled when allowed and an error will be thrown if the user does not have rights to bypass row security.

Per discussion, pg_dump sets row_security = OFF by default to ensure that exports and backups will have all data in the table or will error if there are insufficient privileges to bypass row security. A new option has been added to pg_dump, --enable-row-security, to ask pg_dump to export with row security enabled.

A new role capability, BYPASSRLS, which can only be set by the superuser, is added to allow other users to be able to bypass row security using row_security = OFF.

Many thanks to the various individuals who have helped with the design, particularly Robert Haas for his feedback.

Authors include Craig Ringer, KaiGai Kohei, Adam Brightwell, Dean Rasheed, with additional changes and rework by me. Reviewers have included all of the above, Greg Smith, Jeff McCormick, and Robert Haas.
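A minimal sketch of the new commands (table and policy names are illustrative; later releases spell the ALTER TABLE clause ROW LEVEL SECURITY):

    CREATE TABLE accounts (owner name, balance numeric);
    ALTER TABLE accounts ENABLE ROW SECURITY;
    -- visible and insertable rows are limited to the current user's own:
    CREATE POLICY own_rows ON accounts
        USING (owner = current_user)
        WITH CHECK (owner = current_user);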
2014-09-12  Revert f68dc5d86b9f287f80f4417f5a24d876eb13771d  (Bruce Momjian)
Renaming will have to be more comprehensive, so I need approval.
2014-09-12  More formatting.c variable renaming, for clarity  (Bruce Momjian)
2014-09-11  Fix power_var_int() for large integer exponents.  (Tom Lane)
The code for raising a NUMERIC value to an integer power wasn't very careful about large powers. It got an outright wrong answer for an exponent of INT_MIN, due to failure to consider overflow of the Abs(exp) operation; which is fixable by using an unsigned rather than signed exponent value after that point.

Also, even though the number of iterations of the power-computation loop is pretty limited, it's easy for the repeated squarings to result in ridiculously enormous intermediate values, which can take unreasonable amounts of time/memory to process, or even overflow the internal "weight" field and so produce a wrong answer. We can forestall misbehaviors of that sort by bailing out as soon as the weight value exceeds what will fit in int16, since then the final answer must overflow (if exp > 0) or underflow (if exp < 0) the packed numeric format.

Per off-list report from Pavel Stehule. Back-patch to all supported branches.
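Illustrative cases in the affected class (assumptions based on the description above, not quoted from the report; exact pre-fix symptoms varied):

    -- repeated squaring now bails out once the weight must overflow:
    SELECT 1.01::numeric ^ 2147483647;     -- fails fast with a numeric overflow
    -- an INT_MIN exponent no longer trips the Abs(exp) overflow:
    SELECT 1.01::numeric ^ (-2147483648);  -- underflows toward zero instead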
2014-09-11  Add 'ignore_nulls' option to row_to_json  (Stephen Frost)
Provide an option to skip NULL values in a row when generating a JSON object from that row with row_to_json. This can reduce the size of the JSON object in cases where columns are NULL without really reducing the information in the JSON object. This also makes row_to_json into a single function with default values, rather than having multiple functions. In passing, change array_to_json to also be a single function with default values (we don't add an 'ignore_nulls' option yet; it's not clear that there is a sensible use-case there, and it hasn't been asked for in any case).

Pavel Stehule
2014-09-11  Silence compiler warning on Windows.  (Heikki Linnakangas)
David Rowley.
2014-09-10  Implement mxid_age() to compute multi-xid age  (Bruce Momjian)
Report by Josh Berkus
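An illustrative query (relminmxid is one such multixact column):

    -- how far each table's oldest multixact ID is from the current one:
    SELECT relname, mxid_age(relminmxid)
      FROM pg_class WHERE relkind = 'r' LIMIT 5;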
2014-09-09  Add width_bucket(anyelement, anyarray).  (Tom Lane)
This provides a convenient method of classifying input values into buckets that are not necessarily equal-width. It works on any sortable data type. The choice of function name is a bit debatable, perhaps, but showing that there's a relationship to the SQL standard's width_bucket() function seems more attractive than the other proposals.

Petr Jelinek, reviewed by Pavel Stehule
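For example:

    -- buckets need not be equal-width; thresholds are the array elements:
    SELECT width_bucket(5, ARRAY[1, 3, 4, 10]);   -- 3 (5 falls in [4,10))
    SELECT width_bucket(0, ARRAY[1, 3, 4, 10]);   -- 0 (below the first threshold)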
2014-09-09  Allow empty content in xml type  (Peter Eisentraut)
The xml type previously rejected "content" that is empty or consists only of spaces. But the SQL/XML standard allows that, so change it. The accepted values for XML "documents" are not changed.

Reviewed-by: Ali Akbar <the.apaan@gmail.com>
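For example:

    SELECT ''::xml;                   -- now accepted (with the default CONTENT xmloption)
    SELECT xmlparse(CONTENT '   ');   -- whitespace-only content is accepted too
    SELECT xmlparse(DOCUMENT '');     -- still an error: documents are unchanged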
2014-09-05  Rename C variables in formatting.c, for clarity  (Bruce Momjian)
Also add C comments. This should help future debugging of this notorious file.
2014-08-28  Add min and max aggregates for inet/cidr data types.  (Tom Lane)
Haribabu Kommi, reviewed by Muhammad Asif Naeem
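For instance:

    SELECT min(a), max(a)
      FROM (VALUES ('192.168.1.5'::inet),
                   ('10.0.0.1'),
                   ('192.168.1.1')) t(a);
    -- min = 10.0.0.1, max = 192.168.1.5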
2014-08-27  Allow multibyte characters as escape in SIMILAR TO and SUBSTRING.  (Jeff Davis)
Previously, only a single-byte character was allowed as an escape. This patch allows it to be a multi-byte character, though it still must be a single character.

Reviewed by Heikki Linnakangas and Tom Lane.
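For example (in a UTF-8 database):

    -- '§' is a multibyte character; it now works as the escape:
    SELECT '20% off' SIMILAR TO '20§% off' ESCAPE '§';   -- true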
2014-08-26  Fix typo in b34e37bfefbed1bf9396dde18f308d8b96fd176c.  (Robert Haas)
Spotted by Peter Geoghegan.
2014-08-25  rename macro isTempOrToastNamespace to isTempOrTempToastNamespace  (Bruce Momjian)
Done for clarity
2014-08-22  Fix corner-case behaviors in JSON/JSONB field extraction operators.  (Tom Lane)
Cause the path extraction operators to return their lefthand input, not NULL, if the path array has no elements. This seems more consistent since the case ought to correspond to applying the simple extraction operator (->) zero times.

Cause other corner cases in field/element/path extraction to return NULL rather than failing. This behavior is arguably more useful than throwing an error, since it allows an expression index using these operators to be built even when not all values in the column are suitable for the extraction being indexed. Moreover, we already had multiple inconsistencies between the path extraction operators and the simple extraction operators, as well as inconsistencies between the JSON and JSONB code paths. Adopt a uniform rule of returning NULL rather than throwing an error when the JSON input does not have a structure that permits the request to be satisfied.

Back-patch to 9.4. Update the release notes to list this as a behavior change since 9.3.
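For example:

    -- an empty path now returns the input, i.e. zero applications of ->:
    SELECT '{"a": 1}'::jsonb #> '{}';   -- {"a": 1}
    -- structurally impossible requests yield NULL instead of an error:
    SELECT '[1, 2]'::jsonb -> 'foo';    -- NULL (field lookup on an array)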
2014-08-20  Fix core dump in jsonb #> operator, and add regression test cases.  (Tom Lane)
jsonb's #> operator segfaulted (dereferencing a null pointer) if the RHS was a zero-length array, as reported in bug #11207 from Justin Van Winkle. json's #> operator returns NULL in such cases, so for the moment let's make jsonb act likewise.

Also add a bunch of regression test queries memorializing the -> and #> operators' behavior for this and other corner cases.

There is a good argument for changing some of these behaviors, as they are not very consistent with each other, and throwing an error isn't necessarily a desirable behavior for operators that are likely to be used in indexes. However, everybody can agree that a core dump is the Wrong Thing, and we need test cases even if we decide to change their expected output later.
2014-08-17  Use ISO 8601 format for dates converted to JSON, too.  (Tom Lane)
Commit f30015b6d794c15d52abbb3df3a65081fbefb1ed made this happen for timestamp and timestamptz, but it seems pretty inconsistent to not do it for simple dates as well. (In passing, I re-pgindent'd json.c.)
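For instance, date output in JSON no longer depends on DateStyle:

    SET datestyle = 'Postgres';
    SELECT to_json('2014-08-17'::date);   -- "2014-08-17", still ISO 8601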
2014-08-16  Fix bogus return macros in range_overright_internal().  (Tom Lane)
PG_RETURN_BOOL() should only be used in functions following the V1 SQL function API. This coding accidentally fails to fail since letting the compiler coerce the Datum representation of bool back to plain bool does give the right answer; but that doesn't make it a good idea. Back-patch to older branches just to avoid unnecessary code divergence.
2014-08-14  Add sortsupport routines for text.  (Robert Haas)
This provides a small but worthwhile speedup when sorting text, at least in cases to which the sortsupport machinery applies.

Robert Haas and Peter Geoghegan
2014-08-09  Clean up handling of unknown-type inputs in json_build_object and friends.  (Tom Lane)
There's actually no need for any special case for unknown-type literals, since we only need to push the value through its output function and unknownout() works fine. The code that was here was completely bizarre anyway, and would fail outright in cases that should work, not to mention suffering from some copy-and-paste bugs.
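For instance, a bare string literal (type unknown until coerced) now works directly:

    SELECT json_build_object('key', 'value');   -- {"key" : "value"}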
2014-08-09  Further cleanup of JSON-specific error messages.  (Tom Lane)
Fix an obvious typo in json_build_object()'s complaint about invalid number of arguments, and make the errhint a bit more sensible too.

Per discussion about how to word the improved hint, change the few places in the documentation that refer to JSON object field names as "names" to say "keys" instead, since that's what we've said in the vast majority of places in the docs. Arguably "name" is more correct, since that's the terminology used in RFC 7159; but we're stuck with "key" in view of the naming of json_object_keys() so let's at least be self-consistent.

I adjusted a few code comments to match this as well, and failed to resist the temptation to clean up some odd whitespace choices in the same area, as well as a useless duplicate PG_ARGISNULL() check. There's still quite a bit of code that uses the phrase "field name" in non-user-visible ways, so I left those usages alone.
2014-08-05  Improve some JSON error messages.  (Robert Haas)
These messages are new in 9.4, which hasn't been released yet, so back-patch to REL9_4_STABLE.

Daniele Varrazzo