path: root/src/backend/utils
2014-12-08  Event Trigger for table_rewrite  (Simon Riggs)

Generate a table_rewrite event when ALTER TABLE attempts to rewrite a table. Provide helper functions to identify the table and the reason. The intended use case is to help assess, or react to, schema changes that might hold exclusive locks for long periods.

Dimitri Fontaine, triggering an edit by Simon Riggs

Reviewed in detail by Michael Paquier
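For illustration, a minimal sketch of consuming the new event (trigger and function names are hypothetical; pg_event_trigger_table_rewrite_oid() and pg_event_trigger_table_rewrite_reason() are, as I understand it, the helper functions this commit refers to):

    CREATE FUNCTION log_rewrite() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        -- report which table is being rewritten and the reason code
        RAISE NOTICE 'rewriting % (reason %)',
            pg_event_trigger_table_rewrite_oid()::regclass,
            pg_event_trigger_table_rewrite_reason();
    END;
    $$;

    CREATE EVENT TRIGGER log_rewrites
        ON table_rewrite EXECUTE PROCEDURE log_rewrite();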
2014-12-05  Print wal_log_hints in the rm_desc routine of a parameter-change record.  (Heikki Linnakangas)

It was an oversight in the original commit. Also note in the sample config file that changing wal_log_hints requires a restart.

Michael Paquier. Backpatch to 9.4, where wal_log_hints was added.
2014-12-03  Keep track of transaction commit timestamps  (Alvaro Herrera)

Transactions can now set their commit timestamp directly as they commit, or an external transaction commit timestamp can be fed from an outside system using the new function TransactionTreeSetCommitTsData(). This data is crash-safe, and truncated at Xid freeze point, same as pg_clog.

This module is disabled by default because it causes a performance hit, but can be enabled in postgresql.conf requiring only a server restart. A new test in src/test/modules is included.

Catalog version bumped due to the new subdirectory within PGDATA and a couple of new SQL functions.

Authors: Álvaro Herrera and Petr Jelínek

Reviewed to varying degrees by Michael Paquier, Andres Freund, Robert Haas, Amit Kapila, Fujii Masao, Jaime Casanova, Simon Riggs, Steven Singer, Peter Eisentraut
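A hedged sketch of enabling and querying the module (track_commit_timestamp is the GUC this feature adds; pg_xact_commit_timestamp() and pg_last_committed_xact() are among the new SQL functions; the table name is hypothetical):

    -- postgresql.conf, followed by a server restart:
    --   track_commit_timestamp = on

    -- commit time of the transaction that last wrote a given row:
    SELECT pg_xact_commit_timestamp(xmin) FROM accounts LIMIT 1;

    -- most recent commit recorded by the module:
    SELECT * FROM pg_last_committed_xact();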
2014-12-02  Improve error messages for malformed array input strings.  (Tom Lane)

Make the error messages issued by array_in() uniformly follow the style

    ERROR: malformed array literal: "actual input string"
    DETAIL: specific complaint here

and rewrite many of the specific complaints to be clearer.

The immediate motivation for doing this is a complaint from Josh Berkus that json_to_record() produced an unintelligible error message when dealing with an array item, because it tries to feed the JSON-format array value to array_in(). Really it ought to be smart enough to perform JSON-to-Postgres array conversion, but that's a future feature not a bug fix. In the meantime, this change is something we agreed we could back-patch into 9.4, and it should help de-confuse things a bit.
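An illustrative case (the exact DETAIL wording is an assumption and may differ):

    SELECT '{1,2'::int[];
    -- ERROR:  malformed array literal: "{1,2"
    -- DETAIL: Unexpected end of input.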
2014-12-02  Fix JSON aggregates to work properly when final function is re-executed.  (Tom Lane)

Davide S. reported that json_agg() sometimes produced multiple trailing right brackets. This turns out to be because json_agg_finalfn() attaches the final right bracket, and was doing so by modifying the aggregate state in-place. That's verboten, though unfortunately it seems there's no way for nodeAgg.c to check for such mistakes.

Fix that back to 9.3 where the broken code was introduced. In 9.4 and HEAD, likewise fix json_object_agg(), which had copied the erroneous logic.

Make some cosmetic cleanups as well.
2014-12-01  Guard against bad "dscale" values in numeric_recv().  (Tom Lane)

We were not checking to see if the supplied dscale was valid for the given digit array when receiving binary-format numeric values. While dscale can validly be more than the number of nonzero fractional digits, it shouldn't be less; that case causes fractional digits to be hidden on display even though they're there and participate in arithmetic.

Bug #12053 from Tommaso Sala indicates that there's at least one broken client library out there that sometimes supplies an incorrect dscale value, leading to strange behavior. This suggests that simply throwing an error might not be the best response; it would lead to failures in applications that might seem to be working fine today. What seems the least risky fix is to truncate away any digits that would be hidden by dscale. This preserves the existing behavior in terms of what will be printed for the transmitted value, while preventing subsequent arithmetic from producing results inconsistent with that.

In passing, throw a specific error for the case of dscale being outside the range that will fit into a numeric's header. Before you got "value overflows numeric format", which is a bit misleading.

Back-patch to all supported branches.
2014-12-01  Fix hstore_to_json_loose's detection of valid JSON number values.  (Andrew Dunstan)

We expose a function IsValidJsonNumber that internally calls the lexer for json numbers. That allows us to use the same test everywhere, instead of inventing a broken test for hstore conversions. The new function is also used in datum_to_json, replacing the code that is now moved to the new function.

Backpatch to 9.3 where hstore_to_json_loose was introduced.
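A hedged illustration (requires the hstore extension; the exact output formatting is an assumption): only values the JSON number lexer accepts should come out unquoted, so a leading-zero value must be rendered as a string:

    SELECT hstore_to_json_loose('a=>1e5, b=>0200'::hstore);
    -- {"a": 1e5, "b": "0200"}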
2014-11-28  Improve typcache: cache negative lookup results, add invalidation logic.  (Tom Lane)

Previously, if the typcache had for example tried and failed to find a hash opclass for a given data type, it would nonetheless repeat the unsuccessful catalog lookup each time it was asked again. This can lead to a significant amount of useless bufmgr traffic, as in a recent report from Scott Marlowe. Like the catalog caches, typcache should be able to cache negative results. This patch arranges that by making use of separate flag bits to remember whether a particular item has been looked up, rather than treating a zero OID as an indicator that no lookup has been done.

Also, install a credible invalidation mechanism, namely watching for inval events in pg_opclass. The sole advantage of the lack of negative caching was that the code would cope if operators or opclasses got added for a type mid-session; to preserve that behavior we have to be able to invalidate stale lookup results. Updates in pg_opclass should be pretty rare in production systems, so it seems sufficient to just invalidate all the dependent data whenever one happens.

Adding proper invalidation also means that this code will now react sanely if an opclass is dropped mid-session. Arguably, that's a back-patchable bug fix, but in view of the lack of complaints from the field I'll refrain from back-patching. (Probably, in most cases where an opclass is dropped, the data type itself is dropped soon after, so that this misfeasance has no bad consequences.)
2014-11-27  Rename pg_rowsecurity -> pg_policy and other fixes  (Stephen Frost)

As pointed out by Robert, we should really have named pg_rowsecurity pg_policy, as the objects stored in that catalog are policies. This patch fixes that and updates the column names to start with 'pol' to match the new catalog name.

The security consideration for COPY with row level security, also pointed out by Robert, has also been addressed by remembering and re-checking the OID of the relation initially referenced during COPY processing, to make sure it hasn't changed under us by the time we finish planning out the query which has been built.

Robert and Alvaro also commented on missing OCLASS and OBJECT entries for POLICY (formerly ROWSECURITY or POLICY, depending) in various places. This patch fixes that too, which also happens to add the ability to COMMENT on policies.

In passing, attempt to improve the consistency of messages, comments, and documentation as well. This removes various incarnations of 'row-security', 'row-level security', 'Row-security', etc, in favor of 'policy', 'row level security' or 'row_security' as appropriate.

Happy Thanksgiving!
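The newly possible COMMENT, sketched with hypothetical policy and table names:

    COMMENT ON POLICY own_rows ON accounts
        IS 'restrict each user to rows they own';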
2014-11-25  Attempt to suppress uninitialized variable warning.  (Robert Haas)

Report by Heikki Linnakangas.
2014-11-25  Fix uninitialized-variable warning.  (Tom Lane)

In passing, add an Assert defending the presumption that bytes_left is positive to start with. (I'm not exactly convinced that using an unsigned type was such a bright thing here, but let's at least do this much.)
2014-11-25  Support arrays as input to array_agg() and ARRAY(SELECT ...).  (Tom Lane)

These cases formerly failed with errors about "could not find array type for data type". Now they yield arrays of the same element type and one higher dimension.

The implementation involves creating functions with API similar to the existing accumArrayResult() family. I (tgl) also extended the base family by adding an initArrayResult() function, which allows callers to avoid special-casing the zero-inputs case if they just want an empty array as result. (Not all do, so the previous calling convention remains valid.) This allowed simplifying some existing code in xml.c and plperl.c.

Ali Akbar, reviewed by Pavel Stehule, significantly modified by me
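The newly supported cases, as a sketch:

    SELECT array_agg(a)
    FROM (VALUES (ARRAY[1,2]), (ARRAY[3,4])) AS t(a);
    -- {{1,2},{3,4}}

    SELECT ARRAY(SELECT ARRAY[i, i * 2] FROM generate_series(1, 2) AS i);
    -- {{1,2},{2,4}}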
2014-11-24  Add infrastructure to save and restore GUC values.  (Robert Haas)

This is further infrastructure for parallelism.

Amit Khandekar, Noah Misch, Robert Haas
2014-11-21  Rearrange CustomScan API.  (Tom Lane)

Make it work more like FDW plans do: instead of assuming that there are expressions in a CustomScan plan node that the core code doesn't know about, insist that all subexpressions that need planner attention be in a "custom_exprs" list in the Plan representation. (Of course, the custom plugin can break the list apart again at executor initialization.) This lets us revert the parts of the patch that exposed setrefs.c and subselect.c processing to the outside world.

Also revert the GetSpecialCustomVar stuff in ruleutils.c; that concept may work in future, but it's far from fully baked right now.
2014-11-20  Revamp the WAL record format.  (Heikki Linnakangas)

Each WAL record now carries information about the modified relation and block(s) in a standardized format. That makes it easier to write tools that need that information, like pg_rewind, prefetching the blocks to speed up recovery, etc.

There's a whole new API for building WAL records, replacing the XLogRecData chains used previously. The new API consists of XLogRegister* functions, which are called for each buffer and chunk of data that is added to the record. The new API also gives more control over when a full-page image is written, by passing flags to the XLogRegisterBuffer function.

This also simplifies the XLogReadBufferForRedo() calls. The function can dig the relation and block number from the WAL record, so they no longer need to be passed as arguments.

For the convenience of redo routines, XLogReader now dissects each WAL record after reading it, copying the main data part and the per-block data into MAXALIGNed buffers. The data chunks are not aligned within the WAL record, but the redo routines can assume that the pointers returned by XLogRecGet* functions are. Redo routines are now passed the XLogReaderState, which contains the record in the already-dissected format, instead of the plain XLogRecord.

The new record format also makes the fixed size XLogRecord header smaller, by removing the xl_len field. The length of the "main data" portion is now stored at the end of the WAL record, and there's a separate header after XLogRecord for it. The alignment padding at the end of XLogRecord is also removed. This compensates for the fact that the new format would otherwise be more bulky than the old format.

Reviewed by Andres Freund, Amit Kapila, Michael Paquier, Alvaro Herrera, Fujii Masao.
2014-11-17  Fix relpersistence setting in reindex_index  (Alvaro Herrera)

Buildfarm members with CLOBBER_CACHE_ALWAYS advised us that commit 85b506bbfc2937 was mistaken in setting the relpersistence value of the index directly in the relcache entry, within reindex_index. The reason for the failure is that an invalidation message that comes after mucking with the relcache entry directly, but before writing it to the catalogs, would cause the entry to become rebuilt in place from catalogs with the old contents, losing the update.

Fix by passing the correct persistence value to RelationSetNewRelfilenode instead; this routine also writes the updated tuple to pg_class, avoiding the problem.

Suggested by Tom Lane.
2014-11-15  Get rid of SET LOGGED indexes persistence kludge  (Alvaro Herrera)

This removes ATChangeIndexesPersistence() introduced by f41872d0c1239d36 which was too ugly to live for long. Instead, the correct persistence marking is passed all the way down to reindex_index, so that the transient relation built to contain the index relfilenode can get marked correctly right from the start.

Author: Fabrízio de Royes Mello

Review and editorialization by Michael Paquier and Álvaro Herrera
2014-11-14  Clean up includes from RLS patch  (Stephen Frost)

The initial patch for RLS mistakenly included headers associated with the executor and planner bits in rewrite/rowsecurity.h. Per policy and general good sense, executor headers should not be included in planner headers or vice versa.

The include of execnodes.h was a mistaken holdover from previous versions, while the include of relation.h was used for Relation's definition, which should have been coming from utils/relcache.h. This patch cleans these issues up, adds comments to the RowSecurityPolicy struct and the RowSecurityConfigType enum, and changes Relation->rsdesc to Relation->rd_rsdesc to follow Relation field naming convention.

Additionally, utils/rel.h was including rewrite/rowsecurity.h, which wasn't a great idea since that was pulling in things not really needed in utils/rel.h (which gets included in quite a few places). Instead, use 'struct RowSecurityDesc' for the rd_rsdesc field and add comments explaining why.

Lastly, add an include into access/nbtree/nbtsort.c for utils/sortsupport.h, which was evidently missed due to the above mess.

Pointed out by Tom in 16970.1415838651@sss.pgh.pa.us; note that the concerns regarding a similar situation in the custom-path commit still need to be addressed.
2014-11-13  Move the guts of our Levenshtein implementation into core.  (Robert Haas)

The hope is that we can use this to produce better diagnostics in some cases.

Peter Geoghegan, reviewed by Michael Paquier, with some further changes by me.
2014-11-13  Rename pending_list_cleanup_size to gin_pending_list_limit.  (Fujii Masao)

Since this parameter is only for GIN indexes, it's better to add "gin" to the parameter name for easier understanding.
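Usage under the new name, as a sketch (table, column, and index names are hypothetical; the storage-parameter value is in kilobytes):

    -- per-session override of the global limit:
    SET gin_pending_list_limit = '4MB';

    -- per-index override via the storage parameter
    -- (tags is assumed to be an array column):
    CREATE INDEX docs_tags_gin ON docs USING gin (tags)
        WITH (gin_pending_list_limit = 2048);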
2014-11-12  Explicitly support the case that a plancache's raw_parse_tree is NULL.  (Tom Lane)

This only happens if a client issues a Parse message with an empty query string, which is a bit odd; but since it is explicitly called out as legal by our FE/BE protocol spec, we'd probably better continue to allow it. Fix by adding tests everywhere that the raw_parse_tree field is passed to functions that don't or shouldn't accept NULL. Also make it clear in the relevant comments that NULL is an expected case.

This reverts commits a73c9dbab0165b3395dfe8a44a7dfd16166963c4 and 2e9650cbcff8c8fb0d9ef807c73a44f241822eee, which fixed specific crash symptoms by hacking things at what now seems to be the wrong end, ie the callee functions. Making the callees allow NULL is superficially more robust, but it's not always true that there is a defensible thing for the callee to do in such cases. The caller has more context and is better able to decide what the empty-query case ought to do.

Per followup discussion of bug #11335. Back-patch to 9.2. The code before that is sufficiently different that it would require development of a separate patch, which doesn't seem worthwhile for what is believed to be an essentially cosmetic change.
2014-11-11  Message improvements  (Peter Eisentraut)
2014-11-11  Remove incorrect comment.  (Robert Haas)

This was introduced by commit 5ea86e6e65dd2da3e9a3464484985d48328e7fe3.

Peter Geoghegan
2014-11-11  Add generate_series(numeric, numeric).  (Fujii Masao)

Платон Малюгин

Reviewed by Michael Paquier, Ali Akbar and Marti Raudsepp
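The new overload in action:

    SELECT generate_series(1.1, 4.0, 0.9);
    -- 1.1, 2.0, 2.9, 3.8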
2014-11-11  Add GUC and storage parameter to set the maximum size of GIN pending list.  (Fujii Masao)

Previously the maximum size of the GIN pending list was controlled only by work_mem. But a reasonable value of work_mem and a reasonable size of the list are basically not the same, so it was not appropriate to control both of them with only one GUC, i.e., work_mem. This commit separates the two by introducing a new GUC, pending_list_cleanup_size, so that users can control just the size of the list.

Also this commit adds pending_list_cleanup_size as a new storage parameter to allow users to specify the size of the list per index. This is useful, for example, when users want to increase the size of the list only for a GIN index which is updated heavily, and decrease it otherwise.

Reviewed by Etsuro Fujita.
2014-11-07  Introduce custom path and scan providers.  (Robert Haas)

This allows extension modules to define their own methods for scanning a relation, and get the core code to use them. It's unclear as yet how much use this capability will find, but we won't find out if we never commit it.

KaiGai Kohei, reviewed at various times and in various levels of detail by Shigeru Hanada, Tom Lane, Andres Freund, Álvaro Herrera, and myself.
2014-11-07  Use the sortsupport infrastructure in more cases.  (Robert Haas)

This removes some fmgr overhead from cases such as btree index builds.

Peter Geoghegan, reviewed by Andreas Karlsson and me.
2014-11-07  BRIN: Block Range Indexes  (Alvaro Herrera)

BRIN is a new index access method intended to accelerate scans of very large tables, without the maintenance overhead of btrees or other traditional indexes. They work by maintaining "summary" data about block ranges.

Bitmap index scans work by reading each summary tuple and comparing them with the query quals; all pages in the range are returned in a lossy TID bitmap if the quals are consistent with the values in the summary tuple, otherwise not. Normal index scans are not supported because these indexes do not store TIDs.

As new tuples are added into the index, the summary information is updated (if the block range in which the tuple is added is already summarized) or not; in the latter case, a subsequent pass of VACUUM or the brin_summarize_new_values() function will create the summary information.

For data types with natural 1-D sort orders, the summary info consists of the maximum and the minimum values of each indexed column within each page range. This type of operator class we call "Minmax", and we supply a bunch of them for most data types with B-tree opclasses. Since the BRIN code is generalized, other approaches are possible for things such as arrays, geometric types, ranges, etc; even for things such as enum types we could do something different than minmax with better results. In this commit I only include minmax.

Catalog version bumped due to new builtin catalog entries.

There's more that could be done here, but this is a good step forwards.

Loosely based on ideas from Simon Riggs; code mostly by Álvaro Herrera, with contribution by Heikki Linnakangas. Patch reviewed by: Amit Kapila, Heikki Linnakangas, Robert Haas. Testing help from Jeff Janes, Erik Rijkers, Emanuel Calvo.

PS: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 318633.
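A minimal usage sketch (table and column names hypothetical):

    -- a minmax BRIN index on an append-mostly timestamp column:
    CREATE INDEX meas_brin ON measurements USING brin (logged_at);

    -- summarize block ranges filled since the last VACUUM:
    SELECT brin_summarize_new_values('meas_brin'::regclass);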
2014-11-06  Fix normalization of numeric values in JSONB GIN indexes.  (Tom Lane)

The default JSONB GIN opclass (jsonb_ops) converts numeric data values to strings for storage in the index. It must ensure that numeric values that would compare equal (such as 12 and 12.00) produce identical strings, else index searches would have behavior different from regular JSONB comparisons. Unfortunately the function charged with doing this was completely wrong: it could reduce distinct numeric values to the same string, or reduce equivalent numeric values to different strings. The former type of error would only lead to search inefficiency, but the latter type of error would cause index entries that should be found by a search to not be found.

Repairing this bug therefore means that it will be necessary for 9.4 beta testers to reindex GIN jsonb_ops indexes, if they care about getting correct results from index searches involving numeric data values within the comparison JSONB object.

Per report from Thomas Fanghaenel.
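The equivalence the index must preserve, plus the rebuild step for beta testers (index name hypothetical):

    SELECT '{"a": 12.00}'::jsonb @> '{"a": 12}'::jsonb;   -- true
    REINDEX INDEX docs_jsonb_gin;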
2014-11-06  Move the backup-block logic from XLogInsert to a new file, xloginsert.c.  (Heikki Linnakangas)

xlog.c is huge; this makes it a little bit smaller, which is nice. Functions related to putting together the WAL record are in xloginsert.c, and the lower level stuff for managing WAL buffers and such are in xlog.c. Also move the definition of XLogRecord to a separate header file. This causes churn in the #includes of all the files that write WAL records, and redo routines, but it avoids pulling in xlog.h into most places.

Reviewed by Michael Paquier, Alvaro Herrera, Andres Freund and Amit Kapila.
2014-11-05  Remove the last vestige of server-side autocommit.  (Tom Lane)

Long ago we briefly had an "autocommit" GUC that turned server-side autocommit on and off. That behavior was removed in 7.4 after concluding that it broke far too much client-side logic, and making clients cope with both behaviors was impractical. But the GUC variable was left behind, so as not to break any client code that might be trying to read its value. Enough time has now passed that we should remove the GUC completely. Whatever vestigial backwards-compatibility benefit it had is outweighed by the risk of confusion for newbies who assume it ought to do something, as per a recent complaint from Wolfgang Wilhelm.

In passing, adjust what seemed to me a rather confusing documentation reference to libpq's autocommit behavior. libpq as such knows nothing about autocommit, so psql is probably what was meant.
2014-11-04  Switch to CRC-32C in WAL and other places.  (Heikki Linnakangas)

The old algorithm was found to not be the usual CRC-32 algorithm, used by Ethernet et al. We were using a non-reflected lookup table with code meant for a reflected lookup table. That's a strange combination that AFAICS does not correspond to any bit-wise CRC calculation, which makes it difficult to reason about its properties. Although it has worked well in practice, seems safer to use a well-known algorithm.

Since we're changing the algorithm anyway, we might as well choose a different polynomial. The Castagnoli polynomial has better error-correcting properties than the traditional CRC-32 polynomial, even if we had implemented it correctly. Another reason for picking that is that some new CPUs have hardware support for calculating CRC-32C, but not CRC-32, let alone our strange variant of it. This patch doesn't add any support for such hardware, but a future patch could now do that.

The old algorithm is kept around for tsquery and pg_trgm, which use the values in indexes that need to remain compatible so that pg_upgrade works. While we're at it, share the old lookup table for CRC-32 calculation between hstore, ltree and core. They all use the same table, so might as well.
2014-10-31  Support frontend-backend protocol communication using a shm_mq.  (Robert Haas)

A background worker can use pq_redirect_to_shm_mq() to direct protocol messages that would normally be sent to the frontend to a shm_mq instead, so that another process may read them. The receiving process may use pq_parse_errornotice() to parse an ErrorResponse or NoticeResponse from the background worker and, if it wishes, ThrowErrorData() to propagate the error (with or without further modification).

Patch by me. Review by Andres Freund.
2014-10-29  Avoid setup work for invalidation messages at start-of-(sub)xact.  (Robert Haas)

Instead of initializing a new TransInvalidationInfo for every transaction or subtransaction, we can just do it for those transactions or subtransactions that actually need to queue invalidation messages. That also avoids needing to free those entries at the end of a transaction or subtransaction that does not generate any invalidation messages, which is by far the common case.

Patch by me. Review by Simon Riggs and Andres Freund.
2014-10-27  Fix two bugs in tsquery @> operator.  (Heikki Linnakangas)

1. The comparison for matching terms used only the CRC to decide if there's a match. Two different terms with the same CRC gave a match.

2. It assumed that if the second operand has more terms than the first, it's never a match. That assumption is bogus, because there can be duplicate terms in either operand.

Rewrite the implementation in a way that doesn't have those bugs.

Backpatch to all supported versions.
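An instance of the second bug, if I read it correctly: the right-hand query has more terms only because of a duplicate, so containment should still hold:

    SELECT 'cat'::tsquery @> 'cat & cat'::tsquery;   -- expected: true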
2014-10-23  Add a function to get the authenticated user ID.  (Robert Haas)

Previously, this was not exposed outside of miscinit.c. It is needed for the pending pg_background patch, and will also be needed for parallelism. Without it, there's no way for a background worker to re-create the exact authentication environment that was present in the process that started it, which could lead to security exposures.
2014-10-22  Update comment.  (Heikki Linnakangas)

The _bt_tuplecompare() function mentioned in the comment hasn't existed for a long time.

Peter Geoghegan
2014-10-21  Allow input format xxxx-xxxx-xxxx for macaddr type  (Peter Eisentraut)

Author: Herwin Weststrate <herwin@quarantainenet.nl>
Reviewed-by: Ali Akbar <the.apaan@gmail.com>
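The newly accepted spelling (output shown in the default colon format):

    SELECT '0800-2b01-0203'::macaddr;
    -- 08:00:2b:01:02:03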
2014-10-20  Fix typos.  (Robert Haas)

Etsuro Fujita
2014-10-18  Allow setting effective_io_concurrency even on unsupported systems  (Peter Eisentraut)

This matches the behavior of other parameters that are unsupported on some systems (e.g., ssl). Also document the default value.
2014-10-16  Support timezone abbreviations that sometimes change.  (Tom Lane)

Up to now, PG has assumed that any given timezone abbreviation (such as "EDT") represents a constant GMT offset in the usage of any particular region; we had a way to configure what that offset was, but not for it to be changeable over time. But, as with most things horological, this view of the world is too simplistic: there are numerous regions that have at one time or another switched to a different GMT offset but kept using the same timezone abbreviation. Almost the entire Russian Federation did that a few years ago, and later this month they're going to do it again. And there are similar examples all over the world.

To cope with this, invent the notion of a "dynamic timezone abbreviation", which is one that is referenced to a particular underlying timezone (as defined in the IANA timezone database) and means whatever it currently means in that zone. For zones that use or have used daylight-savings time, the standard and DST abbreviations continue to have the property that you can specify standard or DST time and get that time offset whether or not DST was theoretically in effect at the time. However, the abbreviations mean what they meant at the time in question (or most recently before that time) rather than being absolutely fixed.

The standard abbreviation-list files have been changed to use this behavior for abbreviations that have actually varied in meaning since 1970. The old simple-numeric definitions are kept for abbreviations that have not changed, since they are a bit faster to resolve.

While this is clearly a new feature, it seems necessary to back-patch it into all active branches, because otherwise use of Russian zone abbreviations is going to become even more problematic than it already was. This change supersedes the changes in commit 513d06ded et al to modify the fixed meanings of the Russian abbreviations; since we've not shipped that yet, this will avoid an undesirably incompatible (not to mention incorrect) change in behavior for timestamps between 2011 and 2014.

This patch makes some cosmetic changes in ecpglib to keep its usage of datetime lookup tables as similar as possible to the backend code, but doesn't do anything about the increasingly obsolete set of timezone abbreviation definitions that are hard-wired into ecpglib. Whatever we do about that will likely not be appropriate material for back-patching. Also, a potential free() of a garbage pointer after an out-of-memory failure in ecpglib has been fixed.

This patch also fixes pre-existing bugs in DetermineTimeZoneOffset() that caused it to produce unexpected results near a timezone transition, if both the "before" and "after" states are marked as standard time. We'd only ever thought about or tested transitions between standard and DST time, but that's not what's happening when a zone simply redefines their base GMT offset.

In passing, update the SGML documentation to refer to the Olson/zoneinfo/zic timezone database as the "IANA" database, since it's now being maintained under the auspices of IANA.
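A sketch of the effect, assuming the Default abbreviation set ties MSK to Europe/Moscow (which dropped from UTC+4 to UTC+3 on 2014-10-26):

    SELECT '2014-10-25 12:00 MSK'::timestamptz AS before_change,
           '2014-10-27 12:00 MSK'::timestamptz AS after_change;
    -- the two inputs resolve to UTC+4 and UTC+3 respectively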
2014-10-13  Consistently use NULL for invalid GUC unit strings  (Bruce Momjian)

Patch by Euler Taveira
2014-10-11  Fix bogus optimization in JSONB containment tests.  (Tom Lane)

When determining whether one JSONB object contains another, it's okay to make a quick exit if the first object has fewer pairs than the second: because we de-duplicate keys within objects, it is impossible that the first object has all the keys the second does. However, the code was applying this rule to JSONB arrays as well, where it does *not* hold because arrays can contain duplicate entries. The test was really in the wrong place anyway; we should do it within JsonbDeepContains, where it can be applied to nested objects not only top-level ones.

Report and test cases by Alexander Korotkov; fix by Peter Geoghegan and Tom Lane.
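The array case the misapplied shortcut broke: the left side has fewer elements, yet containment holds because the right side merely repeats one:

    SELECT '[1, 3]'::jsonb @> '[1, 1, 3]'::jsonb;   -- true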
2014-10-08  Split builtins.h to a new header ruleutils.h  (Alvaro Herrera)

The new header contains many prototypes for functions in ruleutils.c that are not exposed to the SQL level.

Reviewed by Andres Freund and Michael Paquier.
2014-10-07  Implement SKIP LOCKED for row-level locks  (Alvaro Herrera)

This clause changes the behavior of SELECT locking clauses in the presence of locked rows: instead of causing a process to block waiting for the locks held by other processes (or raise an error, with NOWAIT), SKIP LOCKED makes the new reader skip over such rows. While this is not appropriate behavior for general purposes, there are some cases in which it is useful, such as queue-like tables.

Catalog version bumped because this patch changes the representation of stored rules.

Reviewed by Craig Ringer (based on a previous attempt at an implementation by Simon Riggs, who also provided input on the syntax used in the current patch), David Rowley, and Álvaro Herrera.

Author: Thomas Munro
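The queue-like-table pattern the commit alludes to, with hypothetical table and column names:

    -- each worker claims one pending job, skipping rows other
    -- workers already hold locked:
    SELECT id, payload
    FROM jobs
    WHERE status = 'pending'
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED;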
2014-10-04  Update time zone data files to tzdata release 2014h.  (Tom Lane)

Most zones in the Russian Federation are subtracting one or two hours as of 2014-10-26. Update the meanings of the abbreviations IRKT, KRAT, MAGT, MSK, NOVT, OMST, SAKT, VLAT, YAKT, YEKT to match.

The IANA timezone database has adopted abbreviations of the form AxST/AxDT for all Australian time zones, reflecting what they believe to be current majority practice Down Under. These names do not conflict with usage elsewhere (other than ACST for Acre Summer Time, which has been in disuse since 1994). Accordingly, adopt these names into our "Default" timezone abbreviation set. The "Australia" abbreviation set now contains only CST, EAST, EST, SAST, SAT, WST, all of which are thought to be mostly historical usage. Note that SAST has also been changed to be South Africa Standard Time in the "Default" abbreviation set.

Add zone abbreviations SRET (Asia/Srednekolymsk) and XJT (Asia/Urumqi), and use WSST/WSDT for western Samoa.

Also a DST law change in the Turks & Caicos Islands (America/Grand_Turk), and numerous corrections for historical time zone data.
2014-10-01  Remove num_xloginsert_locks GUC, replace with a #define  (Heikki Linnakangas)

I left the GUC in place for the beta period, so that people could experiment with different values. No-one's come up with any data that a different value would be better under some circumstances, so rather than try to document to users what the GUC does, let's just hard-code the current value, 8.
2014-09-29  Revert 95d737ff to add 'ignore_nulls'  (Stephen Frost)

Per discussion, revert the commit which added 'ignore_nulls' to row_to_json. This capability would be better added as an independent function rather than being bolted on to row_to_json. Additionally, the implementation didn't address complex JSON objects, and so was incomplete anyway.

Pointed out by Tom and discussed with Andrew and Robert.
2014-09-29  Change JSONB's on-disk format for improved performance.  (Tom Lane)

The original design used an array of offsets into the variable-length portion of a JSONB container. However, such an array is basically uncompressible by simple compression techniques such as TOAST's LZ compressor. That's bad enough, but because the offset array is at the front, it tended to trigger the give-up-after-1KB heuristic in the TOAST code, so that the entire JSONB object was stored uncompressed; which was the root cause of bug #11109 from Larry White.

To fix without losing the ability to extract a random array element in O(1) time, change this scheme so that most of the JEntry array elements hold lengths rather than offsets. With data that's compressible at all, there tend to be fewer distinct element lengths, so that there is scope for compression of the JEntry array. Every N'th entry is still an offset. To determine the length or offset of any specific element, we might have to examine up to N preceding JEntrys, but that's still O(1) so far as the total container size is concerned. Testing shows that this cost is negligible compared to other costs of accessing a JSONB field, and that the method does largely fix the incompressible-data problem.

While at it, rearrange the order of elements in a JSONB object so that it's "all the keys, then all the values" not alternating keys and values. This doesn't really make much difference right at the moment, but it will allow providing a fast path for extracting individual object fields from large JSONB values stored EXTERNAL (ie, uncompressed), analogously to the existing optimization for substring extraction from large EXTERNAL text values.

Bump catversion to denote the incompatibility in on-disk format. We will need to fix pg_upgrade to disallow upgrading jsonb data stored with 9.4 betas 1 and 2.

Heikki Linnakangas and Tom Lane
2014-09-26  Fix relcache for policies, and doc updates  (Stephen Frost)

Andres pointed out that there was an extra ';' in equalPolicies, which made me realize that my prior testing with CLOBBER_CACHE_ALWAYS was insufficient (it didn't always catch the issue, just most of the time). Thanks to that, a different issue was discovered, specifically in equalRSDescs. This change corrects equalRSDescs to return 'true' once all policies have been confirmed logically identical. After stepping through both functions to ensure correct behavior, I ran this for about 12 hours of CLOBBER_CACHE_ALWAYS runs of the regression tests with no failures.

In addition, correct a few typos in the documentation which were pointed out by Thom Brown (thanks!) and improve the policy documentation further by adding a fleshed-out usage example based on a unix passwd file.

Lastly, clean up a few comments in the regression tests and pg_dump.h.