Age | Commit message | Author |
|
Josh Kupershmidt
|
|
Andres Freund
|
|
Robins Tharakan, reviewed by Szymon Guz, substantially revised by me.
|
|
In pg_basebackup.c:reached_end_position(), we're reading from an
internal pipe with our own background process but we're possibly
reading more bytes than will actually fit into our buffer due to
an off-by-one error. As we're reading from an internal pipe
there's no real risk here, but it's good form to not depend on
such convenient arrangements.
Bug spotted by the Coverity scanner.
Back-patch to 9.2 where this showed up.
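For illustration, a minimal sketch of the corrected pattern; the buffer size and function name here are hypothetical, not the actual pg_basebackup code:

```c
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Illustrative only.  The background process writes a NUL-terminated WAL
 * position into the pipe; reserving one byte for the terminator avoids
 * writing past the end of the buffer.
 */
static int
read_end_position(int pipe_fd, char xlogend[64])
{
    /*
     * Buggy form: read(pipe_fd, xlogend, 64) followed by xlogend[r] = '\0'
     * writes one byte past the buffer when the pipe delivers 64 bytes.
     */
    ssize_t r = read(pipe_fd, xlogend, 64 - 1);     /* leave room for '\0' */

    if (r < 0)
    {
        perror("could not read from pipe");
        return -1;
    }
    xlogend[r] = '\0';
    return 0;
}
```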
|
|
In pg_dump.c:getEventTriggers, check what major version we are on
before calling createPQExpBuffer() to avoid leaking that bit of
memory.
Leak discovered by the Coverity scanner.
Back-patch to 9.3 where support for dumping event triggers was
added.
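A minimal sketch of the idea, with hypothetical names rather than the real pg_dump structures; the point is simply that the early return happens before anything is allocated:

```c
#include <stdlib.h>

struct archive
{
    int remote_version;     /* hypothetical stand-in for the server version */
};

static void
get_event_triggers(struct archive *fout)
{
    char *query;

    /* Event triggers only exist in 9.3 (90300) and later; bail out first. */
    if (fout->remote_version < 90300)
        return;

    query = malloc(1024);   /* the buggy code allocated this before the check */
    if (query == NULL)
        return;

    /* ... build and run the query, collect results ... */

    free(query);
}
```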
|
|
When creating the symlink for the xlog directory, free the string
which stores the link location. Not really an issue but it doesn't
hurt to be good about this; prior cleanups have fixed similar
issues.
Leak found by the Coverity scanner.
Not back-patching as I don't see it being worth the code churn.
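For illustration, a hedged sketch with made-up names showing the shape of the cleanup:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical names; only the free() of the link string is the point here. */
static int
link_xlog_dir(const char *basedir, const char *xlogdir)
{
    size_t  len = strlen(basedir) + strlen("/pg_xlog") + 1;
    char   *linkloc = malloc(len);

    if (linkloc == NULL)
        return -1;
    snprintf(linkloc, len, "%s/pg_xlog", basedir);

    if (symlink(xlogdir, linkloc) != 0)
    {
        perror("could not create symbolic link");
        free(linkloc);
        return -1;
    }

    free(linkloc);              /* previously left to leak */
    return 0;
}
```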
|
|
In receivelog.c:writeTimeLineHistoryFile(), we were not properly
closing the opened file descriptor in error cases. While this
wouldn't matter much if we were about to exit due to such an
error, that's not the case with pg_receivexlog as it can be a
long-running process and these errors are non-fatal.
This resource leak was found by the Coverity scanner.
Back-patch to 9.3 where this issue first appeared.
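A simplified sketch of the pattern, assuming hypothetical names; the important part is the close() on the error path:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static int
write_timeline_history_file(const char *path, const char *content, size_t size)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);

    if (fd < 0)
        return -1;

    if (write(fd, content, size) != (ssize_t) size)
    {
        fprintf(stderr, "could not write file \"%s\"\n", path);
        close(fd);              /* previously leaked on this error path */
        return -1;
    }

    if (close(fd) != 0)
        return -1;
    return 0;
}
```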
|
|
In tuplesort.c:inittapes(), we calculate tapeSpace by first figuring
out how many 'tapes' we can use (maxTapes) and then multiplying the
result by the tape buffer overhead for each. Unfortunately, when
we are on a system with an 8-byte long, we allow work_mem to be
larger than 2GB and that allows maxTapes to be large enough that the
32-bit arithmetic can overflow when multiplied against the buffer
overhead.
When this overflow happens, we end up adding the overflow to the
amount of space available, causing the amount of memory allocated to
be larger than work_mem.
Note that to reach this point, you have to set work_mem to at least
24GB and be sorting a set which is at least that size. Given that a
user who can set work_mem to 24GB could also set it even higher, if
they were looking to run the system out of memory, this isn't
considered a security issue.
This overflow risk was found by the Coverity scanner.
Back-patch to all supported branches, as this issue has existed
since before 8.4.
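To make the arithmetic concrete, a small standalone example with placeholder numbers (the real constants differ); it shows how widening one operand avoids the 32-bit overflow:

```c
#include <stdint.h>
#include <stdio.h>

#define TAPE_BUFFER_OVERHEAD (258 * 1024)   /* placeholder per-tape overhead */

int
main(void)
{
    int     maxTapes = 100000;              /* plausible once work_mem >= 24GB */

    /*
     * The product is evaluated in 32-bit int arithmetic and overflows
     * (formally undefined behavior; in practice it wraps), even though the
     * result is stored into a 64-bit variable.
     */
    int64_t wrong = maxTapes * TAPE_BUFFER_OVERHEAD;

    /* Widening one operand first makes the whole multiplication 64-bit. */
    int64_t right = (int64_t) maxTapes * TAPE_BUFFER_OVERHEAD;

    printf("wrong: %lld, right: %lld\n", (long long) wrong, (long long) right);
    return 0;
}
```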
|
|
In streamutil.c:GetConnection(), upgrade failure to parse the
connection string to an exit(1) instead of simply returning NULL.
Most callers already immediately exited, but pg_receivexlog would
loop on this case, continually trying to re-parse the connection
string (which can't be changed after pg_receivexlog has started).
GetConnection() was already expected to exit(1) in some cases
(eg: failure to allocate memory or if unable to determine the
integer_datetimes flag), so this change shouldn't surprise anyone.
Began looking at this due to the Coverity scanner complaining that
we were leaking err_msg in this case; no longer an issue since we
just exit(1) immediately.
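A simplified sketch of the new behavior using the public libpq parsing API; the names and structure are illustrative, not the real streamutil.c:

```c
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

static PGconn *
get_connection(const char *connstr)
{
    char               *err_msg = NULL;
    PQconninfoOption   *opts = PQconninfoParse(connstr, &err_msg);

    if (opts == NULL)
    {
        fprintf(stderr, "invalid connection string: %s\n",
                err_msg ? err_msg : "out of memory");
        exit(1);                /* previously: return NULL, retried forever */
    }

    PQconninfoFree(opts);
    /* ... in the real code, keyword/value arrays are built and
     * PQconnectdbParams() is used; PQconnectdb() keeps the sketch short ... */
    return PQconnectdb(connstr);
}
```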
|
|
The command strings read by the child processes during parallel
pg_dump, after being read and handled, were not being freed.
This patch corrects this relatively minor memory leak.
Leak found by the Coverity scanner.
Back-patch to 9.3 where parallel pg_dump was introduced.
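A hedged sketch of the worker loop; the helper functions are assumed placeholders, not the real parallel pg_dump API:

```c
#include <stdlib.h>
#include <string.h>

extern char *read_message_from_master(void);    /* assumed helper */
extern void  process_command(const char *cmd);  /* assumed helper */

static void
worker_loop(void)
{
    for (;;)
    {
        char *command = read_message_from_master();

        if (command == NULL)
            break;
        if (strcmp(command, "TERMINATE") == 0)
        {
            free(command);
            break;
        }
        process_command(command);
        free(command);          /* previously leaked on every iteration */
    }
}
```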
|
|
This is like shared_preload_libraries except that it takes effect at
backend start and can be changed without a full postmaster restart. It
is like local_preload_libraries except that it is still only settable by
a superuser. This can be a better way to load modules such as
auto_explain.
Since there are now three preload parameters, regroup the documentation
a bit. Put all parameters into one section, explain common
functionality only once, update the descriptions to reflect current and
future realities.
Reviewed-by: Dimitri Fontaine <dimitri@2ndQuadrant.fr>
|
|
This makes superuser-issued REFRESH MATERIALIZED VIEW safe regardless of
the object's provenance. REINDEX is an earlier example of this pattern.
As a downside, functions called from materialized views must tolerate
running in a security-restricted operation. CREATE MATERIALIZED VIEW
need not change user ID. Nonetheless, avoid creation of materialized
views that will invariably fail REFRESH by making it, too, start a
security-restricted operation.
Back-patch to 9.3 so materialized views have this from the beginning.
Reviewed by Kevin Grittner.
|
|
Itanium doesn't have the mfence instruction - that's an x86 thing. Use the
"mf" instruction instead.
This reverts the previous commit to add "#include <emmintrin.h>"; the
problem was not with a missing #include.
|
|
Hopefully this fixes the build failure on buildfarm member dugong.
|
|
path_encode's "closed" argument used to take three values: TRUE, FALSE,
or -1, while being of type bool. Replace that with a three-valued enum
for more clarity.
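A sketch of the pattern, with illustrative names; the delimiters follow PostgreSQL's path output syntax (parentheses for closed paths, brackets for open ones):

```c
/* Three-valued enum replacing a bool that was also being assigned -1. */
typedef enum
{
    PATH_NONE,                  /* caller did not specify (was -1)   */
    PATH_OPEN,                  /* open path             (was FALSE) */
    PATH_CLOSED                 /* closed path           (was TRUE)  */
} path_delim;

static void
path_delimiters(path_delim closed, char *left, char *right)
{
    switch (closed)
    {
        case PATH_CLOSED:       /* closed paths print as ((x,y),...) */
            *left = '(';
            *right = ')';
            break;
        case PATH_OPEN:         /* open paths print as [(x,y),...] */
            *left = '[';
            *right = ']';
            break;
        case PATH_NONE:
            *left = *right = '\0';
            break;
    }
}
```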
|
|
Was broken by my xloginsert scaling patch. XLogCtl global variable needs
to be initialized in each process, as it's not inherited by fork() on
Windows.
|
|
This patch replaces WALInsertLock with a number of WAL insertion slots,
allowing multiple backends to insert WAL records to the WAL buffers
concurrently. This is particularly useful for loading large amounts
of data in parallel on a system with many CPUs.
This has one user-visible change: switching to a new WAL segment with
pg_switch_xlog() now fills the remaining unused portion of the segment with
zeros. This potentially adds some overhead, but it has been a very common
practice by DBAs to clear the "tail" of the segment with an external
pg_clearxlogtail utility anyway, to make the WAL files compress better.
With this patch, it's no longer necessary to do that.
This patch adds a new GUC, xloginsert_slots, to tune the number of WAL
insertion slots. Performance testing suggests that the default, 8, works
pretty well for all kinds of workloads, but I left the GUC in place to allow
others with different hardware to test that easily. We might want to remove
that before release.
Reviewed by Andres Freund.
|
|
The code in set_append_rel_pathlist() for building parameterized paths
for append relations (inheritance and UNION ALL combinations) supposed
that the cheapest regular path for a child relation would still be cheapest
when reparameterized. Which might not be the case, particularly if the
added join conditions are expensive to compute, as in a recent example from
Jeff Janes. Fix it to compare child path costs *after* reparameterizing.
We can short-circuit that if the cheapest pre-existing path is already
parameterized correctly, which seems likely to be true often enough to be
worth checking for.
Back-patch to 9.2 where parameterized paths were introduced.
|
|
Use "MXID" as placeholder for -m option, instead of just "XID".
|
|
Looks like a cut/paste error in the original addition of the file.
Andres Freund
|
|
Avoid output formatting differences by printing str() instead of repr()
of the value.
|
|
On some platforms, posix_fallocate() is available but may still return
EINVAL if the underlying filesystem does not support it. So, in case
of an error, fall through to the alternate implementation that just
writes zeros.
Per buildfarm failure and analysis by Tom Lane.
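A simplified sketch of the fallback logic, assuming a configure-style HAVE_POSIX_FALLOCATE macro; it is not the actual xlog code:

```c
#define _XOPEN_SOURCE 600       /* for the posix_fallocate() declaration */
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static int
zero_fill_file(int fd, size_t size)
{
    char    zbuf[8192];
    size_t  written = 0;

#ifdef HAVE_POSIX_FALLOCATE     /* assumed configure-provided macro */
    if (posix_fallocate(fd, 0, (off_t) size) == 0)
        return 0;
    /* on any failure (e.g. EINVAL), fall through and write zeros */
#endif

    memset(zbuf, 0, sizeof(zbuf));
    while (written < size)
    {
        size_t  chunk = size - written;

        if (chunk > sizeof(zbuf))
            chunk = sizeof(zbuf);
        if (write(fd, zbuf, chunk) != (ssize_t) chunk)
            return -1;
        written += chunk;
    }
    return 0;
}
```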
|
|
Commit 31a891857a128828d47d93c63e041f3b69cbab70 added some tests in
plpgsql.sql that used a function rather unthinkingly named "foo()".
However, rangefuncs.sql has some much older tests that create a function
of that name, and since these test scripts run in parallel, there is a
chance of failures if the timing is just right. Use another name to
avoid that. Per buildfarm (failure seen today on "hamerkop", but
probably it's happened before and not been noticed).
|
|
The old implementation converted PostgreSQL numeric to Python float,
which was always considered a shortcoming. Now numeric is converted to
the Python Decimal object. Either the external cdecimal module or the
standard library decimal module is supported.
From: Szymon Guz <mabewlun@gmail.com>
From: Ronan Dunklau <rdunklau@gmail.com>
Reviewed-by: Steve Singer <steve@ssinger.info>
|
|
In all instances, the verbiage lagged the code. Back-patch to 9.3,
where materialized views were introduced.
|
|
This function is more efficient than actually writing out zeroes to
the new file, per microbenchmarks by Jon Nelson. Also, it may reduce
the likelihood of WAL file fragmentation.
Jon Nelson, with review by Andres Freund, Greg Smith and me.
|
|
This value, now pg_stat_all_tables.n_mod_since_analyze, was already
tracked and used by autovacuum, but not exposed to the user.
Mark Kirkwood, review by Laurenz Albe
|
|
statements.
|
|
Commit 263865a48973767ce8ed7b7788059a38a24a9f37 switched tuplesort.c and
tuplestore.c variables representing memory usage from type "long" to
type "Size". This was unnecessary; I thought doing so avoided overflow
scenarios on 64-bit Windows, but guc.c already limited work_mem so as to
prevent the overflow. It was also incomplete, not touching the logic
that assumed a signed data type. Change the affected variables to
"int64". This is perfect for 64-bit platforms, and it reduces the need
to contemplate platform-specific overflow scenarios. It also puts us
close to being able to support work_mem over 2 GiB on 64-bit Windows.
Per report from Andres Freund.
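For illustration, a minimal sketch (invented names) of why a signed 64-bit counter is the convenient choice here:

```c
#include <stdint.h>

/*
 * Signed 64-bit accounting keeps the "have we run out of memory?" test
 * working (it relies on the counter going negative) and does not depend on
 * the width of "long", which is only 4 bytes on 64-bit Windows.
 */
typedef struct
{
    int64_t availMem;           /* bytes still allowed; may dip below zero */
    int64_t allowedMem;         /* work_mem converted to bytes */
} sort_mem_accounting;

static void
use_mem(sort_mem_accounting *acct, int64_t nbytes)
{
    acct->availMem -= nbytes;
}

static int
lack_mem(const sort_mem_accounting *acct)
{
    /* An unsigned type here could never be negative, breaking this test. */
    return acct->availMem < 0;
}
```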
|
|
Michael Paquier
|
|
Comment: This code erroneously assumes '\.' on a line alone inside a
quoted CSV string terminates the \copy.
http://www.postgresql.org/message-id/E1TdNVQ-0001ju-GO@wrigleys.postgresql.org
|
|
In 9.3, there's no particular limit on the number of bgworkers;
instead, we just count up the number that are actually registered,
and use that to set MaxBackends. However, that approach causes
problems for Hot Standby, which needs both MaxBackends and the
size of the lock table to be the same on the standby as on the
master, yet it may not be desirable to run the same bgworkers in
both places. 9.3 handles that by failing to notice the problem,
which will probably work fine in nearly all cases anyway, but is
not theoretically sound.
A further problem with simply counting the number of registered
workers is that new workers can't be registered without a
postmaster restart. This is inconvenient for administrators,
since bouncing the postmaster causes an interruption of service.
Moreover, there are a number of applications for background
processes where, by necessity, the background process must be
started on the fly (e.g. parallel query). While this patch
doesn't actually make it possible to register new background
workers after startup time, it's a necessary prerequisite.
Patch by me. Review by Michael Paquier.
|
|
Bug introduced by commit 6697aa2bc25c83b88d6165340348a31328c35de6 and
reported by Robert Haas.
|
|
Treat a TOAST index just the same as a normal one and get the OID
of the TOAST index from pg_index rather than pg_class.reltoastidxid.
This change allows us to handle multiple TOAST indexes, which is
required infrastructure for the upcoming REINDEX CONCURRENTLY
feature.
Patch by Michael Paquier, reviewed by Andres Freund and me.
|
|
This reverts commit 263645305b8f14a3821e04dffa96fa7c1bc2ae86.
The buildfarm is sad.
|
|
The collate.linux.utf8 test covers some of the same territory, but
isn't portable and so probably does not get run often, or on
non-Linux platforms. If this approach turns out to be sufficiently
portable, we may want to look at trimming the redundant tests out
of that file to avoid duplication.
Robins Tharakan, reviewed by Michael Paquier and Fabien Coelho,
with further changes and cleanup by me.
|
|
An INSERT into such a view should work just like an INSERT into its base
table, ie the insertion should go directly into that table ... not be
duplicated into each child table, as was happening before, per bug #8275
from Rushabh Lathia. On the other hand, the current behavior for
UPDATE/DELETE seems reasonable: the update/delete traverses the child
tables, or not, depending on whether the view specifies ONLY or not.
Add some regression tests covering this area.
Dean Rasheed
|
|
In patch 82233ce7ea42, AbortStartTime wasn't being reset appropriately
after the restart sequence, causing subsequent iterations through
ServerLoop to malfunction.
|
|
Robins Tharakan, reviewed by Szymon Guz
|
|
Robins Tharakan, reviewed by Szymon Guz
|
|
Specifically, permit attaching them to the error in RAISE and retrieving
them from a caught error in GET STACKED DIAGNOSTICS. RAISE enforces
nothing about the content of the fields; for its purposes, they are just
additional string fields. Consequently, clarify in the protocol and
libpq documentation that the usual relationships between error fields,
like a schema name appearing wherever a table name appears, are not
universal. This freedom has other applications; consider an FDW
propagating an error from an RDBMS having no schema support.
Back-patch to 9.3, where core support for the error fields was
introduced. This prevents the confusion of having a release where libpq
exposes the fields and PL/pgSQL does not.
Pavel Stehule, lexical revisions by Noah Misch.
|
|
This mirrors the equivalent error cases in pg_dump.
|
|
To that end, support tags rather than lengths for external datums.
As an example of how this can be used, add support for "indirect"
tuples which point to some externally allocated memory containing
a toast tuple. Similar infrastructure could be used for other
purposes, including, perhaps, support for alternative compression
algorithms.
Andres Freund, reviewed by Hitoshi Harada and myself
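A conceptual sketch of tag-based external datums; the type names, fields, and layout are invented for illustration and are not the real varattrib definitions:

```c
#include <stdint.h>

/* With a tag instead of a bare length, one "external" representation can
 * describe different kinds of out-of-line storage. */
typedef enum
{
    EXTERNAL_ONDISK,            /* value lives in a TOAST table */
    EXTERNAL_INDIRECT           /* value lives in memory allocated elsewhere */
} external_tag;

typedef struct
{
    uint32_t    raw_size;       /* original (uncompressed) data size */
    uint32_t    saved_size;     /* stored size, possibly compressed */
    uint32_t    value_id;       /* chunk identifier in the TOAST table */
    uint32_t    toast_rel_id;   /* which TOAST table holds the chunks */
} external_ondisk;

typedef struct
{
    void       *pointer;        /* points at an in-memory toast tuple */
} external_indirect;

typedef struct
{
    external_tag tag;           /* dispatch on this instead of a length */
    union
    {
        external_ondisk   ondisk;
        external_indirect indirect;
    } data;
} external_datum;
```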
|