path: root/src/include/commands
2023-07-11  Report index vacuum progress.  (Masahiko Sawada)

This commit adds two columns, indexes_total and indexes_processed, to the pg_stat_progress_vacuum system view to show the index vacuum progress. These numbers are reported in the "vacuuming indexes" and "cleaning up indexes" phases. This uses the new parallel message type for progress reporting added by be06506e7.

Bump catversion because this changes the definition of pg_stat_progress_vacuum.

Author: Sami Imseih
Reviewed-by: Masahiko Sawada, Michael Paquier, Nathan Bossart, Andres Freund
Discussion: https://www.postgresql.org/message-id/flat/5478DFCD-2333-401A-B2F0-0D186AB09228@amazon.com

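While a VACUUM runs, the new columns can be watched from another session; a minimal sketch (any filtering on relid is up to the reader):

    -- watch index-vacuuming progress of running VACUUMs
    SELECT pid, relid::regclass, phase,
           indexes_total, indexes_processed
    FROM pg_stat_progress_vacuum;
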
2023-07-07  Revert MAINTAIN privilege and pg_maintain predefined role.  (Nathan Bossart)

This reverts the following commits: 4dbdb82513, c2122aae63, 5b1a879943, 9e1e9d6560, ff9618e82a, 60684dd834, 4441fc704d, and b5d6382496. A role with the MAINTAIN privilege may be able to use search_path tricks to escalate privileges to the table owner. Unfortunately, it is too late in the v16 development cycle to apply the proposed fix, i.e., restricting search_path when running maintenance commands.

Bumps catversion.

Reviewed-by: Jeff Davis
Discussion: https://postgr.es/m/E1q7j7Y-000z1H-Hr%40gemulon.postgresql.org
Backpatch-through: 16

2023-07-03  Take pg_attribute out of VacAttrStats  (Peter Eisentraut)

The VacAttrStats structure contained the whole Form_pg_attribute for a column, but it actually only needs attstattarget from there. So remove the Form_pg_attribute field and make a separate field for attstattarget.

This simplifies some code for extended statistics that doesn't deal with a column but an expression, which had to fake up pg_attribute rows to satisfy internal APIs. Also, we can remove some comments that essentially said "don't look at pg_attribute directly".

Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/d6069765-5971-04d3-c10d-e4f7b2e9c459%40eisentraut.org

2023-07-03  Add macro for maximum statistics target  (Peter Eisentraut)

The number of places where 10000 was hardcoded had grown a bit beyond the comfort level. Introduce a macro MAX_STATISTICS_TARGET instead.

Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/d6069765-5971-04d3-c10d-e4f7b2e9c459%40eisentraut.org

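At the SQL level, 10000 is the familiar ceiling this macro now names; a small illustration (table and column names hypothetical):

    -- 10000 is the maximum the server accepts for either knob
    ALTER TABLE measurements ALTER COLUMN reading SET STATISTICS 10000;
    SET default_statistics_target = 10000;
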
2023-06-22  Fix cache lookup hazards introduced by ff9618e82a.  (Nathan Bossart)

ff9618e82a introduced has_partition_ancestor_privs(), which is used to check whether a user has MAINTAIN on any partition ancestors. This involves syscache lookups, and presently this function does not take any relation locks, so it is likely subject to the same kind of cache lookup failures that were fixed by 19de0ab23c.

To fix this problem, this commit partially reverts ff9618e82a. Specifically, it removes the partition-related changes, including the has_partition_ancestor_privs() function mentioned above. This means that MAINTAIN on a partitioned table is no longer sufficient to perform maintenance commands on its partitions. This is more like how privileges for maintenance commands work on supported versions. Privileges are checked for each partition, so a command that flows down to all partitions might refuse to process them (e.g., if the current user doesn't have MAINTAIN on the partition).

In passing, adjust a few related comments and error messages, and add a test for the privilege checks for CLUSTER on a partitioned table.

Reviewed-by: Michael Paquier, Jeff Davis
Discussion: https://postgr.es/m/20230613211246.GA219055%40nathanxps13

2023-06-20  Move bool parameter for vacuum_rel() to option bits.  (Nathan Bossart)

ff9618e82a introduced the skip_privs parameter, which is used to skip privilege checks when recursing to a relation's TOAST table. This parameter should have been added as a flag bit in VacuumParams->options instead.

Suggested-by: Michael Paquier
Reviewed-by: Michael Paquier, Jeff Davis
Discussion: https://postgr.es/m/ZIj4v1CwqlDVJZfB%40paquier.xyz

2023-04-12Revert "Catalog NOT NULL constraints" and falloutAlvaro Herrera
This reverts commit e056c557aef4 and minor later fixes thereof. There's a few problems in this new feature -- most notably regarding pg_upgrade behavior, but others as well. This new feature is not in any way critical on its own, so instead of scrambling to fix it we revert it and try again in early 17 with these issues in mind. Discussion: https://postgr.es/m/3801207.1681057430@sss.pgh.pa.us
2023-04-07  Catalog NOT NULL constraints  (Alvaro Herrera)

We now create pg_constraint rows for NOT NULL constraints with contype='n'.

We propagate these constraints during operations such as adding inheritance relationships, creating and attaching partitions, and creating tables LIKE other tables. We mostly follow the well-known rules of conislocal and coninhcount that we have for CHECK constraints, with some adaptations; for example, as opposed to CHECK constraints, we don't match NOT NULL ones by name when descending a hierarchy to alter it; instead we match by column number. This means we don't require the constraint names to be identical across a hierarchy.

For now, we omit them from system catalogs. Maybe this is worth reconsidering. We don't support NOT VALID nor DEFERRABLE clauses either; these can be added as separate features later (this patch is already large and complicated enough.)

This has been very long in the making. The first patch was written by Bernd Helmle in 2010 to add a new pg_constraint.contype value ('n'), which I (Álvaro) then hijacked in 2011 and 2012, until that one was killed by the realization that we ought to use contype='c' instead: manufactured CHECK constraints. However, later SQL standard development, as well as nonobvious emergent properties of that design (mostly, failure to distinguish them from "normal" CHECK constraints as well as the performance implication of having to test the CHECK expression) led us to reconsider this choice, so now the current implementation uses contype='n' again. In 2016 Vitaly Burovoy also worked on this feature[1] but found no consensus for his proposed approach, which was claimed to be closer to the letter of the standard, requiring additional pg_attribute columns to track the OID of the NOT NULL constraint for that column.

[1] https://postgr.es/m/CAKOSWNkN6HSyatuys8xZxzRCR-KL1OkHS5-b9qd9bf1Rad3PLA@mail.gmail.com

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Author: Bernd Helmle <mailings@oopsware.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/CACA0E642A0267EDA387AF2B%40%5B172.26.14.62%5D
Discussion: https://postgr.es/m/AANLkTinLXMOEMz+0J29tf1POokKi4XDkWJ6-DDR9BKgU@mail.gmail.com
Discussion: https://postgr.es/m/20110707213401.GA27098@alvh.no-ip.org
Discussion: https://postgr.es/m/1343682669-sup-2532@alvh.no-ip.org
Discussion: https://postgr.es/m/CAKOSWNkN6HSyatuys8xZxzRCR-KL1OkHS5-b9qd9bf1Rad3PLA@mail.gmail.com
Discussion: https://postgr.es/m/20220817181249.q7qvj3okywctra3c@alvherre.pgsql

2023-04-07  Add --buffer-usage-limit option to vacuumdb  (David Rowley)

1cbbee033 added BUFFER_USAGE_LIMIT to the VACUUM and ANALYZE commands, so here we permit that option to be specified in vacuumdb.

In passing, adjust the documents for vacuum_buffer_usage_limit and the BUFFER_USAGE_LIMIT VACUUM option to mention "kB" rather than "KB". Do the same for the ERROR message in ExecVacuum() and check_vacuum_buffer_usage_limit(). Without that we might tell a user that the valid minimum value is 128 KB only to reject that because we accept only "kB" and not "KB".

Also, add a small reminder comment in vacuum.h to try to trigger the memory of anyone adding new fields to VacuumParams that they might want to consider if vacuumdb needs to grow a new option too.

Author: Melanie Plageman
Reviewed-by: Justin Pryzby
Reviewed-by: David Rowley
Discussion: https://postgr.es/m/ZAzTg3iEnubscvbf@telsasoft.com

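For reference, the SQL-level spelling that the new vacuumdb --buffer-usage-limit flag maps onto looks like this (table name hypothetical):

    -- cap the Buffer Access Strategy ring used by the command
    VACUUM (BUFFER_USAGE_LIMIT '512kB') orders;
    ANALYZE (BUFFER_USAGE_LIMIT '512kB') orders;
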
2023-04-07  Refresh cost-based delay params more frequently in autovacuum  (Daniel Gustafsson)

Allow autovacuum to reload the config file more often so that cost-based delay parameters can take effect while VACUUMing a relation. Previously, autovacuum workers only reloaded the config file once per relation vacuumed, so config changes could not take effect until beginning to vacuum the next table. Now, check if a reload is pending roughly once per block, when checking if we need to delay.

In order for autovacuum workers to safely update their own cost delay and cost limit parameters without impacting performance, we had to rethink when and how these values were accessed. Previously, an autovacuum worker's wi_cost_limit was set only at the beginning of vacuuming a table, after reloading the config file. Therefore, at the time that autovac_balance_cost() was called, workers vacuuming tables with no cost-related storage parameters could still have different values for their wi_cost_limit_base and wi_cost_delay.

Now that the cost parameters can be updated while vacuuming a table, workers will (within some margin of error) have no reason to have different values for cost limit and cost delay (in the absence of cost-related storage parameters). This removes the rationale for keeping cost limit and cost delay in shared memory. Balancing the cost limit requires only the number of active autovacuum workers vacuuming a table with no cost-based storage parameters.

Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAAKRu_ZngzqnEODc7LmS1NH04Kt6Y9huSjz5pp7%2BDXhrjDA0gw%40mail.gmail.com

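In practice this means a cost-delay change can now reach an autovacuum worker mid-table; a sketch of the administrator side:

    -- lower the delay while a long autovacuum is already running;
    -- workers now notice the change roughly once per block
    ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '2ms';
    SELECT pg_reload_conf();
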
2023-04-07  Separate vacuum cost variables from GUCs  (Daniel Gustafsson)

Vacuum code run by both autovacuum workers and backends doing VACUUM/ANALYZE previously inspected VacuumCostLimit and VacuumCostDelay, which are the global variables backing the GUCs vacuum_cost_limit and vacuum_cost_delay. Autovacuum workers needed to override these variables with their own values, derived from autovacuum_vacuum_cost_limit and autovacuum_vacuum_cost_delay and from worker cost limit balancing logic. This led to confusing code which, in some cases, both derived and set a new value of VacuumCostLimit from VacuumCostLimit.

In preparation for refreshing these GUC values more often, introduce new, independent global variables and add a function to update them using the GUCs and existing logic.

Per suggestion by Kyotaro Horiguchi

Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAAKRu_ZngzqnEODc7LmS1NH04Kt6Y9huSjz5pp7%2BDXhrjDA0gw%40mail.gmail.com

2023-04-07  Make vacuum failsafe_active globally visible  (Daniel Gustafsson)

While vacuuming a table in failsafe mode, VacuumCostActive should not be re-enabled. This currently isn't a problem because vacuum cost parameters are only refreshed in between vacuuming tables and failsafe status is reset for every table.

In preparation for allowing vacuum cost parameters to be updated more frequently, elevate LVRelState->failsafe_active to a global, VacuumFailsafeActive, which will be checked when determining whether or not to re-enable vacuum cost-related delays.

Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAAKRu_ZngzqnEODc7LmS1NH04Kt6Y9huSjz5pp7%2BDXhrjDA0gw%40mail.gmail.com

2023-04-06  Move various prechecks from vacuum() into ExecVacuum()  (David Rowley)

vacuum() is used for both the VACUUM command and for autovacuum. There were many prechecks being done inside vacuum() that were just not relevant to autovacuum. Let's move the bulk of these into ExecVacuum() so that they're only executed when running the VACUUM command. This removes a small amount of overhead when autovacuum vacuums a table.

While we are at it, allocate VACUUM's BufferAccessStrategy in ExecVacuum() and pass it into vacuum() instead of expecting vacuum() to make it if it's not already made by the calling function. To make this work, we need to create the vacuum memory context slightly earlier, so we now need to pass that down to vacuum() so that it's available for use in other memory allocations.

Author: Melanie Plageman
Reviewed-by: David Rowley
Discussion: https://postgr.es/m/20230405211534.4skgskbilnxqrmxg@awork3.anarazel.de

2023-04-04  Perform logical replication actions as the table owner.  (Robert Haas)

Up until now, logical replication actions have been performed as the subscription owner, who will generally be a superuser. Commit cec57b1a0fbcd3833086ba686897c5883e0a2afc documented hazards associated with that situation, namely, that any user who owns a table on the subscriber side could assume the privileges of the subscription owner by attaching a trigger, expression index, or some other kind of executable code to it. As a remedy, it suggested not creating configurations where users who are not fully trusted own tables on the subscriber.

Although that will work, it basically precludes using logical replication in the way that people typically want to use it, namely, to replicate a database from one node to another without necessarily having any restrictions on which database users can own tables. So, instead, change logical replication to execute INSERT, UPDATE, DELETE, and TRUNCATE operations as the table owner when they are replicated.

Since this involves switching the active user frequently within a session that is authenticated as the subscription user, also impose SECURITY_RESTRICTED_OPERATION restrictions on logical replication code. As an exception, if the table owner can SET ROLE to the subscription owner, these restrictions have no security value, so don't impose them in that case.

Subscription owners are now required to have the ability to SET ROLE to every role that owns a table that the subscription is replicating. If they don't, replication will fail. Superusers, who normally own subscriptions, satisfy this property by default. Non-superuser users who own subscriptions will need to be granted the roles that own relevant tables.

Patch by me, reviewed (but not necessarily in its entirety) by Jelte Fennema, Jeff Davis, and Noah Misch.

Discussion: http://postgr.es/m/CA+TgmoaSCkg9ww9oppPqqs+9RVqCexYCE6Aq=UsYPfnOoDeFkw@mail.gmail.com

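The practical upshot for a non-superuser subscription owner is a role grant per table owner; a hedged sketch with hypothetical role names:

    -- the subscription owner must be able to SET ROLE to each table owner;
    -- plain role membership satisfies that requirement
    GRANT table_owner_role TO subscription_owner;
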
2023-03-25  Fix CREATE INDEX progress reporting for multi-level partitioning.  (Tom Lane)

The "partitions_total" and "partitions_done" fields were updated as though the current level of partitioning was the only one. In multi-level cases, not only could partitions_total change over the course of the command, but partitions_done could go backwards or exceed the currently-reported partitions_total.

Fix by setting partitions_total to the total number of direct and indirect children once at command start, and then just incrementing partitions_done at appropriate points. Invent a new progress monitoring function "pgstat_progress_incr_param" to simplify doing the latter. We can avoid adding cost for the former when doing CREATE INDEX, because ProcessUtility already enumerates the children and it's pretty easy to pass the count down to DefineIndex. In principle the same could be done in ALTER TABLE, but that's structurally difficult; for now, just eat the cost of an extra find_all_inheritors scan in that case.

Ilya Gladyshev and Justin Pryzby

Discussion: https://postgr.es/m/a15f904a70924ffa4ca25c3c744cff31e0e6e143.camel@gmail.com

2023-03-24  Invent GENERIC_PLAN option for EXPLAIN.  (Tom Lane)

This provides a very simple way to see the generic plan for a parameterized query. Without this, it's necessary to define a prepared statement and temporarily change plan_cache_mode, which is a bit tedious.

One thing that's a bit of a hack perhaps is that we disable execution-time partition pruning when the GENERIC_PLAN option is given. That's because the pruning code may attempt to fetch the value of one of the parameters, which would fail.

Laurenz Albe, reviewed by Julien Rouhaud, Christoph Berg, Michel Pelletier, Jim Jones, and myself

Discussion: https://postgr.es/m/0a29b954b10b57f0d135fe12aa0909bd41883eb0.camel@cybertec.at

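With the option, a parameterized query can be explained directly, leaving the $n placeholders unbound (table and column names hypothetical):

    -- no PREPARE or plan_cache_mode fiddling required
    EXPLAIN (GENERIC_PLAN)
    SELECT * FROM orders WHERE customer_id = $1 AND status = $2;
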
2023-03-13  Add a DEFAULT option to COPY FROM  (Andrew Dunstan)

This adds support for a string which, if an input field matches it, causes the column's default value to be inserted. The advantage of this is that the default can be inserted in some rows and not others, for which non-default data is available.

The file_fdw extension is also modified to allow use of this option.

Israel Barth Rubio

Discussion: https://postgr.es/m/CAO_rXXAcqesk6DsvioOZ5zmeEmpUN5ktZf-9=9yu+DTr0Xr8Uw@mail.gmail.com

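A minimal sketch of the option, with a hypothetical table and '\D' as the chosen marker string:

    CREATE TABLE items (id int, created timestamptz DEFAULT now());
    COPY items FROM STDIN WITH (DEFAULT '\D');
    -- in the input stream, a field matching the marker takes the
    -- column default, other rows supply their own value:
    --   1	\D
    --   2	2023-03-01 00:00:00+00
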
2023-03-13  Fix concurrent update issues with MERGE.  (Dean Rasheed)

If MERGE attempts an UPDATE or DELETE on a table with BEFORE ROW triggers, or a cross-partition UPDATE (with or without triggers), and a concurrent UPDATE or DELETE happens, the merge code would fail. In some cases this would lead to a crash, while in others it would cause the wrong merge action to be executed, or no action at all.

The immediate cause of the crash was the trigger code calling ExecGetUpdateNewTuple() as part of the EPQ mechanism, which fails because during a merge ri_projectNew is NULL, since merge has its own per-action projection information, which ExecGetUpdateNewTuple() knows nothing about.

Fix by arranging for the trigger code to exit early, returning the TM_Result and TM_FailureData information, if a concurrent modification is detected, allowing the merge code to do the necessary EPQ handling in its own way. Similarly, prevent the cross-partition update code from doing any EPQ processing for a merge, allowing the merge code to work out what it needs to do.

This leads to a number of simplifications in nodeModifyTable.c. Most notably, the ModifyTableContext->GetUpdateNewTuple() callback is no longer needed, and mergeGetUpdateNewTuple() can be deleted, since there is no longer any requirement for get-update-new-tuple during a merge. Similarly, ModifyTableContext->cpUpdateRetrySlot is no longer needed. Thus ExecGetUpdateNewTuple() and the retry_slot handling of ExecCrossPartitionUpdate() can be restored to how they were in v14, before the merge code was added, and ExecMergeMatched() no longer needs any special-case handling for cross-partition updates.

While at it, tidy up ExecUpdateEpilogue() a bit, making it handle recheckIndexes locally, rather than passing it in as a parameter, ensuring that it is freed properly. This dates back to when it was split off from ExecUpdate() to support merge.

Per bug #17809 from Alexander Lakhin, and follow-up investigation of bug #17792, also from Alexander Lakhin.

Back-patch to v15, where MERGE was introduced, taking care to preserve backwards-compatibility of the trigger API in v15 for any extensions that might use it.

Discussion: https://postgr.es/m/17809-9e6650bef133f0fe%40postgresql.org
Discussion: https://postgr.es/m/17792-0f89452029662c36%40postgresql.org

2023-03-07  Make get_extension_schema() available  (Michael Paquier)

This routine is able to retrieve the OID of the schema used with an extension (pg_extension.extnamespace), or InvalidOid if this information is not available. plpgsql_check embeds a copy of this code when performing checks on functions, as one out-of-core example.

Author: Pavel Stehule
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/CAFj8pRD+9x55hjDoi285jCcjPc8uuY_D+FLn5RpXggdz+4O2sQ@mail.gmail.com

2023-03-06  Add PROCESS_MAIN to VACUUM  (Michael Paquier)

Disabling this option is useful to run VACUUM (with or without FULL) on only the toast table of a relation, bypassing the main relation. This option is enabled by default.

Running VACUUM directly on a toast table was already possible without this feature, by using the non-deterministic name of a toast relation (as of pg_toast.pg_toast_N, where N would be the OID of the parent relation) in the VACUUM command, and it required a scan of pg_class to know the name of the toast table. So this feature is basically a shortcut to be able to run VACUUM or VACUUM FULL on a toast relation, using only the name of the parent relation.

A new switch called --no-process-main is added to vacuumdb, to work as an equivalent of PROCESS_MAIN.

Regression tests are added to cover VACUUM and VACUUM FULL, looking at pg_stat_all_tables.vacuum_count to see how many vacuums have run on each table, main or toast.

Author: Nathan Bossart
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/20221230000028.GA435655@nathanxps13

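So a TOAST-only vacuum can now be spelled against the parent table (name hypothetical), with vacuumdb's --no-process-main as the command-line equivalent:

    -- skip the main relation, process only its TOAST table
    VACUUM (PROCESS_MAIN FALSE) orders;
    VACUUM (FULL, PROCESS_MAIN FALSE) orders;
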
2023-03-04  Avoid failure when altering state of partitioned foreign-key triggers.  (Tom Lane)

Beginning in v15, if you apply ALTER TABLE ENABLE/DISABLE TRIGGER to a partitioned table, it also affects the partitions' cloned versions of the affected trigger(s). The initial implementation of this located the clones by name, but that fails on foreign-key triggers which have names incorporating their own OIDs. We can fix that, and also make the behavior more bulletproof in the face of user-initiated trigger renames, by identifying the cloned triggers by tgparentid.

Following the lead of earlier commits in this area, I took care not to break ABI in the v15 branch, even though I rather doubt there are any external callers of EnableDisableTrigger. While here, update the documentation, which was not touched when the semantics were changed.

Per bug #17817 from Alan Hodgson. Back-patch to v15; older versions do not have this behavior.

Discussion: https://postgr.es/m/17817-31dfb7c2100d9f3d@postgresql.org

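For reference, the v15+ behavior being repaired here (names hypothetical): a state change on the parent also reaches the partitions' clones, which are now located by tgparentid rather than by name:

    -- clones on all partitions follow the parent's trigger state
    ALTER TABLE orders DISABLE TRIGGER order_audit;
    ALTER TABLE orders ENABLE TRIGGER order_audit;
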
2023-01-25Revert "Add eager and lazy freezing strategies to VACUUM."Peter Geoghegan
This reverts commit 4d417992613949af35530b4e8e83670c4e67e1b2. Broad concerns about regressions caused by eager freezing strategy have been raised. Whether or not these concerns can be worked through in any time frame is far from certain. Discussion: https://postgr.es/m/20230126004347.gepcmyenk2csxrri@awork3.anarazel.de
2023-01-25  Add eager and lazy freezing strategies to VACUUM.  (Peter Geoghegan)

Eager freezing strategy avoids large build-ups of all-visible pages. It makes VACUUM trigger page-level freezing whenever doing so will enable the page to become all-frozen in the visibility map. This is useful for tables that experience continual growth, particularly strict append-only tables such as pgbench's history table. Eager freezing significantly improves performance stability by spreading out the cost of freezing over time, rather than doing most freezing during aggressive VACUUMs. It complements the insert autovacuum mechanism added by commit b07642db.

VACUUM determines its freezing strategy based on the value of the new vacuum_freeze_strategy_threshold GUC (or reloption) with logged tables. Tables that exceed the size threshold use the eager freezing strategy. Unlogged tables and temp tables always use eager freezing strategy, since the added cost is negligible there. Non-permanent relations won't incur any extra overhead in WAL written (for the obvious reason), nor in pages dirtied (since any extra freezing will only take place on pages whose PD_ALL_VISIBLE bit needed to be set either way).

VACUUM uses lazy freezing strategy for logged tables that fall under the GUC size threshold. Page-level freezing triggers based on the criteria established in commit 1de58df4, which added basic page-level freezing.

Eager freezing is strictly more aggressive than lazy freezing. Settings like vacuum_freeze_min_age still get applied in just the same way in every VACUUM, independent of the strategy in use. The only mechanical difference between eager and lazy freezing strategies is that only the former applies its own additional criteria to trigger freezing pages. Note that even lazy freezing strategy will trigger freezing whenever a page happens to have required that an FPI be written during pruning, provided that the page will thereby become all-frozen in the visibility map afterwards (due to the FPI optimization from commit 1de58df4).

The vacuum_freeze_strategy_threshold default setting is 4GB. This is a relatively low setting that prioritizes performance stability. It will be reviewed at the end of the Postgres 16 beta period.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Jeff Davis <pgsql@j-davis.com>
Reviewed-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAH2-WzkFok_6EAHuK39GaW4FjEFQsY=3J0AAd6FXk93u-Xq3Fg@mail.gmail.com

2023-01-24  Adjust interaction of CREATEROLE with role properties.  (Robert Haas)

Previously, a CREATEROLE user without SUPERUSER could not alter REPLICATION users in any way, and could not set the BYPASSRLS attribute. However, they could manipulate the CREATEDB property even if they themselves did not possess it.

With this change, a CREATEROLE user without SUPERUSER can set or clear the REPLICATION, BYPASSRLS, or CREATEDB property on a new role or a role that they have rights to manage if and only if that property is set for their own role. This implements the standard idea that you can't give permissions you don't have (but you can give the ones you do have).

We might in the future want to provide more powerful ways to constrain what a CREATEROLE user can do - for example, to limit whether CONNECTION LIMIT can be set or the values to which it can be set - but that is left as future work.

Patch by me, reviewed by Nathan Bossart, Tushar Ahuja, and Neha Sharma.

Discussion: http://postgr.es/m/CA+TgmobX=LHg_J5aT=0pi9gJy=JdtrUVGAu0zhr-i5v5nNbJDg@mail.gmail.com

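An illustration of the rule with hypothetical roles, assuming 'manager' holds CREATEROLE and CREATEDB but neither REPLICATION nor BYPASSRLS:

    SET ROLE manager;
    CREATE ROLE app_owner CREATEDB;       -- allowed: manager has CREATEDB itself
    CREATE ROLE replicator REPLICATION;   -- rejected: manager lacks REPLICATION
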
2023-01-14  Fix MAINTAIN privileges for toast tables and partitions.  (Jeff Davis)

Commit 60684dd8 left loose ends when it came to maintaining toast tables or partitions. For toast tables, simply skip the privilege check if the toast table is an indirect target of the maintenance command, because the main table privileges have already been checked. For partitions, allow the maintenance command if the user has the MAINTAIN privilege on the partition or any parent.

Also make CLUSTER emit "skipping" messages when the user doesn't have privileges, similar to VACUUM.

Author: Nathan Bossart
Reported-by: Pavel Luzanov
Reviewed-by: Pavel Luzanov, Ted Yu
Discussion: https://postgr.es/m/20230113231339.GA2422750@nathanxps13

2023-01-10  Add new GUC createrole_self_grant.  (Robert Haas)

Can be set to the empty string, or to either or both of "set" or "inherit". If set to a non-empty value, a non-superuser who creates a role (necessarily by relying on the CREATEROLE privilege) will grant that role back to themselves with the specified options.

This isn't a security feature, because the grant that this feature triggers can also be performed explicitly. Instead, it's a user experience feature. A superuser would necessarily inherit the privileges of any created role and be able to access all such roles via SET ROLE; with this patch, you can configure createrole_self_grant = 'set, inherit' to provide a similar experience for a user who has CREATEROLE but not SUPERUSER.

Discussion: https://postgr.es/m/CA+TgmobN59ct+Emmz6ig1Nua2Q-_o=r6DSD98KfU53kctq_kQw@mail.gmail.com

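A short sketch of the intended usage (role name hypothetical):

    -- run as a role that has CREATEROLE but not SUPERUSER
    SET createrole_self_grant = 'set, inherit';
    CREATE ROLE app_owner;
    -- app_owner is now implicitly granted back to the creating role
    -- with both the SET and INHERIT options
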
2023-01-09  Perform apply of large transactions by parallel workers.  (Amit Kapila)

Currently, for large transactions, the publisher sends the data in multiple streams (changes divided into chunks depending upon logical_decoding_work_mem), and then on the subscriber-side, the apply worker writes the changes into temporary files and once it receives the commit, it reads from those files and applies the entire transaction. To improve the performance of such transactions, we can instead allow them to be applied via parallel workers.

In this approach, we assign a new parallel apply worker (if available) as soon as the xact's first stream is received and the leader apply worker will send changes to this new worker via shared memory. The parallel apply worker will directly apply the change instead of writing it to temporary files. However, if the leader apply worker times out while attempting to send a message to the parallel apply worker, it will switch to "partial serialize" mode - in this mode, the leader serializes all remaining changes to a file and notifies the parallel apply workers to read and apply them at the end of the transaction. We use a non-blocking way to send the messages from the leader apply worker to the parallel apply worker to avoid deadlocks. We keep this parallel apply worker assigned till the transaction commit is received and also wait for the worker to finish at commit. This preserves commit ordering and avoids writing to and reading from files in most cases. We still need to spill if there is no worker available.

This patch also extends the SUBSCRIPTION 'streaming' parameter so that the user can control whether to apply the streaming transaction in a parallel apply worker or spill the change to disk. The user can set the streaming parameter to 'on/off', or 'parallel'. The parameter value 'parallel' means the streaming will be applied via a parallel apply worker, if available. The parameter value 'on' means the streaming transaction will be spilled to disk. The default value is 'off' (same as current behaviour).

In addition, the patch extends the logical replication STREAM_ABORT message so that abort_lsn and abort_time can also be sent which can be used to update the replication origin in parallel apply worker when the streaming transaction is aborted. Because this message extension is needed to support parallel streaming, parallel streaming is not supported for publications on servers < PG16.

Author: Hou Zhijie, Wang wei, Amit Kapila with design inputs from Sawada Masahiko
Reviewed-by: Sawada Masahiko, Peter Smith, Dilip Kumar, Shi yu, Kuroda Hayato, Shveta Mallik
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com

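The subscriber-side knob, sketched with hypothetical connection details:

    CREATE SUBSCRIPTION big_txn_sub
        CONNECTION 'host=publisher dbname=appdb'
        PUBLICATION app_pub
        WITH (streaming = parallel);   -- 'off' and 'on' remain available
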
2023-01-06  Add options to control whether VACUUM runs vac_update_datfrozenxid.  (Tom Lane)

VACUUM normally ends by running vac_update_datfrozenxid(), which requires a scan of pg_class. Therefore, if one attempts to vacuum a database one table at a time --- as vacuumdb has done since v12 --- we will spend O(N^2) time in vac_update_datfrozenxid(). That causes serious performance problems in databases with tens of thousands of tables, and indeed the effect is measurable with only a few hundred. To add insult to injury, only one process can run vac_update_datfrozenxid at the same time per DB, so this behavior largely defeats vacuumdb's -j option.

Hence, invent options SKIP_DATABASE_STATS and ONLY_DATABASE_STATS to allow applications to postpone vac_update_datfrozenxid() until the end of a series of VACUUM requests, and teach vacuumdb to use them.

Per bug #17717 from Gunnar L. Sadly, this answer doesn't seem like something we'd consider back-patching, so the performance problem will remain in v12-v15.

Tom Lane and Nathan Bossart

Discussion: https://postgr.es/m/17717-6c50eb1c7d23a886@postgresql.org

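The intended call pattern, once per table and then once per database (table names hypothetical); vacuumdb now issues commands of this shape itself:

    VACUUM (SKIP_DATABASE_STATS) t1;
    VACUUM (SKIP_DATABASE_STATS) t2;
    -- ... remaining tables ...
    VACUUM (ONLY_DATABASE_STATS);   -- single vac_update_datfrozenxid() pass
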
2023-01-02  Update copyright for 2023  (Bruce Momjian)

Backpatch-through: 11

2022-12-22  Refactor how VACUUM passes around its XID cutoffs.  (Peter Geoghegan)

Use a dedicated struct for the XID/MXID cutoffs used by VACUUM, such as FreezeLimit and OldestXmin. This state is initialized in vacuum.c, and then passed around by code from vacuumlazy.c to heapam.c freezing related routines. The new convention is that everybody works off of the same cutoff state, which is passed around via pointers to const.

Also simplify some of the logic for dealing with frozen xmin in heap_prepare_freeze_tuple: add dedicated "xmin_already_frozen" state to clearly distinguish xmin XIDs that we're going to freeze from those that were already frozen from before. That way the routine's xmin handling code is symmetrical with the existing xmax handling code. This is preparation for an upcoming commit that will add page level freezing.

Also refactor the control flow within FreezeMultiXactId(), while adding stricter sanity checks. We now test OldestXmin directly, instead of using FreezeLimit as an inexact proxy for OldestXmin. This is further preparation for the page level freezing work, which will make the function's caller cede control of page level freezing to the function where appropriate (where heap_prepare_freeze_tuple sees a tuple that happens to contain a MultiXactId in its xmax).

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Jeff Davis <pgsql@j-davis.com>
Discussion: https://postgr.es/m/CAH2-WznS9TxXmz2_=SY+SyJyDFbiOftKofM9=aDo68BbXNBUMA@mail.gmail.com

2022-12-13  Add grantable MAINTAIN privilege and pg_maintain role.  (Jeff Davis)

Allows VACUUM, ANALYZE, REINDEX, REFRESH MATERIALIZED VIEW, CLUSTER, and LOCK TABLE.

Effectively reverts 4441fc704d. Instead of creating separate privileges for VACUUM, ANALYZE, and other maintenance commands, group them together under a single MAINTAIN privilege.

Author: Nathan Bossart
Discussion: https://postgr.es/m/20221212210136.GA449764@nathanxps13
Discussion: https://postgr.es/m/45224.1670476523@sss.pgh.pa.us

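As introduced here (and later reverted for v16 by the 2023-07-07 commit above), the privilege is grantable per relation, with pg_maintain as the grant-everything shortcut; names hypothetical:

    GRANT MAINTAIN ON TABLE orders TO maint_user;
    -- or, for all relations at once:
    GRANT pg_maintain TO maint_user;
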
2022-12-06  Rework query relation permission checking  (Alvaro Herrera)

Currently, information about the permissions to be checked on relations mentioned in a query is stored in their range table entries. So the executor must scan the entire range table looking for relations that need to have permissions checked. This can make the permission checking part of the executor initialization needlessly expensive when many inheritance children are present in the range table. While the permissions need not be checked on the individual child relations, the executor still must visit every range table entry to filter them out.

This commit moves the permission checking information out of the range table entries into a new plan node called RTEPermissionInfo. Every top-level (inheritance "root") RTE_RELATION entry in the range table gets one and a list of those is maintained alongside the range table. This new list is initialized by the parser when initializing the range table. The rewriter can add more entries to it as rules/views are expanded. Finally, the planner combines the lists of the individual subqueries into one flat list that is passed to the executor for checking.

To make it quick to find the RTEPermissionInfo entry belonging to a given relation, RangeTblEntry gets a new Index field 'perminfoindex' that stores the corresponding RTEPermissionInfo's index in the query's list of the latter.

ExecutorCheckPerms_hook has gained another List * argument; the signature is now:

    typedef bool (*ExecutorCheckPerms_hook_type) (List *rangeTable,
                                                  List *rtePermInfos,
                                                  bool ereport_on_violation);

The first argument is no longer used by any in-core uses of the hook, but we leave it in place because there may be other implementations that do. Implementations should likely scan the rtePermInfos list to determine which operations to allow or deny.

Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqGjJDmUhDSfv-U2qhKJjt9ST7Xh9JXC_irsAQ1TAUsJYg@mail.gmail.com

2022-11-28  Provide per-table permissions for vacuum and analyze.  (Andrew Dunstan)

Currently a table can only be vacuumed or analyzed by its owner or a superuser. This can now be extended to any user by means of an appropriate GRANT.

Nathan Bossart

Reviewed-by: Bharath Rupireddy, Kyotaro Horiguchi, Stephen Frost, Robert Haas, Mark Dilger, Tom Lane, Corey Huinker, David G. Johnston, Michael Paquier.
Discussion: https://postgr.es/m/20220722203735.GB3996698@nathanxps13

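The GRANT in question (names hypothetical); note these separate privileges were later folded into the single MAINTAIN privilege by the 2022-12-13 commit above:

    GRANT VACUUM, ANALYZE ON TABLE orders TO app_user;
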
2022-11-23  Simplify vacuum_set_xid_limits() signature.  (Peter Geoghegan)

Pass VACUUM parameters (VacuumParams state) to vacuum_set_xid_limits() directly, rather than passing most individual VacuumParams fields as separate arguments. Also make vacuum_set_xid_limits() output parameter symbol names match those used by its vacuumlazy.c caller.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wz=TE7gW5DgSahDkf0UEZigFGAoHNNN6EvSrdzC=Kn+hrA@mail.gmail.com

2022-11-04  Fix CREATE DATABASE so we can pg_upgrade DBs with OIDs above 2^31.  (Tom Lane)

Commit aa0105141 repeated one of the oldest mistakes in our book: thinking that OID is the same as int32. It isn't of course, and unsurprisingly the first person who came along with a database OID above 2 billion broke it. Repair.

Per bug #17677 from Sergey Pankov. Back-patch to v15.

Discussion: https://postgr.es/m/17677-a99fa067d7ed71c9@postgresql.org

2022-10-13  Allow batch insertion during COPY into a foreign table.  (Etsuro Fujita)

Commit 3d956d956 allowed the COPY, but it's done by inserting individual rows to the foreign table, so it can be inefficient due to the overhead caused by each round-trip to the foreign server. To improve performance of the COPY in such a case, this patch allows batch insertion, by extending the multi-insert machinery in CopyFrom() to the foreign-table case so that we insert multiple rows to the foreign table at once using the FDW callback routine added by commit b663a4136. This patch also allows this for postgres_fdw. It is enabled by the "batch_size" option added by commit b663a4136, which is disabled by default.

When doing batch insertion, we update progress of the COPY command after performing the FDW callback routine, to count rows not suppressed by the FDW as well as a BEFORE ROW INSERT trigger. For consistency, this patch changes the timing of updating it for plain tables: previously, we updated it immediately after adding each row to the multi-insert buffer, but we do so only after writing the rows stored in the buffer out to the table using table_multi_insert(), which I think would be consistent even with non-batching mode, because in that mode we update it after writing each row out to the table using table_tuple_insert().

Andrey Lepikhov, heavily revised by me, with review from Ian Barwick, Andrey Lepikhov, and Zhihong Yu.

Discussion: https://postgr.es/m/bc489202-9855-7550-d64c-ad2d83c24867%40postgrespro.ru

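With postgres_fdw, the batching is driven by the pre-existing batch_size option; a hedged sketch with hypothetical server, table, and file names:

    -- enable batched inserts for this foreign server, then COPY as usual
    ALTER SERVER remote_pg OPTIONS (ADD batch_size '100');
    COPY remote_orders FROM '/tmp/orders.csv' WITH (FORMAT csv);
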
2022-10-11  Add support for COPY TO callback functions  (Michael Paquier)

This is useful as a way for extensions to process COPY TO rows in the way they see fit (say auditing, analytics, backend, etc.) without the need to invoke an external process running as the OS user running the backend through PROGRAM that requires superuser rights. COPY FROM already provides a similar callback for logical replication.

For COPY TO, the callback is triggered when we are ready to send a row in CopySendEndOfRow(), which is the same code path as when sending a row to a frontend or a pipe/file. A small test module, test_copy_callbacks, is added to provide some coverage for this facility.

Author: Bilva Sanaba, Nathan Bossart
Discussion: https://postgr.es/m/253C21D1-FCEB-41D9-A2AF-E6517015B7D7@amazon.com

2022-09-20  Harmonize more parameter names in bulk.  (Peter Geoghegan)

Make sure that function declarations use names that exactly match the corresponding names from function definitions in optimizer, parser, utility, libpq, and "commands" code, as well as in remaining library code. Do the same for all code related to frontend programs (with the exception of pg_dump/pg_dumpall related code).

Like other recent commits that cleaned up function parameter names, this commit was written with help from clang-tidy. Later commits will handle ecpg and pg_dump/pg_dumpall.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/CAH2-WznJt9CMM9KJTMjJh_zbL5hD9oX44qdJ4aqZtjFi-zA3Tg@mail.gmail.com

2022-09-19  Harmonize parameter names in storage and AM code.  (Peter Geoghegan)

Make sure that function declarations use names that exactly match the corresponding names from function definitions in storage, catalog, access method, executor, and logical replication code, as well as in miscellaneous utility/library code.

Like other recent commits that cleaned up function parameter names, this commit was written with help from clang-tidy. Later commits will do the same for other parts of the codebase.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/CAH2-WznJt9CMM9KJTMjJh_zbL5hD9oX44qdJ4aqZtjFi-zA3Tg@mail.gmail.com

2022-09-19  Harmonize heapam and tableam parameter names.  (Peter Geoghegan)

Make sure that function declarations use names that exactly match the corresponding names from function definitions. Having parameter names that are reliably consistent in this way will make it easier to reason about groups of related C functions from the same translation unit as a module. It will also make certain refactoring tasks easier.

Like other recent commits that cleaned up function parameter names, this commit was written with help from clang-tidy. Later commits will do the same for other parts of the codebase.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/CAH2-WznJt9CMM9KJTMjJh_zbL5hD9oX44qdJ4aqZtjFi-zA3Tg@mail.gmail.com

2022-09-13  Split up guc.c for better build speed and ease of maintenance.  (Tom Lane)

guc.c has grown to be one of our largest .c files, making it a bottleneck for compilation. It's also acquired a bunch of knowledge that'd be better kept elsewhere, because of our not very good habit of putting variable-specific check hooks here. Hence, split it up along these lines:

* guc.c itself retains just the core GUC housekeeping mechanisms.

* New file guc_funcs.c contains the SET/SHOW interfaces and some SQL-accessible functions for GUC manipulation.

* New file guc_tables.c contains the data arrays that define the built-in GUC variables, along with some already-exported constant tables.

* GUC check/assign/show hook functions are moved to the variable's home module, whenever that's clearly identifiable. A few hard-to-classify hooks ended up in commands/variable.c, which was already a home for miscellaneous GUC hook functions.

To avoid cluttering a lot more header files with #include "guc.h", I also invented a new header file utils/guc_hooks.h and put all the GUC hook functions' declarations there, regardless of their originating module. That allowed removal of #include "guc.h" from some existing headers. The fallout from that (hopefully all caught here) demonstrates clearly why such inclusions are best minimized: there are a lot of files that, for example, were getting array.h at two or more levels of remove, despite not having any connection at all to GUCs in themselves.

There is some very minor code beautification here, such as renaming a couple of inconsistently-named hook functions and improving some comments. But mostly this just moves code from point A to point B and deals with the ensuing needs for #include adjustments and exporting a few functions that previously weren't exported.

Patch by me, per a suggestion from Andres Freund; thanks also to Michael Paquier for the idea to invent guc_funcs.c.

Discussion: https://postgr.es/m/587607.1662836699@sss.pgh.pa.us

2022-08-31  Derive freeze cutoff from nextXID, not OldestXmin.  (Peter Geoghegan)

Before now, the cutoffs that VACUUM used to determine which XIDs/MXIDs to freeze were determined at the start of each VACUUM by taking related cutoffs that represent which XIDs/MXIDs VACUUM should treat as still running, and subtracting an XID/MXID age based value controlled by GUCs like vacuum_freeze_min_age. The FreezeLimit cutoff (XID freeze cutoff) was derived by subtracting an XID age value from OldestXmin, while the MultiXactCutoff cutoff (MXID freeze cutoff) was derived by subtracting an MXID age value from OldestMxact. This approach didn't match the approach used nearby to determine whether this VACUUM operation should be an aggressive VACUUM or not.

VACUUM now uses the standard approach instead: it subtracts the same age-based values from next XID/next MXID (rather than subtracting from OldestXmin/OldestMxact). This approach is simpler and more uniform. Most of the time it will have only a negligible impact on how and when VACUUM freezes. It will occasionally make VACUUM more robust in the event of problems caused by long-running transactions. These are cases where OldestXmin and OldestMxact are held back by so much that they attain an age that is a significant fraction of the value of age-based settings like vacuum_freeze_min_age.

There is no principled reason why freezing should be affected in any way by the presence of a long-running transaction -- at least not before the point that the OldestXmin and OldestMxact limits used by each VACUUM operation attain an age that makes it unsafe to freeze some of the XIDs/MXIDs whose age exceeds the value of the relevant age-based settings. The new approach should at least make freezing degrade more gracefully than before, even in the most extreme cases.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-By: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAH2-WzkOv5CEeyOO=c91XnT5WBR_0gii0Wn5UbZhJ=4TTykDYg@mail.gmail.com

2022-08-25  Allow grant-level control of role inheritance behavior.  (Robert Haas)

The GRANT statement can now specify WITH INHERIT TRUE or WITH INHERIT FALSE to control whether the member inherits the granted role's permissions. For symmetry, you can now likewise write WITH ADMIN TRUE or WITH ADMIN FALSE to turn ADMIN OPTION on or off.

If a GRANT does not specify WITH INHERIT, the behavior is based on whether the member role is marked INHERIT or NOINHERIT. This means that if all roles are marked INHERIT or NOINHERIT before any role grants are performed, the behavior is identical to what we had before; otherwise, it's different, because ALTER ROLE [NO]INHERIT now only changes the default behavior of future grants, and has no effect on existing ones.

Patch by me. Reviewed and tested by Nathan Bossart and Tushar Ahuja, with design-level comments from various others.

Discussion: http://postgr.es/m/CA+Tgmoa5Sf4PiWrfxA=sGzDKg0Ojo3dADw=wAHOhR9dggV=RmQ@mail.gmail.com

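The new grant-level spellings, with hypothetical role names:

    GRANT admin_role TO joe WITH INHERIT FALSE;  -- joe must SET ROLE to use it
    GRANT admin_role TO ann WITH INHERIT TRUE;   -- ann inherits immediately
    GRANT admin_role TO bob WITH ADMIN TRUE;     -- like the old WITH ADMIN OPTION
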
2022-08-06  Fix comment in copyfrom_internal.h  (Michael Paquier)

COPY_NEW_FE became COPY_FRONTEND in 3174d69, which removed the frontend-backend protocol v2.

Author: Zhang Mingli
Discussion: https://postgr.es/m/aeec12a3-501e-449a-8d25-901c4ff706cf@Spark

2022-08-05  Fix ENABLE/DISABLE TRIGGER to handle recursion correctly  (Alvaro Herrera)

Using ATSimpleRecursion() in ATPrepCmd() to do so as bbb927b4db9b did is not correct, because ATPrepCmd() can't distinguish between triggers that may be cloned and those that may not, so would wrongly try to recurse for the latter category of triggers.

So this commit restores the code in EnableDisableTrigger() that 86f575948c77 had added to do the recursion, which would do it only for triggers that may be cloned, that is, row-level triggers. This also changes tablecmds.c such that ATExecCmd() is able to pass the value of the ONLY flag down to EnableDisableTrigger() using its new 'recurse' parameter.

This also fixes what seems like an oversight of 86f575948c77 that the recursion to partition triggers would only occur if EnableDisableTrigger() had actually changed the trigger. It is more apt to recurse to inspect partition triggers even if the parent's trigger didn't need to be changed: only then can we be certain that all descendants share the same state afterwards.

Backpatch all the way back to 11, like bbb927b4db9b. Care is taken not to break ABI compatibility (and that no catversion bump is needed.)

Co-authored-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Dmitry Koval <d.koval@postgrespro.ru>
Discussion: https://postgr.es/m/CA+HiwqG-cZT3XzGAnEgZQLoQbyfJApVwOTQaCaas1mhpf+4V5A@mail.gmail.com

2022-07-06  Change internal RelFileNode references to RelFileNumber or RelFileLocator.  (Robert Haas)

We have been using the term RelFileNode to refer to either (1) the integer that is used to name the sequence of files for a certain relation within the directory set aside for that tablespace/database combination; or (2) that value plus the OIDs of the tablespace and database; or occasionally (3) the whole series of files created for a relation based on those values. Using the same name for more than one thing is confusing.

Replace RelFileNode with RelFileNumber when we're talking about just the single number, i.e. (1) from above, and with RelFileLocator when we're talking about all the things that are needed to locate a relation's files on disk, i.e. (2) from above. In the places where we refer to (3) as a relfilenode, instead refer to "relation storage".

Since there is a ton of SQL code in the world that knows about pg_class.relfilenode, don't change the name of that column, or of other SQL-facing things that derive their name from it. On the other hand, do adjust closely-related internal terminology. For example, the structure member names dbNode and spcNode appear to be derived from the fact that the structure itself was called RelFileNode, so change those to dbOid and spcOid. Likewise, various variables with names like rnode and relnode get renamed appropriately, according to how they're being used in context.

Hopefully, this is clearer than before. It is also preparation for future patches that intend to widen the relfilenumber fields from its current width of 32 bits. Variables that store a relfilenumber are now declared as type RelFileNumber rather than type Oid; right now, these are the same, but that can now more easily be changed.

Dilip Kumar, per an idea from me. Reviewed also by Andres Freund. I fixed some whitespace issues, changed a couple of words in a comment, and made one other minor correction.

Discussion: http://postgr.es/m/CA+TgmoamOtXbVAQf9hWFzonUo6bhhjS6toZQd7HZ-pmojtAmag@mail.gmail.com
Discussion: http://postgr.es/m/CA+Tgmobp7+7kmi4gkq7Y+4AM9fTvL+O1oQ4-5gFTT+6Ng-dQ=g@mail.gmail.com
Discussion: http://postgr.es/m/CAFiTN-vTe79M8uDH1yprOU64MNFE+R3ODRuA+JWf27JbhY4hJw@mail.gmail.com

2022-07-06  auto_explain: Add GUC to log query parameters  (Michael Paquier)

auto_explain.log_parameter_max_length is a new GUC, part of the extension, similar to the corresponding core setting, that controls the inclusion of query parameters in the logged explain output.

More tests are added to check the behavior of this new parameter: when parameters are logged in full (the default of -1), when disabled (value of 0), and when partially truncated (a value different from the other two).

Author: Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/87ee09mohb.fsf@wibble.ilmari.org

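A sketch of the setting alongside auto_explain's existing knobs:

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;           -- log every plan
    SET auto_explain.log_parameter_max_length = 32;  -- truncate logged parameters
    -- -1 (the default) logs parameters in full; 0 suppresses them entirely
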
2022-05-12  Pre-beta mechanical code beautification.  (Tom Lane)

Run pgindent, pgperltidy, and reformat-dat-files. I manually fixed a couple of comments that pgindent uglified.

2022-04-13Remove "recheck" argument from check_index_is_clusterable()Michael Paquier
The last usage of this argument in this routine can be tracked down to 7e2f9062, aka 11 years ago. Getting rid of this argument can also be an advantage for extensions calling check_index_is_clusterable(), as it removes any need to worry about the meaning of what a recheck would be at this level. Author: Justin Pryzby Discussion: https://postgr.es/m/20220411140609.GF26620@telsasoft.com
2022-04-08  Apply PGDLLIMPORT markings broadly.  (Robert Haas)

Up until now, we've had a policy of only marking certain variables in the PostgreSQL header files with PGDLLIMPORT, but now we've decided to mark them all. This means that extensions running on Windows should no longer operate at a disadvantage as compared to extensions running on Linux: if the variable is present in a header file, it should be accessible.

Discussion: http://postgr.es/m/CA+TgmoYanc1_FSfimhgiWSqVyP5KKmh5NP2BWNwDhO8Pg2vGYQ@mail.gmail.com