path: root/src/test
Age | Commit message | Author
20 hours | Fix spelling mistake in fk-snapshot-3.spec (HEAD, origin/master, origin/HEAD, master) | David Rowley

Author: Aditya Gollamudi <adigollamudi@gmail.com>
Discussion: https://postgr.es/m/CAD-KL_EdOOWp_cmPk9%3D5vNxo%2BabTTRpNx4vex-gVUm8u3GnkTg%40mail.gmail.com
31 hours | Update copyright for 2026 | Bruce Momjian

Backpatch-through: 14
32 hours | Add paths of extensions to pg_available_extensions | Andrew Dunstan

Add a new "location" column to the pg_available_extensions and pg_available_extension_versions views, exposing the directory where the extension is located. The default system location is shown as '$system', the same value that can be used to configure the extension_control_path GUC. User-defined locations are only visible to superusers; otherwise '<insufficient privilege>' is returned as the column value, the same behaviour we already use in pg_stat_activity.

I failed to resist the temptation to do a little extra editorializing of the TAP test script.

Catalog version bumped.

Author: Matheus Alcantara <mths.dev@pm.me>
Reviewed-By: Chao Li <li.evan.chao@gmail.com>
Reviewed-By: Rohit Prasad <rohit.prasad@arm.com>
Reviewed-By: Michael Banck <mbanck@gmx.net>
Reviewed-By: Manni Wood <manni.wood@enterprisedb.com>
Reviewed-By: Euler Taveira <euler@eulerto.com>
Reviewed-By: Quan Zongliang <quanzongliang@yeah.net>
3 days | Change IndexAmRoutines to be statically-allocated structs. | Tom Lane

Up to now, index amhandlers were expected to produce a new, palloc'd struct on each call. That requires palloc/pfree overhead, and creates a risk of memory leaks if the caller fails to pfree, and the time taken to fill such a large structure isn't nil. Moreover, we were storing these things in the relcache, eating several hundred bytes for each cached index.

There is not anything in these structs that needs to vary at runtime, so let's change the definition so that an amhandler can return a pointer to a "static const" struct of which there's only one copy per index AM. Mark all the core code's IndexAmRoutine pointers const so that we catch anyplace that might still try to change or pfree one.

(This is similar to the way we were already handling TableAmRoutine structs. This commit does fix one comment that was infelicitously copied-and-pasted into tableamapi.c.)

This commit needs to be called out in the v19 release notes as an API change for extension index AMs. An un-updated AM will still work (as of now, anyway) but it risks memory leaks and will be slower than necessary.

Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAEoWx2=vApYk2LRu8R0DdahsPNEhWUxGBZ=rbZo1EXE=uA+opQ@mail.gmail.com
4 days | Add pg_get_multixact_stats() | Michael Paquier

This new function exposes at SQL level some information related to multixacts, not available until now. This data is useful for monitoring purposes, especially for workloads that make heavy use of multixacts:
- num_mxids, number of MultiXact IDs in use.
- num_members, number of member entries in use.
- members_size, bytes used by num_members in pg_multixact/members/.
- oldest_multixact, oldest MultiXact still needed.

This patch was originally proposed when MultiXactOffset was still 32 bits, to monitor wraparound. This part is not relevant anymore since bd8d9c9bdfa0, which widened MultiXactOffset to 64 bits. The monitoring of disk space usage for the members is still relevant.

Some tests are added to check this function, in the shape of one isolation test with concurrent transactions that take a ROW SHARE lock, and some SQL tests for pg_read_all_stats. Some documentation is added to explain some patterns that can come from the information provided by the function.

Bump catalog version.

Author: Naga Appani <nagnrik@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Discussion: https://postgr.es/m/CA+QeY+AAsYK6WvBW4qYzHz4bahHycDAY_q5ECmHkEV_eB9ckzg@mail.gmail.com
4 days | Ensure sanity of hash-join costing when there are no MCV statistics. | Tom Lane

estimate_hash_bucket_stats is defined to return zero to *mcv_freq if it cannot obtain a value for the frequency of the most common value. Its sole caller final_cost_hashjoin ignored this provision and would blindly believe the zero value, resulting in computing zero for the largest bucket size. In consequence, the safety check that intended to prevent the largest bucket from exceeding get_hash_memory_limit() was ineffective, allowing very silly plans to be chosen if statistics were missing.

After fixing final_cost_hashjoin to disregard zero results for mcv_freq, a second problem appeared: some cases that should use hash joins failed to. This is because estimate_hash_bucket_stats was unaware of the fact that ANALYZE won't store MCV statistics if it doesn't find any multiply-occurring values. Thus the lack of an MCV stats entry doesn't necessarily mean that we know nothing; we may well know that the column is unique. The former coding returned zero for *mcv_freq in this case, which was pretty close to correct, but now final_cost_hashjoin doesn't believe it and disables the hash join.

So check to see if there is a HISTOGRAM stats entry; if so, ANALYZE has in fact run for this column and must have found it to be unique. In that case report the MCV frequency as 1 / rows, instead of claiming ignorance.

Reporting a more accurate *mcv_freq in this case can also affect the bucket-size skew adjustment further down in estimate_hash_bucket_stats, causing hash-join cost estimates to change slightly. This affects some plan choices in the core regression tests. The first diff in join.out corresponds to a case where we have no stats and should not risk a hash join, but the remaining changes are caused by producing a better bucket-size estimate for unique join columns. Those are all harmless changes so far as I can tell.

The existing behavior was introduced in commit 4867d7f62 in v11. It appears from the commit log that disabling the bucket-size safety check in the absence of statistics was intentional; but we've now seen a case where the ensuing behavior is bad enough to make that seem like a poor decision. In any case the lack of other problems with that safety check after several years helps to justify enforcing it more strictly. However, we won't risk back-patching this, in case any applications are depending on the existing behavior.

Bug: #19363
Reported-by: Jinhui Lai <jinhui.lai@qq.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/2380165.1766871097@sss.pgh.pa.us
Discussion: https://postgr.es/m/19363-8dd32fc7600a1153@postgresql.org
5 days | Fix Mkvcbuild.pm builds of test_cloexec.c. | Thomas Munro

Mkvcbuild.pm scrapes Makefile contents, but couldn't understand the change made by commit bec2a0aa. Revealed by BF animal hamerkop in branch REL_16_STABLE.

1. It used += instead of =, which didn't match the pattern that Mkvcbuild.pm looks for. Drop the +.
2. Mkvcbuild.pm doesn't link PROGRAM executables with libpgport. Apply a local workaround to REL_16_STABLE only (later branches dropped Mkvcbuild.pm).

Backpatch-through: 16
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/175163.1766357334%40sss.pgh.pa.us
5 days | Ignore PlaceHolderVars when looking up statistics | Richard Guo

When looking up statistical data about an expression, we failed to look through PlaceHolderVar nodes, treating them as opaque. This could prevent us from matching an expression to base columns, index expressions, or extended statistics, as examine_variable() relies on strict structural matching. As a result, queries involving PlaceHolderVar nodes often fell back to default selectivity estimates, potentially leading to poor plan choices.

This patch updates examine_variable() to strip PlaceHolderVars before analysis. This is safe during estimation because PlaceHolderVars are transparent for the purpose of statistics lookup: they do not alter the value distribution of the underlying expression. To minimize performance overhead on this hot path, a lightweight walker first checks for the presence of PlaceHolderVars. The more expensive mutator is invoked only when necessary.

There is one ensuing plan change in the regression tests, which is expected and demonstrates the fix: the rowcount estimate becomes much more accurate with this patch.

Back-patch to v18. Although this issue exists before that, changes in this version made it common enough to notice. Given the lack of field reports for older versions, I am not back-patching further.

Reported-by: Haowu Ge <gehaowu@bitmoe.com>
Author: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/62af586c-c270-44f3-9c5e-02c81d537e3d.gehaowu@bitmoe.com
Backpatch-through: 18
5 days | Strip PlaceHolderVars from index operands | Richard Guo

When pulling up a subquery, we may need to wrap its targetlist items in PlaceHolderVars to enforce separate identity or as a result of outer joins. However, this causes any upper-level WHERE clauses referencing these outputs to contain PlaceHolderVars, which prevents indxpath.c from recognizing that they could be matched to index columns or index expressions, potentially affecting the planner's ability to use indexes.

To fix, explicitly strip PlaceHolderVars from index operands. A PlaceHolderVar appearing in a relation-scan-level expression is effectively a no-op. Nevertheless, to play it safe, we strip only PlaceHolderVars that are not marked nullable. The stripping is performed recursively to handle cases where PlaceHolderVars are nested or interleaved with other node types.

To minimize performance impact, we first use a lightweight walker to check for the presence of strippable PlaceHolderVars. The expensive mutator is invoked only if a candidate is found, avoiding unnecessary memory allocation and tree copying in the common case where no PlaceHolderVars are present.

Back-patch to v18. Although this issue exists before that, changes in this version made it common enough to notice. Given the lack of field reports for older versions, I am not back-patching further.

Reported-by: Haowu Ge <gehaowu@bitmoe.com>
Author: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/62af586c-c270-44f3-9c5e-02c81d537e3d.gehaowu@bitmoe.com
Backpatch-through: 18
6 days | Split some long Makefile lists | Michael Paquier

This change makes code diffs more readable when adding new items or removing old ones, while ensuring that lines do not get excessively long. Some SUBDIRS, PROGRAMS and REGRESS lists are split. Note that there are a few more REGRESS lists that could be split, particularly in contrib/.

Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Co-Authored-By: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Reviewed-by: Man Zeng <zengman@halodbtech.com>
Discussion: https://postgr.es/m/DF6HDGB559U5.3MPRFCWPONEAE@jeltef.nl
6 days | Fix incorrectly spelled city name | Daniel Gustafsson

The correct spelling is Beijing; fix in regression test and docs.

Author: JiaoShuntian <jiaoshuntian@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/ebfa3ec2-dc3c-4adb-be2a-4a882c2e85a7@gmail.com
9 days | Fix planner error with SRFs and grouping sets | Richard Guo

If there are any SRFs in a PathTarget, we must separate it into SRF-computing and SRF-free targets. This is because the executor can only handle SRFs that appear at the top level of the targetlist of a ProjectSet plan node. If we find a subexpression that matches an expression already computed in the previous plan level, we should treat it like a Var and should not split it again. setrefs.c will later replace the expression with a Var referencing the subplan output.

However, when processing the grouping target for grouping sets, the planner can fail to recognize that an expression is already computed in the scan/join phase. The root cause is a mismatch in the nullingrels bits. Expressions in the grouping target carry the grouping nulling bit in their nullingrels to indicate that they can be nulled by the grouping step. However, the corresponding expressions in the scan/join target do not have these bits. As a result, the exact match check in list_member() fails, leading the planner to incorrectly believe that the expression needs to be re-evaluated from its arguments, which are often not available in the subplan. This can lead to planner errors such as "variable not found in subplan target list".

To fix, ignore the grouping nulling bit when checking whether an expression from the grouping target is available in the pre-grouping input target. This aligns with the matching logic in setrefs.c.

Backpatch to v18, where this issue was introduced.

Bug: #19353
Reported-by: Marian MULLER REBEYROL <marian.muller@serli.com>
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/19353-aaa179bba986a19b@postgresql.org
Backpatch-through: 18
9 days | Fix regression test failure when wal_level is set to minimal. | Masahiko Sawada

Commit 67c209 removed the WARNING for insufficient wal_level from the expected output, but the WARNING may still appear on buildfarm members that run with wal_level=minimal. To avoid unstable test output depending on wal_level, this commit changes the test to use ALTER PUBLICATION for verifying the same behavior, ensuring the output remains consistent regardless of the wal_level setting.

Per buildfarm member thorntail.

Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Discussion: https://postgr.es/m/TY4PR01MB16907680E27BAB146C8EB1A4294B2A@TY4PR01MB16907.jpnprd01.prod.outlook.com
10 days | Teach expr_is_nonnullable() to handle more expression types | Richard Guo

Currently, the function expr_is_nonnullable() checks only Const and Var expressions to determine if an expression is non-nullable. This patch extends the detection logic to handle more expression types. This can enable several downstream optimizations, such as reducing NullTest quals to constant truth values (e.g., "COALESCE(var, 1) IS NULL" becomes FALSE) and converting "COUNT(expr)" to the more efficient "COUNT(*)" when the expression is proven non-nullable.

This breaks a test case in test_predtest.sql, since we now simplify "ARRAY[] IS NULL" to constant FALSE, preventing it from weakly refuting a strict ScalarArrayOpExpr ("x = any(ARRAY[])"). To ensure the refutation logic is still exercised as intended, wrap the array argument in opaque_array().

Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/CAMbWs49UhPBjm+NRpxerjaeuFKyUZJ_AjM3NBcSYK2JgZ6VTEQ@mail.gmail.com
10 days | Simplify COALESCE expressions using non-nullable arguments | Richard Guo

The COALESCE function returns the first of its arguments that is not null. When an argument is proven non-null, if it is the first argument, the entire COALESCE expression can be replaced by that argument. If it is a subsequent argument, all following arguments can be dropped, since they will never be reached. Currently, we perform this simplification only for Const arguments. This patch extends the simplification to support any expression that can be proven non-nullable.

This can help avoid the overhead of evaluating unreachable arguments. It can also lead to better plans when the first argument is proven non-nullable and replaces the expression, as the planner no longer has to treat the expression as non-strict, and can also leverage index scans on the resulting expression.

There is an ensuing plan change in generated_virtual.out, and we have to modify the test to ensure that it continues to test what it is intended to.

Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/CAMbWs49UhPBjm+NRpxerjaeuFKyUZJ_AjM3NBcSYK2JgZ6VTEQ@mail.gmail.com
10 days | Don't advance origin during apply failure. | Amit Kapila

The logical replication parallel apply worker could incorrectly advance the origin progress during an error or failed apply. This behavior risks transaction loss because such transactions will not be resent by the server.

Commit 3f28b2fcac addressed a similar issue for both the apply worker and the table sync worker by registering a before_shmem_exit callback to reset origin information. This prevents the worker from advancing the origin during transaction abortion on shutdown. This patch applies the same fix to the parallel apply worker, ensuring consistent behavior across all worker types.

As with 3f28b2fcac, we are backpatching through version 16, since parallel apply mode was introduced there and the issue only occurs when changes are applied before the transaction end record (COMMIT or ABORT) is received.

Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 16
Discussion: https://postgr.es/m/TY4PR01MB169078771FB31B395AB496A6B94B4A@TY4PR01MB16907.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/TYAPR01MB5692FAC23BE40C69DA8ED4AFF5B92@TYAPR01MB5692.jpnprd01.prod.outlook.com
10 days | Toggle logical decoding dynamically based on logical slot presence. | Masahiko Sawada

Previously logical decoding required wal_level to be set to 'logical' at server start. This meant that users had to incur the overhead of logical-level WAL logging even when no logical replication slots were in use.

This commit adds functionality to automatically control logical decoding availability based on logical replication slot presence. The newly introduced module logicalctl.c allows logical decoding to be dynamically activated when needed while wal_level is set to 'replica'. When the first logical replication slot is created, the system automatically increases the effective WAL level to maintain logical-level WAL records. Conversely, after the last logical slot is dropped or invalidated, it decreases back to 'replica' WAL level.

While activation occurs synchronously right after creating the first logical slot, deactivation happens asynchronously through the checkpointer process. This design avoids a race condition at the end of recovery; a concurrent deactivation could happen while the startup process enables logical decoding at the end of recovery, but WAL writes are still not permitted until recovery fully completes. The checkpointer will handle it after recovery is done. Asynchronous deactivation also avoids excessive toggling of the logical decoding status in workloads that repeatedly create and drop a single logical slot. On the other hand, this lazy approach can delay changes to effective_wal_level and the disabling of logical decoding, especially when the checkpointer is busy with other tasks. We chose this lazy approach in all deactivation paths to keep the implementation simple, even though laziness is strictly required only for end-of-recovery cases. Future work might address this limitation either by using a dedicated worker instead of the checkpointer, or by implementing synchronous waiting during slot drops if workloads are significantly affected by the lazy deactivation of logical decoding.

The effective WAL level, determined internally by XLogLogicalInfo, is allowed to change within a transaction until an XID is assigned. Once an XID is assigned, the value becomes fixed for the remainder of the transaction. This behavior ensures that the logging mode remains consistent within a writing transaction, similar to the behavior of GUC parameters.

A new read-only GUC parameter effective_wal_level is introduced to monitor the actual WAL level in effect. This parameter reflects the current operational WAL level, which may differ from the configured wal_level setting.

Bump PG_CONTROL_VERSION as it adds a new field to the CheckPoint struct.

Reviewed-by: Shveta Malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/CAD21AoCVLeLYq09pQPaWs+Jwdni5FuJ8v2jgq-u9_uFbcp6UbA@mail.gmail.com
11 days | Fix bug in following update chain when locking a heap tuple | Heikki Linnakangas

After waiting for a concurrent updater to finish, heap_lock_tuple() followed the update chain to lock all tuple versions. However, when stepping from the initial tuple to the next one, it failed to check that the next tuple's XMIN matches the initial tuple's XMAX. That's an important check whenever following an update chain, and the recursive part that follows the chain did it, but the initial step missed it. Without the check, if the updating transaction aborts and the updated tuple is vacuumed away and replaced by an unrelated tuple, the unrelated tuple might get incorrectly locked.

Author: Jasper Smit <jasper.smit@servicenow.com>
Discussion: https://www.postgresql.org/message-id/CAOG+RQ74x0q=kgBBQ=mezuvOeZBfSxM1qu_o0V28bwDz3dHxLw@mail.gmail.com
Backpatch-through: 14
11 days | Fix orphaned origin in shared memory after DROP SUBSCRIPTION | Michael Paquier

Since ce0fdbfe9722, a replication slot and an origin are created by each tablesync worker, whose information is stored in both a catalog and shared memory (once the origin is set up in the latter case). The transaction where the origin is created is the same as the one that runs the initial COPY, with the catalog state of the origin becoming visible for other sessions only once the COPY transaction has committed. The catalog state is coupled with a state in shared memory, initialized at the same time as the origin created in the catalogs. Note that the transaction doing the initial data sync can take a long time, time that depends on the amount of data to transfer from a publication node to its subscriber node.

Now, when a DROP SUBSCRIPTION is executed, all its workers are stopped with the origins removed. The removal of each origin relies on a catalog lookup. A worker still running the initial COPY would fail its transaction, with the catalog state of the origin rolled back while the shared memory state remains around. The session running the DROP SUBSCRIPTION should be in charge of cleaning up the catalog and the shared memory state, but as there is no data in the catalogs the shared memory state is not removed.

This issue would leave orphaned origin data in shared memory, leading to a confusing state as it would still show up in pg_replication_origin_status. Note that this shared memory data is sticky, being flushed on disk in replorigin_checkpoint at checkpoint. This prevents other origins from reusing a slot position in the shared memory data.

To address this problem, the commit moves the creation of the origin at the end of the transaction that precedes the one executing the initial COPY, making the origin immediately visible in the catalogs for other sessions, giving DROP SUBSCRIPTION a way to know about it. A different solution would have been to clean up the shared memory state using an abort callback within the tablesync worker. The solution of this commit is more consistent with the apply worker that creates an origin in a short transaction.

A test is added in the subscription test 004_sync.pl, which was able to display the problem. The test fails when this commit is reverted.

Reported-by: Tenglong Gu <brucegu@amazon.com>
Reported-by: Daisuke Higuchi <higudai@amazon.com>
Analyzed-by: Michael Paquier <michael@paquier.xyz>
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/aUTekQTg4OYnw-Co@paquier.xyz
Backpatch-through: 14
11 days | Add missing .gitignore for src/test/modules/test_cloexec. | Tom Lane
13 days | Clean up test_cloexec.c and Makefile. | Thomas Munro

An unused variable caused a compiler warning on BF animal fairywren, an snprintf() call was redundant, and some buffer sizes were inconsistent. Per code review from Tom Lane.

The Makefile's test ifeq ($(PORTNAME), win32) never succeeded due to a circularity, so only Meson builds were actually compiling the new test code, partially explaining why CI didn't tell us about the warning sooner (the other problem being that CompilerWarnings only makes world-bin, a problem for another commit). Simplify.

Backpatch-through: 16, like commit c507ba55
Author: Bryan Green <dbryan.green@gmail.com>
Co-authored-by: Thomas Munro <tmunro@gmail.com>
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1086088.1765593851%40sss.pgh.pa.us
2025-12-18 | Fix intermittent BF failure in 040_standby_failover_slots_sync. | Amit Kapila

Commit 0d2d4a0ec3 introduced a test that verifies replication slot synchronization to a standby server via SQL API. However, the test did not configure synchronized_standby_slots. Without this setting, logical failover slots can advance beyond the physical replication slot, causing intermittent synchronization failures.

Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Discussion: https://postgr.es/m/TY4PR01MB16907DF70205308BE918E0D4494ABA@TY4PR01MB16907.jpnprd01.prod.outlook.com
2025-12-18 | Fix const correctness in pgstat data serialization callbacks | Michael Paquier

4ba012a8ed9c defined the "header" (a pointer to the stats data) of from_serialized_data() as const, even though it is fine (and expected!) for the callback to modify the shared memory entry when loading the stats at startup.

While on it, this commit updates the callback to_serialized_data() in the test module test_custom_stats to make the data extracted from the "header" parameter a const, since it should never be modified: the stats are written to disk and no modifications are expected in the shared memory entry.

This clarifies the API contract of these new callbacks.

Reported-By: Peter Eisentraut <peter@eisentraut.org>
Author: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/d87a93b0-19c7-4db6-b9c0-d6827e7b2da1@eisentraut.org
2025-12-17 | oauth_validator: Avoid races in log_check() | Jacob Champion

Commit e0f373ee4 fixed up races in Cluster::connect_fails when using log_like. t/002_client.pl didn't get the memo, though, because it doesn't use Test::Cluster to perform its custom hook tests. (This is probably not an issue at the moment, since the log check is only done after authentication success and not failure, but there's no reason to wait for someone to hit it.)

Introduce the fix, based on debug2 logging, to its use of log_check() as well, and move the logic into the test() helper so that any additions don't need to continually duplicate it.

Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAOYmi%2BmrGg%2Bn_X2MOLgeWcj3v_M00gR8uz_D7mM8z%3DdX1JYVbg%40mail.gmail.com
Backpatch-through: 18
2025-12-17 | Rename regress.so's .mo file to postgresql-regress-VERSION.mo. | Tom Lane

I originally used just "regress-VERSION.mo", but that seems too generic considering that some packagers will put this file into a system-wide directory. Per suggestion from Christoph Berg.

Reported-by: Christoph Berg <myon@debian.org>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aULSW7Xqx5MqDW_1@msg.df7cb.de
2025-12-17 | Make postmaster 003_start_stop.pl test less flaky | Heikki Linnakangas

The test is very sensitive to how backends start and exit, because it tests dead-end backends, which occur when all the connection slots are in use. The test failed occasionally in the CI, when the backend that was launched for the raw_connect_works() check lingered for a while, and exited only later during the test. When it exited, it released a connection slot, when the test expected all the slots to be in use at that time.

The 002_connection_limits.pl test had a similar issue: if the backend launched for safe_psql() in the test initialization lingers around, it uses up a connection slot during the test, messing up the test's connection counting. I haven't seen that in the CI, but when I added a "sleep(1);" to proc_exit(), the test failed.

To make the tests more robust, restart the server to ensure that lingering backends don't interfere with the later test steps. In passing, fix a bogus test name.

Report and analysis by Jelte Fennema-Nio, Andres Freund, Thomas Munro.

Discussion: https://www.postgresql.org/message-id/CAGECzQSU2iGuocuP+fmu89hmBmR3tb-TNyYKjCcL2M_zTCkAFw@mail.gmail.com
Backpatch-through: 18
2025-12-16 | Test PRI* macros even when we can't test NLS translation. | Tom Lane

Further research shows that the reason commit 7db6809ce failed is that recent glibc versions short-circuit translation attempts when LC_MESSAGES is 'C.<encoding>', not only when it's 'C'. There seems no way around that, so we'll have to live with only testing NLS when a suitable real locale is installed.

However, something can still be salvaged: it still seems like a good idea to verify that the PRI* macros work as expected even when we can't check their translations (see f8715ec86 for motivation). Hence, adjust the test to always run the ereport calls, and tweak the parameter values in hopes of detecting any cases where there's confusion about the actual widths of the parameters.

Discussion: https://postgr.es/m/1991599.1765818338@sss.pgh.pa.us
2025-12-16 | Add TAP test to check recovery when redo LSN is missing | Michael Paquier

This commit provides test coverage for dc7c77f825d7, where the redo record and the checkpoint record finish on different WAL segments, with the start of recovery able to detect that the redo record is missing.

This test uses a wait injection point in the critical section of a checkpoint, a method that requires not one but two wait injection points to avoid any memory allocations within the critical section of the checkpoint:
- Checkpoint run with a background psql.
- A first wait point is run by the checkpointer before the critical section, allocating the shared memory required by the DSM registry for the wait machinery in the library injection_points.
- First point is woken up.
- Second wait point is loaded before the critical section, allocating the memory to build the path to the library loaded, then run in the critical section once the checkpoint redo record has been logged.
- WAL segment is switched while waiting on the second point.
- Checkpoint completes.
- Stop cluster with immediate mode.
- The segment that includes the redo record is removed.
- Start; recovery fails as the redo record cannot be found.

The error message introduced in dc7c77f825d7 is now reduced to a FATAL, meaning that the information is still provided while being able to use a test for it.

Nitin has provided a basic version of the test, which I have enhanced to make it portable with two points. Without dc7c77f825d7, the cluster crashes in this test, not on a PANIC but due to the pointer dereference at the beginning of recovery, a failure mentioned in the other commit.

Author: Nitin Jadhav <nitinjadhavpostgres@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAMm1aWaaJi2w49c0RiaDBfhdCL6ztbr9m=daGqiOuVdizYWYaA@mail.gmail.com
2025-12-15  Revert "Avoid requiring Spanish locale to test NLS infrastructure."  (Tom Lane)
This reverts commit 7db6809ced4406257a80766e4109c8be8e1ea744. That doesn't seem to work with recent (last couple of years) glibc, and the reasons are obscure. I can't let the farm stay this broken for long.
2025-12-15  Allow passing a pointer to GetNamedDSMSegment()'s init callback.  (Nathan Bossart)
This commit adds a new "void *arg" parameter to GetNamedDSMSegment() that is passed to the initialization callback function. This is useful for reusing an initialization callback function for multiple DSM segments. Author: Zsolt Parragi <zsolt.parragi@percona.com> Reviewed-by: Sami Imseih <samimseih@gmail.com> Discussion: https://postgr.es/m/CAN4CZFMjh8TrT9ZhWgjVTzBDkYZi2a84BnZ8bM%2BfLPuq7Cirzg%40mail.gmail.com
2025-12-15  Avoid requiring Spanish locale to test NLS infrastructure.  (Tom Lane)
I had supposed that the majority of machines with gettext installed would have most language locales installed, but at least in the buildfarm it turns out less than half have es_ES installed. So depending on that to run the test now seems like a bad idea. But it turns out that gettext can be persuaded to "translate" even in the C locale, as long as you fake out its short-circuit logic by spelling the locale name like "C.UTF-8" or similar. (Many thanks to Bryan Green for correcting my misconceptions about that.) Quick testing suggests that that spelling is accepted by most platforms, though again the buildfarm may show that "most" isn't "all". Hence, remove the es_ES dependency and instead create a "C" message catalog. I've made the test unconditionally set lc_messages to 'C.UTF-8'. That approach might need adjustment depending on what the buildfarm shows, but let's keep it simple until proven wrong. While at it, tweak the test so that we run the various ereport calls even when !ENABLE_NLS. This is useful to verify that the macros provided by <inttypes.h> are compatible with snprintf.c, as we now know is worth questioning. Discussion: https://postgr.es/m/1991599.1765818338@sss.pgh.pa.us
2025-12-15  Disable recently added CIC/RI isolation tests  (Álvaro Herrera)
We have tried to stabilize them several times already, but they are very flaky -- apparently there's some intrinsic instability that's hard to solve with the isolationtester framework. They are very noisy in CI runs (whereas buildfarm has not registered any such failures). They may need to be rewritten completely. In the meantime just comment them out in Makefile/meson.build, leaving the spec files around. Per complaint from Andres Freund. Discussion: https://postgr.es/m/202512112014.icpomgc37zx4@alvherre.pgsql
2025-12-15  Add retry logic to pg_sync_replication_slots().  (Amit Kapila)
Previously, pg_sync_replication_slots() would finish without synchronizing slots that didn't meet requirements, rather than failing outright. This could leave some failover slots unsynchronized if required catalog rows or WAL segments were missing or at risk of removal, while the standby continued removing needed data. To address this, the function now waits for the primary slot to advance to a position where all required data is available on the standby before completing synchronization. It retries cyclically until all failover slots that existed on the primary at the start of the call are synchronized. Slots created after the function begins are not included. If the standby is promoted during this wait, the function exits gracefully and the temporary slots will be removed. Author: Ajin Cherian <itsajin@gmail.com> Author: Hou Zhijie <houzj.fnst@fujitsu.com> Reviewed-by: Shveta Malik <shveta.malik@gmail.com> Reviewed-by: Japin Li <japinli@hotmail.com> Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> Reviewed-by: Ashutosh Sharma <ashu.coek88@gmail.com> Reviewed-by: Chao Li <li.evan.chao@gmail.com> Reviewed-by: Yilin Zhang <jiezhilove@126.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Discussion: https://postgr.es/m/CAFPTHDZAA%2BgWDntpa5ucqKKba41%3DtXmoXqN3q4rpjO9cdxgQrw%40mail.gmail.com
2025-12-15  test_custom_stats: Fix compilation warning  (Michael Paquier)
I have fat-fingered an error message related to an offset while switching the code to use pgoff_t. Let's switch to the same error message used in the rest of the tree for similar failures with fseeko(), instead. Per buildfarm members running macos: longfin, sifaka and indri.
2025-12-15  test_custom_stats: Add tests with read/write of auxiliary data  (Michael Paquier)
This commit builds upon 4ba012a8ed9c, giving an example of what can be achieved with the new callbacks. This provides coverage for the new pgstats APIs, while serving as a reference template. Note that built-in stats kinds could use them, we just don't have a use-case there yet. Author: Sami Imseih <samimseih@gmail.com> Co-authored-by: Michael Paquier <michael@paquier.xyz> Reviewed-by: Chao Li <li.evan.chao@gmail.com> Discussion: https://postgr.es/m/CAA5RZ0s9SDOu+Z6veoJCHWk+kDeTktAtC-KY9fQ9Z6BJdDUirQ@mail.gmail.com
2025-12-14  Update typedefs.list to match what the buildfarm currently reports.  (Tom Lane)
The current list from the buildfarm includes quite a few typedef names that it used to miss. The reason is a bit obscure, but it seems likely to have something to do with our recent increased use of palloc_object and palloc_array. In any case, this makes the relevant struct declarations be much more nicely formatted, so I'll take it. Install the current list and re-run pgindent to update affected code. Syncing with the current list also removes some obsolete typedef names and fixes some alphabetization errors. Discussion: https://postgr.es/m/1681301.1765742268@sss.pgh.pa.us
2025-12-14  Looks like we can't test NLS on machines that lack any es_ES locale.  (Tom Lane)
While commit 5b275a6e1 fixed a few unhappy buildfarm animals, it looks like the remainder simply don't have any es_ES locale at all. There's little point in running the test in that case, so minimize the number of variant expected-files by bailing out. Also emit a log entry so that it's possible to tell from buildfarm postmaster logs which case occurred. Possibly, the scope of this testing could be improved by providing additional translations. But I think it's likely that the failing animals have no non-C locales installed at all. In passing, update typedefs.list so that koel doesn't think regress.c is misformatted. Discussion: https://postgr.es/m/E1vUpNU-000kcQ-1D@gemulon.postgresql.org
2025-12-14  Try a few different locale name spellings in nls.sql.  (Tom Lane)
While CI testing in advance of commit 8c498479d suggested that all Unix-ish platforms would accept 'es_ES.UTF-8', the buildfarm has a different opinion. Let's dynamically select something that works, if possible. Discussion: https://postgr.es/m/E1vUpNU-000kcQ-1D@gemulon.postgresql.org
2025-12-14  Add a regression test to verify that NLS translation works.  (Tom Lane)
We've never actually had a formal test for this facility. It seems worth adding one now, mainly because we are starting to depend on gettext() being able to handle the PRI* macros from <inttypes.h>, and it's not all that certain that that works everywhere. So the test goes to a bit of effort to check all the PRI* macros we are likely to use. (This effort has indeed found one problem already, now fixed in commit f8715ec86.) Discussion: https://postgr.es/m/3098752.1765221796@sss.pgh.pa.us Discussion: https://postgr.es/m/292844.1765315339@sss.pgh.pa.us
2025-12-14  Implement ALTER TABLE ... SPLIT PARTITION ... command  (Alexander Korotkov)
This new DDL command splits a single partition into several partitions. Just like the ALTER TABLE ... MERGE PARTITIONS ... command, new partitions are created using the createPartitionTable() function with the parent partition as the template. This commit comprises a quite naive implementation which works in a single process and holds the ACCESS EXCLUSIVE LOCK on the parent table during all the operations, including the tuple routing. This is why the new DDL command can't be recommended for large partitioned tables under high load. However, this implementation comes in handy in certain cases, even as it is. Also, it could serve as a foundation for future implementations with less locking and possibly parallelism. Discussion: https://postgr.es/m/c73a1746-0cd0-6bdd-6b23-3ae0b7c0c582%40postgrespro.ru Author: Dmitry Koval <d.koval@postgrespro.ru> Co-authored-by: Alexander Korotkov <aekorotkov@gmail.com> Co-authored-by: Tender Wang <tndrwang@gmail.com> Co-authored-by: Richard Guo <guofenglinux@gmail.com> Co-authored-by: Dagfinn Ilmari Mannsaker <ilmari@ilmari.org> Co-authored-by: Fujii Masao <masao.fujii@gmail.com> Co-authored-by: Jian He <jian.universality@gmail.com> Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com> Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at> Reviewed-by: Zhihong Yu <zyu@yugabyte.com> Reviewed-by: Justin Pryzby <pryzby@telsasoft.com> Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org> Reviewed-by: Robert Haas <rhaas@postgresql.org> Reviewed-by: Stephane Tachoires <stephane.tachoires@gmail.com> Reviewed-by: Jian He <jian.universality@gmail.com> Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com> Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Alexander Lakhin <exclusion@gmail.com> Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com> Reviewed-by: Daniel Gustafsson <dgustafsson@postgresql.org> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Noah Misch <noah@leadboat.com>
2025-12-14  Implement ALTER TABLE ... MERGE PARTITIONS ... command  (Alexander Korotkov)
This new DDL command merges several partitions into a single partition of the target table. The target partition is created using the new createPartitionTable() function with the parent partition as the template. This commit comprises a quite naive implementation which works in a single process and holds the ACCESS EXCLUSIVE LOCK on the parent table during all the operations, including the tuple routing. This is why this new DDL command can't be recommended for large partitioned tables under a high load. However, this implementation comes in handy in certain cases, even as it is. Also, it could serve as a foundation for future implementations with less locking and possibly parallelism. Discussion: https://postgr.es/m/c73a1746-0cd0-6bdd-6b23-3ae0b7c0c582%40postgrespro.ru Author: Dmitry Koval <d.koval@postgrespro.ru> Co-authored-by: Alexander Korotkov <aekorotkov@gmail.com> Co-authored-by: Tender Wang <tndrwang@gmail.com> Co-authored-by: Richard Guo <guofenglinux@gmail.com> Co-authored-by: Dagfinn Ilmari Mannsaker <ilmari@ilmari.org> Co-authored-by: Fujii Masao <masao.fujii@gmail.com> Co-authored-by: Jian He <jian.universality@gmail.com> Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com> Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at> Reviewed-by: Zhihong Yu <zyu@yugabyte.com> Reviewed-by: Justin Pryzby <pryzby@telsasoft.com> Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org> Reviewed-by: Robert Haas <rhaas@postgresql.org> Reviewed-by: Stephane Tachoires <stephane.tachoires@gmail.com> Reviewed-by: Jian He <jian.universality@gmail.com> Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com> Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Alexander Lakhin <exclusion@gmail.com> Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com> Reviewed-by: Daniel Gustafsson <dgustafsson@postgresql.org> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Noah Misch <noah@leadboat.com>
2025-12-13  Fix jsonb_object_agg crash after eliminating null-valued pairs.  (Tom Lane)
In commit b61aa76e4 I added an assumption in jsonb_object_agg_finalfn that it'd be okay to apply uniqueifyJsonbObject repeatedly to a JsonbValue. I should have studied that code more closely first, because in skip_nulls mode it removed leading nulls by changing the "pairs" array start pointer. This broke the data structure's invariants in two ways: pairs no longer references a repalloc-able chunk, and the distance from pairs to the end of its array is less than parseState->size. So any subsequent addition of more pairs is at high risk of clobbering memory and/or causing repalloc to crash. Unfortunately, adding more pairs is exactly what will happen when the aggregate is being used as a window function. Fix by rewriting uniqueifyJsonbObject to not do that. The prior coding had little to recommend it anyway. Reported-by: Alexander Lakhin <exclusion@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/ec5e96fb-ee49-4e5f-8a09-3f72b4780538@gmail.com
2025-12-12  Reject opclass options in ON CONFLICT clause  (Álvaro Herrera)
It's as pointless as ASC/DESC and NULLS FIRST/LAST are, so reject all of them in the same way. While at it, normalize the others' error messages to have fewer translatable strings. Add tests for these errors. Noticed while reviewing recent INSERT ON CONFLICT patches. Author: Álvaro Herrera <alvherre@kurilemu.de> Reviewed-by: Peter Geoghegan <pg@bowt.ie> Discussion: https://postgr.es/m/202511271516.oiefpvn3z27m@alvherre.pgsql
2025-12-11  Fix infer_arbiter_index for partitioned tables  (Álvaro Herrera)
The fix for concurrent index operations in bc32a12e0db2 started considering indexes that are not yet marked indisvalid as arbiters for INSERT ON CONFLICT. For partitioned tables, this leads to including indexes that may not exist in partitions, causing a trivially reproducible "invalid arbiter index list" error to be thrown because of failure to match the index. To fix, it suffices to ignore !indisvalid indexes on partitioned tables. There should be no risk that the set of indexes will change for concurrent transactions, because in order for such an index to be marked valid, an ALTER INDEX ATTACH PARTITION must run which requires AccessExclusiveLock. Author: Mihail Nikalayeu <mihailnikalayeu@gmail.com> Reported-by: Alexander Lakhin <exclusion@gmail.com> Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de> Discussion: https://postgr.es/m/17622f79-117a-4a44-aa8e-0374e53faaf0%40gmail.com
2025-12-10  Fix bogus extra arguments to query_safe in test  (Heikki Linnakangas)
The test seemed to incorrectly think that query_safe() takes an argument that describes what the query does, similar to e.g. command_ok(). Until commit bd8d9c9bdf the extra arguments were harmless and were just ignored, but when commit bd8d9c9bdf introduced a new optional argument to query_safe(), the extra arguments started clashing with that, causing the test to fail. Backpatch to v17, that's the oldest branch where the test exists. The extra arguments didn't cause any trouble on the older branches, but they were clearly bogus anyway.
2025-12-10  Improve DDL deparsing test  (Heikki Linnakangas)
1. The test initially focuses on the "parent" table, then switches to the "part" table, and goes back to the "parent" table. That seems a little weird, so move the tests around so that all the commands on the "parent" table are done first, followed by the "part" table. 2. ALTER TABLE ALTER COLUMN SET EXPRESSION was not tested, so add that. Author: jian he <jian.universality@gmail.com> Reviewed-by: Chao Li <li.evan.chao@gmail.com> Discussion: https://www.postgresql.org/message-id/CACJufxFDi7fnwB-8xXd_ExML7-7pKbTaK4j46AJ=4-14DXvtVg@mail.gmail.com
2025-12-10  Fix failures with cross-version pg_upgrade tests  (Michael Paquier)
Buildfarm members skimmer and crake have reported that pg_upgrade running from v18 fails due to the changes of d52c24b0f808, with the expectation that the objects removed in the test module injection_points should still be present post-upgrade, but the test module does not have them anymore. The origin of the issue is that the following test modules depend on injection_points, but they do not drop the extension once the tests finish, leaving its traces in the dumps used for the upgrades:
- gin, down to v17
- typcache, down to v18
- nbtree, HEAD-only
Test modules have no upgrade requirements, as they are used only for tests, so there is no point in keeping them around. An alternative solution would be to drop the databases created by these modules in AdjustUpgrade.pm, but the solution of this commit to drop the extension is simpler. Note that there would be a catch if using a solution based on AdjustUpgrade.pm, as the database name used for the test runs differs between configure and meson:
- configure relies on USE_MODULE_DB for database name uniqueness, which builds a database name based on the *first* entry of REGRESS, the variable that lists all the SQL tests.
- meson relies on a "name" field.
For example, for the test module "gin", the regression database is named "regression_gin" under meson, while it is more complex for configure, as of "contrib_regression_gin_incomplete_splits". So an AdjustUpgrade.pm-based solution would need a set of DROP DATABASE IF EXISTS commands to cope with each build system. The failure has been caused by d52c24b0f808, and the problem can happen with upgrade dumps from v17 and v18 to HEAD. This problem is not currently reachable in the back-branches, but it could be possible that a future change in injection_points in stable branches invalidates this theory, so this commit is applied down to v17 in the test modules that matter. Per discussion with Tom Lane and Heikki Linnakangas. Discussion: https://postgr.es/m/2899652.1765167313@sss.pgh.pa.us Backpatch-through: 17
2025-12-10  Fix two issues with recently-introduced nbtree test  (Michael Paquier)
REGRESS has forgotten about the test nbtree_half_dead_pages, and a .gitignore was missing from the module. Oversights in c085aab27819 for REGRESS and 1e4e5783e7d7 for the missing .gitignore. Discussion: https://postgr.es/m/aTipJA1Y1zVSmH3H@paquier.xyz
2025-12-10  Fix O_CLOEXEC flag handling in Windows port.  (Thomas Munro)
PostgreSQL's src/port/open.c has always set bInheritHandle = TRUE when opening files on Windows, making all file descriptors inheritable by child processes. This meant the O_CLOEXEC flag, added to many call sites by commit 1da569ca1f (v16), was silently ignored. The original commit included a comment suggesting that our open() replacement doesn't create inheritable handles, but it was a misunderstanding of the code path. In practice, the code was creating inheritable handles in all cases. This hasn't caused widespread problems because most child processes (archive_command, COPY PROGRAM, etc.) operate on file paths passed as arguments rather than inherited file descriptors. Even if a child wanted to use an inherited handle, it would need to learn the numeric handle value, which isn't passed through our IPC mechanisms. Nonetheless, the current behavior is wrong. It violates documented O_CLOEXEC semantics, contradicts our own code comments, and makes PostgreSQL behave differently on Windows than on Unix. It also creates potential issues with future code or security auditing tools. To fix, define O_CLOEXEC to _O_NOINHERIT in master, a value previously used by O_DSYNC. We use different values in the back branches to preserve existing values. In pgwin32_open_handle() we set bInheritHandle according to whether O_CLOEXEC is specified, for the same atomic semantics as POSIX in multi-threaded programs that create processes. Backpatch-through: 16 Author: Bryan Green <dbryan.green@gmail.com> Co-authored-by: Thomas Munro <thomas.munro@gmail.com> (minor adjustments) Discussion: https://postgr.es/m/e2b16375-7430-4053-bda3-5d2194ff1880%40gmail.com
2025-12-09  Add started_by column to pg_stat_progress_analyze view.  (Masahiko Sawada)
The new column, started_by, indicates the initiator of the analyze ('manual' or 'autovacuum'), helping users and monitoring tools to better understand ANALYZE behavior. Bump catalog version. Author: Shinya Kato <shinya11.kato@gmail.com> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Sami Imseih <samimseih@gmail.com> Reviewed-by: Yu Wang <wangyu_runtime@163.com> Discussion: https://postgr.es/m/CAA5RZ0suoicwxFeK_eDkUrzF7s0BVTaE7M%2BehCpYcCk5wiECpw%40mail.gmail.com