Merged
Conversation
A similar check existed in the MSVC scripts that were removed in v17 by 1301c80, but nothing of the kind was checked in meson when building with a 4-byte off_t. This commit adds a check to fail the build when trying to use a relation file size larger than 1GB while off_t is 4 bytes, as ./configure does, rather than detecting these failures at runtime, because the code cannot handle large files in this case. Backpatch down to v16, where meson support was introduced. Discussion: https://postgr.es/m/aQ0hG36IrkaSGfN8@paquier.xyz Backpatch-through: 16
Previously, error messages for oversized injection point names, libraries, and functions showed the buffer sizes (64, 128, 128) instead of the usable character limits (63, 127, 127), because they did not account for the zero terminator byte, which was confusing. These messages are adjusted to reflect the actual limits. The limit enforced for the private area was also too strict by one byte: specifying an area of exactly INJ_PRIVATE_MAXLEN bytes should work, because there is no zero terminator in this case. This is mostly a stylistic change (a private_area of exactly 1024 bytes can now be defined, something nobody seems to have cared about, based on the lack of complaints). However, as this is a testing facility, let's keep the logic consistent across all the branches where this code exists, since there is an argument in favor of out-of-core extensions that use injection points. Author: Xuneng Zhou <xunengzhou@gmail.com> Co-authored-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/CABPTF7VxYp4Hny1h+7ejURY-P4O5-K8WZg79Q3GUx13cQ6B2kg@mail.gmail.com Backpatch-through: 17
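A minimal C sketch of the adjusted checks described in the message above; the constant names follow the description, but the surrounding code is simplified:

    /* Names are NUL-terminated, so the usable limit is one byte less
     * than the buffer size; report 63, not 64. */
    if (strlen(name) >= INJ_NAME_MAXLEN)
        elog(ERROR, "injection point name %s too long (maximum of %u)",
             name, INJ_NAME_MAXLEN - 1);

    /* The private area holds raw bytes with no terminator, so a size
     * of exactly INJ_PRIVATE_MAXLEN must be accepted. */
    if (private_data_size > INJ_PRIVATE_MAXLEN)
        elog(ERROR, "injection point data too long (maximum of %u)",
             INJ_PRIVATE_MAXLEN);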
The synopsis for the ALTER PUBLICATION ... DROP ... command incorrectly implied that a column list and WHERE clause could be specified as part of the publication object. However, these options are not allowed for DROP operations, making the documentation misleading. This commit corrects the synopsis to clearly show only the valid forms of publication objects. Backpatched to v15, where the incorrect synopsis was introduced. Author: Peter Smith <smithpb2250@gmail.com> Reviewed-by: Fujii Masao <masao.fujii@gmail.com> Discussion: https://postgr.es/m/CAHut+PsPu+47Q7b0o6h1r-qSt90U3zgbAHMHUag5o5E1Lo+=uw@mail.gmail.com Backpatch-through: 15
pg_resetwal didn't accept multixid 0 or multixact offset UINT32_MAX, but they are both valid values that can appear in the control file. That caused pg_upgrade to fail if you tried to upgrade a cluster exactly at multixid or offset wraparound, because pg_upgrade calls pg_resetwal to restore multixid/offset on the new cluster to the values from the old cluster. To fix, allow those values in pg_resetwal. Fixes bugs #18863 and #18865 reported by Dmitry Kovalenko. Backpatch down to v15. Version 14 has the same bug, but the patch doesn't apply cleanly there. It could be made to work but it doesn't seem worth the effort given how rare it is to hit this problem with pg_upgrade, and how few people are upgrading to v14 anymore. Author: Maxim Orlov <orlovmg@gmail.com> Discussion: https://www.postgresql.org/message-id/CACG%3DezaApSMTjd%3DM2Sfn5Ucuggd3FG8Z8Qte8Xq9k5-%2BRQis-g@mail.gmail.com Discussion: https://www.postgresql.org/message-id/18863-72f08858855344a2@postgresql.org Discussion: https://www.postgresql.org/message-id/18865-d4c66cf35c2a67af@postgresql.org Backpatch-through: 15
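A hedged sketch of the relaxed option validation described above; variable names are assumptions, and the real code lives in pg_resetwal's argument parsing:

    char           *endptr;
    unsigned long   val;

    errno = 0;
    val = strtoul(optarg, &endptr, 0);
    if (endptr == optarg || *endptr != '\0' ||
        errno == ERANGE || val > UINT32_MAX)
        pg_fatal("invalid argument for option %s", "-m");
    /* 0 and UINT32_MAX are valid control-file values and now pass */
    set_mxid = (MultiXactId) val;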
Instead of having to write a semicolon inside the macro argument, we can insert a semicolon with another macro layer. This no longer gives pg_bsd_indent indigestion, so we can remove the digestive aids that had to be installed in the pgindent Perl script. Author: Álvaro Herrera <alvherre@kurilemu.de> Reviewed-by: Andrew Dunstan <andrew@dunslane.net> Reviewed-by: Daniel Gustafsson <daniel@yesql.se> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/202511111134.njrwf5w5nbjm@alvherre.pgsql Backpatch-through: 18
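An illustrative, entirely hypothetical C example of the macro-layering trick described above:

    /* Before: callers had to write DECLARE_THING(int x;), and the
     * semicolon inside the argument confused pg_bsd_indent. */
    #define DECLARE_THING_INTERNAL(decl) struct thing { decl }

    /* After: an inner macro layer supplies the semicolon, so callers
     * write DECLARE_THING(int x) with no semicolon in the argument. */
    #define DECLARE_THING(decl) DECLARE_THING_INTERNAL(decl;)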
The range for commit_siblings was incorrectly listed as starting at 1 instead of 0 in the sample configuration file. Backpatch down to all supported branches. Author: Man Zeng <zengman@halodbtech.com> Reviewed-by: Daniel Gustafsson <daniel@yesql.se> Discussion: https://postgr.es/m/tencent_53B70BA72303AE9C6889E78E@qq.com Backpatch-through: 14
Explicitly document that privileges are transferred along with the ownership. Backpatch to all supported versions since this behavior has always been present. Author: Laurenz Albe <laurenz.albe@cybertec.at> Reviewed-by: Daniel Gustafsson <daniel@yesql.se> Reviewed-by: David G. Johnston <david.g.johnston@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Josef Šimánek <josef.simanek@gmail.com> Reported-by: Gilles Parc <gparc@free.fr> Discussion: https://postgr.es/m/2023185982.281851219.1646733038464.JavaMail.root@zimbra15-e2.priv.proxad.net Backpatch-through: 14
Previously, if async notify processing encountered an error, we would report the error to the client and advance our read position past the offending entry, to prevent trying to process it over and over again. Trying to continue after an error has a few problems, however:

- We have no way of telling the client that a notification was lost; they get an ERROR, but that doesn't tell you much. As such, it's not clear that keeping the connection alive after losing a notification is a good thing. Depending on the application logic, missing a notification could cause the application to get stuck waiting, for example.

- If the connection is idle, PqCommReadingMsg is set and any ERROR is turned into FATAL anyway.

- We bailed out of the notification processing loop on the first error, without processing any subsequent notifications; they would not be processed until another notify interrupt arrived. For example, if there were two notifications pending and processing the first one caused an ERROR, the second would not be processed until someone sent a new NOTIFY.

This commit changes the behavior so that any ERROR while processing async notifications is turned into FATAL, terminating the client connection. That makes the behavior more consistent, since that is what already happened in the idle state, and terminating the connection is a clear signal to the application that it might have missed some notifications.

The reason to do this now is that the next commits will change the notification processing code in a way that would make it harder to skip over just the offending notification entry on error.

Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Arseniy Mukhin <arseniy.mukhin.dev@gmail.com>
Discussion: https://www.postgresql.org/message-id/fedbd908-4571-4bbe-b48e-63bfdcc38f64@iki.fi
Backpatch-through: 14
The async notification queue contains the XID of the sender, and when processing notifications we call TransactionIdDidCommit() on the XID. But we had no safeguards to prevent the CLOG segments containing those XIDs from being truncated away. As a result, if a backend for some reason didn't process its notifications for a long time, or when a new backend issued LISTEN, you could get an error like:

    test=# listen c21;
    ERROR:  58P01: could not access status of transaction 14279685
    DETAIL:  Could not open file "pg_xact/000D": No such file or directory.
    LOCATION:  SlruReportIOError, slru.c:1087

To fix, make VACUUM "freeze" the XIDs in the async notification queue before truncating the CLOG. Old XIDs are replaced with FrozenTransactionId or InvalidTransactionId.

Note: This commit is not a full fix. A race condition remains, where a backend is executing asyncQueueReadAllNotifications() and has just made a local copy of an async SLRU page which contains old XIDs, while vacuum concurrently truncates the CLOG covering those XIDs. When the backend then calls TransactionIdDidCommit() on those XIDs from the local copy, you still get the error. The next commit will fix that remaining race condition.

This was first reported by Sergey Zhuravlev in 2021, with many other people hitting the same issue later. Thanks to:
- Alexandra Wang, Daniil Davydov, Andrei Varashen and Jacques Combrink for investigating and providing reproducible test cases,
- Matheus Alcantara and Arseniy Mukhin for review and earlier proposed patches to fix this,
- Álvaro Herrera and Masahiko Sawada for reviews,
- Yura Sokolov aka funny-falcon for the idea of marking transactions as committed in the notification queue, and
- Joel Jacobson for the final patch version.
I hope I didn't forget anyone.

Backpatch to all supported versions. I believe the bug goes back all the way to commit d1e0272, which introduced the SLRU-based async notification queue.

Discussion: https://www.postgresql.org/message-id/16961-25f29f95b3604a8a@postgresql.org
Discussion: https://www.postgresql.org/message-id/18804-bccbbde5e77a68c2@postgresql.org
Discussion: https://www.postgresql.org/message-id/CAK98qZ3wZLE-RZJN_Y%2BTFjiTRPPFPBwNBpBi5K5CU8hUHkzDpw@mail.gmail.com
Backpatch-through: 14
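A hedged C sketch of the freezing rule just described; the queue-entry layout is simplified, but the XID macros are the real ones:

    /* Resolve any XID older than the truncation horizon now, so later
     * readers never need the soon-to-be-truncated CLOG. */
    if (TransactionIdIsNormal(qe->xid) &&
        TransactionIdPrecedes(qe->xid, frozenXid))
        qe->xid = TransactionIdDidCommit(qe->xid)
            ? FrozenTransactionId
            : InvalidTransactionId;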
Previous commit fixed a bug where VACUUM would truncate the CLOG that's still needed to check the commit status of XIDs in the async notify queue, but as mentioned in the commit message, it wasn't a full fix. If a backend is executing asyncQueueReadAllNotifications() and has just made a local copy of an async SLRU page which contains old XIDs, vacuum can concurrently truncate the CLOG covering those XIDs, and the backend still gets an error when it calls TransactionIdDidCommit() on those XIDs in the local copy. This commit fixes that race condition. To fix, hold the SLRU bank lock across the TransactionIdDidCommit() calls in NOTIFY processing. Per Tom Lane's idea. Backpatch to all supported versions. Reviewed-by: Joel Jacobson <joel@compiler.org> Reviewed-by: Arseniy Mukhin <arseniy.mukhin.dev@gmail.com> Discussion: https://www.postgresql.org/message-id/2759499.1761756503@sss.pgh.pa.us Backpatch-through: 14
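A sketch of the interlock described above, assuming the PG17-style SLRU bank-lock API and that vacuum's freezing pass takes the same lock before CLOG truncation (simplified):

    LWLock *banklock = SimpleLruGetBankLock(NotifyCtl, pageno);

    LWLockAcquire(banklock, LW_SHARED);
    /* CLOG covering qe->xid cannot be truncated while this is held */
    committed = TransactionIdDidCommit(qe->xid);
    LWLockRelease(banklock);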
Before we started to freeze async notify entries (commit 8eeb4a0f7c), no one looked at the 'xid' on an entry with invalid 'dboid'. But now we might actually need to freeze it later. Initialize them with InvalidTransactionId to begin with, to avoid that work later. Álvaro pointed this out in review of commit 8eeb4a0f7c, but I forgot to include this change there. Author: Álvaro Herrera <alvherre@kurilemu.de> Discussion: https://www.postgresql.org/message-id/202511071410.52ll56eyixx7@alvherre.pgsql Backpatch-through: 14
If DSM entry initialization fails, backends could try to use an uninitialized DSM segment, DSA, or dshash table (since the entry is still added to the registry). To fix, keep track of whether initialization completed, and ERROR if a backend tries to attach to an uninitialized entry. We could instead retry initialization as needed, but that seemed complicated, error prone, and unlikely to help most cases. Furthermore, such problems probably indicate a coding error. Reported-by: Alexander Lakhin <exclusion@gmail.com> Reviewed-by: Sami Imseih <samimseih@gmail.com> Discussion: https://postgr.es/m/dd36d384-55df-4fc2-825c-5bc56c950fa9%40gmail.com Backpatch-through: 17
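A minimal sketch of the guard described above, assuming an "initialized" flag on the registry entry (the flag name is an assumption):

    /* entry->initialized is set to true only after the init callback
     * returns without error; attaching earlier now fails cleanly */
    if (!entry->initialized)
        ereport(ERROR,
                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("DSM registry entry \"%s\" was never fully initialized",
                        name)));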
On the CREATE POLICY page, the "Policies Applied by Command Type" table was missing MERGE ... THEN DELETE and some of the policies applied during INSERT ... ON CONFLICT and MERGE. Fix that, and try to improve readability by listing the various MERGE cases separately, rather than together with INSERT/UPDATE/DELETE. Mention COPY ... TO along with SELECT, since it behaves in the same way. In addition, document which policy violations cause errors to be thrown, and which just cause rows to be silently ignored. Also, a paragraph above the table states that INSERT ... ON CONFLICT DO UPDATE only checks the WITH CHECK expressions of INSERT policies for rows appended to the relation by the INSERT path, which is incorrect -- all rows proposed for insertion are checked, regardless of whether they end up being inserted. Fix that, and also mention that the same applies to INSERT ... ON CONFLICT DO NOTHING. In addition, in various other places on that page, clarify how the different types of policy are applied to different commands, and whether or not errors are thrown when policy checks do not pass. Backpatch to all supported versions. Prior to v17, MERGE did not support RETURNING, and so MERGE ... THEN INSERT would never check new rows against SELECT policies. Prior to v15, MERGE was not supported at all. Author: Dean Rasheed <dean.a.rasheed@gmail.com> Reviewed-by: Viktor Holmberg <v@viktorh.net> Reviewed-by: Jian He <jian.universality@gmail.com> Discussion: https://postgr.es/m/CAEZATCWqnfeChjK=n1V_dYZT4rt4mnq+ybf9c0qXDYTVMsy8pg@mail.gmail.com Backpatch-through: 14
…e mode.
Previously, when pgbench ran a custom script that triggered retriable errors
(e.g., deadlocks) followed by multiple \syncpipeline commands in pipeline mode,
the following assertion failure could occur:
Assertion failed: (res == ((void*)0)), function discardUntilSync, file pgbench.c, line 3594.
The issue was that discardUntilSync() assumed a pipeline sync result
(PGRES_PIPELINE_SYNC) would always be followed by either another sync result
or NULL. This assumption was incorrect: when multiple sync requests were sent,
a sync result could instead be followed by another result type. In such cases,
discardUntilSync() mishandled the results, leading to the assertion failure.
This commit fixes the issue by making discardUntilSync() correctly handle cases
where a pipeline sync result is followed by other result types. It now continues
discarding results until another pipeline sync followed by NULL is reached.
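A hedged sketch of that corrected loop, simplified from pgbench's discardUntilSync(); it assumes a sync request was already queued, and accounts for libpq also returning NULL between per-query results:

    static bool
    discardUntilSync(PGconn *conn)
    {
        bool    last_was_sync = false;

        for (;;)
        {
            PGresult   *res = PQgetResult(conn);

            if (res == NULL)
            {
                if (last_was_sync)
                    return true;    /* sync then NULL: fully drained */
                if (PQstatus(conn) == CONNECTION_BAD)
                    return false;   /* connection lost */
                continue;           /* NULL merely separates results */
            }
            last_was_sync = (PQresultStatus(res) == PGRES_PIPELINE_SYNC);
            PQclear(res);
        }
    }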
Backpatched to v17, where support for the \syncpipeline command in
pgbench was introduced.
Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Chao Li <lic@highgo.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/20251111105037.f3fc554616bc19891f926c5b@sraoss.co.jp
Backpatch-through: 17
Commit 5e4fcbe531 added a check_rights parameter to this function for use by ALTER TABLE commands that re-create statistics objects. However, we intentionally ignore check_rights when verifying relation ownership because this function's lookup could return a different answer than the caller's. This commit adds a note to this effect so that we remember it down the road. Reviewed-by: Noah Misch <noah@leadboat.com> Backpatch-through: 14
All settings in this file should be commented out. In addition to fixing that, also fix the indentation for this line. Oversight in commit c758119. Reported-by: Daniel Gustafsson <daniel@yesql.se> Author: Daniel Gustafsson <daniel@yesql.se> Discussion: https://postgr.es/m/19727040-3EE4-4719-AF4F-2548544113D7%40yesql.se Backpatch-through: 18
Backpatch to 15, where MERGE was introduced. Reported-by: <emorgunov@mail.ru> Author: David Rowley <dgrowleyml@gmail.com> Discussion: https://postgr.es/m/176278494385.770.15550176063450771532@wrigleys.postgresql.org Backpatch-through: 15
When instrumenting a MERGE command containing both WHEN NOT MATCHED BY SOURCE and WHEN NOT MATCHED BY TARGET actions using EXPLAIN ANALYZE, a concurrent update of the target relation could lead to an Assert failure in show_modifytable_info(). In a non-assert build, this would lead to an incorrect value for "skipped" tuples in the EXPLAIN output, rather than a crash. This could happen if the concurrent update caused a matched row to no longer match, in which case ExecMerge() treats the single originally matched row as a pair of not matched rows, and potentially executes 2 not-matched actions for the single source row. This could then lead to a state where the number of rows processed by the ModifyTable node exceeds the number of rows produced by its source node, causing "skipped_path" in show_modifytable_info() to be negative, triggering the Assert. Fix this in ExecMergeMatched() by incrementing the instrumentation tuple count on the source node whenever a concurrent update of this kind is detected, if both kinds of merge actions exist, so that the number of source rows matches the number of actions potentially executed, and the "skipped" tuple count is correct. Back-patch to v17, where support for WHEN NOT MATCHED BY SOURCE actions was introduced. Bug: #19111 Reported-by: Dilip Kumar <dilipbalaut@gmail.com> Author: Dean Rasheed <dean.a.rasheed@gmail.com> Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com> Discussion: https://postgr.es/m/19111-5b06624513d301b3@postgresql.org Backpatch-through: 17
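A heavily hedged sketch of the accounting fix described above; the flag names are invented for illustration, not taken from the actual patch:

    /* A concurrent update turned one matched source row into two NOT
     * MATCHED dispatches; credit the source node with one extra tuple
     * so "skipped" (source rows minus rows acted on) stays >= 0. */
    if (outerPlanState(mtstate)->instrument &&
        has_not_matched_by_source && has_not_matched_by_target)
        outerPlanState(mtstate)->instrument->ntuples += 1;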
Until d2ea2d3, the PS_USE_PS_STRINGS option was used on the GNU/Hurd. As this option got removed and PS_USE_CLOBBER_ARGV appears to work fine nowadays on the Hurd, define this one to re-enable process title changes on this platform. In the 14 and 15 branches, the existing test for __hurd__ (added 25 years ago by commit 209aa77, removed in 16 by the above commit) is left unchanged for now as it was activating slightly different code paths and would need investigation by a Hurd user. Author: Michael Banck <mbanck@debian.org> Discussion: https://postgr.es/m/CA%2BhUKGJMNGUAqf27WbckYFrM-Mavy0RKJvocfJU%3DJ2XcAZyv%2Bw%40mail.gmail.com Backpatch-through: 16
PostgreSQL 18 deprecated password_encryption='md5', but the comments for this GUC in the sample configuration file did not mention the deprecation. Update comments with a notice to make as many users as possible aware of it. Also add a comment to the related md5_password_warnings GUC while there. Author: Michael Banck <mbanck@gmx.net> Reviewed-by: Daniel Gustafsson <daniel@yesql.se> Reviewed-by: Nathan Bossart <nathandbossart@gmail.com> Reviewed-by: Robert Treat <rob@xzilla.net> Backpatch-through: 18
Remove bogus stripping of RelabelTypes: that can result in building an output SAOP tree with incorrect exposed exprType for the operands, which might confuse polymorphic operators. Moreover it demonstrably prevents folding some OR-trees to SAOPs when the RHS expressions have different base types that were coerced to the same type by RelabelTypes. Reduce prohibition on type_is_rowtype to just disallow type RECORD. We need that because otherwise we would happily fold multiple RECORD Consts into a RECORDARRAY Const even if they aren't the same record type. (We could allow that perhaps, if we checked that they all have the same typmod, but the case doesn't seem worth that much effort.) However, there is no reason at all to disallow the transformation for named composite types, nor domains over them: as long as we can find a suitable array type we're good. Remove some assertions that seem rather out of place (it's not this code's duty to verify that the RestrictInfo structure is sane). Rewrite some comments. The issues with RelabelType stripping seem severe enough to back-patch this into v18 where the code was introduced. Author: Tender Wang <tndrwang@gmail.com> Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CAHewXN=aH7GQBk4fXU-WaEeVmQWUmBAeNyBfJ3VKzPphyPKUkQ@mail.gmail.com Backpatch-through: 18
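A simplified C sketch of the tightened folding rules described above; the surrounding structure is invented for illustration:

    /* Do not strip RelabelType: compare the exposed types as-is. */
    Oid         common_type = exprType((Node *) first_rhs);
    ListCell   *lc;

    if (common_type == RECORDOID)
        return NULL;            /* anonymous records: typmods may differ */

    foreach(lc, or_args)
    {
        Node   *rhs = get_rightop((Expr *) lfirst(lc));

        if (exprType(rhs) != common_type)
            return NULL;        /* mixed exposed types: don't fold */
    }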
As noted in the commit message for 5e4fcbe531, the addition of a second parameter to CreateStatistics() breaks ABI compatibility, but we are unaware of any impacted third-party code. This commit updates .abi-compliance-history accordingly. Backpatch-through: 14-18
If you go back as far as the RHEL7 era, <sys/auxv.h> does not provide the HWCAPxxx macros needed with elf_aux_info or getauxval, so you need to get those from the kernel header <asm/hwcap.h> instead. We knew that for the 32-bit case but failed to extrapolate to the 64-bit case. Oversight in commit aac831c. Reported-by: GaoZengqi <pgf00a@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CAFmBtr3Av62-jBzdhFkDHXJF9vQmNtSnH2upwODjnRcsgdTytw@mail.gmail.com Backpatch-through: 18
Reported-by: Rambabu V Author: Robert Treat Discussion: https://postgr.es/m/CADtiZxrUzRRX6edyN2y-7U5HA8KSXttee7K=EFTLXjwG1SCE4A@mail.gmail.com Backpatch-through: 18
The fix for bug #19055 (commit b0cc0a71e) allowed CTE references in sub-selects within aggregate functions to affect the semantic levels assigned to such aggregates. It turns out this broke some related cases, leading to assertion failures or strange planner errors such as "unexpected outer reference in CTE query". After experimenting with some alternative rules for assigning the semantic level in such cases, we've come to the conclusion that changing the level is more likely to break things than be helpful. Therefore, this patch undoes what b0cc0a71e changed, and instead installs logic to throw an error if there is any reference to a CTE that's below the semantic level that standard SQL rules would assign to the aggregate based on its contained Var and Aggref nodes. (The SQL standard disallows sub-selects within aggregate functions, so it can't reach the troublesome case and hence has no rule for what to do.) Perhaps someone will come along with a legitimate query that this logic rejects, and if so probably the example will help us craft a level-adjustment rule that works better than what b0cc0a71e did. I'm not holding my breath for that though, because the previous logic had been there for a very long time before bug #19055 without complaints, and that bug report sure looks to have originated from fuzzing not from real usage. Like b0cc0a71e, back-patch to all supported branches, though sadly that no longer includes v13. Bug: #19106 Reported-by: Kamil Monicz <kamil@monicz.dev> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/19106-9dd3668a0734cd72@postgresql.org Backpatch-through: 14
Like commit 6d969ca68, except here we are mopping up after 519338a. (There are no other uses of <sys/auxv.h> in the tree, so we should be done now.) Reported-by: GaoZengqi <pgf00a@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CAFmBtr3Av62-jBzdhFkDHXJF9vQmNtSnH2upwODjnRcsgdTytw@mail.gmail.com Backpatch-through: 18
Replace "overlow" with "overflow". Author: Tender Wang <tndrwang@gmail.com> Discussion: https://postgr.es/m/CAHewXNnzFjAjYLTkP78HE2PQ17MjBqFdQQg+0X6Wo7YMUb68xA@mail.gmail.com
Oversight in commit 06eae9e. Reviewed-by: Melanie Plageman <melanieplageman@gmail.com> Discussion: https://postgr.es/m/aRODeqFUVkGDJSPP%40nathan Backpatch-through: 18
Commit 74cf7d4 added the --oldest-transaction-id option to pg_resetwal, but forgot to update the code that prints all the new values that are being set. Fix that. Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com> Discussion: https://www.postgresql.org/message-id/5461bc85-e684-4531-b4d2-d2e57ad18cba@iki.fi Backpatch-through: 14
These errors are very unlikely to show up, but if they ever do, incorrect information would have been provided:
- In pg_rewind, a stat() failure was reported as an open() failure.
- In pg_combinebackup, a check on the new directory of a tablespace mapping referred to the old directory.
- In pg_combinebackup, a failure while reading a source file when copying blocks referred to the destination file.
The changes for pg_combinebackup affect v17 and newer versions. For pg_rewind, all stable branches are affected.
Author: Man Zeng <zengman@halodbtech.com>
Discussion: https://postgr.es/m/tencent_1EE1430B1E6C18A663B8990F@qq.com
Backpatch-through: 14
This routine's internals directly used MyProcNumber to choose which object ID to assign for the hash key of a backend's stats entry, even though the value to use is given as an input argument of the function. The original intention was to pass MyProcNumber as an argument to pgstat_create_backend() when called in pgstat_bestart_final() (pgstat_beinit() ensures that MyProcNumber has been set by then), not to use it directly inside the function. This commit addresses the inconsistency by using the procnum given by the caller of pgstat_create_backend(), not MyProcNumber. This issue does not cause any bugs currently. However, let's keep the code in sync across all the branches where it exists, as the difference could matter in a future backpatch. Oversight in 4feba03. Reported-by: Ryo Matsumura <matsumura.ryo@fujitsu.com> Discussion: https://postgr.es/m/TYCPR01MB11316AD8150C8F470319ACCAEE866A@TYCPR01MB11316.jpnprd01.prod.outlook.com Backpatch-through: 18
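A minimal sketch of the change, heavily simplified (the real entry creation goes through the shared stats machinery with more arguments):

    void
    pgstat_create_backend(ProcNumber procnum)
    {
        PgStat_HashKey key = {0};

        key.kind = PGSTAT_KIND_BACKEND;
        key.dboid = InvalidOid;
        /* was: key.objid = MyProcNumber;  -- ignoring the argument */
        key.objid = (uint64) procnum;
        /* ... create and initialize the entry for this key ... */
    }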
As usual, the release notes for other branches will be made by cutting these down, but put them up for community review first.
It's not really an ABI break if you change the layout/size of an object with incomplete type, as commit f94e9141 did, so advance the ABI compliance reference commit in 16-18 to satisfy build farm animal crake. Backpatch-through: 16-18 Discussion: https://www.postgresql.org/message-id/1871492.1770409863%40sss.pgh.pa.us
Further fix of error message changed in commit 74a116a79b4. The initial fix was not quite correct. Discussion: https://www.postgresql.org/message-id/flat/tencent_1EE1430B1E6C18A663B8990F%40qq.com
This thinko caused us to not substitute our own getopt() code, which results in failing to parse long options for the postmaster since Solaris' getopt() doesn't do what we expect. This can be seen in the results of buildfarm member icarus, which is the only one trying to build via meson on Solaris. Per consultation with pgsql-release, it seems okay to fix this now even though we're in release freeze. The fix visibly won't affect any other platforms, and it can't break Solaris/meson builds any worse than they're already broken. Discussion: https://postgr.es/m/2471229.1770499291@sss.pgh.pa.us Backpatch-through: 16
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: bdee668bac7ab3256b6f922c0b6fb663a3b03e16
pgp_pub_decrypt_bytea() was missing a safeguard on the session key length read from the message data given as its input. This opens the possibility of a buffer overflow on the session key data when the length specified is longer than PGP_MAX_KEY, the maximum size of the buffer the session data is copied to. A script able to rebuild message and key data that can trigger the overflow is included in this commit, based on contents provided by the reporter and heavily edited by me. A SQL test is added, based on the data generated by the script. Reported-by: Team Xint Code as part of zeroday.cloud Author: Michael Paquier <michael@paquier.xyz> Reviewed-by: Noah Misch <noah@leadboat.com> Security: CVE-2026-2005 Backpatch-through: 14
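A minimal sketch of the missing safeguard, assuming PGP_MAX_KEY and pgcrypto's PXE_* error conventions; the surrounding parsing is simplified:

    /* klen comes from attacker-controllable message data */
    if (klen <= 0 || klen > PGP_MAX_KEY)
        return PXE_PGP_CORRUPT_DATA;    /* reject before copying */
    memcpy(ctx->sess_key, p, klen);     /* now provably within bounds */
    ctx->sess_key_len = klen;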
The function assumed that if charlen == bytelen, there are no multibyte characters in the string. That's sensible, but the callers were a little careless in how they calculated the lengths: they converted the string to lowercase before calling make_trigrams(), and the 'charlen' value was calculated *before* the conversion to lowercase while 'bytelen' was calculated after it. If the lowercased string had a different number of characters than the original, make_trigrams() might incorrectly apply the fast path and treat all the bytes as single-byte characters, or fail to apply the fast path (which is harmless), or hit the "Assert(bytelen == charlen)" assertion. I'm not aware of any locale / character combinations where you could hit that assertion in practice, i.e. where a string converted to lowercase would have fewer characters than the original, but it seems best to avoid making that assumption. To fix, remove the 'charlen' argument. To keep the performance when there are no multibyte characters, always try the fast path first, but check the input for multibyte characters as we go. The check on each byte adds some overhead, but it's close enough. And to compensate, the find_word() function no longer needs to count the characters. This also fixes one small bug in make_trigrams(): in the multibyte codepath, it peeked at the byte just after the end of the input string. When compiled with IGNORECASE, that was harmless because there is always a NUL byte or blank after the input string. But with !IGNORECASE, the call from generate_wildcard_trgm() doesn't guarantee that. Backpatch to v18, but no further. In previous versions lower-casing was done character by character, and thus the assumption that lower-casing doesn't change the character length was valid. That was changed in v18, commit fb1a188. Security: CVE-2026-2007 Reviewed-by: Noah Misch <noah@leadboat.com>
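A hedged sketch of the combined fast path described above; the helper names are hypothetical:

    /* Try the single-byte path, but fall back on the first byte that
     * could start a multibyte sequence. */
    int     i;

    for (i = 0; i < bytelen; i++)
    {
        if (IS_HIGHBIT_SET(str[i]))
            break;
    }
    if (i == bytelen)
        return make_trigrams_1byte(tptr, str, bytelen);  /* hypothetical */
    return make_trigrams_mb(tptr, str, bytelen);         /* hypothetical */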
The code made a subtle assumption that the lower-cased version of a
string never has more characters than the original. That is not always
true. For example, in a database with the latin9 encoding:
latin9db=# select lower(U&'\00CC' COLLATE "lt-x-icu");
lower
-----------
i\x1A\x1A
(1 row)
In this example, lower-casing expands the single input character into
three characters.
The generate_trgm_only() function relied on that assumption in two
ways:
- It used "slen * pg_database_encoding_max_length() + 4" to allocate
the buffer to hold the lowercased and blank-padded string. That
formula accounts for expansion if the lower-case characters are
longer (in bytes) than the originals, but it's still not enough if
the lower-cased string contains more *characters* than the original.
- Its callers sized the output array to hold the trigrams extracted
from the input string with the formula "(slen / 2 + 1) * 3", where
'slen' is the input string length in bytes. (The formula was
generous to account for the possibility that RPADDING was set to 2.)
That's also not enough if one input byte can turn into multiple
characters.
To fix, introduce a growable trigram array and give up on trying to
choose the correct max buffer sizes ahead of time.
Backpatch to v18, but no further. In previous versions lower-casing was
done character by character, and thus the assumption that lower-casing
doesn't change the character length was valid. That was changed in v18,
commit fb1a188.
Security: CVE-2026-2007
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Jeff Davis <pgsql@j-davis.com>
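A hedged sketch of the growable trigram array described in the message above (names simplified): double the allocation whenever the next trigram would not fit, rather than trusting a precomputed maximum:

    if (ntrg >= maxtrg)
    {
        maxtrg *= 2;
        tptr = (trgm *) repalloc(tptr, maxtrg * sizeof(trgm));
    }
    memcpy(tptr[ntrg], next_trigram, sizeof(trgm));
    ntrg++;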
While EUC_CN supports only 1- and 2-byte sequences (CS0, CS1), the mb<->wchar conversion functions allow 3-byte sequences beginning with SS2 or SS3. Change pg_encoding_max_length() to return 3, not 2, to close a hypothesized buffer overrun if a corrupted string is converted to wchar and back again in a newly allocated buffer. We might reconsider that in master (i.e. harmonizing in a different direction), but this change seems better for the back-branches. Also change pg_euccn_mblen() to report SS2 and SS3 characters as having length 3 (following the example of EUC_KR). Even though such characters would not pass verification, it's remotely possible that invalid bytes could be used to compute a buffer size for use in wchar conversion. Security: CVE-2026-2006 Backpatch-through: 14 Author: Thomas Munro <thomas.munro@gmail.com> Reviewed-by: Noah Misch <noah@leadboat.com> Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
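A sketch of the sizing change as described (SS2/SS3 are 0x8e/0x8f in pg_wchar.h; such sequences still fail verification, but length reporting must be conservative):

    static int
    pg_euccn_mblen(const unsigned char *s)
    {
        if (*s == SS2 || *s == SS3)
            return 3;               /* was 2 */
        if (IS_HIGHBIT_SET(*s))
            return 2;
        return 1;
    }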
When converting multibyte to pg_wchar, the UTF-8 implementation would silently ignore an incomplete final character, while the other implementations would cast a single byte to pg_wchar, and then repeat for the remaining byte sequence. While it didn't overrun the buffer, it was surely garbage output. Make all encodings behave like the UTF-8 implementation. A later change for master only will convert this to an error, but we choose not to back-patch that behavior change on the off-chance that someone is relying on the existing UTF-8 behavior. Security: CVE-2026-2006 Backpatch-through: 14 Author: Thomas Munro <thomas.munro@gmail.com> Reported-by: Noah Misch <noah@leadboat.com> Reviewed-by: Noah Misch <noah@leadboat.com> Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
A corrupted string could cause code that iterates with pg_mblen() to overrun its buffer. Fix by converting all callers to one of the following:
1. Callers with a null-terminated string now use pg_mblen_cstr(), which raises an "illegal byte sequence" error if it finds a terminator in the middle of the sequence.
2. Callers with a length or end pointer now use either pg_mblen_with_len() or pg_mblen_range(), for the same effect, depending on which of the two seems more convenient at each site (a usage sketch follows this message).
3. A small number of cases pre-validate a string and can use pg_mblen_unbounded().
The traditional pg_mblen() function and COPYCHAR macro still exist for backward compatibility, but are no longer used by core code and are hereby deprecated. The same applies to the t_isXXX() functions.
Security: CVE-2026-2006
Backpatch-through: 14
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Co-authored-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reported-by: Paul Gerste (as part of zeroday.cloud)
Reported-by: Moritz Sanft (as part of zeroday.cloud)
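A usage sketch of the bounded iteration from point 2 above; the exact signature is assumed from the description:

    const char *p = str;
    const char *end = str + len;

    while (p < end)
    {
        /* errors out instead of reading past 'end' when the last
         * multibyte sequence is truncated */
        int     clen = pg_mblen_range(p, end);

        process_character(p, clen);     /* hypothetical consumer */
        p += clen;
    }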
A security patch changed them today, so close the coverage gap now. Test that buffer overrun is avoided when pg_mblen*() requires more than the number of bytes remaining. This does not cover the calls in dict_thesaurus.c or in dict_synonym.c. That code is straightforward. To change that code's input, one must have access to modify installed OS files, so low-privilege users are not a threat. Testing this would likewise require changing installed share/postgresql/tsearch_data, which was enough of an obstacle to not bother. Security: CVE-2026-2006 Backpatch-through: 14 Co-authored-by: Thomas Munro <thomas.munro@gmail.com> Co-authored-by: Noah Misch <noah@leadboat.com> Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
pgp_sym_decrypt() and pgp_pub_decrypt() will raise such errors, while bytea variants will not. The existing "dat3" test decrypted to non-UTF8 text, so switch that query to bytea. The long-term intent is for type "text" to always be valid in the database encoding. pgcrypto has long been known as a source of exceptions to that intent, but a report about exploiting invalid values of type "text" brought this module to the forefront. This particular exception is straightforward to fix, with reasonable effect on user queries. Back-patch to v14 (all supported versions). Reported-by: Paul Gerste (as part of zeroday.cloud) Reported-by: Moritz Sanft (as part of zeroday.cloud) Author: shihao zhong <zhong950419@gmail.com> Reviewed-by: cary huang <hcary328@gmail.com> Discussion: https://postgr.es/m/CAGRkXqRZyo0gLxPJqUsDqtWYBbgM14betsHiLRPj9mo2=z9VvA@mail.gmail.com Backpatch-through: 14 Security: CVE-2026-2006
These data types are represented like full-fledged arrays, but functions that deal specifically with these types assume that the array is 1-dimensional and contains no nulls. However, there are cast pathways that allow general oid[] or int2[] arrays to be cast to these types, allowing these expectations to be violated. This can be exploited to cause server memory disclosure or SIGSEGV. Fix by installing explicit checks in functions that accept these types. Reported-by: Altan Birler <altan.birler@tum.de> Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Noah Misch <noah@leadboat.com> Security: CVE-2026-2003 Backpatch-through: 14
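A sketch of the explicit validation described above; the exact check shape is assumed from the description:

    static void
    check_int2vector(ArrayType *arr)
    {
        if (ARR_NDIM(arr) != 1 ||
            ARR_HASNULL(arr) ||
            ARR_ELEMTYPE(arr) != INT2OID)
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                     errmsg("invalid int2vector data")));
    }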
Selectivity estimators come in two flavors: those that make specific assumptions about the data types they are working with, and those that don't. Most of the built-in estimators are of the latter kind and are meant to be safely attachable to any operator. If the operator does not behave as the estimator expects, you might get a poor estimate, but it won't crash. However, estimators that do make datatype assumptions can malfunction if they are attached to the wrong operator, since then the data they get from pg_statistic may not be of the type they expect. This can rise to the level of a security problem, even permitting arbitrary code execution by a user who has the ability to create SQL objects. To close this hole, establish a rule that built-in estimators are required to protect themselves against being called on the wrong type of data. It does not seem practical however to expect estimators in extensions to reach a similar level of security, at least not in the near term. Therefore, also establish a rule that superuser privilege is required to attach a non-built-in estimator to an operator. We expect that this restriction will have little negative impact on extensions, since estimators generally have to be written in C and thus superuser privilege is required to create them in the first place. This commit changes the privilege checks in CREATE/ALTER OPERATOR to enforce the rule about superuser privilege, and fixes a couple of built-in estimators that were making datatype assumptions without sufficiently checking that they're valid. Reported-by: Daniel Firer as part of zeroday.cloud Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Noah Misch <noah@leadboat.com> Security: CVE-2026-2004 Backpatch-through: 14
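A sketch of the defensive pattern for a datatype-assuming estimator, simplified; the expected-type variable and default fallback are invented for illustration:

    AttStatsSlot sslot;

    if (get_attstatsslot(&sslot, vardata->statsTuple,
                         STATISTIC_KIND_MCV, InvalidOid,
                         ATTSTATSSLOT_VALUES))
    {
        if (sslot.valuetype != expected_type)
        {
            /* wrong operator/type pairing: don't interpret the data */
            free_attstatsslot(&sslot);
            return DEFAULT_SEL;     /* hypothetical default estimate */
        }
        /* safe to read sslot.values[] as expected_type here */
    }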
While the preceding commit prevented such attachments from occurring in future, this one aims to prevent further abuse of any already- created operator that exposes _int_matchsel to the wrong data types. (No other contrib module has a vulnerable selectivity estimator.) We need only check that the Const we've found in the query is indeed of the type we expect (query_int), but there's a difficulty: as an extension type, query_int doesn't have a fixed OID that we could hard-code into the estimator. Therefore, the bulk of this patch consists of infrastructure to let an extension function securely look up the OID of a datatype belonging to the same extension. (Extension authors have requested such functionality before, so we anticipate that this code will have additional non-security uses, and may soon be extended to allow looking up other kinds of SQL objects.) This is done by first finding the extension that owns the calling function (there can be only one), and then thumbing through the objects owned by that extension to find a type that has the desired name. This is relatively expensive, especially for large extensions, so a simple cache is put in front of these lookups. Reported-by: Daniel Firer as part of zeroday.cloud Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Noah Misch <noah@leadboat.com> Security: CVE-2026-2004 Backpatch-through: 14
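A hedged sketch of how an estimator might use the new lookup; the lookup function's name is invented, since the commit only promises infrastructure to "securely look up the OID of a datatype belonging to the same extension":

    static Oid  query_int_oid = InvalidOid;     /* simple cache */

    if (!OidIsValid(query_int_oid))
        query_int_oid = get_type_in_my_extension(fcinfo, "query_int");
    if (((Const *) other)->consttype != query_int_oid)
        PG_RETURN_FLOAT8(DEFAULT_EQ_SEL);       /* not query_int: punt */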
Backpatch-through: 14 Security: CVE-2026-2006
tristan957 approved these changes on Feb 10, 2026.
See https://www.postgresql.org/docs/release/18.2/ for release notes.
Update PG 18 to stamped 18.2 release