Commit history for src/pl/plpython/plpy_exec.c
2025-08-21  PL/Python: Add event trigger support  (Peter Eisentraut)

Allow event triggers to be written in PL/Python. The trigger function receives a TD dictionary with some information about the event trigger.

Author: Euler Taveira <euler@eulerto.com>
Co-authored-by: Dimitri Fontaine <dimitri@2ndQuadrant.fr>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/03f03515-2068-4f5b-b357-8fb540883c38%40app.fastmail.com

2025-08-21  PL/Python: Refactor for event trigger support  (Peter Eisentraut)

Change the type of is_trigger from boolean to an enum. This is preparation for adding event trigger support.

Author: Euler Taveira <euler@eulerto.com>
Co-authored-by: Dimitri Fontaine <dimitri@2ndQuadrant.fr>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/03f03515-2068-4f5b-b357-8fb540883c38%40app.fastmail.com

2025-04-27  Remove circular #include's between plpython.h and plpy_util.h.  (Tom Lane)

plpython.h included plpy_util.h, simply on the grounds that "it's easier to just include it everywhere". However, plpy_util.h must include plpython.h, or it won't pass headerscheck. While the resulting circularity doesn't have any immediate bad effect, it's poor design. We have seen serious messes arise in the past from overly-broad inclusion footprints created by such circularities, so let's establish a project policy against it.

To fix, just replace *.c files' inclusions of plpython.h with plpy_util.h. They'll pull in plpython.h indirectly; indeed, almost all have already done so via inclusions of other plpy_xxx.h headers. (Any extensions using plpython.h can do likewise without breaking the compatibility of their code with prior Postgres versions.)

Reported-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aAxQ6fcY5QQV1lo3@ip-10-97-1-34.eu-west-3.compute.internal

2025-02-25  Remove obsolete Python version check  (Peter Eisentraut)

The checked version is already the current minimum supported version (3.2).

Discussion: https://www.postgresql.org/message-id/flat/ee410de1-1e0b-4770-b125-eeefd4726a24@eisentraut.org

2024-11-28  Remove useless casts to (void *)  (Peter Eisentraut)

Many of them just seem to have been copied around for no real reason. Their presence causes (small) risks of hiding actual type mismatches or silently discarding qualifiers.

Discussion: https://www.postgresql.org/message-id/flat/461ea37c-8b58-43b4-9736-52884e862820@eisentraut.org

2024-10-28  Remove unused #include's from contrib, pl, test .c files  (Peter Eisentraut)

as determined by IWYU

Similar to commit dbbca2cf299, but for contrib, pl, and src/test/.

Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/0df1d5b1-8ca8-4f84-93be-121081bde049%40eisentraut.org

2024-05-09  Fix recursive RECORD-returning plpython functions.  (Tom Lane)

If we recursed to a new call of the same function, with a different coldeflist (AS clause), it would fail because the inner call would overwrite the outer call's idea of what to return.

This is vaguely like 1d2fe56e4 and c5bec5426, but it's not due to any API decisions: it's just that we computed the actual output rowtype at the start of the call, and saved it in the per-procedure data structure. We can fix it at basically zero cost by doing the computation at the end of each call instead of the start.

It's not clear that there's any real-world use-case for such a function, but given that it doesn't cost anything to fix, it'd be silly not to.

Per report from Andreas Karlsson. Back-patch to all supported branches.

Discussion: https://postgr.es/m/1651a46d-3c15-4028-a8c1-d74937b54e19@proxel.se

2024-05-07  Don't corrupt plpython's "TD" dictionary in a recursive trigger call.  (Tom Lane)

If a plpython-language trigger caused another one to be invoked, the "TD" dictionary created for the inner one would overwrite the outer one's "TD" dictionary. This is more or less the same problem that 1d2fe56e4 fixed for ordinary functions in plpython, so fix it the same way, by saving and restoring "TD" during a recursive invocation.

This fix makes an ABI-incompatible change in struct PLySavedArgs. I'm not too worried about that because it seems highly unlikely that any extension is messing with those structs.

We could imagine doing something weird to preserve nominal ABI compatibility in the back branches, like keeping the saved TD object in an extra element of namedargs[]. However, that would only be very nominal compatibility: if anything *is* touching PLySavedArgs, it would likely do the wrong thing due to not knowing about the additional value. So I judge it not worth the ugliness to do something different there. (I also changed struct PLyProcedure, but its added field fits into formerly-padding space, so that should be safe.)

Per bug #18456 from Jacques Combrink. This bug is very ancient, so back-patch to all supported branches.

Discussion: https://postgr.es/m/3008982.1714853799@sss.pgh.pa.us

2024-04-01  Avoid possible longjmp-induced logic error in PLy_trigger_build_args.  (Tom Lane)

The "pltargs" variable wasn't marked volatile, which makes it unsafe to change its value within the PG_TRY block. It looks like the worst outcome would be to fail to release a refcount on Py_None during an (improbable) error exit, which would likely go unnoticed in the field. Still, it's a bug. A one-liner fix could be to mark pltargs volatile, but on the whole it seems cleaner to arrange things so that we don't change its value within PG_TRY.

Per report from Xing Guo. This has been there for quite awhile, so back-patch to all supported branches.

Discussion: https://postgr.es/m/CACpMh+DLrk=fDv07MNpBT4J413fDAm+gmMXgi8cjPONE+jvzuw@mail.gmail.com

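A minimal C sketch of the volatile rule behind this fix; the demo_* function and variable names are hypothetical, not the commit's actual code:

    #include "postgres.h"
    #include "plpython.h"       /* PL/Python's wrapper that pulls in Python.h */

    static void
    demo_try_with_volatile(PyObject *pltargs)
    {
        /* Assigned inside PG_TRY and read in PG_CATCH, so it must be volatile;
         * the alternative (taken by the commit) is to not change it in PG_TRY. */
        PyObject   *volatile saved = NULL;

        PG_TRY();
        {
            saved = pltargs;
            /* ... code that may ereport(ERROR) ... */
        }
        PG_CATCH();
        {
            Py_XDECREF(saved);      /* reliably sees the updated value */
            PG_RE_THROW();
        }
        PG_END_TRY();
    }
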
2023-05-04  Move return statements out of PG_TRY blocks.  (Nathan Bossart)

If we exit a PG_TRY block early via "continue", "break", "goto", or "return", we'll skip unwinding its exception stack. This change moves a couple of such "return" statements in PL/Python out of PG_TRY blocks.

This was introduced in d0aa965c0a and affects all supported versions.

We might also be able to add compile-time checks to prevent recurrence, but that is left as a future exercise.

Reported-by: Mikhail Gribkov, Xing Guo
Author: Xing Guo
Reviewed-by: Michael Paquier, Andres Freund, Tom Lane
Discussion: https://postgr.es/m/CAMEv5_v5Y%2B-D%3DCO1%2Bqoe16sAmgC4sbbQjz%2BUtcHmB6zcgS%2B5Ew%40mail.gmail.com
Discussion: https://postgr.es/m/CACpMh%2BCMsGMRKFzFMm3bYTzQmMU5nfEEoEDU2apJcc4hid36AQ%40mail.gmail.com
Backpatch-through: 11 (all supported versions)

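A sketch of the corrected shape, using a hypothetical demo_* helper; the point is only that the result is captured inside PG_TRY and returned after PG_END_TRY has unwound the exception stack:

    #include "postgres.h"

    static Datum
    demo_compute_result(void)
    {
        Datum       result = (Datum) 0;

        PG_TRY();
        {
            /* WRONG would be: "return Int32GetDatum(42);" right here,
             * which leaves a stale entry on the error-handling stack. */
            result = Int32GetDatum(42);
        }
        PG_FINALLY();
        {
            /* cleanup shared by the error and non-error paths */
        }
        PG_END_TRY();

        return result;          /* safe: PG_END_TRY() has run */
    }
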
2022-10-07  Fix final warnings produced by -Wshadow=compatible-local  (David Rowley)

I thought I had these in d8df67bb1, but per report from Andres Freund, I missed some.

Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/20221005214052.c4tkudawyp5wxt3c@awork3.anarazel.de

2022-03-07  plpython: Code cleanup related to removal of Python 2 support.  (Andres Freund)

Since 19252e8ec93 we reject Python 2 during build configuration. Now that the dust on the buildfarm has settled, remove Python 2 specific code, including the "Python 2/3 porting layer".

The code to detect conflicts between plpython using Python 2 and 3 is not removed, in case somebody creates an out-of-tree version adding back support for Python 2.

Reviewed-By: Peter Eisentraut <peter@eisentraut.org>
Reviewed-By: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20211031184548.g4sxfe47n2kyi55r@alap3.anarazel.de

2019-11-25  Make the order of the header file includes consistent.  (Amit Kapila)

Similar to commits 14aec03502, 7e735035f2 and dddf4cdc33, this commit makes the order of header file inclusion consistent in more places.

Author: Vignesh C
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CALDaNm2Sznv8RR6Ex-iJO6xAdsxgWhCoETkaYX=+9DW3q0QCfA@mail.gmail.com

2019-11-04  Fix some compiler warnings on older compilers  (Peter Eisentraut)

Some older compilers appear to not understand the recently introduced PG_FINALLY code structure that well in some circumstances and complain about possibly uninitialized variables. So to fix, initialize the variables explicitly in the cases complained about.

Discussion: https://www.postgresql.org/message-id/flat/95a822c3-728b-af0e-d7e5-71890507ae0c%402ndquadrant.com

2019-11-01  PG_FINALLY  (Peter Eisentraut)

This gives an alternative way of catching exceptions, for the common case where the cleanup code is the same in the error and non-error cases. So instead of

    PG_TRY();
    {
        ... code that might throw ereport(ERROR) ...
    }
    PG_CATCH();
    {
        cleanup();
        PG_RE_THROW();
    }
    PG_END_TRY();
    cleanup();

one can write

    PG_TRY();
    {
        ... code that might throw ereport(ERROR) ...
    }
    PG_FINALLY();
    {
        cleanup();
    }
    PG_END_TRY();

Discussion: https://www.postgresql.org/message-id/flat/95a822c3-728b-af0e-d7e5-71890507ae0c%402ndquadrant.com

2019-05-22  Phase 2 pgindent run for v12.  (Tom Lane)

Switch to 2.1 version of pg_bsd_indent. This formats multiline function declarations "correctly", that is with additional lines of parameter declarations indented to match where the first line's left parenthesis is.

Discussion: https://postgr.es/m/CAEepm=0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug@mail.gmail.com

2019-05-22  Initial pgindent run for v12.  (Tom Lane)

This is still using the 2.0 version of pg_bsd_indent. I thought it would be good to commit this separately, so as to document the differences between 2.0 and 2.1 behavior.

Discussion: https://postgr.es/m/16296.1558103386@sss.pgh.pa.us

2019-03-30  Generated columns  (Peter Eisentraut)

This is an SQL-standard feature that allows creating columns that are computed from expressions rather than assigned, similar to a view or materialized view but on a column basis.

This implements one kind of generated column: stored (computed on write). Another kind, virtual (computed on read), is planned for the future, and some room is left for it.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b151f851-4019-bdb1-699e-ebab07d2f40a@2ndquadrant.com

2019-01-26  Change function call information to be variable length.  (Andres Freund)

Before this change FunctionCallInfoData, the struct arguments etc for V1 function calls are stored in, always had space for FUNC_MAX_ARGS/100 arguments, storing datums and their nullness in two arrays. For nearly every function call 100 arguments is far more than needed, therefore wasting memory. Arg and argnull being two separate arrays also guarantees that to access a single argument, two cachelines have to be touched.

Change the layout so there's a single variable-length array with pairs of value / isnull. That drastically reduces memory consumption for most function calls (on x86-64 a two argument function now uses 64 bytes, previously 936 bytes), and makes it very likely that argument value and its nullness are on the same cacheline.

Arguments are stored in a new NullableDatum struct, which, due to padding, needs more memory per argument than before. But as usually far fewer arguments are stored, and individual arguments are cheaper to access, that's still a clear win. It's likely that there's other places where conversion to NullableDatum arrays would make sense, e.g. TupleTableSlots, but that's for another commit.

Because the function call information is now variable-length allocations have to take the number of arguments into account. For heap allocations that can be done with SizeForFunctionCallInfoData(), for on-stack allocations there's a new LOCAL_FCINFO(name, nargs) macro that helps to allocate an appropriately sized and aligned variable. Some places with stack allocation function call information don't know the number of arguments at compile time, and currently variably sized stack allocations aren't allowed in postgres. Therefore allow for FUNC_MAX_ARGS space in these cases. They're not that common, so for now that seems acceptable.

Because of the need to allocate FunctionCallInfo of the appropriate size, older extensions may need to update their code. To avoid subtle breakages, the FunctionCallInfoData struct has been renamed to FunctionCallInfoBaseData. Most code only references FunctionCallInfo, so that shouldn't cause much collateral damage.

This change is also a prerequisite for more efficient expression JIT compilation (by allocating the function call information on the stack, allowing LLVM to optimize it away); previously the size of the call information caused problems inside LLVM's optimizer.

Author: Andres Freund
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/20180605172952.x34m5uz6ju6enaem@alap3.anarazel.de

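A short C sketch of the new allocation pattern described above, for a hypothetical two-argument call (the demo_* wrapper is not part of the commit):

    #include "postgres.h"
    #include "fmgr.h"

    static Datum
    demo_call_two_args(FmgrInfo *flinfo, Datum a, Datum b, Oid collation)
    {
        /* Correctly sized and aligned on the stack for exactly 2 arguments;
         * a heap allocation would use palloc(SizeForFunctionCallInfoData(2)). */
        LOCAL_FCINFO(fcinfo, 2);

        InitFunctionCallInfoData(*fcinfo, flinfo, 2, collation, NULL, NULL);

        fcinfo->args[0].value = a;      /* NullableDatum: value/isnull pairs */
        fcinfo->args[0].isnull = false;
        fcinfo->args[1].value = b;
        fcinfo->args[1].isnull = false;

        /* a real caller would also check fcinfo->isnull afterwards */
        return FunctionCallInvoke(fcinfo);
    }
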
2018-04-26  Post-feature-freeze pgindent run.  (Tom Lane)
Discussion: https://postgr.es/m/15719.1523984266@sss.pgh.pa.us
2018-03-14  Support INOUT arguments in procedures  (Peter Eisentraut)

In a top-level CALL, the values of INOUT arguments will be returned as a result row. In PL/pgSQL, the values are assigned back to the input arguments. In other languages, the same convention as for returning a record from a function is used. That does not require any code changes in the PL implementations.

Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>

2017-12-03  Fix uninitialized-variable compiler warning induced by commit e4128ee76.  (Tom Lane)
I'm a little bit astonished that anyone's compiler would have failed to complain about this. The compiler surely does not know that is_procedure means the function return value will be ignored.
2017-11-30  SQL procedures  (Peter Eisentraut)

This adds a new object type "procedure" that is similar to a function but does not have a return type and is invoked by the new CALL statement instead of SELECT or similar. This implementation is aligned with the SQL standard and compatible with or similar to other SQL implementations.

This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well as ALTER/DROP ROUTINE that can refer to either a function or a procedure (or an aggregate function, as an extension to SQL). There is also support for procedures in various utility commands such as COMMENT and GRANT, as well as support in pg_dump and psql. Support for defining procedures is available in all the languages supplied by the core distribution.

While this commit is mainly syntax sugar around existing functionality, future features will rely on having procedures as a separate object type.

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>

2017-11-29  PL/Python: Fix remaining scan-build warnings  (Peter Eisentraut)

Apparently, scan-build thinks that proc->is_setof can change during PLy_exec_function(). To make it clearer, save the value in a local variable. Also add an assertion to clear another warning.

Reviewed-by: John Naylor <jcnaylor@gmail.com>

2017-11-18  Consistently catch errors from Python _New() functions  (Peter Eisentraut)

Python Py*_New() functions can fail and return NULL in out-of-memory conditions. The previous code handled that inconsistently or not at all. This change organizes that better. If we are in a function that is called from Python, we just check for failure and return NULL ourselves, which will cause any exception information to be passed up. If we are called from PostgreSQL, we consistently create an "out of memory" error.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>

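A hedged C sketch of the two conventions described above, using hypothetical demo_* helpers and an illustrative error message:

    #include "postgres.h"
    #include "plpython.h"       /* PL/Python's wrapper that pulls in Python.h */

    /* Called from Python: propagate NULL so the exception travels up. */
    static PyObject *
    demo_called_from_python(void)
    {
        PyObject   *dict = PyDict_New();

        if (dict == NULL)
            return NULL;
        return dict;
    }

    /* Called from PostgreSQL: turn the failure into an ereport(). */
    static PyObject *
    demo_called_from_postgres(void)
    {
        PyObject   *list = PyList_New(0);

        if (list == NULL)
            ereport(ERROR,
                    (errcode(ERRCODE_OUT_OF_MEMORY),
                     errmsg("could not create Python list")));
        return list;
    }
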
2017-11-16  Make PL/Python handle domain-type conversions correctly.  (Tom Lane)

Fix PL/Python so that it can handle domains over composite, and so that it enforces domain constraints correctly in other cases that were not always done properly before. Notably, it didn't do arrays of domains right (oversight in commit c12d570fa), and it failed to enforce domain constraints when returning a composite type containing a domain field, and if a transform function is being used for a domain's base type then it failed to enforce domain constraints on the result. Also, in many places it missed checking domain constraints on null values, because the plpy_typeio code simply wasn't called for Py_None.

Rather than try to band-aid these problems, I made a significant refactoring of the plpy_typeio logic. The existing design of recursing for array and composite members is extended to also treat domains as containers requiring recursion, and the APIs for the module are cleaned up and simplified.

The patch also modifies plpy_typeio to rely on the typcache more than it did before (which was pretty much not at all). This reduces the need for repetitive lookups, and lets us get rid of an ad-hoc scheme for detecting changes in composite types. I added a couple of small features to typcache to help with that.

Although some of this is fixing bugs that long predate v11, I don't think we should risk a back-patch: it's a significant amount of code churn, and there've been no complaints from the field about the bugs.

Tom Lane, reviewed by Anthony Bykov

Discussion: https://postgr.es/m/24449.1509393613@sss.pgh.pa.us

2017-08-20  Change tupledesc->attrs[n] to TupleDescAttr(tupledesc, n).  (Andres Freund)

This is a mechanical change in preparation for a later commit that will change the layout of TupleDesc. Introducing a macro to abstract the details of where attributes are stored will allow us to change that in a separate step and revise it in future.

Author: Thomas Munro, editorialized by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=0ZtQ-SpsgCyzzYpsXS6e=kZWqk3g5Ygn3MDV7A8dabUA@mail.gmail.com

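The substitution in question, sketched for a hypothetical loop over a tuple descriptor's attributes:

    #include "postgres.h"
    #include "access/tupdesc.h"

    static void
    demo_walk_attributes(TupleDesc tupdesc)
    {
        for (int i = 0; i < tupdesc->natts; i++)
        {
            /* old style: Form_pg_attribute attr = tupdesc->attrs[i]; */
            Form_pg_attribute attr = TupleDescAttr(tupdesc, i);

            if (attr->attisdropped)
                continue;
            /* ... use attr->attname, attr->atttypid, ... */
        }
    }
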
2017-06-21  Phase 3 of pgindent updates.  (Tom Lane)

Don't move parenthesized lines to the left, even if that means they flow past the right margin.

By default, BSD indent lines up statement continuation lines that are within parentheses so that they start just to the right of the preceding left parenthesis. However, traditionally, if that resulted in the continuation line extending to the right of the desired right margin, then indent would push it left just far enough to not overrun the margin, if it could do so without making the continuation line start to the left of the current statement indent. That makes for a weird mix of indentations unless one has been completely rigid about never violating the 80-column limit.

This behavior has been pretty universally panned by Postgres developers. Hence, disable it with indent's new -lpl switch, so that parenthesized lines are always lined up with the preceding left paren.

This patch is much less interesting than the first round of indent changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us

2017-06-21  Phase 2 of pgindent updates.  (Tom Lane)

Change pg_bsd_indent to follow upstream rules for placement of comments to the right of code, and remove pgindent hack that caused comments following #endif to not obey the general rule.

Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using the published version of pg_bsd_indent, but a hacked-up version that tried to minimize the amount of movement of comments to the right of code. The situation of interest is where such a comment has to be moved to the right of its default placement at column 33 because there's code there. BSD indent has always moved right in units of tab stops in such cases --- but in the previous incarnation, indent was working in 8-space tab stops, while now it knows we use 4-space tabs. So the net result is that in about half the cases, such comments are placed one tab stop left of before. This is better all around: it leaves more room on the line for comment text, and it means that in such cases the comment uniformly starts at the next 4-space tab stop after the code, rather than sometimes one and sometimes two tabs after.

Also, ensure that comments following #endif are indented the same as comments following other preprocessor commands such as #else. That inconsistency turns out to have been self-inflicted damage from a poorly-thought-through post-indent "fixup" in pgindent.

This patch is much less interesting than the first round of indent changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us

2017-06-21  Initial pgindent run with pg_bsd_indent version 2.0.  (Tom Lane)

The new indent version includes numerous fixes thanks to Piotr Stefaniak. The main changes visible in this commit are:

* Nicer formatting of function-pointer declarations.
* No longer unexpectedly removes spaces in expressions using casts, sizeof, or offsetof.
* No longer wants to add a space in "struct structname *varname", as well as some similar cases for const- or volatile-qualified pointers.
* Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
* Fixes bug where comments following declarations were sometimes placed with no space separating them from the code.
* Fixes some odd decisions for comments following case labels.
* Fixes some cases where comments following code were indented to less than the expected column 33.

On the less good side, it now tends to put more whitespace around typedef names that are not listed in typedefs.list. This might encourage us to put more effort into typedef name collection; it's not really a bug in indent itself.

There are more changes coming after this round, having to do with comment indentation and alignment of lines appearing within parentheses. I wanted to limit the size of the diffs to something that could be reviewed without one's eyes completely glazing over, so it seemed better to split up the changes as much as practical.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us

2017-05-17  Post-PG 10 beta1 pgindent run  (Bruce Momjian)
perltidy run not included.
2017-04-04  Follow-on cleanup for the transition table patch.  (Kevin Grittner)

Commit 59702716 added transition table support to PL/pgsql so that SQL queries in trigger functions could access those transient tables.

In order to provide the same level of support for PL/perl, PL/python and PL/tcl, refactor the relevant code into a new function SPI_register_trigger_data. Call the new function in the trigger handler of all four PLs, and document it as a public SPI function so that authors of out-of-tree PLs can do the same.

Also get rid of a second QueryEnvironment object that was maintained by PL/pgsql. That was previously used to deal with cursors, but the same approach wasn't appropriate for PLs that are less tangled up with core code. Instead, have SPI_cursor_open install the connection's current QueryEnvironment, as already happens for SPI_execute_plan.

While in the docs, remove the note that transition tables were only supported in C and PL/pgSQL triggers, and correct some omissions.

Thomas Munro with some work by Kevin Grittner (mostly docs)

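What a PL's trigger handler does with the new function, as a minimal sketch (the demo_* wrapper is hypothetical; SPI_register_trigger_data itself is the documented SPI call):

    #include "postgres.h"
    #include "commands/trigger.h"
    #include "executor/spi.h"

    static void
    demo_expose_transition_tables(TriggerData *tdata)
    {
        /* Assumes SPI_connect() has already succeeded in this handler. */
        int         rc = SPI_register_trigger_data(tdata);

        if (rc != SPI_OK_TD_REGISTER)
            elog(ERROR, "SPI_register_trigger_data failed: %s",
                 SPI_result_code_string(rc));
    }
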
2016-11-08  Simplify code by getting rid of SPI_push, SPI_pop, SPI_restore_connection.  (Tom Lane)

The idea behind SPI_push was to allow transitioning back into an "unconnected" state when a SPI-using procedure calls unrelated code that might or might not invoke SPI. That sounds good, but in practice the only thing it does for us is to catch cases where a called SPI-using function forgets to call SPI_connect --- which is a highly improbable failure mode, since it would be exposed immediately by direct testing of said function. As against that, we've had multiple bugs induced by forgetting to call SPI_push/SPI_pop around code that might invoke SPI-using functions; these are much harder to catch and indeed have gone undetected for years in some cases. And we've had to band-aid around some problems of this ilk by introducing conditional push/pop pairs in some places, which really kind of defeats the purpose altogether; if we can't draw bright lines between connected and unconnected code, what's the point? Hence, get rid of SPI_push[_conditional], SPI_pop[_conditional], and the underlying state variable _SPI_curid.

It turns out SPI_restore_connection can go away too, which is a nice side benefit since it was never more than a kluge. Provide no-op macros for the deleted functions so as to avoid an API break for external modules.

A side effect of this removal is that SPI_palloc and allied functions no longer permit being called when unconnected; they'll throw an error instead. The apparent usefulness of the previous behavior was a mirage as well, because it was depended on by only a few places (which I fixed in preceding commits), and it posed a risk of allocations being unexpectedly long-lived if someone forgot a SPI_push call.

Discussion: <20808.1478481403@sss.pgh.pa.us>

2016-11-08  Use heap_modify_tuple not SPI_modifytuple in pl/python triggers.  (Tom Lane)

The code here would need some change anyway given planned change in SPI_modifytuple semantics, since this executes after we've exited the SPI environment. But really it's better to just use heap_modify_tuple. While at it, normalize use of SPI_fnumber: make error messages distinguish no-such-column from can't-set-system-column, and remove test for deleted column which is going to migrate into SPI_fnumber.

The lack of a check for system column names is actually a pre-existing bug here, and might even qualify as a security bug except that we don't have any trusted version of plpython.

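A rough C sketch of the pattern this switches to; the demo_* helper and the error texts are illustrative, not the commit's code:

    #include "postgres.h"
    #include "access/htup_details.h"
    #include "executor/spi.h"

    static HeapTuple
    demo_replace_column(HeapTuple otup, TupleDesc tupdesc,
                        const char *colname, Datum newval, bool newisnull)
    {
        int         attn = SPI_fnumber(tupdesc, colname);
        Datum      *values = palloc0(tupdesc->natts * sizeof(Datum));
        bool       *nulls = palloc0(tupdesc->natts * sizeof(bool));
        bool       *replaces = palloc0(tupdesc->natts * sizeof(bool));

        if (attn == SPI_ERROR_NOATTRIBUTE)
            ereport(ERROR,
                    (errcode(ERRCODE_UNDEFINED_COLUMN),
                     errmsg("column \"%s\" does not exist", colname)));
        if (attn <= 0)                  /* negative attnums are system columns */
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("cannot set system attribute \"%s\"", colname)));

        values[attn - 1] = newval;
        nulls[attn - 1] = newisnull;
        replaces[attn - 1] = true;      /* all other columns keep their values */

        return heap_modify_tuple(otup, tupdesc, values, nulls, replaces);
    }
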
2016-10-26  Give a hint, when [] is incorrectly used for a composite type in array.  (Heikki Linnakangas)

That used to be accepted, so let's try to give a hint to users on why their PL/python functions no longer work.

Reviewed by Pavel Stehule.

Discussion: <CAH38_tmbqwaUyKs9yagyRra=SMaT45FPBxk1pmTYcM0TyXGG7Q@mail.gmail.com>

2016-04-05  Fix PL/Python for recursion and interleaved set-returning functions.  (Tom Lane)

PL/Python failed if a PL/Python function was invoked recursively via SPI, since arguments are passed to the function in its global dictionary (a horrible decision that's far too ancient to undo) and it would delete those dictionary entries on function exit, leaving the outer recursion level(s) without any arguments. Not deleting them would be little better, since the outer levels would then see the innermost level's arguments.

Since PL/Python uses ValuePerCall mode for evaluating set-returning functions, it's possible for multiple executions of the same SRF to be interleaved within a query. PL/Python failed in such a case, because it stored only one iterator per function, directly in the function's PLyProcedure struct. Moreover, one interleaved instance of the SRF would see argument values that should belong to another.

Hence, invent code for saving and restoring the argument entries. To fix the recursion case, we only need to save at recursive entry and restore at recursive exit, so the overhead in non-recursive cases is negligible. To fix the SRF case, we have to save when suspending a SRF and restore when resuming it, which is potentially not negligible; but fortunately this is mostly a matter of manipulating Python object refcounts and should not involve much physical data copying.

Also, store the Python iterator and saved argument values in a structure associated with the SRF call site rather than the function itself. This requires adding a memory context deletion callback to ensure that the SRF state is cleaned up if the calling query exits before running the SRF to completion. Without that we'd leak a refcount to the iterator object in such a case, resulting in session-lifespan memory leakage. (In the pre-existing code, there was no memory leak because there was only one iterator pointer, but what would happen is that the previous iterator would be resumed by the next query attempting to use the SRF. Hardly the semantics we want.)

We can buy back some of whatever overhead we've added by getting rid of PLy_function_delete_args(), which seems a useless activity: there is no need to delete argument entries from the global dictionary on exit, since the next time anyone would see the global dict is on the next fresh call of the PL/Python function, at which time we'd overwrite those entries with new arg values anyway.

Also clean up some really ugly coding in the SRF implementation, including such gems as returning directly out of a PG_TRY block. (The only reason that failed to crash hard was that all existing call sites immediately exited their own PG_TRY blocks, popping the dangling longjmp pointer before there was any chance of it being used.)

In principle this is a bug fix; but it seems a bit too invasive relative to its value for a back-patch, and besides the fix depends on memory context callbacks so it could not go back further than 9.5 anyway.

Alexey Grishchenko and Tom Lane

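A hedged sketch of the memory context deletion callback mentioned above; the DemoSRFState struct and demo_* names are invented for illustration:

    #include "postgres.h"
    #include "plpython.h"       /* PL/Python's wrapper that pulls in Python.h */

    typedef struct DemoSRFState
    {
        PyObject   *iterator;           /* suspended Python iterator */
        MemoryContextCallback callback; /* fired when the context is reset or deleted */
    } DemoSRFState;

    static void
    demo_srf_cleanup(void *arg)
    {
        DemoSRFState *state = (DemoSRFState *) arg;

        Py_XDECREF(state->iterator);    /* don't leak the refcount if the query bails out */
    }

    static DemoSRFState *
    demo_srf_state_create(MemoryContext cxt, PyObject *iter)
    {
        DemoSRFState *state = MemoryContextAllocZero(cxt, sizeof(DemoSRFState));

        state->iterator = iter;
        state->callback.func = demo_srf_cleanup;
        state->callback.arg = state;
        MemoryContextRegisterResetCallback(cxt, &state->callback);
        return state;
    }
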
2015-11-05  Fix memory leaks in PL/Python.  (Tom Lane)

Previously, plpython was in the habit of allocating a lot of stuff in TopMemoryContext, and it was very slipshod about making sure that stuff got cleaned up; in particular, use of TopMemoryContext as fn_mcxt for function calls represents an unfixable leak, since we generally don't know what the called function might have allocated in fn_mcxt. This results in session-lifespan leakage in certain usage scenarios, as for example in a case reported by Ed Behn back in July.

To fix, get rid of all the retail allocations in TopMemoryContext. All long-lived allocations are now made in sub-contexts that are associated with specific objects (either pl/python procedures, or Python-visible objects such as cursors and plans). We can clean these up when the associated object is deleted.

I went so far as to get rid of PLy_malloc completely. There were a couple of places where it could still have been used safely, but on the whole it was just an invitation to bad coding.

Haribabu Kommi, based on a draft patch by Heikki Linnakangas; some further work by me

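A minimal sketch of the per-object sub-context scheme, with an invented DemoProcedure struct standing in for the real PL/Python structures:

    #include "postgres.h"
    #include "utils/memutils.h"

    typedef struct DemoProcedure
    {
        MemoryContext mcxt;     /* owns everything belonging to this procedure */
        char       *proname;
    } DemoProcedure;

    static DemoProcedure *
    demo_procedure_create(const char *proname)
    {
        MemoryContext cxt = AllocSetContextCreate(TopMemoryContext,
                                                  "PL/Python demo procedure",
                                                  ALLOCSET_DEFAULT_SIZES);
        DemoProcedure *proc = MemoryContextAlloc(cxt, sizeof(DemoProcedure));

        proc->mcxt = cxt;
        proc->proname = MemoryContextStrdup(cxt, proname);
        return proc;
    }

    static void
    demo_procedure_delete(DemoProcedure *proc)
    {
        MemoryContextDelete(proc->mcxt);    /* frees the struct and its strings in one go */
    }
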
2015-08-02  Fix a number of places that produced XX000 errors in the regression tests.  (Tom Lane)

It's against project policy to use elog() for user-facing errors, or to omit an errcode() selection for errors that aren't supposed to be "can't happen" cases. Fix all the violations of this policy that result in ERRCODE_INTERNAL_ERROR log entries during the standard regression tests, as errors that can reliably be triggered from SQL surely should be considered user-facing. I also looked through all the files touched by this commit and fixed other nearby problems of the same ilk. I do not claim to have fixed all violations of the policy, just the ones in these files.

In a few places I also changed existing ERRCODE choices that didn't seem particularly appropriate; mainly replacing ERRCODE_SYNTAX_ERROR by something more specific.

Back-patch to 9.5, but no further; changing ERRCODE assignments in stable branches doesn't seem like a good idea.

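The policy in one hedged C example; the message text and errcode choice here are illustrative only:

    #include "postgres.h"

    static void
    demo_report_bad_argument(const char *what)
    {
        /* old style, which reports SQLSTATE XX000 (ERRCODE_INTERNAL_ERROR):
         *     elog(ERROR, "bad argument: %s", what);
         */
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("bad argument: %s", what)));
    }
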
2015-01-06  Fix thinko in plpython error message  (Alvaro Herrera)
2014-07-03  Improve support for composite types in PL/Python.  (Tom Lane)

Allow PL/Python functions to return arrays of composite types. Also, fix the restriction that plpy.prepare/plpy.execute couldn't handle query parameters or result columns of composite types.

In passing, adopt a saner arrangement for where to release the tupledesc reference counts acquired via lookup_rowtype_tupdesc. The callers of PLyObject_ToCompositeDatum were doing the lookups, but then the releases happened somewhere down inside subroutines of PLyObject_ToCompositeDatum, which is bizarre and bug-prone. Instead release in the same function that acquires the refcount.

Ed Behn and Ronan Dunklau, reviewed by Abhijit Menon-Sen

2014-05-06  pgindent run for 9.4  (Bruce Momjian)

This includes removing tabs after periods in C comments, which was applied to back branches, so this change should not affect backpatching.

2014-03-26  Fix refcounting bug in PLy_modify_tuple().  (Tom Lane)

We must increment the refcount on "plntup" as soon as we have the reference, not sometime later. Otherwise, if an error is thrown in between, the Py_XDECREF(plntup) call in the PG_CATCH block removes a refcount we didn't add, allowing the object to be freed even though it's still part of the plpython function's parsetree.

This appears to be the cause of crashes seen on buildfarm member prairiedog. It's a bit surprising that we've not seen it fail repeatably before, considering that the regression tests have been exercising the faulty code path since 2009.

The real-world impact is probably minimal, since it's unlikely anyone would be provoking the "TD["new"] is not a dictionary" error in production, and that's the only case that is actually wrong. Still, it's a bug affecting the regression tests, so patch all supported branches.

In passing, remove dead variable "plstr", and demote "platt" to a local variable inside the PG_TRY block, since we don't need to clean it up in the PG_CATCH path.

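A sketch of the refcount rule behind the fix, with hypothetical error text and a demo_* wrapper: PyDict_GetItemString() hands back a borrowed reference, so our own reference must be taken before any code that can throw:

    #include "postgres.h"
    #include "plpython.h"       /* PL/Python's wrapper that pulls in Python.h */

    static void
    demo_take_reference_early(PyObject *td_dict)
    {
        PyObject   *plntup = PyDict_GetItemString(td_dict, "new");

        if (plntup == NULL)
            ereport(ERROR,
                    (errcode(ERRCODE_UNDEFINED_OBJECT),
                     errmsg("TD[\"new\"] deleted, cannot modify row")));
        Py_INCREF(plntup);      /* now the PG_CATCH path may safely drop it */

        PG_TRY();
        {
            /* ... work that might ereport(ERROR) ... */
        }
        PG_CATCH();
        {
            Py_DECREF(plntup);
            PG_RE_THROW();
        }
        PG_END_TRY();

        Py_DECREF(plntup);
    }
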
2012-08-30  Split tuple struct defs from htup.h to htup_details.h  (Alvaro Herrera)

This reduces unnecessary exposure of other headers through htup.h, which is very widely included by many files.

I have chosen to move the function prototypes to the new file as well, because that means htup.h no longer needs to include tupdesc.h. In itself this doesn't have much effect in indirect inclusion of tupdesc.h throughout the tree, because it's also required by execnodes.h; but it's something to explore in the future, and it seemed best to do the htup.h change now while I'm busy with it.

2012-06-10  Run pgindent on 9.2 source tree in preparation for first 9.3 commit-fest.  (Bruce Momjian)
2012-04-26  PL/Python: Accept strings in functions returning composite types  (Peter Eisentraut)

Before 9.1, PL/Python functions returning composite types could return a string and it would be parsed using record_in. The 9.1 changes made PL/Python only expect dictionaries, tuples, or objects supporting getattr as output of composite functions, resulting in a regression and a confusing error message, as the strings were interpreted as sequences and the code for transforming lists to database tuples was used. Fix this by treating strings separately as before, before checking for the other types.

The reason why it's important to support string to database tuple conversion is that trigger functions on tables with composite columns get the composite row passed in as a string (from record_out). Without supporting converting this back using record_in, this makes it impossible to implement pass-through behavior for these columns, as PL/Python no longer accepts strings for composite values.

A better solution would be to fix the code that transforms composite inputs into Python objects to produce dictionaries that would then be correctly interpreted by the Python->PostgreSQL counterpart code. But that would be too invasive to backpatch to 9.1, and it is too late in the 9.2 cycle to attempt it. It should be revisited in the future, though.

Reported as bug #6559 by Kirill Simonov.

Jan Urbański

2012-03-13  Create a stack of pl/python "execution contexts".  (Tom Lane)

This replaces the former global variable PLy_curr_procedure, and provides a place to stash per-call-level information. In particular we create a per-call-level scratch memory context.

For the moment, the scratch context is just used to avoid leaking memory from datatype output function calls in PLyDict_FromTuple. There probably will be more use-cases in future.

Although this is a fix for a pre-existing memory leakage bug, it seems sufficiently invasive to not want to back-patch; it feels better as part of the major rearrangement of plpython code that we've already done as part of 9.2.

Jan Urbański

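A hedged sketch of a per-call execution-context stack of this kind; the DemoExecContext type and demo_* functions are invented, not the actual PLyExecutionContext API:

    #include "postgres.h"
    #include "utils/memutils.h"

    typedef struct DemoExecContext
    {
        MemoryContext scratch_ctx;      /* per-call scratch memory */
        struct DemoExecContext *next;   /* next older call level */
    } DemoExecContext;

    static DemoExecContext *demo_exec_stack = NULL;

    static DemoExecContext *
    demo_push_execution_context(void)
    {
        DemoExecContext *ctx = MemoryContextAlloc(TopMemoryContext,
                                                  sizeof(DemoExecContext));

        ctx->scratch_ctx = AllocSetContextCreate(TopTransactionContext,
                                                 "demo PL/Python scratch context",
                                                 ALLOCSET_DEFAULT_SIZES);
        ctx->next = demo_exec_stack;
        demo_exec_stack = ctx;
        return ctx;
    }

    static void
    demo_pop_execution_context(void)
    {
        DemoExecContext *ctx = demo_exec_stack;

        demo_exec_stack = ctx->next;
        MemoryContextDelete(ctx->scratch_ctx);
        pfree(ctx);
    }
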
2011-12-29  PL/Python: Add argument names to function declarations  (Peter Eisentraut)
For easier source reading
2011-12-18  Split plpython.c into smaller pieces  (Peter Eisentraut)

This moves the code around from one huge file into hopefully logical and more manageable modules. For the most part, the code itself was not touched, except: PLy_function_handler and PLy_trigger_handler were renamed to PLy_exec_function and PLy_exec_trigger, because they were not actually handlers in the PL handler sense, and it makes the naming more similar to the way PL/pgSQL is organized. The initialization of the procedure caches was separated into a new function init_procedure_caches to keep the hash tables private to plpy_procedures.c.

Jan Urbański and Peter Eisentraut