path: root/py/objfun.c
Age | Commit message | Author
2022-09-19py/obj: Convert make_new into a mp_obj_type_t slot.Jim Mussared
Instead of being an explicit field, it's now a slot like all the other methods. This is a marginal code size improvement because most types have a make_new (100/138 on PYBV11), but it also improves consistency in how types are declared, removing the special case for make_new. Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
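Illustrative sketch (not part of the commit): a type declared with make_new as a slot. The counter type, its struct and counter_make_new are hypothetical stand-ins for the declaration style.

    #include "py/runtime.h"

    typedef struct _counter_obj_t {
        mp_obj_base_t base;
        mp_int_t count;
    } counter_obj_t;

    // Standard make_new signature; registered below as the make_new slot.
    static mp_obj_t counter_make_new(const mp_obj_type_t *type, size_t n_args,
        size_t n_kw, const mp_obj_t *args) {
        (void)n_args; (void)n_kw; (void)args;
        counter_obj_t *self = mp_obj_malloc(counter_obj_t, type);
        self->count = 0;
        return MP_OBJ_FROM_PTR(self);
    }

    MP_DEFINE_CONST_OBJ_TYPE(
        mp_type_counter, MP_QSTR_Counter, MP_TYPE_FLAG_NONE,
        make_new, counter_make_new
        );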
2022-09-19all: Fix #if inside MP_DEFINE_CONST_OBJ_TYPE for msvc.Jim Mussared
Changes:

    MP_DEFINE_CONST_OBJ_TYPE(
        ...
        #if FOO
        ...
        #endif
        ...
        );

to:

    MP_DEFINE_CONST_OBJ_TYPE(
        ...
        FOO_TYPE_ATTR
        ...
        );

Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
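Illustrative sketch (not from the commit): the conditionally present slot pair is wrapped in a helper macro that expands to nothing when the feature is disabled, so no #if appears inside the macro's argument list. The feature flag, counter_attr handler and counter type reuse the hypothetical names from the sketch above.

    #if MICROPY_PY_COUNTER_ATTR
    #define COUNTER_TYPE_ATTR attr, counter_attr,
    #else
    #define COUNTER_TYPE_ATTR
    #endif

    MP_DEFINE_CONST_OBJ_TYPE(
        mp_type_counter, MP_QSTR_Counter, MP_TYPE_FLAG_NONE,
        COUNTER_TYPE_ATTR
        make_new, counter_make_new
        );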
2022-09-19all: Make all mp_obj_type_t defs use MP_DEFINE_CONST_OBJ_TYPE.Jim Mussared
In preparation for upcoming rework of mp_obj_type_t layout. Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
2022-07-18py/obj: Add static safety checks to mp_obj_is_type().Yonatan Goldschmidt
Commit d96cfd13e3a464862c introduced a regression by breaking existing users of mp_obj_is_type(.., &mp_type_bool). This function (and associated helpers like mp_obj_is_int()) have some specific nuances, and mistakes like this one can happen again. This commit adds mp_obj_is_exact_type() which behaves like the old mp_obj_is_type(). The new mp_obj_is_type() has the same prototype but it attempts to statically assert that it's not called with types that should be checked using their dedicated helpers. If called with any of these types: int, str, bool, NoneType - it will cause a compilation error. Additional checked types (e.g. function types) can be added in the future. Existing users of mp_obj_is_type() with the now "invalid" types were translated to use mp_obj_is_exact_type(). The use of MP_STATIC_ASSERT() is not bulletproof - usually GCC (and other compilers) can't statically check conditions that are only known at link time (like comparisons of variables' addresses). However, in this case, GCC is able to statically detect these conditions, probably because it's the exact same object - `&mp_type_int == &mp_type_int` is detected. Misuses of this function with runtime-chosen types (e.g. `mp_obj_type_t *x = ...; mp_obj_is_type(..., x);`) won't be detected. MSC is unable to detect this, so we use MP_STATIC_ASSERT_NOT_MSC(). Compiling with this commit and without the fix for d96cfd13e3a464862c shows that it detects the problem. Signed-off-by: Yonatan Goldschmidt <yon.goldschmidt@gmail.com>
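A usage sketch of the distinction (the classify helper is hypothetical; mp_obj_is_type, mp_obj_is_exact_type and mp_obj_is_int are the helpers named above):

    #include "py/obj.h"

    static void classify(mp_obj_t obj) {
        if (mp_obj_is_type(obj, &mp_type_list)) {
            // fine: list has no specialised checker
        }
        if (mp_obj_is_exact_type(obj, &mp_type_bool)) {
            // bool must now use the exact-type helper
        }
        if (mp_obj_is_int(obj)) {
            // ints (small and big) use their dedicated helper
        }
    }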
2022-06-25py/objfun: Support function attributes on native functions.Damien George
Native functions can just reuse the bytecode function attribute code. Signed-off-by: Damien George <damien@micropython.org>
2022-05-17py/bc: Provide separate code-state setup funcs for bytecode and native.Damien George
mpy-cross will now generate native code based on the size of mp_code_state_native_t, and the runtime will use this struct to calculate the offset of the .state field. This makes native code generation and execution (which rely on this struct) independent of the settings MICROPY_STACKLESS and MICROPY_PY_SYS_SETTRACE, both of which change the size of the mp_code_state_t struct. Fixes issue #5059. Signed-off-by: Damien George <damien@micropython.org>
2022-05-03all: Use mp_obj_malloc everywhere it's applicable.Jim Mussared
This replaces occurrences of

    foo_t *foo = m_new_obj(foo_t);
    foo->base.type = &foo_type;

with

    foo_t *foo = mp_obj_malloc(foo_t, &foo_type);

Excludes any places where base is a sub-field or when new0/memset is used. Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
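A minimal sketch of the new pattern, assuming a hypothetical foo_t type whose mp_type_foo is defined elsewhere:

    #include "py/obj.h"

    typedef struct _foo_t {
        mp_obj_base_t base;
        mp_int_t value;
    } foo_t;

    extern const mp_obj_type_t mp_type_foo; // assumed defined elsewhere

    static mp_obj_t foo_new(void) {
        // mp_obj_malloc allocates the object and sets base.type in one step.
        foo_t *foo = mp_obj_malloc(foo_t, &mp_type_foo);
        foo->value = 0;
        return MP_OBJ_FROM_PTR(foo);
    }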
2022-02-24py: Rework bytecode and .mpy file format to be mostly static data.Damien George
Background: .mpy files are precompiled .py files, built using mpy-cross, that contain compiled bytecode functions (and can also contain machine code). The benefit of using an .mpy file over a .py file is that they are faster to import and take less memory when importing. They are also smaller on disk.

But the real benefit of .mpy files comes when they are frozen into the firmware. This is done by loading the .mpy file during compilation of the firmware and turning it into a set of big C data structures (the job of mpy-tool.py), which are then compiled and downloaded into the ROM of a device. These C data structures can be executed in-place, ie directly from ROM. This makes importing even faster because there is very little to do, and also means such frozen modules take up much less RAM (because their bytecode stays in ROM).

The downside of frozen code is that it requires recompiling and reflashing the entire firmware. This can be a big barrier to entry, slows down development time, and makes it harder to do OTA updates of frozen code (because the whole firmware must be updated).

This commit attempts to solve this problem by providing a solution that sits between loading .mpy files into RAM and freezing them into the firmware. The .mpy file format has been reworked so that it consists of data and bytecode which is mostly static and ready to run in-place. If these new .mpy files are located in flash/ROM which is memory addressable, the .mpy file can be executed (mostly) in-place. With this approach there is still a small amount of unpacking and linking of the .mpy file that needs to be done when it's imported, but it's still much better than loading an .mpy from disk into RAM (although not as good as freezing .mpy files into the firmware).

The main trick to make static .mpy files is to adjust the bytecode so any qstrs that it references now go through a lookup table to convert from local qstr number in the module to global qstr number in the firmware. That means the bytecode does not need linking/rewriting of qstrs when it's loaded. Instead only a small qstr table needs to be built (and put in RAM) at import time. This means the bytecode itself is static/constant and can be used directly if it's in addressable memory. Also the qstr string data in the .mpy file, and some constant object data, can be used directly. Note that the qstr table is global to the module (ie not per function).

In more detail, in the VM what used to be (schematically):

    qst = DECODE_QSTR_VALUE;

is now (schematically):

    idx = DECODE_QSTR_INDEX;
    qst = qstr_table[idx];

That allows the bytecode to be fixed at compile time and not need relinking/rewriting of the qstr values. Only qstr_table needs to be linked when the .mpy is loaded. Incidentally, this helps to reduce the size of bytecode because what used to be 2-byte qstr values in the bytecode are now (mostly) 1-byte indices. If the module uses the same qstr more than two times then the bytecode is smaller than before.

The following changes are measured for this commit compared to the previous (the baseline):

- average 7%-9% reduction in size of .mpy files
- frozen code size is reduced by about 5%-7%
- importing .py files uses about 5% less RAM in total
- importing .mpy files uses about 4% less RAM in total
- importing .py and .mpy files takes about the same time as before

The qstr indirection in the bytecode has only a small impact on VM performance.
For stm32 on PYBv1.0 the performance change of this commit is:

    diff of scores (higher is better) N=100 M=100
    baseline -> this-commit : diff = diff% (error%)
    bm_chaos.py 371.07 -> 357.39 : -13.68 = -3.687% (+/-0.02%)
    bm_fannkuch.py 78.72 -> 77.49 : -1.23 = -1.563% (+/-0.01%)
    bm_fft.py 2591.73 -> 2539.28 : -52.45 = -2.024% (+/-0.00%)
    bm_float.py 6034.93 -> 5908.30 : -126.63 = -2.098% (+/-0.01%)
    bm_hexiom.py 48.96 -> 47.93 : -1.03 = -2.104% (+/-0.00%)
    bm_nqueens.py 4510.63 -> 4459.94 : -50.69 = -1.124% (+/-0.00%)
    bm_pidigits.py 650.28 -> 644.96 : -5.32 = -0.818% (+/-0.23%)
    core_import_mpy_multi.py 564.77 -> 581.49 : +16.72 = +2.960% (+/-0.01%)
    core_import_mpy_single.py 68.67 -> 67.16 : -1.51 = -2.199% (+/-0.01%)
    core_qstr.py 64.16 -> 64.12 : -0.04 = -0.062% (+/-0.00%)
    core_yield_from.py 362.58 -> 354.50 : -8.08 = -2.228% (+/-0.00%)
    misc_aes.py 429.69 -> 405.59 : -24.10 = -5.609% (+/-0.01%)
    misc_mandel.py 3485.13 -> 3416.51 : -68.62 = -1.969% (+/-0.00%)
    misc_pystone.py 2496.53 -> 2405.56 : -90.97 = -3.644% (+/-0.01%)
    misc_raytrace.py 381.47 -> 374.01 : -7.46 = -1.956% (+/-0.01%)
    viper_call0.py 576.73 -> 572.49 : -4.24 = -0.735% (+/-0.04%)
    viper_call1a.py 550.37 -> 546.21 : -4.16 = -0.756% (+/-0.09%)
    viper_call1b.py 438.23 -> 435.68 : -2.55 = -0.582% (+/-0.06%)
    viper_call1c.py 442.84 -> 440.04 : -2.80 = -0.632% (+/-0.08%)
    viper_call2a.py 536.31 -> 532.35 : -3.96 = -0.738% (+/-0.06%)
    viper_call2b.py 382.34 -> 377.07 : -5.27 = -1.378% (+/-0.03%)

And for unix on x64:

    diff of scores (higher is better) N=2000 M=2000
    baseline -> this-commit : diff = diff% (error%)
    bm_chaos.py 13594.20 -> 13073.84 : -520.36 = -3.828% (+/-5.44%)
    bm_fannkuch.py 60.63 -> 59.58 : -1.05 = -1.732% (+/-3.01%)
    bm_fft.py 112009.15 -> 111603.32 : -405.83 = -0.362% (+/-4.03%)
    bm_float.py 246202.55 -> 247923.81 : +1721.26 = +0.699% (+/-2.79%)
    bm_hexiom.py 615.65 -> 617.21 : +1.56 = +0.253% (+/-1.64%)
    bm_nqueens.py 215807.95 -> 215600.96 : -206.99 = -0.096% (+/-3.52%)
    bm_pidigits.py 8246.74 -> 8422.82 : +176.08 = +2.135% (+/-3.64%)
    misc_aes.py 16133.00 -> 16452.74 : +319.74 = +1.982% (+/-1.50%)
    misc_mandel.py 128146.69 -> 130796.43 : +2649.74 = +2.068% (+/-3.18%)
    misc_pystone.py 83811.49 -> 83124.85 : -686.64 = -0.819% (+/-1.03%)
    misc_raytrace.py 21688.02 -> 21385.10 : -302.92 = -1.397% (+/-3.20%)

The code size change is (firmware with a lot of frozen code benefits the most):

    bare-arm: +396 +0.697%
    minimal x86: +1595 +0.979% [incl +32(data)]
    unix x64: +2408 +0.470% [incl +800(data)]
    unix nanbox: +1396 +0.309% [incl -96(data)]
    stm32: -1256 -0.318% PYBV10
    cc3200: +288 +0.157%
    esp8266: -260 -0.037% GENERIC
    esp32: -216 -0.014% GENERIC [incl -1072(data)]
    nrf: +116 +0.067% pca10040
    rp2: -664 -0.135% PICO
    samd: +844 +0.607% ADAFRUIT_ITSYBITSY_M4_EXPRESS

As part of this change the .mpy file format version is bumped to version 6. And mpy-tool.py has been improved to provide a good visualisation of the contents of .mpy files.

In summary: this commit changes the bytecode to use qstr indirection, and reworks the .mpy file format to be simpler and allow .mpy files to be executed in-place. Performance is not impacted too much. Eventually it will be possible to store such .mpy files in a linear, read-only, memory-mappable filesystem so they can be executed from flash/ROM. This will essentially be able to replace frozen code for most applications.

Signed-off-by: Damien George <damien@micropython.org>
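An illustrative sketch of the qstr indirection described above; the struct, function and decode details are simplified stand-ins, not the actual VM code:

    #include <stddef.h>
    #include <stdint.h>
    #include "py/qstr.h"

    typedef struct _example_context_t {
        const qstr_short_t *qstr_table; // small per-module table built (in RAM) at import time
    } example_context_t;

    static qstr example_decode_qstr(const example_context_t *ctx, const uint8_t **ip) {
        size_t idx = *(*ip)++;       // most module-local indices fit in a single byte
        return ctx->qstr_table[idx]; // map local index to global qstr
    }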
2022-01-23all: Fix MICROPY_OBJ_REPR_D compilation with msvc.stijn
2021-06-25py: Mark unused arguments from bytecode decoding macros.Damien George
Signed-off-by: Damien George <damien@micropython.org>
2021-01-29py/objfun: Support fun.__globals__ attribute.Damien George
This returns a reference to the globals dict associated with the function, ie the global scope that the function was defined in. This attribute is read-only but the dict itself is modifiable, per CPython behaviour. Signed-off-by: Damien George <damien@micropython.org>
2020-06-30py: Rework mp_convert_member_lookup to properly handle built-ins.Damien George
This commit fixes lookups of class members to make it so that built-in functions that are used as methods/functions of a class work correctly. The mp_convert_member_lookup() function is pretty much completely changed by this commit, but for the most part it's just reorganised and the indenting changed. The functional changes are:

- staticmethod and classmethod checks moved to later in the if-logic, because they are less common and so should be checked after the more common cases.
- The explicit mp_obj_is_type(member, &mp_type_type) check is removed because it's now subsumed by other, more general tests in this function.
- MP_TYPE_FLAG_BINDS_SELF and MP_TYPE_FLAG_BUILTIN_FUN type flags added to make the checks in this function much simpler (now they just test this bit in type->flags).
- An extra check is made for mp_obj_is_instance_type(type) to fix lookup of built-in functions.

Fixes #1326 and #6198. Signed-off-by: Damien George <damien@micropython.org>
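A minimal sketch of how a flag test replaces per-type comparisons (the helper name is hypothetical; the flags are the ones introduced by this commit):

    #include <stdbool.h>
    #include "py/obj.h"

    static bool member_binds_self(const mp_obj_type_t *type) {
        // One bit test instead of comparing against each function type.
        return (type->flags & MP_TYPE_FLAG_BINDS_SELF) != 0;
    }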
2020-06-14tools/codeformat.py: Remove sizeof fixup.David Lechner
Formatting for `* sizeof` was fixed in uncrustify v0.71, so we no longer need the fixups for it. Also, there was one file where the updated uncrustify caught a problem that the regex didn't pick up, which is updated in this commit. Signed-off-by: David Lechner <david@pybricks.com>
2020-03-11tools/codeformat.py: Eliminate need for sizeof fixup.David Lechner
This eliminates the need for the sizeof regex fixup by rearranging things a bit. All other bitfields already use the parentheses around expressions with sizeof, so one case is fixed by following this convention. VM_MAX_STATE_ON_STACK is the only remaining problem and it can be worked around by changing the order of the operands.
2020-02-28all: Reformat C and Python source code with tools/codeformat.py.Damien George
This is run with uncrustify 0.70.1, and black 19.10b0.
2020-02-28py: Removing dangling "else" to improve code format consistency.Damien George
2020-01-09py: Make mp_obj_get_type() return a const ptr to mp_obj_type_t.Damien George
Most types are in rodata/ROM, and mp_obj_base_t.type is a constant pointer, so enforce this const-ness throughout the code base. If a type ever needs to be modified (eg a user type) then a simple cast can be used.
2019-10-01py: Rework and compress second part of bytecode prelude.Damien George
This patch compresses the second part of the bytecode prelude which contains the source file name, function name, source-line-number mapping and cell closure information. This part of the prelude now begins with a single variable-length unsigned integer which encodes 2 numbers, being the byte-size of the following 2 sections in the header: the "source info section" and the "closure section". After decoding this variable unsigned integer it's possible to skip over one or both of these sections very easily. This scheme saves about 2 bytes for most functions compared to the original format: one in the case that there are no closure cells, and one because padding was eliminated.
2019-10-01py: Compress first part of bytecode prelude.Damien George
The start of the bytecode prelude contains 6 numbers telling the amount of stack needed for the Python values and exceptions, and the signature of the function. Prior to this patch these numbers were all encoded one after the other (2x variable unsigned integers, then 4x bytes), but using so many bytes is unnecessary. An entropy analysis of around 150,000 bytecode functions from the CPython standard library showed that the optimal Shannon coding would need about 7.1 bits on average to encode these 6 numbers, compared to the existing 48 bits. This patch attempts to get close to this optimal value by packing the 6 numbers into a single, variable-length unsigned integer via bit-wise interleaving. The interleaving scheme is chosen to minimise the average number of bytes needed, and at the same time keep the scheme simple enough so it can be implemented without too much overhead in code size or speed. The scheme requires about 10.5 bits on average to store the 6 numbers. As a result most functions which originally took 6 bytes to encode these 6 numbers now need only 1 byte (in 80% of cases).
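A purely illustrative example of the general idea - interleaving the bits of two small counts and emitting the result as a variable-length unsigned integer - and not the actual interleaving scheme used by the prelude:

    #include <stddef.h>
    #include <stdint.h>

    static uint32_t interleave2(uint32_t a, uint32_t b) {
        // Bit i of a goes to bit 2*i, bit i of b goes to bit 2*i+1, so two
        // small values combine into one small number.
        uint32_t out = 0;
        for (int i = 0; i < 16; ++i) {
            out |= ((a >> i) & 1u) << (2 * i);
            out |= ((b >> i) & 1u) << (2 * i + 1);
        }
        return out;
    }

    static size_t encode_varuint(uint32_t value, uint8_t *buf) {
        // 7 data bits per byte, top bit set on continuation bytes; typical
        // interleaved values fit in a single byte.
        size_t n = 0;
        do {
            uint8_t b = value & 0x7f;
            value >>= 7;
            buf[n++] = b | (value ? 0x80 : 0);
        } while (value != 0);
        return n;
    }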
2019-10-01py: Add n_state to mp_code_state_t struct.Damien George
This value is used often enough that it is better to cache it instead of decode it each time.
2019-08-06py: Allow to pass in read-only buffers to viper and inline-asm funcs.Damien George
Fixes #4936.
2019-05-06py: remove "if (0)" and "if (false)" branches.Jun Wu
Prior to this commit, building the unix port with `DEBUG=1` and `-finstrument-functions` would fail with an error like "control reaches end of non-void function". This change fixes that by removing the problematic "if (0)" branches. Not all branches affect compilation, but they are all removed for consistency.
2019-03-14py/nativeglue: Rename native convert funs to match other native helpers.Damien George
2019-02-20py/objfun: Make fun_data arg of mp_obj_new_fun_asm() a const pointer.Damien George
2019-02-12py: Downcase all MP_OBJ_IS_xxx macros to make a more consistent C API.Damien George
These macros could in principle be (inline) functions so it makes sense to have them lower case, to match the other C API functions. The remaining macros that are upper case are:

- MP_OBJ_TO_PTR, MP_OBJ_FROM_PTR
- MP_OBJ_NEW_SMALL_INT, MP_OBJ_SMALL_INT_VALUE
- MP_OBJ_NEW_QSTR, MP_OBJ_QSTR_VALUE
- MP_OBJ_FUN_MAKE_SIG
- MP_DECLARE_CONST_xxx
- MP_DEFINE_CONST_xxx

These must remain macros because they are used when defining const data (at least, MP_OBJ_NEW_SMALL_INT is so it makes sense to have MP_OBJ_SMALL_INT_VALUE also a macro). For those macros that have been made lower case, compatibility macros are provided for the old names so that users do not need to change their code immediately.
2019-01-04py: Fix location of VM returned exception in invalid opcode and commentsDamien George
The location for a returned exception was changed to state[0] in d95947b48a30f818638c3619b92110ce6d07f5e3.
2019-01-04py: Get optional VM stack overflow check compiling and working again.Damien George
Changes to the layout of the bytecode header meant that this debug code was no longer compiling. This is now fixed and a new compile-time option is introduced, MICROPY_DEBUG_VM_STACK_OVERFLOW, to turn on this feature (which is disabled by default). This option is needed because more than one file needs to cooperate to make this check work.
2018-10-01py/emitnative: Implement yield and yield-from in native emitter.Damien George
This commit adds first class support for yield and yield-from in the native emitter, including send and throw support, and yields enclosed in exception handlers (which requires pulling down the NLR stack before yielding, then rebuilding it when resuming). This has been fully tested and is working on unix x86 and x86-64, and stm32. Also basic tests have been done with the esp8266 port. Performance of existing native code is unchanged.
2018-09-29py/vm: When VM raises exception put exc obj at beginning of func state.Damien George
Instead of at the end of the state, at index n_state - 1. It was originally (way back in v1.0) put at the end of the state because the VM didn't have a pointer to the start. But now that the VM takes a mp_code_state_t pointer it does have a pointer to the start of the state so can put the exception object there. This commit saves about 30 bytes of code on all architectures, and, more importantly, reduces C-stack usage by a couple of words (8 bytes on Thumb2 and 16 bytes on x86-64) for every (non-generator) call of a bytecode function because fun_bc_call no longer needs to remember the n_state variable.
2018-09-15py: Make viper functions have the same entry signature as native.Damien George
This commit makes viper functions have the same signature as native functions, at the level of the emitter/assembler. This means that viper functions can now be wrapped in the same uPy object as native functions. Viper functions are now responsible for parsing their arguments (before it was done by the runtime), and this makes calling them more efficient (in most cases) because the viper entry code can be custom generated to suit the signature of the function. This change also opens the way forward for viper functions to take arbitrary numbers of arguments, and for them to handle globals correctly, among other things.
2018-09-14py: Optimise call to mp_arg_check_num by compressing fun signature.Damien George
With 5 arguments to mp_arg_check_num(), some architectures need to pass values on the stack. So compressing n_args_min, n_args_max, takes_kw into a single word and passing only 3 arguments makes the call more efficient, because almost all calls to this function pass in constant values. Code size is also reduced by a decent amount:

    bare-arm: -116
    minimal x86: -64
    unix x64: -256
    unix nanbox: -112
    stm32: -324
    cc3200: -192
    esp8266: -192
    esp32: -144
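A hedged sketch of packing a call signature into one word; the bit layout below is an assumption for illustration only (the real packing is done by MP_OBJ_FUN_MAKE_SIG):

    #include <stdbool.h>
    #include <stdint.h>

    // takes_kw in bit 0, n_args_max in bits 1-16, n_args_min above that.
    #define EXAMPLE_MAKE_SIG(n_min, n_max, takes_kw) \
        (((uint32_t)(n_min) << 17) | ((uint32_t)(n_max) << 1) | ((takes_kw) ? 1u : 0u))

    static inline uint32_t sig_n_args_min(uint32_t sig) { return sig >> 17; }
    static inline uint32_t sig_n_args_max(uint32_t sig) { return (sig >> 1) & 0xffff; }
    static inline bool sig_takes_kw(uint32_t sig) { return (sig & 1u) != 0; }

When the caller passes a constant signature word, the compiler folds the packing at compile time, so the call needs one register instead of three.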
2018-08-02py: Fix compiling with debug enabled and make more use of DEBUG_printf.Damien George
DEBUG_printf and MICROPY_DEBUG_PRINTER is now used instead of normal printf, and a fault is fixed in mp_obj_class_lookup with debugging enabled; see issue #3999. Debugging can now be enabled on all ports including when nan-boxing is used.
2018-07-10py/objgenerator: Implement __name__ with normal fun attr accessor code.Damien George
With the recent change b488a4a8480533a6a3c9468c2f8bd359c94d4d02, a generator function now has the same layout in memory as a normal bytecode function, and so can reuse the latter's attribute accessor code to implement __name__.
2018-05-17py/objfun: Fix variable name in DECODE_CODESTATE_SIZE() macro.Tom Collins
This patch fixes the macro so you can pass any name in, and the macro will make more sense if you're reading it on its own. It worked previously because n_state is always passed in as n_state_out_var.
2017-12-11py: Convert all uses of alloca() to use new scoped allocation API.Damien George
2017-12-09py/objfun: Factor out macro for initializing codestate.Paul Sokolovsky
This is the second part of the fun_bc_call() vs mp_obj_fun_bc_prepare_codestate() common code refactor. This factors out the code to initialize the codestate object. After this patch, mp_obj_fun_bc_prepare_codestate() is effectively DECODE_CODESTATE_SIZE() followed by allocation followed by INIT_CODESTATE(), and fun_bc_call() starts with that too.
2017-12-09py/objfun, vm: Add comments on codestate allocation in stackless mode.Paul Sokolovsky
2017-12-09py/objfun: Factor out macro for decoding codestate size.Paul Sokolovsky
fun_bc_call() starts with almost the same code as mp_obj_fun_bc_prepare_codestate(); the only difference is the way the codestate object is allocated (heap vs stack with heap fallback). Still, it would be nice to avoid code duplication to make further refactoring easier. So, this commit factors out the common code before the allocation - decoding and calculating the codestate size. It produces two values, so it is structured as a macro which writes to 2 variables passed as arguments.
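An illustrative example of the pattern only (not the real DECODE_CODESTATE_SIZE body): a macro that computes two related values and writes them into caller-provided variables, so both call sites share the decode logic and then choose their own allocation strategy:

    #include "py/obj.h"

    #define EXAMPLE_DECODE_SIZE(n_state_out_var, state_size_out_var, n_args, n_extra) \
        do { \
            (n_state_out_var) = (size_t)(n_args) + (size_t)(n_extra); \
            (state_size_out_var) = (n_state_out_var) * sizeof(mp_obj_t); \
        } while (0)

    // Usage: both outputs are plain locals at the call site.
    //     size_t n_state, state_size;
    //     EXAMPLE_DECODE_SIZE(n_state, state_size, 4, 2);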
2017-10-04all: Remove inclusion of internal py header files.Damien George
Header files that are considered internal to the py core and should not normally be included directly are:

- py/nlr.h - internal nlr configuration and declarations
- py/bc0.h - contains bytecode macro definitions
- py/runtime0.h - contains basic runtime enums

Instead, the top-level header files to include are one of:

- py/obj.h - includes runtime0.h and defines everything to use the mp_obj_t type
- py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h, and defines everything to use the general runtime support functions

Additional, specific headers (eg py/objlist.h) can be included if needed.
2017-08-15py: Add verbose debug compile-time flag MICROPY_DEBUG_VERBOSE.Stefan Naumann
It enables all the DEBUG_printf outputs in the py/ source code.
2017-07-31all: Use the name MicroPython consistently in commentsAlexander Steffen
There were several different spellings of MicroPython present in comments, when there should be only one.
2017-06-09py: Provide mp_decode_uint_skip() to help reduce stack usage.Damien George
Taking the address of a local variable leads to increased stack usage, so the mp_decode_uint_skip() function is added to reduce the need for taking addresses. The changes in this patch reduce stack usage of a Python call by 8 bytes on ARM Thumb, by 16 bytes on non-windowing Xtensa archs, and by 16 bytes on x86-64. Code size is also slightly reduced on most archs by around 32 bytes.
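A sketch of the difference, assuming the decode helpers in py/bc.h keep the signatures described here:

    #include "py/bc.h"

    static const byte *skip_two_header_fields(const byte *ip) {
        // Before: mp_uint_t n = mp_decode_uint(&ip); - taking &ip forces the
        // local onto the C stack.
        // After: when the value isn't needed, just skip it and keep ip in a
        // register.
        ip = mp_decode_uint_skip(ip);
        ip = mp_decode_uint_skip(ip);
        return ip;
    }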
2017-03-29py: Change mp_uint_t to size_t for mp_obj_str_get_data len arg.Damien George
2017-03-29py: Convert mp_uint_t to size_t for tuple/list accessors.Damien George
This patch changes mp_uint_t to size_t for the len argument of the following public facing C functions: mp_obj_tuple_get mp_obj_list_get mp_obj_get_array These functions take a pointer to the len argument (to be filled in by the function) and callers of these functions should update their code so the type of len is changed to size_t. For ports that don't use nan-boxing there should be no change in generated code because the size of the type remains the same (word sized), and in a lot of cases there won't even be a compiler warning if the type remains as mp_uint_t. The reason for this change is to standardise on the use of size_t for variables that count memory (or memory related) sizes/lengths. It helps builds that use nan-boxing.
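A usage sketch with the new size_t length (the sum_list helper is hypothetical):

    #include "py/runtime.h"

    static mp_obj_t sum_list(mp_obj_t list_in) {
        size_t len;          // previously declared as mp_uint_t
        mp_obj_t *items;
        mp_obj_list_get(list_in, &len, &items);
        mp_int_t total = 0;
        for (size_t i = 0; i < len; ++i) {
            total += mp_obj_get_int(items[i]);
        }
        return mp_obj_new_int(total);
    }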
2017-03-17py: Reduce size of mp_code_state_t structure.Damien George
Instead of caching data that is constant (code_info, const_table and n_state), store just a pointer to the underlying function object from which this data can be derived. This helps reduce stack usage for the case when the mp_code_state_t structure is stored on the stack, as well as heap usage when it's stored on the heap. The downside is that the VM becomes a little more complex because it now needs to derive the data from the underlying function object. But this doesn't impact the performance by much (if at all) because most of the decoding of data is done outside the main opcode loop. Measurements using pystone show that little to no performance is lost. This patch also fixes a nasty bug whereby the bytecode can be reclaimed by the GC during execution. With this patch there is always a pointer to the function object held by the VM during execution, since it's stored in the mp_code_state_t structure.
2017-03-07py: Use mp_obj_get_array where sequence may be a tuple or a list.Krzysztof Blazewicz
2017-02-16py/objfun: Convert mp_uint_t to size_t where appropriate.Damien George
2016-12-09py: Allow inline-assembler emitter to be generic.Damien George
This patch refactors some code so that it is easier to integrate new inline assemblers for different architectures other than ARM Thumb.
2016-10-21py: Specialise builtin funcs to use separate type for fixed arg count.Damien George
Builtin functions with a fixed number of arguments (0, 1, 2 or 3) are quite common. Before this patch the wrapper for such a function cost 3 machine words. After this patch it only takes 2, which can reduce the code size by quite a bit (and pays off even more, the more functions are added). It also makes function dispatch slightly more efficient in CPU usage, and furthermore reduces stack usage for these cases. On x86 and Thumb archs the dispatch functions are now tail-call optimised by the compiler. The bare-arm port has its code size increase by 76 bytes, but stmhal drops by 904 bytes. Stack usage by these builtin functions is decreased by 48 bytes on Thumb2 archs.
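For illustration (hypothetical function names), a fixed-argument builtin defined with the specialised 2-argument wrapper:

    #include "py/runtime.h"

    static mp_obj_t example_add(mp_obj_t a_in, mp_obj_t b_in) {
        return mp_obj_new_int(mp_obj_get_int(a_in) + mp_obj_get_int(b_in));
    }
    static MP_DEFINE_CONST_FUN_OBJ_2(example_add_obj, example_add);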
2016-09-27py/objfun: Use if instead of switch to check return value of VM execute.Damien George
It's simpler and improves code coverage.