path: root/py/bc.c

2024-03-07  all: Remove the "STATIC" macro and just use "static" instead.  (Angus Gratton)

The STATIC macro was introduced a very long time ago in commit
d5df6cd44a433d6253a61cb0f987835fbc06b2de. The original reason for this was
to have the option to define it to nothing so that all static functions
become global functions and therefore visible to certain debug tools, so
one could do function size comparison and other things.

This STATIC feature is rarely (if ever) used. And with the use of LTO and
heavy inline optimisation, analysing the size of individual functions when
they are not static is not a good representation of the size of code when
fully optimised. So the macro does not have much use and it's simpler to
just remove it. Then you know exactly what it's doing. For example,
newcomers don't have to learn what the STATIC macro is and why it exists.
Reading the code is also less "loud" with a lowercase static.

One other minor point in favour of removing it is that it stops bugs with
`STATIC inline`, which should always be `static inline`.

Methodology for this commit was:

1) git ls-files | egrep '\.[ch]$' | \
       xargs sed -Ei "s/(^| )STATIC($| )/\1static\2/"

2) Do some manual cleanup in the diff by searching for the word STATIC in
   comments and changing those back.

3) "git grep STATIC docs/", manually fixed those cases.

4) "rg -t python STATIC", manually fixed codegen lines that used STATIC.

This work was funded through GitHub Sponsors.

Signed-off-by: Angus Gratton <angus@redyak.com.au>

2024-02-20  py/emitnative: Simplify layout and loading of native function prelude.  (Damien George)

Now native functions and native generators have similar behaviour: the
first machine-word of their code is an index to get to the prelude. This
simplifies the handling of these types of functions, and also reduces the
size of the emitted native machine code by no longer requiring special code
at the start of the function to load a pointer to the prelude.

Signed-off-by: Damien George <damien@micropython.org>

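As a rough sketch of what this layout enables (illustrative only; the
4-byte little-endian word here is an assumption, not taken from the commit):

    # Hypothetical loader: the first machine word of the emitted native
    # code holds an index/offset that locates the prelude in that code.
    def find_prelude(code: bytes) -> bytes:
        prelude_offset = int.from_bytes(code[:4], "little")  # assumed word size
        return code[prelude_offset:]
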
2022-11-28  py/bc: Fix checking for duplicate **kwargs.  (David Lechner)

The code was already checking for duplicate kwargs for named parameters,
but if `**kwargs` was given as a parameter, it did not check for multiples
of the same argument name. This fixes the issue by adding an additional
test to catch duplicates, and adds a test to exercise the code.

Fixes issue #10083.

Signed-off-by: David Lechner <david@pybricks.com>

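A minimal sketch of the case this guards against (illustrative, not the
test added by the commit):

    def f(**kwargs):
        return kwargs

    # The same name supplied twice via unpacking must raise TypeError;
    # previously, with a bare **kwargs parameter, the duplicate was
    # silently accepted and the second value won.
    f(**{"a": 1}, **{"a": 2})
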
2022-06-07  py/bc: Remove unused mp_opcode_format function.  (Damien George)

This was made redundant by f2040bfc7ee033e48acef9f289790f3b4e6b74e5, which
also did not update this function for the change to qstr-opcode encoding,
so it does not work correctly anyway.

Signed-off-by: Damien George <damien@micropython.org>

2022-05-17  py/bc: Provide separate code-state setup funcs for bytecode and native.  (Damien George)

mpy-cross will now generate native code based on the size of
mp_code_state_native_t, and the runtime will use this struct to calculate
the offset of the .state field. This makes native code generation and
execution (which rely on this struct) independent of the settings
MICROPY_STACKLESS and MICROPY_PY_SYS_SETTRACE, both of which change the
size of the mp_code_state_t struct.

Fixes issue #5059.

Signed-off-by: Damien George <damien@micropython.org>

2022-03-28  py: Change jump opcodes to emit 1-byte jump offset when possible.  (Damien George)

This commit introduces the following changes:

- All jump opcodes are changed to have variable-length arguments, of
  either 1 or 2 bytes (previously they were fixed at 2 bytes). In most
  cases only 1 byte is needed to encode the short jump offset, saving
  bytecode size.

- The bytecode emitter now selects 1-byte jump arguments when the jump
  offset is guaranteed to fit in 1 byte. This is achieved by checking if
  the code size changed during the last pass and, if it did (if it
  shrank), requesting that the compiler make another pass to get the
  correct offsets of the now-smaller code. This can continue multiple
  times until the code stabilises. The code can only ever shrink, so this
  iteration is guaranteed to complete.

In most cases no extra passes are needed; the original 4 passes are enough
to get it right by the 4th pass (because the 2nd pass computes roughly the
correct labels and the 3rd pass computes the correct size for the jump
argument).

This change to the jump opcode encoding reduces .mpy files and RAM usage
(when bytecode is in RAM) by about 2% on average. The performance of the
VM is not impacted, at least within the measurement error of the
performance benchmark suite. Code size is reduced for builds that include
a decent amount of frozen bytecode. ARM Cortex-M builds without any frozen
code increase by about 350 bytes.

Signed-off-by: Damien George <damien@micropython.org>

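The encoding details are not spelled out in this log, but the general
shape of a 1-or-2-byte offset scheme looks like this (a sketch; the ranges
and bit layout are assumed, not taken from the real bytecode):

    def encode_jump_offset(off):
        # Short form: signed offset fits in 7 bits (top bit clear).
        if -0x40 <= off < 0x40:
            return bytes([off & 0x7F])
        # Long form: top bit of the first byte set, 15 bits in total.
        assert -0x4000 <= off < 0x4000
        return bytes([0x80 | (off & 0x7F), (off >> 7) & 0xFF])

The emitter's fixed-point iteration then follows naturally: after each
pass, any shrinkage can shorten jump encodings, which can shrink the code
further, so passes repeat until the size stops changing.
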
2022-02-24  py: Rework bytecode and .mpy file format to be mostly static data.  (Damien George)

Background: .mpy files are precompiled .py files, built using mpy-cross,
that contain compiled bytecode functions (and can also contain machine
code). The benefit of using an .mpy file over a .py file is that they are
faster to import and take less memory when importing. They are also
smaller on disk.

But the real benefit of .mpy files comes when they are frozen into the
firmware. This is done by loading the .mpy file during compilation of the
firmware and turning it into a set of big C data structures (the job of
mpy-tool.py), which are then compiled and downloaded into the ROM of a
device. These C data structures can be executed in-place, ie directly from
ROM. This makes importing even faster because there is very little to do,
and also means such frozen modules take up much less RAM (because their
bytecode stays in ROM).

The downside of frozen code is that it requires recompiling and reflashing
the entire firmware. This can be a big barrier to entry, slows down
development time, and makes it harder to do OTA updates of frozen code
(because the whole firmware must be updated).

This commit attempts to solve this problem by providing a solution that
sits between loading .mpy files into RAM and freezing them into the
firmware. The .mpy file format has been reworked so that it consists of
data and bytecode which is mostly static and ready to run in-place. If
these new .mpy files are located in flash/ROM which is memory addressable,
the .mpy file can be executed (mostly) in-place.

With this approach there is still a small amount of unpacking and linking
of the .mpy file that needs to be done when it's imported, but it's still
much better than loading an .mpy from disk into RAM (although not as good
as freezing .mpy files into the firmware).

The main trick to make static .mpy files is to adjust the bytecode so any
qstrs that it references now go through a lookup table to convert from
local qstr number in the module to global qstr number in the firmware.
That means the bytecode does not need linking/rewriting of qstrs when it's
loaded. Instead only a small qstr table needs to be built (and put in RAM)
at import time. This means the bytecode itself is static/constant and can
be used directly if it's in addressable memory. Also the qstr string data
in the .mpy file, and some constant object data, can be used directly.
Note that the qstr table is global to the module (ie not per function).

In more detail, in the VM what used to be (schematically):

    qst = DECODE_QSTR_VALUE;

is now (schematically):

    idx = DECODE_QSTR_INDEX;
    qst = qstr_table[idx];

That allows the bytecode to be fixed at compile time and not need
relinking/rewriting of the qstr values. Only qstr_table needs to be linked
when the .mpy is loaded.

Incidentally, this helps to reduce the size of bytecode because what used
to be 2-byte qstr values in the bytecode are now (mostly) 1-byte indices.
If the module uses the same qstr more than two times then the bytecode is
smaller than before.

The following changes are measured for this commit compared to the
previous (the baseline):

- average 7%-9% reduction in size of .mpy files
- frozen code size is reduced by about 5%-7%
- importing .py files uses about 5% less RAM in total
- importing .mpy files uses about 4% less RAM in total
- importing .py and .mpy files takes about the same time as before

The qstr indirection in the bytecode has only a small impact on VM
performance. For stm32 on PYBv1.0 the performance change of this commit
is:

diff of scores (higher is better)
N=100 M=100                  baseline -> this-commit    diff     diff% (error%)
bm_chaos.py                    371.07 ->     357.39 :  -13.68 = -3.687% (+/-0.02%)
bm_fannkuch.py                  78.72 ->      77.49 :   -1.23 = -1.563% (+/-0.01%)
bm_fft.py                     2591.73 ->    2539.28 :  -52.45 = -2.024% (+/-0.00%)
bm_float.py                   6034.93 ->    5908.30 : -126.63 = -2.098% (+/-0.01%)
bm_hexiom.py                    48.96 ->      47.93 :   -1.03 = -2.104% (+/-0.00%)
bm_nqueens.py                 4510.63 ->    4459.94 :  -50.69 = -1.124% (+/-0.00%)
bm_pidigits.py                 650.28 ->     644.96 :   -5.32 = -0.818% (+/-0.23%)
core_import_mpy_multi.py       564.77 ->     581.49 :  +16.72 = +2.960% (+/-0.01%)
core_import_mpy_single.py       68.67 ->      67.16 :   -1.51 = -2.199% (+/-0.01%)
core_qstr.py                    64.16 ->      64.12 :   -0.04 = -0.062% (+/-0.00%)
core_yield_from.py             362.58 ->     354.50 :   -8.08 = -2.228% (+/-0.00%)
misc_aes.py                    429.69 ->     405.59 :  -24.10 = -5.609% (+/-0.01%)
misc_mandel.py                3485.13 ->    3416.51 :  -68.62 = -1.969% (+/-0.00%)
misc_pystone.py               2496.53 ->    2405.56 :  -90.97 = -3.644% (+/-0.01%)
misc_raytrace.py               381.47 ->     374.01 :   -7.46 = -1.956% (+/-0.01%)
viper_call0.py                 576.73 ->     572.49 :   -4.24 = -0.735% (+/-0.04%)
viper_call1a.py                550.37 ->     546.21 :   -4.16 = -0.756% (+/-0.09%)
viper_call1b.py                438.23 ->     435.68 :   -2.55 = -0.582% (+/-0.06%)
viper_call1c.py                442.84 ->     440.04 :   -2.80 = -0.632% (+/-0.08%)
viper_call2a.py                536.31 ->     532.35 :   -3.96 = -0.738% (+/-0.06%)
viper_call2b.py                382.34 ->     377.07 :   -5.27 = -1.378% (+/-0.03%)

And for unix on x64:

diff of scores (higher is better)
N=2000 M=2000         baseline -> this-commit     diff      diff% (error%)
bm_chaos.py           13594.20 ->   13073.84 :  -520.36 = -3.828% (+/-5.44%)
bm_fannkuch.py           60.63 ->      59.58 :    -1.05 = -1.732% (+/-3.01%)
bm_fft.py            112009.15 ->  111603.32 :  -405.83 = -0.362% (+/-4.03%)
bm_float.py          246202.55 ->  247923.81 : +1721.26 = +0.699% (+/-2.79%)
bm_hexiom.py            615.65 ->     617.21 :    +1.56 = +0.253% (+/-1.64%)
bm_nqueens.py        215807.95 ->  215600.96 :  -206.99 = -0.096% (+/-3.52%)
bm_pidigits.py         8246.74 ->    8422.82 :  +176.08 = +2.135% (+/-3.64%)
misc_aes.py           16133.00 ->   16452.74 :  +319.74 = +1.982% (+/-1.50%)
misc_mandel.py       128146.69 ->  130796.43 : +2649.74 = +2.068% (+/-3.18%)
misc_pystone.py       83811.49 ->   83124.85 :  -686.64 = -0.819% (+/-1.03%)
misc_raytrace.py      21688.02 ->   21385.10 :  -302.92 = -1.397% (+/-3.20%)

The code size change is (firmware with a lot of frozen code benefits the
most):

    bare-arm:     +396 +0.697%
    minimal x86: +1595 +0.979% [incl +32(data)]
    unix x64:    +2408 +0.470% [incl +800(data)]
    unix nanbox: +1396 +0.309% [incl -96(data)]
    stm32:       -1256 -0.318% PYBV10
    cc3200:       +288 +0.157%
    esp8266:      -260 -0.037% GENERIC
    esp32:        -216 -0.014% GENERIC [incl -1072(data)]
    nrf:          +116 +0.067% pca10040
    rp2:          -664 -0.135% PICO
    samd:         +844 +0.607% ADAFRUIT_ITSYBITSY_M4_EXPRESS

As part of this change the .mpy file format version is bumped to version
6. And mpy-tool.py has been improved to provide a good visualisation of
the contents of .mpy files.

In summary: this commit changes the bytecode to use qstr indirection, and
reworks the .mpy file format to be simpler and allow .mpy files to be
executed in-place. Performance is not impacted too much. Eventually it
will be possible to store such .mpy files in a linear, read-only, memory-
mappable filesystem so they can be executed from flash/ROM. This will
essentially be able to replace frozen code for most applications.

Signed-off-by: Damien George <damien@micropython.org>

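The import-time linking step is easy to model. A sketch (names here are
illustrative; sys.intern stands in for the firmware's global qstr pool):

    import sys

    def link_module(local_qstr_strings):
        # Build the per-module local->global qstr table. This small list
        # is the only qstr-related data placed in RAM at import time; the
        # bytecode itself, holding 1-byte local indices, stays untouched.
        return [sys.intern(s) for s in local_qstr_strings]

    qstr_table = link_module(["foo", "bar", "baz"])
    # VM decode, schematically:  qst = qstr_table[DECODE_QSTR_INDEX()]
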
2021-09-16  all: Remove MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE.  (Jim Mussared)

This commit removes all parts of code associated with the existing
MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE optimisation option, including
the -mcache-lookup-bc option to mpy-cross.

This feature originally provided a significant performance boost for Unix,
but wasn't able to be enabled for MCU targets (due to frozen bytecode),
and added significant extra complexity to generating and distributing .mpy
files.

The equivalent performance gain is now provided by the combination of
MICROPY_OPT_LOAD_ATTR_FAST_PATH and MICROPY_OPT_MAP_LOOKUP_CACHE (which
has been enabled on the unix port in the previous commit).

It's hard to provide precise performance numbers, but tests have been run
on a wide variety of architectures (x86-64, ARM Cortex, Aarch64, RISC-V,
xtensa) and they all generally agree on the qualitative improvements seen
by the combination of MICROPY_OPT_LOAD_ATTR_FAST_PATH and
MICROPY_OPT_MAP_LOOKUP_CACHE.

For example, on a "quiet" Linux x64 environment (i3-5010U @ 2.10GHz) the
change from CACHE_MAP_LOOKUP_IN_BYTECODE, to LOAD_ATTR_FAST_PATH combined
with MAP_LOOKUP_CACHE is:

diff of scores (higher is better)
N=2000 M=2000          bccache -> attrmapcache     diff      diff% (error%)
bm_chaos.py           13742.56 ->    13905.67 :   +163.11 =  +1.187% (+/-3.75%)
bm_fannkuch.py           60.13 ->       61.34 :     +1.21 =  +2.012% (+/-2.11%)
bm_fft.py            113083.20 ->   114793.68 :  +1710.48 =  +1.513% (+/-1.57%)
bm_float.py          256552.80 ->   243908.29 : -12644.51 =  -4.929% (+/-1.90%)
bm_hexiom.py            521.93 ->      625.41 :   +103.48 = +19.826% (+/-0.40%)
bm_nqueens.py        197544.25 ->   217713.12 : +20168.87 = +10.210% (+/-3.01%)
bm_pidigits.py         8072.98 ->     8198.75 :   +125.77 =  +1.558% (+/-3.22%)
misc_aes.py           17283.45 ->    16480.52 :   -802.93 =  -4.646% (+/-0.82%)
misc_mandel.py        99083.99 ->   128939.84 : +29855.85 = +30.132% (+/-5.88%)
misc_pystone.py       83860.10 ->    82592.56 :  -1267.54 =  -1.511% (+/-2.27%)
misc_raytrace.py      21490.40 ->    22227.23 :   +736.83 =  +3.429% (+/-1.88%)

This shows that the new optimisations are at least as good as the existing
inline-bytecode-caching, and are sometimes much better (because the new
ones apply caching to a wider variety of map lookups).

The new optimisations can also benefit code generated by the native
emitter, because they apply to the runtime rather than the generated code.
The improvement for the native emitter when LOAD_ATTR_FAST_PATH and
MAP_LOOKUP_CACHE are enabled is (same Linux environment as above):

diff of scores (higher is better)
N=2000 M=2000           native -> nat-attrmapcache    diff      diff% (error%)
bm_chaos.py           14130.62 ->    15464.68 :  +1334.06 =  +9.441% (+/-7.11%)
bm_fannkuch.py           74.96 ->       76.16 :     +1.20 =  +1.601% (+/-1.80%)
bm_fft.py            166682.99 ->   168221.86 :  +1538.87 =  +0.923% (+/-4.20%)
bm_float.py          233415.23 ->   265524.90 : +32109.67 = +13.756% (+/-2.57%)
bm_hexiom.py            628.59 ->      734.17 :   +105.58 = +16.796% (+/-1.39%)
bm_nqueens.py        225418.44 ->   232926.45 :  +7508.01 =  +3.331% (+/-3.10%)
bm_pidigits.py         6322.00 ->     6379.52 :    +57.52 =  +0.910% (+/-5.62%)
misc_aes.py           20670.10 ->    27223.18 :  +6553.08 = +31.703% (+/-1.56%)
misc_mandel.py       138221.11 ->   152014.01 : +13792.90 =  +9.979% (+/-2.46%)
misc_pystone.py       85032.14 ->   105681.44 : +20649.30 = +24.284% (+/-2.25%)
misc_raytrace.py      19800.01 ->    23350.73 :  +3550.72 = +17.933% (+/-2.79%)

In summary, compared to MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE, the new
MICROPY_OPT_LOAD_ATTR_FAST_PATH and MICROPY_OPT_MAP_LOOKUP_CACHE options:

- are simpler;
- take less code size;
- are faster (generally);
- work with code generated by the native emitter;
- can be used on embedded targets with a small and constant RAM overhead;
- allow the same .mpy bytecode to run on all targets.

See #7680 for further discussion. And see also #7653 for a discussion
about simplifying mpy-cross options.

Signed-off-by: Jim Mussared <jim.mussared@gmail.com>

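The idea behind a map lookup cache can be sketched in a few lines (a
direct-mapped cache; the indexing scheme here is assumed for illustration,
and the real implementation lives in the C runtime):

    CACHE_SLOTS = 256
    _cache = [None] * CACHE_SLOTS

    def cached_map_lookup(map_id, name, slow_lookup):
        slot = (map_id ^ hash(name)) % CACHE_SLOTS
        entry = _cache[slot]
        if entry is not None and entry[0] == (map_id, name):
            return entry[1]            # hit: skip the full map search
        value = slow_lookup(name)      # miss: do the normal lookup
        _cache[slot] = ((map_id, name), value)
        return value

Because the cache lives in the runtime rather than in the bytecode, it has
a small, constant RAM cost and works identically for interpreted and
native-emitted code.
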
2021-04-27  py: Add option to compile without any error messages at all.  (Damien George)

This introduces a new option, MICROPY_ERROR_REPORTING_NONE, which
completely disables all error messages. To be used in cases where
MicroPython needs to fit in very limited systems.

Signed-off-by: Damien George <damien@micropython.org>

2020-04-05  all: Use MP_ERROR_TEXT for all error messages.  (Jim Mussared)

2020-04-05  py: Use preprocessor to detect error reporting level (terse/detailed).  (Jim Mussared)

Instead of compiler-level if-logic. This is necessary to know what error
strings are included in the build at the preprocessor stage, so that
string compression can be implemented.

2020-02-28  all: Reformat C and Python source code with tools/codeformat.py.  (Damien George)

This is run with uncrustify 0.70.1, and black 19.10b0.

2020-02-13  py: Add mp_raise_msg_varg helper and use it where appropriate.  (Damien George)

This commit adds mp_raise_msg_varg(type, fmt, ...) as a helper for
nlr_raise(mp_obj_new_exception_msg_varg(type, fmt, ...)). It makes the
C-level API for raising exceptions more consistent, and reduces code size
on most ports:

    bare-arm:     +28 +0.042%
    minimal x86: +100 +0.067%
    unix x64:     -56 -0.011%
    unix nanbox: -300 -0.068%
    stm32:       -204 -0.054% PYBV10
    cc3200:        +0 +0.000%
    esp8266:      -64 -0.010% GENERIC
    esp32:       -104 -0.007% GENERIC
    nrf:         -136 -0.094% pca10040
    samd:          +0 +0.000% ADAFRUIT_ITSYBITSY_M4_EXPRESS

2019-10-01  py/bc: Don't include mp_decode_uint funcs when not needed.  (Damien George)

These are now only needed when persistent code is disabled.

2019-10-01  py: Rework and compress second part of bytecode prelude.  (Damien George)

This patch compresses the second part of the bytecode prelude, which
contains the source file name, function name, source-line-number mapping
and cell closure information.

This part of the prelude now begins with a single variable-length unsigned
integer which encodes 2 numbers, being the byte-size of the following 2
sections in the header: the "source info section" and the "closure
section". After decoding this variable unsigned integer it's possible to
skip over one or both of these sections very easily.

This scheme saves about 2 bytes for most functions compared to the
original format: one in the case that there are no closure cells, and one
because padding was eliminated.

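A sketch of how such a header makes skipping cheap (the varuint format and
the way the two sizes are packed into one integer are assumptions here,
with unpack_sizes a hypothetical helper):

    def decode_varuint(buf, i):
        # 7 data bits per byte, high bit set on continuation bytes.
        n = 0
        while True:
            b = buf[i]
            i += 1
            n = (n << 7) | (b & 0x7F)
            if not (b & 0x80):
                return n, i

    def skip_prelude_part2(buf, i):
        combined, i = decode_varuint(buf, i)
        info_size, closure_size = unpack_sizes(combined)  # hypothetical
        return i + info_size + closure_size  # jump straight past both
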
2019-10-01  py: Compress first part of bytecode prelude.  (Damien George)

The start of the bytecode prelude contains 6 numbers telling the amount of
stack needed for the Python values and exceptions, and the signature of
the function. Prior to this patch these numbers were all encoded one after
the other (2x variable unsigned integers, then 4x bytes), but using so
many bytes is unnecessary.

An entropy analysis of around 150,000 bytecode functions from the CPython
standard library showed that the optimal Shannon coding would need about
7.1 bits on average to encode these 6 numbers, compared to the existing 48
bits.

This patch attempts to get close to this optimal value by packing the 6
numbers into a single, variable-length unsigned integer via bit-wise
interleaving. The interleaving scheme is chosen to minimise the average
number of bytes needed, and at the same time keep the scheme simple enough
so it can be implemented without too much overhead in code size or speed.
The scheme requires about 10.5 bits on average to store the 6 numbers. As
a result most functions which originally took 6 bytes to encode these 6
numbers now need only 1 byte (in 80% of cases).

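Why interleaving helps: if the numbers are usually tiny, weaving their
bits together least-significant-first keeps all the interesting bits in
the low end of the combined value, so a variable-length encoding of it is
short. A sketch of the principle (the real prelude scheme differs in its
exact bit layout):

    def interleave(nums, width=8):
        # Weave the bits of several small numbers together, LSB first.
        out = 0
        pos = 0
        for bit in range(width):
            for n in nums:
                out |= ((n >> bit) & 1) << pos
                pos += 1
        return out

    # Six typically-small prelude numbers collapse to a small integer,
    # which a varuint then stores in a single byte most of the time.
    print(interleave([2, 0, 1, 0, 0, 0]))  # -> 68, fits in one byte
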
2019-10-01  py/bc: Change mp_code_state_t.exc_sp to exc_sp_idx.  (Damien George)

Change from a pointer to an index, to make space in mp_code_state_t.

2019-09-26  py/bc: Replace big opcode format table with simple macro.  (Damien George)

2019-09-02  py/bc: Fix size calculation of UNWIND_JUMP opcode in mp_opcode_format.  (Damien George)

Prior to this patch mp_opcode_format would calculate the incorrect size of
the MP_BC_UNWIND_JUMP opcode, missing the additional byte. But, because
opcodes below 0x10 are unused and treated as bytes in the .mpy load/save
and freezing code, this bug did not show any symptoms, since nested unwind
jumps would rarely (if ever) reach a depth of 16 (so the extra byte of
this opcode would be between 0x01 and 0x0f and be correctly
loaded/saved/frozen simply as an undefined opcode).

This patch fixes this bug by correctly accounting for the additional byte.

2019-08-30  py: Integrate sys.settrace feature into the VM and runtime.  (Milan Rossa)

This commit adds support for sys.settrace, allowing Python handlers to be
installed to trace execution of Python code. The interface follows CPython
as closely as possible. The feature is disabled by default and can be
enabled via MICROPY_PY_SYS_SETTRACE.

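Usage mirrors CPython; a minimal example (runs on a build with
MICROPY_PY_SYS_SETTRACE enabled):

    import sys

    def trace(frame, event, arg):
        # event is e.g. 'call', 'line', 'return', 'exception'
        print(event, frame.f_code.co_name, frame.f_lineno)
        return trace  # return a handler to keep tracing within this frame

    sys.settrace(trace)

    def add(a, b):
        return a + b

    add(1, 2)
    sys.settrace(None)  # stop tracing
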
2019-03-05  py/persistentcode: Pack qstrs directly in bytecode to reduce mpy size.  (Damien George)

Instead of emitting two bytes in the bytecode for where the linked qstr
should be written to, it is now replaced by the actual qstr data, or a
reference into the qstr window. Reduces mpy file size by about 10%.

2019-03-05  py: Replace POP_BLOCK and POP_EXCEPT opcodes with POP_EXCEPT_JUMP.  (Damien George)

POP_BLOCK and POP_EXCEPT are now the same, and are always followed by a
JUMP. So this optimisation reduces code size, and RAM usage of bytecode by
two bytes for each try-except handler.

2018-12-13  py/bc: Fix calculation of opcode size for opcodes with map caching.  (Damien George)

All 4 opcodes that can have caching bytes also have qstrs, so the test for
them must go in the qstr part of the code.

The reason this incorrect calculation of the opcode size did not lead to a
bug is because the caching byte is at the end of the opcode (byte, qstr,
qstr, cache) and is always 0x00 when saving/loading, so was just treated
as a single byte no-op opcode. Hence these opcodes were being
saved/loaded/decoded correctly.

Thanks to @malinah for finding the problem and providing the initial
patch.

2017-10-10  py/bc: Update opcode_format_table to match the bytecode.  (Damien George)

2017-10-04  all: Remove inclusion of internal py header files.  (Damien George)

Header files that are considered internal to the py core and should not
normally be included directly are:

    py/nlr.h      - internal nlr configuration and declarations
    py/bc0.h      - contains bytecode macro definitions
    py/runtime0.h - contains basic runtime enums

Instead, the top-level header files to include are one of:

    py/obj.h     - includes runtime0.h and defines everything to use the
                   mp_obj_t type
    py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h,
                   and defines everything to use the general runtime
                   support functions

Additional, specific headers (eg py/objlist.h) can be included if needed.

2017-08-15  py: Add verbose debug compile-time flag MICROPY_DEBUG_VERBOSE.  (Stefan Naumann)

It enables all the DEBUG_printf outputs in the py/ source code.

2017-07-31  all: Use the name MicroPython consistently in comments  (Alexander Steffen)

There were several different spellings of MicroPython present in comments,
when there should be only one.

2017-06-09  py: Provide mp_decode_uint_skip() to help reduce stack usage.  (Damien George)

Taking the address of a local variable leads to increased stack usage, so
the mp_decode_uint_skip() function is added to reduce the need for taking
addresses. The changes in this patch reduce stack usage of a Python call
by 8 bytes on ARM Thumb, by 16 bytes on non-windowing Xtensa archs, and by
16 bytes on x86-64. Code size is also slightly reduced on most archs by
around 32 bytes.

2017-04-22  py: Add LOAD_SUPER_METHOD bytecode to allow heap-free super meth calls.  (Damien George)

This patch allows the following code to run without allocating on the
heap:

    super().foo(...)

Before this patch such a call would allocate a super object on the heap
and then load the foo method and call it right away. The super object is
only needed to perform the lookup of the method and not needed after that.
This patch makes an optimisation to allocate the super object on the C
stack and discard it right after use.

Changes in code size due to this patch are:

    bare-arm:    +128
    minimal:     +232
    unix x64:    +416
    unix nanbox: +364
    stmhal:      +184
    esp8266:     +340
    cc3200:      +128

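For context, the pattern that now avoids the heap (plain illustrative
code, not from the commit):

    class A:
        def foo(self, x):
            return x * 2

    class B(A):
        def foo(self, x):
            # The temporary super object used for this lookup now lives
            # on the C stack, so the call allocates nothing on the heap.
            return super().foo(x) + 1

    B().foo(10)  # -> 21
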
2017-03-28  py: Use mp_raise_TypeError/mp_raise_ValueError helpers where possible.  (Damien George)

Saves 168 bytes on bare-arm.

2017-03-22  py/bc: Provide better error message for an unexpected keyword argument.  (Damien George)

Now, passing a keyword argument that is not expected will correctly report
that fact. If normal or detailed error messages are enabled then the name
of the unexpected argument will be reported.

This patch decreases the code size of bare-arm and stmhal by 12 bytes, and
cc3200 by 8 bytes. Other ports (minimal, unix, esp8266) remain the same in
code size. For terse error message configuration this is because the new
message is shorter than the old one. For normal (and detailed) error
message configuration this is because the new error message already exists
in py/objnamedtuple.c so there's no extra space in ROM needed for the
string.

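The behaviour being improved, illustrated (the error text shown is the
gist, not an exact quote from the runtime):

    def f(x):
        return x

    f(y=1)
    # After this patch: TypeError reporting the unexpected keyword
    # argument, naming 'y' when normal/detailed error messages are
    # enabled.
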
2017-03-17  py: Provide mp_decode_uint_value to help optimise stack usage.  (Damien George)

This has a noticeable improvement on x86-64 and Thumb2 archs, where stack
usage is reduced by 2 machine words in the VM.

2017-03-17  py: Reduce size of mp_code_state_t structure.  (Damien George)

Instead of caching data that is constant (code_info, const_table and
n_state), store just a pointer to the underlying function object from
which this data can be derived. This helps reduce stack usage for the case
when the mp_code_state_t structure is stored on the stack, as well as heap
usage when it's stored on the heap.

The downside is that the VM becomes a little more complex because it now
needs to derive the data from the underlying function object. But this
doesn't impact the performance by much (if at all) because most of the
decoding of data is done outside the main opcode loop. Measurements using
pystone show that little to no performance is lost.

This patch also fixes a nasty bug whereby the bytecode can be reclaimed by
the GC during execution. With this patch there is always a pointer to the
function object held by the VM during execution, since it's stored in the
mp_code_state_t structure.

2016-10-17  py: Use mp_raise_msg helper function where appropriate.  (Damien George)

Saves the following number of bytes of code space: 176 for bare-arm, 352
for minimal, 272 for unix x86-64, 140 for stmhal, 120 for esp8266.

2016-09-23  py: Update opcode format table because 3 opcodes were removed, 1 added.  (Damien George)

LIST_APPEND, MAP_ADD and SET_ADD have been removed, and STORE_COMP has
been added in adaf0d865cd6c81fb352751566460506392ed55f.

2016-08-27  py: Rename struct mp_code_state to mp_code_state_t.  (Damien George)

Also add _t to the mp_exc_stack pre-declaration in the struct typedef.

2016-04-21  py: Fix bug passing a string as a keyword arg in a dict.  (Damien George)

Addresses issue #1998.

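A sketch of the kind of call this concerns (illustrative; the exact
failing case is recorded in issue #1998):

    def f(a):
        return a

    key = "a"        # keyword name held in a runtime string
    f(**{key: 1})    # keyword args supplied via dict unpacking
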
2016-01-28  py/bc: Update opcode format table now that MP_BC_NOT opcode is gone.  (Damien George)

MP_BC_NOT was removed and the "not" operation made a proper unary
operator, and the opcode format table needs to be updated to reflect this
change (but actually the change is only cosmetic).

2015-12-17  py/bc: Use size_t instead of mp_uint_t to count size of state and args.  (Damien George)

2015-11-29  py: Wrap all obj-ptr conversions in MP_OBJ_TO_PTR/MP_OBJ_FROM_PTR.  (Damien George)

This allows the mp_obj_t type to be configured to something other than a
pointer-sized primitive type. This patch also includes additional changes
to allow the code to compile when sizeof(mp_uint_t) != sizeof(void*), such
as using size_t instead of mp_uint_t, and various casts.

2015-11-29  py: Make mp_setup_code_state take concrete pointer for func arg.  (Damien George)

2015-11-13  py: Add MICROPY_PERSISTENT_CODE_LOAD/SAVE to load/save bytecode.  (Damien George)

MICROPY_PERSISTENT_CODE must be enabled, and then enabling
MICROPY_PERSISTENT_CODE_LOAD/SAVE (either or both) will allow loading
and/or saving of code (at the moment just bytecode) from/to a .mpy file.

2015-11-13  py: Add constant table to bytecode.  (Damien George)

Contains just argument names at the moment but makes it easy to add
arbitrary constants.

2015-11-13  py: Put all bytecode state (arg count, etc) in bytecode.  (Damien George)

2015-11-13  py: Reorganise bytecode layout so it's more structured, easier to edit.  (Damien George)

2015-09-04  py: Eliminate some cases which trigger unused parameter warnings.  (Damien George)

2015-04-16  py: Add %q format support to mp_[v]printf, and use it.  (Damien George)

2015-04-07  py: Implement full func arg passing for native emitter.  (Damien George)

This patch gets full function argument passing working with the native
emitter. Includes named args, keyword args, default args, var args and var
keyword args. Fully Python compliant.

It reuses the bytecode mp_setup_code_state function to do all the hard
work. This function is slightly adjusted to accommodate native calls, and
the native emitter is forced a bit to emit similar prelude and code-info
as bytecode.

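On a MicroPython build with the native emitter, the full range of argument
kinds can be exercised like this (illustrative):

    import micropython

    @micropython.native
    def f(a, b=2, *args, **kwargs):
        return a, b, args, kwargs

    print(f(1))              # (1, 2, (), {})
    print(f(1, 3, 4, c=5))   # (1, 3, (4,), {'c': 5})
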
2015-04-07  py: Simplify bytecode prelude when encoding closed over variables.  (Damien George)

2015-04-03  vm: Initial support for calling bytecode functions w/o C stack ("stackless").  (Paul Sokolovsky)