path: root/py/emitnative.c
9 days ago  py/emitnative: Generate shorter RV32 code for exception handling.  (Alessandro Gatti)
This commit lets the native emitter generate shorter code when clearing exception objects on RV32. Since there is no direct generic ASM function to store a specific immediate value into a local variable, the native emitter usually generates an immediate assignment to a temporary register followed by a store of that register into the chosen local variable. This pattern is also followed when clearing certain local variables related to exception handling, using MP_OBJ_NULL as the immediate value. Given that MP_OBJ_NULL is currently defined to be 0 (a fact that other spots in the native emitter already leverage when checking the state of those variables), and that the RV32 CPU has a dedicated register hardwired to read as 0, a new method to set local variables to MP_OBJ_NULL is introduced. When generating RV32 code, the new macro skips the intermediate register assignment and directly uses the X0/ZERO register to set the chosen local variable to MP_OBJ_NULL. Other platforms still generate the same code sequence as before this change. This is a follow-up to 40585eaa8f1b603f0094b73764e8ce5623812ecf. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
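Below is a minimal sketch of the idea (the macro and helper names here are assumptions for illustration, not necessarily the emitter's actual API): on RV32 the store can come straight from the hardwired zero register, while other backends keep going through a temporary register.

    #if N_RV32
    // One opcode, e.g. "sw zero, offset(sp)": no temporary register needed.
    #define ASM_MOV_LOCAL_NULL(as, local_num) \
        asm_rv32_mov_local_reg((as), (local_num), ASM_RV32_REG_ZERO)
    #else
    // Generic pattern: load the immediate, then store it into the local.
    #define ASM_MOV_LOCAL_NULL(as, local_num) \
        do { \
            ASM_MOV_REG_IMM((as), REG_TEMP0, (mp_uint_t)MP_OBJ_NULL); \
            ASM_MOV_LOCAL_REG((as), (local_num), REG_TEMP0); \
        } while (0)
    #endif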
2025-07-04  py/emitnative: Let emitters know the compiled entity's name.  (Alessandro Gatti)
This commit introduces an optional feature to provide to native emitters the fully qualified name of the entity they are compiling. This is achieved by altering the generic ASM API to provide a third argument to the entry function, containing the name of the entity being compiled. Currently only the debug emitter uses this feature, as it is not really useful for other emitters for the time being; in fact the macros in question just strip the name away. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
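Schematically the API change looks like the sketch below (the three-argument macro shape follows the commit description; the function names are assumptions). Emitters that have no use for the name simply drop the third argument.

    #if N_DEBUG
    // The debug emitter records the fully qualified name.
    #define ASM_ENTRY(as, num_locals, name) asm_debug_entry((as), (num_locals), (name))
    #else
    // Other backends strip the name away.
    #define ASM_ENTRY(as, num_locals, name) asm_backend_entry((as), (num_locals))
    #endif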
2025-07-01  py/asmxtensa: Implement the full set of Viper load/store operations.  (Alessandro Gatti)
This commit expands the implementation of Viper load/store operations that are optimised for the Xtensa platform. Now both load and store emitters should generate the shortest possible sequence in all cases. Redundant specialised operation emitters have been aliased to the general case implementation; this was the case for integer-indexed load/store operations with a fixed offset of zero. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
2025-07-01  py/asmarm: Implement the full set of Viper load/store operations.  (Alessandro Gatti)
This commit expands the implementation of Viper load/store operations that are optimised for the Arm platform. Now both load and store emitters should generate the shortest possible sequence in all cases. Redundant specialised operation emitters have been folded into the general case implementation; this was the case for integer-indexed load/store operations with a fixed offset of zero. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
2025-07-01  py/asmrv32: Implement the full set of Viper load/store operations.  (Alessandro Gatti)
This commit expands the implementation of Viper load/store operations that are optimised for the RV32 platform. Given the way opcodes are encoded, all value sizes are implemented with only two functions - one for loads and one for stores. This should reduce duplication with existing operations and should, in theory, save space as some code is removed. Both load and store emitters will generate the shortest possible sequence (as long as the stack pointer is not involved), using compressed opcodes when possible. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
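For reference, compressed loads and stores such as C.LW/C.SW only reach a subset of registers and offsets, so an applicability check of roughly this shape is needed before a compressed opcode can be chosen (the helper name is illustrative):

    // C.LW/C.SW operate on registers x8-x15 with a 4-byte-scaled,
    // 5-bit unsigned offset field (0..124 in steps of 4).
    static bool can_use_compressed_word_op(int rd, int rs, int offset) {
        bool regs_ok = rd >= 8 && rd <= 15 && rs >= 8 && rs <= 15;
        bool offset_ok = offset >= 0 && offset <= 124 && (offset & 3) == 0;
        return regs_ok && offset_ok;
    }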
2025-06-10  py/asmarm: Extend int-indexed 32-bit load/store offset ranges.  (Alessandro Gatti)
This commit extends the range for int-indexed load/store opcode generators, making them emit correct code sequences for offsets that span more than 12 bits. This is necessary because those generator bits are also used in the Viper emitter, where it is more likely to reference offsets that cannot be embedded in the LDR/STR opcodes. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
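The extended generator follows roughly this shape (a sketch with assumed helper names; Arm's LDR/STR immediate field is 12 bits, so offsets 0-4095 can be encoded directly):

    if (offset < 4096) {
        // Offset fits the opcode's immediate field: "ldr rd, [rn, #offset]".
        emit_ldr_imm(as, rd, rn, offset);
    } else {
        // Materialise the offset in a temporary register first, then use
        // register-offset addressing: "ldr rd, [rn, rm]".
        emit_mov_imm(as, reg_temp, offset);
        emit_ldr_reg(as, rd, rn, reg_temp);
    }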
2025-06-10  py/emitnative: Remove redundant RV32 Viper int-indexed code.  (Alessandro Gatti)
This commit removes redundant RV32 implementations of certain int-indexed code generation operations (32-bit load/store and 16-bit load). Those operations were already available as part of the native emitter API but were not exposed to the Viper code generator. As part of the introduction of more specialised load and store API calls to int-indexed Viper load/store generator bits, the existing native emitter implementations are reused, thus making the Viper implementations redundant. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
2025-06-10  py/asmxtensa: Extend existing specialised load/store operations range.  (Alessandro Gatti)
This commit updates the existing specialised implementations for int-indexed 32-bit load and store operations, and adds a specialised implementation for int-indexed 16-bit load. The 32-bit operations relied on the fact that their applicability was limited to a specific range, falling back on a generic implementation otherwise; introducing a single entry point for each int-indexed load/store operation size would break that assumption. Those two operations now contain fallback code to generate working sequences by themselves instead of raising an exception. The 16-bit operation instead simply did not have any range check, but it was not exposed directly to the Viper emitter. Once a 16-bit int-indexed load entry point was introduced, the existing implementation would have failed when accessing memory outside its 0..255 halfwords range. A specialised implementation is now present, performing fewer operations than the existing Viper emitter equivalent. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
2025-06-10  py/asmthumb: Extend load/store generators with ARMv7-M opcodes.  (Alessandro Gatti)
This commit lets the Thumb native code generator backend emit ARMv7-M specific opcodes for indexed load/store operations if possible. Now T3 opcode encodings are used if the generator backend is configured to allow emitting ARMv7-M opcodes and if the (unsigned) scaled index fits in 12 bits. Or, in other words, LDR{B,H}.W and STR{B,H}.W opcodes are now emitted if possible. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
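In outline the added selection logic is as follows (sketch with assumed names; the T3 encodings carry a 12-bit unsigned byte offset, while the 16-bit T1 encodings only cover small scaled offsets):

    if (armv7m_allowed && offset >= 0 && offset < 4096) {
        // 32-bit T3 encoding, e.g. "ldrh.w rd, [rn, #offset]".
        emit_t3_load(as, rd, rn, offset);
    } else {
        // Fall back to a T1 encoding or a multi-opcode sequence.
        emit_narrow_load(as, rd, rn, offset);
    }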
2025-06-10  py/emitnative: Let Viper int-indexed code use appropriate operands.  (Alessandro Gatti)
This commit extends the generic ASM API by adding the rest of the ASM_{LOAD,STORE}[size]_REG_REG_OFFSET macros whenever applicable. The Viper int-indexed load/store code generator was changed to use those API functions if they are available, falling back to backend-specific implementations if possible and ultimately to a generic implementation. Right now all backends implement load16, load32, and store32 operations, except for x64 which only implements load16. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
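The resulting dispatch in the Viper emitter follows the usual preprocessor pattern, sketched here for a halfword load (the macro names follow the commit; the fallback body is illustrative):

    #if defined(ASM_LOAD16_REG_REG_OFFSET)
    // The backend provides an optimised int-indexed halfword load.
    ASM_LOAD16_REG_REG_OFFSET(emit->as, reg_dest, reg_base, index);
    #else
    // Generic fallback: scale the index to bytes, add the base, then load.
    ASM_MOV_REG_IMM(emit->as, reg_index, index << 1);
    ASM_ADD_REG_REG(emit->as, reg_index, reg_base);
    ASM_LOAD16_REG_REG(emit->as, reg_dest, reg_index);
    #endif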
2025-05-21  py/emitnative: Clean up int-indexed Viper load/store code.  (Alessandro Gatti)
This commit performs some minor clean up for the code involved in Viper load/store operations when said operations have an integer index. Most platform-specific code blocks were able to generate correct opcodes even when the index is 0, but they would still fall back to the general case. The general case would still emit a shortened opcode sequence so this commit does not alter the overall behaviour, but makes it easier to extend platform-specific code whenever the full index range is going to be handled rather than a subset of indices as it is now. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
2025-05-21  py/emitnative: Refactor Viper register-indexed load/stores.  (Alessandro Gatti)
This commit cleans up the Viper code generation blocks for register-indexed load and store operations. An attempt is made to simplify the code in the common code generator code block, by moving architecture-specific code to the appropriate native generation backends whenever possible. This should make that specific bit of code in the Viper generator clearer and easier to maintain in the long term. To achieve this, six generic assembler meta-opcodes have been introduced, named `ASM_{LOAD,STORE}{8,16,32}_REG_REG_REG`. A platform-independent implementation for those operations is provided, so backends that cannot emit a shorter sequence for the requested operation or are fine with the platform-independent implementation can just not provide said meta-opcodes. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
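A platform-independent fallback for one of these meta-opcodes could look like the following sketch (the exact shape in py/emitnative.c may differ): compute the effective address in the index register, then perform an offset-zero load.

    // Generic halfword load with a register index.
    #define ASM_LOAD16_REG_REG_REG(as, reg_dest, reg_base, reg_index) \
        do { \
            ASM_ADD_REG_REG((as), (reg_index), (reg_index)); /* index *= 2 */ \
            ASM_ADD_REG_REG((as), (reg_index), (reg_base));  /* += base */ \
            ASM_LOAD16_REG_REG((as), (reg_dest), (reg_index)); \
        } while (0)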
2025-05-21  py/emitnative: Improve Viper register-indexed code for Thumb.  (Alessandro Gatti)
This commit lets the Viper code generator use optimised code sequences for register-indexed load and store operations when generating Thumb code. Register-indexed load and store operations for Thumb now take at most two machine opcodes for halfword and word values, and just a single machine opcode for byte values. The original implementation could generate up to four opcodes in the worst case (when dealing with word values). Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
2025-05-21  py/emitnative: Improve Viper register-indexed code for Arm.  (Alessandro Gatti)
This commit lets the Viper code generator use optimised code sequences for register-indexed load and store operations when generating Arm code. The existing code defaulted to generic multi-operation code sequences for Arm code in most cases. Now optimised implementations are provided for register-indexed loads and stores of all data sizes, taking at most two machine opcodes for each operation. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
2025-02-07  py/emitnative: Load and store words just once for Viper code.  (Alessandro Gatti)
This commit fixes two Xtensa sequences in order to terminate early when loading and storing word values via an immediate index. This was meant to be part of 55ca3fd67512555707304c6b68b836eb89f09d1c but whilst it was part of the code being tested, it didn't end up in the commit. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
2025-02-07  py/emitnative: Mark condition code tables as const.  (Alessandro Gatti)
This commit marks as const the condition code tables used when figuring out which opcode sequence must be emitted depending on the requested comparison type. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
2025-01-26  py/emitnative: Optimise Viper immediate offset load/stores on Xtensa.  (Alessandro Gatti)
This commit introduces the ability to emit optimised code paths on Xtensa for load/store operations indexed via an immediate offset. If an immediate offset for a load/store operation is within a certain range that allows it to be embedded into an available opcode then said opcode is emitted instead of the generic code sequence. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
2025-01-26  py/emitnative: Emit shorter exception handler entry code on RV32.  (Alessandro Gatti)
This commit improves the RV32 code sequence that is emitted if a function needs to set up an exception handler as its prologue. The old code would clear a temporary register and then copy that value to places that needed to be initialised with zero values. On RV32 there's a dedicated register that's hardwired to be equal to zero, which allows us to bypass the extra register clear and use the zero register to initialise values. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
2025-01-26  py/emitnative: Optimise Viper register offset load/stores on Xtensa.  (Alessandro Gatti)
This commit improves the emitted code sequences for address generation in the Viper subsystem when loading/storing 16 and 32 bit values via a register offset. The Xtensa opcodes ADDX2 and ADDX4 are used to avoid performing the extra shifts to align the final operation offset. Those opcodes are available on both xtensa and xtensawin MicroPython architectures. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
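For a 32-bit register-indexed load the emitted pair is conceptually as follows (the helper names are assumptions; per the Xtensa ISA, ADDX4 computes (as << 2) + at in a single opcode):

    emit_addx4(as, reg_addr, reg_index, reg_base); // reg_addr = index * 4 + base
    emit_l32i(as, reg_dest, reg_addr, 0);          // load the word at reg_addr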
2024-08-07  py/emitnative: Fix case of clobbered REG_TEMP0 when loading const obj.  (Damien George)
The `emit_load_reg_with_object()` helper function will clobber `REG_TEMP0`. This is currently OK on architectures where `REG_RET` and `REG_TEMP0` are the same (all architectures except RV32), because all callers of `emit_load_reg_with_object()` use either `REG_RET` or `REG_TEMP0` as the destination register. But on RV32 these registers are different and so when `REG_RET` is the destination, `REG_TEMP0` is clobbered, leading to incorrectly generated machine code. This commit fixes the issue simply by using `REG_TEMP0` as the destination register for all uses of `emit_load_reg_with_object()`, and adds a comment to make sure the caller of this function is careful. Signed-off-by: Damien George <damien@micropython.org>
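To make the constraint concrete, a defensive wrapper could look like the sketch below (purely illustrative; the actual fix simply makes every caller pass REG_TEMP0 as the destination):

    static void load_obj_into_reg(emit_t *emit, int reg_dest, mp_obj_t obj) {
        // emit_load_reg_with_object() may clobber REG_TEMP0, so always
        // target REG_TEMP0 first...
        emit_load_reg_with_object(emit, REG_TEMP0, obj);
        if (reg_dest != REG_TEMP0) {
            // ...and copy to the requested register afterwards.
            ASM_MOV_REG_REG(emit->as, reg_dest, REG_TEMP0);
        }
    }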
2024-06-21  py/emitnative: Fix native async with.  (Damien George)
The code generating the entry to the finally handler of an async-with statement was simply wrong for the case of the native emitter. Among other things the layout of the stack was incorrect. This is fixed by this commit. The setup of the async-with finally handler is now put in a dedicated emit function, for both the bytecode and native emitters to implement in their own way (the bytecode emitter is unchanged, just factored to a function). With this fix all of the async-with tests now work when using the native emitter. Signed-off-by: Damien George <damien@micropython.org>
2024-06-21  py/emitnative: Place thrown value in dedicated local variable.  (Damien George)
A value thrown/injected into a native generator needs to be stored in a dedicated variable outside `nlr_buf_t`, following the `inject_exc` variable in `py/vm.c`. Signed-off-by: Damien George <damien@micropython.org>
2024-06-21  py/emitndebug: Add native debug emitter.  (Damien George)
This emitter prints out pseudo-machine instructions, instead of the usual output of the native emitter. It can be enabled on any port via `MICROPY_EMIT_NATIVE_DEBUG` (make sure other native emitters are disabled) but the easiest way to use it is with mpy-cross:

    $ mpy-cross -march=debug file.py

Signed-off-by: Damien George <damien@micropython.org>
2024-06-21  py/emitnative: Add more DEBUG_printf statements.  (Damien George)
Signed-off-by: Damien George <damien@micropython.org>
2024-06-21  py/emitnative: Emit better load/store sequences for RISC-V RV32IMC.  (Alessandro Gatti)
Selected load/store code sequences have been optimised for RV32IMC where fewer and smaller opcodes could be used. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
2024-06-21  py/asmrv32: Add RISC-V RV32IMC native code emitter.  (Alessandro Gatti)
This adds a native code generation backend for RISC-V RV32 CPUs, currently limited to the I, M, and C instruction sets. Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
2024-03-19  py/emitnative: Implement viper unary ops positive, negative and invert.  (Damien George)
Signed-off-by: Damien George <damien@micropython.org>
2024-03-07  all: Remove the "STATIC" macro and just use "static" instead.  (Angus Gratton)
The STATIC macro was introduced a very long time ago in commit d5df6cd44a433d6253a61cb0f987835fbc06b2de. The original reason for this was to have the option to define it to nothing so that all static functions become global functions and therefore visible to certain debug tools, so one could do function size comparison and other things. This STATIC feature is rarely (if ever) used. And with the use of LTO and heavy inline optimisation, analysing the size of individual functions when they are not static is not a good representation of the size of code when fully optimised. So the macro does not have much use and it's simpler to just remove it. Then you know exactly what it's doing. For example, newcomers don't have to learn what the STATIC macro is and why it exists. Reading the code is also less "loud" with a lowercase static. One other minor point in favour of removing it is that it stops bugs with `STATIC inline`, which should always be `static inline`.

Methodology for this commit was:

1) git ls-files | egrep '\.[ch]$' | xargs sed -Ei "s/(^| )STATIC($| )/\1static\2/"
2) Do some manual cleanup in the diff by searching for the word STATIC in comments and changing those back.
3) "git-grep STATIC docs/", manually fixed those cases.
4) "rg -t python STATIC", manually fixed codegen lines that used STATIC.

This work was funded through GitHub Sponsors.

Signed-off-by: Angus Gratton <angus@redyak.com.au>
2024-02-20  py/emitnative: Simplify layout and loading of native function prelude.  (Damien George)
Now native functions and native generators have similar behaviour: the first machine-word of their code is an index to get to the prelude. This simplifies the handling of these types of functions, and also reduces the size of the emitted native machine code by no longer requiring special code at the start of the function to load a pointer to the prelude. Signed-off-by: Damien George <damien@micropython.org>
2024-02-16  py/emitglue: Introduce mp_proto_fun_t as a more general mp_raw_code_t.  (Damien George)
Allows bytecode itself to be used instead of an mp_raw_code_t in the simple and common cases of a bytecode function without any children. This can be used to further reduce frozen code size, and has the potential to optimise other areas like importing. Signed-off-by: Damien George <damien@micropython.org>
2023-09-29  py: Change ifdef DEBUG_PRINT to if DEBUG_PRINT.  (Ihor Nehrutsa)
Signed-off-by: Ihor Nehrutsa <Ihor.Nehrutsa@gmail.com>
2023-04-27  all: Fix spelling mistakes based on codespell check.  (Damien George)
Signed-off-by: Damien George <damien@micropython.org>
2023-02-27  py/emitnative: Explicitly compare comparison ops in binary_op emitter.  (Pepijn de Vos)
Without this it's possible to get a compiler error about the comparison always being true, because MP_BINARY_OP_LESS is 0. And it seems that gcc optimises these 6 equality comparisons into the same size machine code as before.
2022-12-16  py/emitnative: Initialise locals as Python object type for native code.  (Damien George)
In @micropython.native code the types of variables and expressions are always Python objects, so they can be initialised as such. This prevents problems with compiling optimised code like while-loops where a local may be referenced before it is assigned to. Signed-off-by: Damien George <damien@micropython.org>
2022-11-11  py/emitnative: Ensure load_subscr does not clobber existing REG_ARG_2.  (Damien George)
Follow-up to a similar fix in 426785a19eeb12aef7383fbda4693575d8c4dddf. Fixes issue #6314. Signed-off-by: Damien George <damien@micropython.org>
2022-07-12  py/emitnative: Fix STORE_ATTR viper code-gen when value is not a pyobj.  (Jim Mussared)
There was a missing call to MP_F_CONVERT_NATIVE_TO_OBJ. Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
2022-06-20  py/emit: Suppress unreachable bytecode/native code that follows jump.  (Damien George)
This new logic tracks when an unconditional jump/raise occurs in the emitted code stream (bytecode or native machine code) and suppresses all subsequent code, until a label is assigned. This eliminates a lot of cases of dead code, with relatively simple logic. This commit combined with the previous one (that removed the existing dead-code finding logic) has the following code size change:

bare-arm: -16 -0.028%
minimal x86: -60 -0.036%
unix x64: -368 -0.070%
unix nanbox: -80 -0.017%
stm32: -204 -0.052% PYBV10
cc3200: +0 +0.000%
esp8266: -232 -0.033% GENERIC
esp32: -224 -0.015% GENERIC[incl -40(data)]
mimxrt: -192 -0.054% TEENSY40
renesas-ra: -200 -0.032% RA6M2_EK
nrf: +28 +0.015% pca10040
rp2: -256 -0.050% PICO
samd: -12 -0.009% ADAFRUIT_ITSYBITSY_M4_EXPRESS

Signed-off-by: Damien George <damien@micropython.org>
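The tracking itself fits in a few lines; here is a sketch with assumed names (the real state lives in the emitters themselves):

    typedef struct { bool suppress; /* ...other emitter state... */ } emit_t;

    extern void emit_jump_insn(emit_t *emit, unsigned int label);
    extern void bind_label(emit_t *emit, unsigned int label);

    void emit_jump(emit_t *emit, unsigned int label) {
        if (emit->suppress) {
            return; // currently unreachable: drop the instruction
        }
        emit_jump_insn(emit, label); // emit the actual jump
        emit->suppress = true; // code after an unconditional jump is dead
    }

    void emit_label_assign(emit_t *emit, unsigned int label) {
        bind_label(emit, label);
        emit->suppress = false; // a label target is reachable again
    }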
2022-06-20  py/emit: Remove logic to detect last-emit-was-return-value.  (Damien George)
This optimisation to remove dead code is not as good as it could be. Signed-off-by: Damien George <damien@micropython.org>
2022-05-23  py/asmthumb: Make ARMv7-M instruction use dynamically selectable.  (Damien George)
This commit adjusts the asm_thumb_xxx functions so they can be dynamically configured to use ARMv7-M instructions or not. This is available when MICROPY_DYNAMIC_COMPILER is enabled, and then controlled by the value of mp_dynamic_compiler.native_arch. If MICROPY_DYNAMIC_COMPILER is disabled the previous behaviour is retained: the functions emit ARMv7-M instructions only if MICROPY_EMIT_THUMB_ARMV7M is enabled. Signed-off-by: Damien George <damien@micropython.org>
2022-05-23  py/emitnative: Access qstr values using indirection table qstr_table.  (Damien George)
This changes the native emitter to access qstr values using the qstr indirection table qstr_table, but only when generating native code that will be saved to a .mpy file. This makes the resulting native code fully static, ie it does not require any fix-ups or rewriting when it is imported. The performance of native code is more or less unchanged. Benchmark results on PYBv1.0 (using --via-mpy and --emit native) are:

N=100 M=100: baseline -> this-commit, diff, diff% (error%)
bm_chaos.py 407.16 -> 411.85 : +4.69 = +1.152% (+/-0.01%)
bm_fannkuch.py 100.89 -> 101.20 : +0.31 = +0.307% (+/-0.01%)
bm_fft.py 3521.17 -> 3441.72 : -79.45 = -2.256% (+/-0.00%)
bm_float.py 6707.29 -> 6644.83 : -62.46 = -0.931% (+/-0.00%)
bm_hexiom.py 55.91 -> 55.41 : -0.50 = -0.894% (+/-0.00%)
bm_nqueens.py 5343.54 -> 5326.17 : -17.37 = -0.325% (+/-0.00%)
bm_pidigits.py 603.89 -> 632.79 : +28.90 = +4.786% (+/-0.33%)
core_qstr.py 64.18 -> 64.09 : -0.09 = -0.140% (+/-0.01%)
core_yield_from.py 313.61 -> 311.11 : -2.50 = -0.797% (+/-0.03%)
misc_aes.py 654.29 -> 659.75 : +5.46 = +0.834% (+/-0.02%)
misc_mandel.py 4205.10 -> 4272.08 : +66.98 = +1.593% (+/-0.01%)
misc_pystone.py 3077.79 -> 3128.39 : +50.60 = +1.644% (+/-0.01%)
misc_raytrace.py 388.45 -> 393.71 : +5.26 = +1.354% (+/-0.01%)
viper_call0.py 576.83 -> 566.76 : -10.07 = -1.746% (+/-0.05%)
viper_call1a.py 550.39 -> 540.12 : -10.27 = -1.866% (+/-0.11%)
viper_call1b.py 438.32 -> 432.09 : -6.23 = -1.421% (+/-0.11%)
viper_call1c.py 442.96 -> 436.11 : -6.85 = -1.546% (+/-0.08%)
viper_call2a.py 536.31 -> 527.37 : -8.94 = -1.667% (+/-0.04%)
viper_call2b.py 378.99 -> 377.50 : -1.49 = -0.393% (+/-0.08%)

Signed-off-by: Damien George <damien@micropython.org>
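Conceptually the emitted native code now performs the equivalent of the following C when it needs a qstr (a simplified sketch; the real lookup goes through the function's constants structure):

    // Load the runtime qstr value from the module's indirection table,
    // using the index that was fixed at compile time.
    qstr qst = context->constants.qstr_table[local_qstr_index];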
2022-05-19  py/emitnative: Provide dedicated local for exception unwind handler ptr.  (Damien George)
This eliminates the need to save and restore the exception unwind handler pointer when calling nlr_push. Signed-off-by: Damien George <damien@micropython.org>
2022-05-19  py/emitnative: Simplify generation of code that loads prelude pointer.  (Damien George)
It's possible to use REG_PARENT_ARG_1 instead of REG_LOCAL_3. Signed-off-by: Damien George <damien@micropython.org>
2022-05-18  py/compile: De-duplicate constant objects in module's constant table.  (Damien George)
The recent rework of bytecode made all constants global with respect to the module (previously, each function had its own constant table). That means the constant table for a module is shared among all functions/methods/etc within the module. This commit adds support to the compiler to de-duplicate constants in this module constant table. So if a constant is used more than once -- eg 1.0 or (None, None) -- then the same object is reused for all instances. For example, if there is code like `print(1.0, 1.0)` then the parser will create two independent constants 1.0 and 1.0. The compiler will then (with this commit) notice they are the same and only put one of them in the constant table. The bytecode will then reuse that constant twice in the print expression. That allows the second 1.0 to be reclaimed by the GC, and also means the constant table has one less entry, saving a word. Signed-off-by: Damien George <damien@micropython.org>
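The de-duplication can be as simple as a linear scan of the table before appending, as in this sketch (names and data layout are assumptions; mp_obj_equal() is the runtime's object-equality helper):

    // Return the index of an existing equal constant, or append a new one.
    static size_t intern_const(mp_obj_t *table, size_t *len, mp_obj_t obj) {
        for (size_t i = 0; i < *len; ++i) {
            if (mp_obj_equal(table[i], obj)) {
                return i; // reuse the existing entry
            }
        }
        table[*len] = obj;
        return (*len)++;
    }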
2022-05-17  py/emitnative: Put a pointer to the native prelude in child_table array.  (Damien George)
Some architectures (like esp32 xtensa) cannot read byte-wise from executable memory. This means the prelude for native functions -- which is usually located after the machine code for the native function -- must be placed in separate memory that can be read byte-wise. Prior to this commit this was achieved by enabling N_PRELUDE_AS_BYTES_OBJ for the emitter and MICROPY_EMIT_NATIVE_PRELUDE_AS_BYTES_OBJ for the runtime. The prelude was then placed in a bytes object, pointed to by the module's constant table. This behaviour is changed by this commit so that a pointer to the prelude is stored either in mp_obj_fun_bc_t.child_table, or in mp_obj_fun_bc_t.child_table[num_children] if num_children > 0.

The reasons for doing this are:

1. It decouples the native emitter from runtime requirements; the emitted code no longer needs to know if the system it runs on can/can't read byte-wise from executable memory.
2. It makes all ports have the same emitter behaviour; there is no longer the N_PRELUDE_AS_BYTES_OBJ option.
3. The module's constant table is now used only for actual constants in the Python code. This allows further optimisations to be done with the constants (eg constant deduplication).

Code size change for those ports that enable the native emitter:

unix x64: +80 +0.015%
stm32: +24 +0.004% PYBV10
esp8266: +88 +0.013% GENERIC
esp32: -20 -0.002% GENERIC[incl -112(data)]
rp2: +32 +0.005% PICO

Signed-off-by: Damien George <damien@micropython.org>
2022-05-17  py/bc: Provide separate code-state setup funcs for bytecode and native.  (Damien George)
mpy-cross will now generate native code based on the size of mp_code_state_native_t, and the runtime will use this struct to calculate the offset of the .state field. This makes native code generation and execution (which rely on this struct) independent of the settings MICROPY_STACKLESS and MICROPY_PY_SYS_SETTRACE, both of which change the size of the mp_code_state_t struct. Fixes issue #5059. Signed-off-by: Damien George <damien@micropython.org>
2022-04-14  Revert "py/emitnative: Don't store prelude at end of machine code if..."  (Damien George)
This reverts commit 7e8222ae063d78ae8f901bb0f9fef663edd2edb6. The prelude data must exist somewhere in the native code so load_raw_code and mpy-tool.py can access and parse it. Signed-off-by: Damien George <damien@micropython.org>
2022-03-31  py/runtime: Allow multiple **args in a function call.  (David Lechner)
This is a partial implementation of PEP 448 to allow multiple ** unpackings when calling a function or method. The compiler is modified to encode the argument as a None: obj key-value pair (similar to how regular keyword arguments are encoded as str: obj pairs). The extra object that was pushed on the stack to hold a single ** unpacking object is no longer used and is removed. The runtime is modified to decode this new format. Signed-off-by: David Lechner <david@pybricks.com>
2022-03-30  py/emitnative: Don't store prelude at end of machine code if not needed.  (Damien George)
Signed-off-by: Damien George <damien@micropython.org>
2022-03-28  py: Change jump opcodes to emit 1-byte jump offset when possible.  (Damien George)
This commit introduces the following changes:

- All jump opcodes are changed to have variable length arguments, of either 1 or 2 bytes (previously they were fixed at 2 bytes). In most cases only 1 byte is needed to encode the short jump offset, saving bytecode size.
- The bytecode emitter now selects 1-byte jump arguments when the jump offset is guaranteed to fit in 1 byte. This is achieved by checking if the code size changed during the last pass and, if it did (if it shrank), requesting that the compiler make another pass to get the correct offsets of the now-smaller code. This can continue multiple times until the code stabilises. The code can only ever shrink, so this iteration is guaranteed to complete. In most cases no extra passes are needed; the original 4 passes are enough to get it right by the 4th pass (because the 2nd pass computes roughly the correct labels and the 3rd pass computes the correct size for the jump argument).

This change to the jump opcode encoding reduces .mpy files and RAM usage (when bytecode is in RAM) by about 2% on average. The performance of the VM is not impacted, at least within measurement of the performance benchmark suite. Code size is reduced for builds that include a decent amount of frozen bytecode. ARM Cortex-M builds without any frozen code increase by about 350 bytes.

Signed-off-by: Damien George <damien@micropython.org>
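The pass-control idea described above can be sketched as follows (illustrative names; because the code can only ever shrink, the loop is guaranteed to terminate):

    size_t code_size = run_compiler_pass(); // initial sizing pass
    size_t prev_size;
    do {
        prev_size = code_size;
        code_size = run_compiler_pass(); // recompute labels and jump widths
    } while (code_size < prev_size);     // repeat while the code keeps shrinking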
2022-02-24  py: Rework bytecode and .mpy file format to be mostly static data.  (Damien George)
Background: .mpy files are precompiled .py files, built using mpy-cross, that contain compiled bytecode functions (and can also contain machine code). The benefit of using an .mpy file over a .py file is that they are faster to import and take less memory when importing. They are also smaller on disk.

But the real benefit of .mpy files comes when they are frozen into the firmware. This is done by loading the .mpy file during compilation of the firmware and turning it into a set of big C data structures (the job of mpy-tool.py), which are then compiled and downloaded into the ROM of a device. These C data structures can be executed in-place, ie directly from ROM. This makes importing even faster because there is very little to do, and also means such frozen modules take up much less RAM (because their bytecode stays in ROM).

The downside of frozen code is that it requires recompiling and reflashing the entire firmware. This can be a big barrier to entry, slows down development time, and makes it harder to do OTA updates of frozen code (because the whole firmware must be updated).

This commit attempts to solve this problem by providing a solution that sits between loading .mpy files into RAM and freezing them into the firmware. The .mpy file format has been reworked so that it consists of data and bytecode which is mostly static and ready to run in-place. If these new .mpy files are located in flash/ROM which is memory addressable, the .mpy file can be executed (mostly) in-place. With this approach there is still a small amount of unpacking and linking of the .mpy file that needs to be done when it's imported, but it's still much better than loading an .mpy from disk into RAM (although not as good as freezing .mpy files into the firmware).

The main trick to make static .mpy files is to adjust the bytecode so any qstrs that it references now go through a lookup table to convert from local qstr number in the module to global qstr number in the firmware. That means the bytecode does not need linking/rewriting of qstrs when it's loaded. Instead only a small qstr table needs to be built (and put in RAM) at import time. This means the bytecode itself is static/constant and can be used directly if it's in addressable memory. Also the qstr string data in the .mpy file, and some constant object data, can be used directly. Note that the qstr table is global to the module (ie not per function).

In more detail, in the VM what used to be (schematically):

    qst = DECODE_QSTR_VALUE;

is now (schematically):

    idx = DECODE_QSTR_INDEX;
    qst = qstr_table[idx];

That allows the bytecode to be fixed at compile time and not need relinking/rewriting of the qstr values. Only qstr_table needs to be linked when the .mpy is loaded. Incidentally, this helps to reduce the size of bytecode because what used to be 2-byte qstr values in the bytecode are now (mostly) 1-byte indices. If the module uses the same qstr more than two times then the bytecode is smaller than before.

The following changes are measured for this commit compared to the previous (the baseline):

- average 7%-9% reduction in size of .mpy files
- frozen code size is reduced by about 5%-7%
- importing .py files uses about 5% less RAM in total
- importing .mpy files uses about 4% less RAM in total
- importing .py and .mpy files takes about the same time as before

The qstr indirection in the bytecode has only a small impact on VM performance. For stm32 on PYBv1.0 the performance change of this commit is:

diff of scores (higher is better)
N=100 M=100: baseline -> this-commit, diff, diff% (error%)
bm_chaos.py 371.07 -> 357.39 : -13.68 = -3.687% (+/-0.02%)
bm_fannkuch.py 78.72 -> 77.49 : -1.23 = -1.563% (+/-0.01%)
bm_fft.py 2591.73 -> 2539.28 : -52.45 = -2.024% (+/-0.00%)
bm_float.py 6034.93 -> 5908.30 : -126.63 = -2.098% (+/-0.01%)
bm_hexiom.py 48.96 -> 47.93 : -1.03 = -2.104% (+/-0.00%)
bm_nqueens.py 4510.63 -> 4459.94 : -50.69 = -1.124% (+/-0.00%)
bm_pidigits.py 650.28 -> 644.96 : -5.32 = -0.818% (+/-0.23%)
core_import_mpy_multi.py 564.77 -> 581.49 : +16.72 = +2.960% (+/-0.01%)
core_import_mpy_single.py 68.67 -> 67.16 : -1.51 = -2.199% (+/-0.01%)
core_qstr.py 64.16 -> 64.12 : -0.04 = -0.062% (+/-0.00%)
core_yield_from.py 362.58 -> 354.50 : -8.08 = -2.228% (+/-0.00%)
misc_aes.py 429.69 -> 405.59 : -24.10 = -5.609% (+/-0.01%)
misc_mandel.py 3485.13 -> 3416.51 : -68.62 = -1.969% (+/-0.00%)
misc_pystone.py 2496.53 -> 2405.56 : -90.97 = -3.644% (+/-0.01%)
misc_raytrace.py 381.47 -> 374.01 : -7.46 = -1.956% (+/-0.01%)
viper_call0.py 576.73 -> 572.49 : -4.24 = -0.735% (+/-0.04%)
viper_call1a.py 550.37 -> 546.21 : -4.16 = -0.756% (+/-0.09%)
viper_call1b.py 438.23 -> 435.68 : -2.55 = -0.582% (+/-0.06%)
viper_call1c.py 442.84 -> 440.04 : -2.80 = -0.632% (+/-0.08%)
viper_call2a.py 536.31 -> 532.35 : -3.96 = -0.738% (+/-0.06%)
viper_call2b.py 382.34 -> 377.07 : -5.27 = -1.378% (+/-0.03%)

And for unix on x64:

diff of scores (higher is better)
N=2000 M=2000: baseline -> this-commit, diff, diff% (error%)
bm_chaos.py 13594.20 -> 13073.84 : -520.36 = -3.828% (+/-5.44%)
bm_fannkuch.py 60.63 -> 59.58 : -1.05 = -1.732% (+/-3.01%)
bm_fft.py 112009.15 -> 111603.32 : -405.83 = -0.362% (+/-4.03%)
bm_float.py 246202.55 -> 247923.81 : +1721.26 = +0.699% (+/-2.79%)
bm_hexiom.py 615.65 -> 617.21 : +1.56 = +0.253% (+/-1.64%)
bm_nqueens.py 215807.95 -> 215600.96 : -206.99 = -0.096% (+/-3.52%)
bm_pidigits.py 8246.74 -> 8422.82 : +176.08 = +2.135% (+/-3.64%)
misc_aes.py 16133.00 -> 16452.74 : +319.74 = +1.982% (+/-1.50%)
misc_mandel.py 128146.69 -> 130796.43 : +2649.74 = +2.068% (+/-3.18%)
misc_pystone.py 83811.49 -> 83124.85 : -686.64 = -0.819% (+/-1.03%)
misc_raytrace.py 21688.02 -> 21385.10 : -302.92 = -1.397% (+/-3.20%)

The code size change is (firmware with a lot of frozen code benefits the most):

bare-arm: +396 +0.697%
minimal x86: +1595 +0.979% [incl +32(data)]
unix x64: +2408 +0.470% [incl +800(data)]
unix nanbox: +1396 +0.309% [incl -96(data)]
stm32: -1256 -0.318% PYBV10
cc3200: +288 +0.157%
esp8266: -260 -0.037% GENERIC
esp32: -216 -0.014% GENERIC[incl -1072(data)]
nrf: +116 +0.067% pca10040
rp2: -664 -0.135% PICO
samd: +844 +0.607% ADAFRUIT_ITSYBITSY_M4_EXPRESS

As part of this change the .mpy file format version is bumped to version 6. And mpy-tool.py has been improved to provide a good visualisation of the contents of .mpy files.

In summary: this commit changes the bytecode to use qstr indirection, and reworks the .mpy file format to be simpler and allow .mpy files to be executed in-place. Performance is not impacted too much. Eventually it will be possible to store such .mpy files in a linear, read-only, memory-mappable filesystem so they can be executed from flash/ROM. This will essentially be able to replace frozen code for most applications.

Signed-off-by: Damien George <damien@micropython.org>