path: root/py/showbc.c
2022-03-28  py: Change jump-if-x-or-pop opcodes to have unsigned offset argument.  (Damien George)
These jumps are always forwards, and it's more efficient in the VM to decode an unsigned argument. These opcodes are already optimised versions of the sequence "dup-top pop-jump-if-x pop" so it doesn't hurt generality to optimise them further.
Signed-off-by: Damien George <damien@micropython.org>
2022-03-28  py: Change jump opcodes to emit 1-byte jump offset when possible.  (Damien George)
This commit introduces changes:
- All jump opcodes are changed to have variable-length arguments, of either 1 or 2 bytes (previously they were fixed at 2 bytes). In most cases only 1 byte is needed to encode the short jump offset, saving bytecode size.
- The bytecode emitter now selects 1-byte jump arguments when the jump offset is guaranteed to fit in 1 byte. This is achieved by checking if the code size changed during the last pass and, if it did (if it shrank), then requesting that the compiler make another pass to get the correct offsets of the now-smaller code. This can continue multiple times until the code stabilises. The code can only ever shrink, so this iteration is guaranteed to complete. In most cases no extra passes are needed; the original 4 passes are enough to get it right by the 4th pass (because the 2nd pass computes roughly the correct labels and the 3rd pass computes the correct size for the jump argument).

This change to the jump opcode encoding reduces .mpy files and RAM usage (when bytecode is in RAM) by about 2% on average. The performance of the VM is not impacted, at least within measurement of the performance benchmark suite. Code size is reduced for builds that include a decent amount of frozen bytecode. ARM Cortex-M builds without any frozen code increase by about 350 bytes. A sketch of the variable-length decode idea follows this entry.
Signed-off-by: Damien George <damien@micropython.org>
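A minimal sketch of how a 1-or-2-byte jump argument might be decoded, assuming a continuation-bit scheme where bit 7 of the first byte signals a second byte; the encoding actually used by MicroPython is an internal detail and may differ:

    #include <stdint.h>

    // Decode a variable-length unsigned jump offset and advance the
    // instruction pointer.  Short forward jumps need only one byte.
    static unsigned decode_jump_offset(const uint8_t **ip) {
        unsigned offset = **ip & 0x7f;          // low 7 bits of first byte
        if (*(*ip)++ & 0x80) {                  // continuation bit set?
            offset |= (unsigned)*(*ip)++ << 7;  // bits 7..14 from 2nd byte
        }
        return offset;
    }

Signed offsets (for backward jumps) would need a bias or a sign bit on top of this, which is omitted here for clarity.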
2022-03-16  py/showbc: Remove global variables and make DECODE_PTR work correctly.  (Damien George)
The bytecode state variables mp_showbc_code_start and mp_showbc_constants have been removed and made local variables passed into the various functions. As part of this, the DECODE_PTR macro is fixed so it extracts the relevant pointer from the child_table (a regression introduced in f2040bfc7ee033e48acef9f289790f3b4e6b74e5).
Signed-off-by: Damien George <damien@micropython.org>
2022-02-24  py: Rework bytecode and .mpy file format to be mostly static data.  (Damien George)
Background: .mpy files are precompiled .py files, built using mpy-cross, that contain compiled bytecode functions (and can also contain machine code). The benefit of using an .mpy file over a .py file is that they are faster to import and take less memory when importing. They are also smaller on disk.

But the real benefit of .mpy files comes when they are frozen into the firmware. This is done by loading the .mpy file during compilation of the firmware and turning it into a set of big C data structures (the job of mpy-tool.py), which are then compiled and downloaded into the ROM of a device. These C data structures can be executed in-place, ie directly from ROM. This makes importing even faster because there is very little to do, and also means such frozen modules take up much less RAM (because their bytecode stays in ROM).

The downside of frozen code is that it requires recompiling and reflashing the entire firmware. This can be a big barrier to entry, slows down development time, and makes it harder to do OTA updates of frozen code (because the whole firmware must be updated).

This commit attempts to solve this problem by providing a solution that sits between loading .mpy files into RAM and freezing them into the firmware. The .mpy file format has been reworked so that it consists of data and bytecode which is mostly static and ready to run in-place. If these new .mpy files are located in flash/ROM which is memory addressable, the .mpy file can be executed (mostly) in-place. With this approach there is still a small amount of unpacking and linking of the .mpy file that needs to be done when it's imported, but it's still much better than loading an .mpy from disk into RAM (although not as good as freezing .mpy files into the firmware).

The main trick to make static .mpy files is to adjust the bytecode so any qstrs that it references now go through a lookup table to convert from local qstr number in the module to global qstr number in the firmware. That means the bytecode does not need linking/rewriting of qstrs when it's loaded. Instead only a small qstr table needs to be built (and put in RAM) at import time. This means the bytecode itself is static/constant and can be used directly if it's in addressable memory. Also the qstr string data in the .mpy file, and some constant object data, can be used directly. Note that the qstr table is global to the module (ie not per function).

In more detail, in the VM what used to be (schematically):

    qst = DECODE_QSTR_VALUE;

is now (schematically):

    idx = DECODE_QSTR_INDEX;
    qst = qstr_table[idx];

That allows the bytecode to be fixed at compile time and not need relinking/rewriting of the qstr values. Only qstr_table needs to be linked when the .mpy is loaded. (A sketch of this lookup follows this entry.) Incidentally, this helps to reduce the size of bytecode because what used to be 2-byte qstr values in the bytecode are now (mostly) 1-byte indices. If the module uses the same qstr more than two times then the bytecode is smaller than before.

The following changes are measured for this commit compared to the previous (the baseline):
- average 7%-9% reduction in size of .mpy files
- frozen code size is reduced by about 5%-7%
- importing .py files uses about 5% less RAM in total
- importing .mpy files uses about 4% less RAM in total
- importing .py and .mpy files takes about the same time as before

The qstr indirection in the bytecode has only a small impact on VM performance. For stm32 on PYBv1.0 the performance change of this commit is:

    diff of scores (higher is better)
    N=100 M=100                 baseline -> this-commit    diff    diff% (error%)
    bm_chaos.py                  371.07 ->  357.39 :  -13.68 = -3.687% (+/-0.02%)
    bm_fannkuch.py                78.72 ->   77.49 :   -1.23 = -1.563% (+/-0.01%)
    bm_fft.py                   2591.73 -> 2539.28 :  -52.45 = -2.024% (+/-0.00%)
    bm_float.py                 6034.93 -> 5908.30 : -126.63 = -2.098% (+/-0.01%)
    bm_hexiom.py                  48.96 ->   47.93 :   -1.03 = -2.104% (+/-0.00%)
    bm_nqueens.py               4510.63 -> 4459.94 :  -50.69 = -1.124% (+/-0.00%)
    bm_pidigits.py               650.28 ->  644.96 :   -5.32 = -0.818% (+/-0.23%)
    core_import_mpy_multi.py     564.77 ->  581.49 :  +16.72 = +2.960% (+/-0.01%)
    core_import_mpy_single.py     68.67 ->   67.16 :   -1.51 = -2.199% (+/-0.01%)
    core_qstr.py                  64.16 ->   64.12 :   -0.04 = -0.062% (+/-0.00%)
    core_yield_from.py           362.58 ->  354.50 :   -8.08 = -2.228% (+/-0.00%)
    misc_aes.py                  429.69 ->  405.59 :  -24.10 = -5.609% (+/-0.01%)
    misc_mandel.py              3485.13 -> 3416.51 :  -68.62 = -1.969% (+/-0.00%)
    misc_pystone.py             2496.53 -> 2405.56 :  -90.97 = -3.644% (+/-0.01%)
    misc_raytrace.py             381.47 ->  374.01 :   -7.46 = -1.956% (+/-0.01%)
    viper_call0.py               576.73 ->  572.49 :   -4.24 = -0.735% (+/-0.04%)
    viper_call1a.py              550.37 ->  546.21 :   -4.16 = -0.756% (+/-0.09%)
    viper_call1b.py              438.23 ->  435.68 :   -2.55 = -0.582% (+/-0.06%)
    viper_call1c.py              442.84 ->  440.04 :   -2.80 = -0.632% (+/-0.08%)
    viper_call2a.py              536.31 ->  532.35 :   -3.96 = -0.738% (+/-0.06%)
    viper_call2b.py              382.34 ->  377.07 :   -5.27 = -1.378% (+/-0.03%)

And for unix on x64:

    diff of scores (higher is better)
    N=2000 M=2000        baseline -> this-commit      diff     diff% (error%)
    bm_chaos.py          13594.20 ->  13073.84 :  -520.36 = -3.828% (+/-5.44%)
    bm_fannkuch.py          60.63 ->     59.58 :    -1.05 = -1.732% (+/-3.01%)
    bm_fft.py           112009.15 -> 111603.32 :  -405.83 = -0.362% (+/-4.03%)
    bm_float.py         246202.55 -> 247923.81 : +1721.26 = +0.699% (+/-2.79%)
    bm_hexiom.py           615.65 ->    617.21 :    +1.56 = +0.253% (+/-1.64%)
    bm_nqueens.py       215807.95 -> 215600.96 :  -206.99 = -0.096% (+/-3.52%)
    bm_pidigits.py        8246.74 ->   8422.82 :  +176.08 = +2.135% (+/-3.64%)
    misc_aes.py          16133.00 ->  16452.74 :  +319.74 = +1.982% (+/-1.50%)
    misc_mandel.py      128146.69 -> 130796.43 : +2649.74 = +2.068% (+/-3.18%)
    misc_pystone.py      83811.49 ->  83124.85 :  -686.64 = -0.819% (+/-1.03%)
    misc_raytrace.py     21688.02 ->  21385.10 :  -302.92 = -1.397% (+/-3.20%)

The code size change is (firmware with a lot of frozen code benefits the most):

    bare-arm:     +396 +0.697%
    minimal x86: +1595 +0.979% [incl +32(data)]
    unix x64:    +2408 +0.470% [incl +800(data)]
    unix nanbox: +1396 +0.309% [incl -96(data)]
    stm32:       -1256 -0.318% PYBV10
    cc3200:       +288 +0.157%
    esp8266:      -260 -0.037% GENERIC
    esp32:        -216 -0.014% GENERIC [incl -1072(data)]
    nrf:          +116 +0.067% pca10040
    rp2:          -664 -0.135% PICO
    samd:         +844 +0.607% ADAFRUIT_ITSYBITSY_M4_EXPRESS

As part of this change the .mpy file format version is bumped to version 6. And mpy-tool.py has been improved to provide a good visualisation of the contents of .mpy files.

In summary: this commit changes the bytecode to use qstr indirection, and reworks the .mpy file format to be simpler and allow .mpy files to be executed in-place. Performance is not impacted too much. Eventually it will be possible to store such .mpy files in a linear, read-only, memory-mappable filesystem so they can be executed from flash/ROM. This will essentially be able to replace frozen code for most applications.
Signed-off-by: Damien George <damien@micropython.org>
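The qstr indirection described above can be sketched concretely. Names and widths here are illustrative stand-ins (the real DECODE_QSTR_INDEX also handles multi-byte indices); the point is that import-time linking touches only the small table, never the bytecode itself:

    #include <stdint.h>

    typedef uint16_t qstr;  // stand-in for MicroPython's qstr type

    // The bytecode stores a small module-local index; a per-module table,
    // built once at import time, maps it to the firmware-global qstr.
    static qstr load_qstr(const uint8_t **ip, const qstr *qstr_table) {
        uint8_t idx = *(*ip)++;   // mostly 1 byte now, vs a 2-byte qstr before
        return qstr_table[idx];   // the bytecode itself stays read-only
    }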
2021-12-15  py/showbc: Fix printing of raw bytecode header on nanbox builds.  (Damien George)
Signed-off-by: Damien George <damien@micropython.org>
2021-11-19  py/showbc: Print unary-op string when dumping bytecode.  (Damien George)
Signed-off-by: Damien George <damien@micropython.org>
2021-09-16  all: Remove MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE.  (Jim Mussared)
This commit removes all parts of code associated with the existing MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE optimisation option, including the -mcache-lookup-bc option to mpy-cross. This feature originally provided a significant performance boost for Unix, but wasn't able to be enabled for MCU targets (due to frozen bytecode), and added significant extra complexity to generating and distributing .mpy files.

The equivalent performance gain is now provided by the combination of MICROPY_OPT_LOAD_ATTR_FAST_PATH and MICROPY_OPT_MAP_LOOKUP_CACHE (which has been enabled on the unix port in the previous commit).

It's hard to provide precise performance numbers, but tests have been run on a wide variety of architectures (x86-64, ARM Cortex, Aarch64, RISC-V, xtensa) and they all generally agree on the qualitative improvements seen by the combination of MICROPY_OPT_LOAD_ATTR_FAST_PATH and MICROPY_OPT_MAP_LOOKUP_CACHE.

For example, on a "quiet" Linux x64 environment (i3-5010U @ 2.10GHz) the change from CACHE_MAP_LOOKUP_IN_BYTECODE to LOAD_ATTR_FAST_PATH combined with MAP_LOOKUP_CACHE is:

    diff of scores (higher is better)
    N=2000 M=2000        bccache -> attrmapcache      diff      diff% (error%)
    bm_chaos.py         13742.56 ->  13905.67 :   +163.11 =  +1.187% (+/-3.75%)
    bm_fannkuch.py         60.13 ->     61.34 :     +1.21 =  +2.012% (+/-2.11%)
    bm_fft.py          113083.20 -> 114793.68 :  +1710.48 =  +1.513% (+/-1.57%)
    bm_float.py        256552.80 -> 243908.29 : -12644.51 =  -4.929% (+/-1.90%)
    bm_hexiom.py          521.93 ->    625.41 :   +103.48 = +19.826% (+/-0.40%)
    bm_nqueens.py      197544.25 -> 217713.12 : +20168.87 = +10.210% (+/-3.01%)
    bm_pidigits.py       8072.98 ->   8198.75 :   +125.77 =  +1.558% (+/-3.22%)
    misc_aes.py         17283.45 ->  16480.52 :   -802.93 =  -4.646% (+/-0.82%)
    misc_mandel.py      99083.99 -> 128939.84 : +29855.85 = +30.132% (+/-5.88%)
    misc_pystone.py     83860.10 ->  82592.56 :  -1267.54 =  -1.511% (+/-2.27%)
    misc_raytrace.py    21490.40 ->  22227.23 :   +736.83 =  +3.429% (+/-1.88%)

This shows that the new optimisations are at least as good as the existing inline-bytecode-caching, and are sometimes much better (because the new ones apply caching to a wider variety of map lookups).

The new optimisations can also benefit code generated by the native emitter, because they apply to the runtime rather than the generated code. The improvement for the native emitter when LOAD_ATTR_FAST_PATH and MAP_LOOKUP_CACHE are enabled is (same Linux environment as above):

    diff of scores (higher is better)
    N=2000 M=2000         native -> nat-attrmapcache    diff      diff% (error%)
    bm_chaos.py         14130.62 ->  15464.68 :  +1334.06 =  +9.441% (+/-7.11%)
    bm_fannkuch.py         74.96 ->     76.16 :     +1.20 =  +1.601% (+/-1.80%)
    bm_fft.py          166682.99 -> 168221.86 :  +1538.87 =  +0.923% (+/-4.20%)
    bm_float.py        233415.23 -> 265524.90 : +32109.67 = +13.756% (+/-2.57%)
    bm_hexiom.py          628.59 ->    734.17 :   +105.58 = +16.796% (+/-1.39%)
    bm_nqueens.py      225418.44 -> 232926.45 :  +7508.01 =  +3.331% (+/-3.10%)
    bm_pidigits.py       6322.00 ->   6379.52 :    +57.52 =  +0.910% (+/-5.62%)
    misc_aes.py         20670.10 ->  27223.18 :  +6553.08 = +31.703% (+/-1.56%)
    misc_mandel.py     138221.11 -> 152014.01 : +13792.90 =  +9.979% (+/-2.46%)
    misc_pystone.py     85032.14 -> 105681.44 : +20649.30 = +24.284% (+/-2.25%)
    misc_raytrace.py    19800.01 ->  23350.73 :  +3550.72 = +17.933% (+/-2.79%)

In summary, compared to MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE, the new MICROPY_OPT_LOAD_ATTR_FAST_PATH and MICROPY_OPT_MAP_LOOKUP_CACHE options:
- are simpler;
- take less code size;
- are faster (generally);
- work with code generated by the native emitter;
- can be used on embedded targets with a small and constant RAM overhead;
- allow the same .mpy bytecode to run on all targets.

See #7680 for further discussion. And see also #7653 for a discussion about simplifying mpy-cross options. A sketch of the map-lookup-cache idea follows this entry.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
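A rough sketch of the map-lookup-cache idea, with made-up types (MicroPython's real map and qstr structures differ): a small fixed-size global array remembers the slot where a (map, key) pair last hit, giving the constant RAM overhead and fast path mentioned above.

    #include <stddef.h>
    #include <stdint.h>

    // Toy map: keys compared by pointer identity, as works for interned
    // strings (qstrs).  Maps larger than 255 entries would need wider
    // cache slots; this is a sketch, not MicroPython's implementation.
    typedef struct { const char *key; int value; } entry_t;
    typedef struct { entry_t *table; size_t len; } map_t;

    #define MAP_CACHE_SIZE 256
    static uint8_t map_cache[MAP_CACHE_SIZE];  // last-hit slot per bucket

    static entry_t *map_lookup_cached(map_t *map, const char *key) {
        size_t bucket = (((uintptr_t)map >> 4) ^ (uintptr_t)key) % MAP_CACHE_SIZE;
        size_t guess = map_cache[bucket];
        if (guess < map->len && map->table[guess].key == key) {
            return &map->table[guess];          // cache hit: no probing
        }
        for (size_t i = 0; i < map->len; ++i) { // slow path: normal probe
            if (map->table[i].key == key) {
                map_cache[bucket] = (uint8_t)i; // remember for next time
                return &map->table[i];
            }
        }
        return NULL;
    }

A wrong cache entry is harmless: the guard check fails and the lookup falls back to the ordinary probe, which is why a tiny direct-mapped cache suffices.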
2021-06-24  all: Fix signed shifts and NULL access errors from -fsanitize=undefined.  (Jeff Epler)
Fixes the following (the line numbers match commit 0e87459e2bfd07):

    ../../extmod/crypto-algorithms/sha256.c:49:19: runtime error: left shif...
    ../../extmod/moduasyncio.c:106:35: runtime error: member access within ...
    ../../py/binary.c:210:13: runtime error: left shift of negative value -...
    ../../py/mpz.c:744:16: runtime error: negation of -9223372036854775808 ...
    ../../py/objint.c:109:22: runtime error: left shift of 1 by 31 places c...
    ../../py/objint_mpz.c:374:9: runtime error: left shift of 4611686018427...
    ../../py/objint_mpz.c:374:9: runtime error: left shift of negative valu...
    ../../py/parsenum.c:106:14: runtime error: left shift of 46116860184273...
    ../../py/runtime.c:395:33: runtime error: left shift of negative value ...
    ../../py/showbc.c:177:28: runtime error: left shift of negative value -...
    ../../py/vm.c:321:36: runtime error: left shift of negative value -1

Testing was done on an amd64 Debian Buster system using gcc-8.3 and these settings:

    CFLAGS += -g3 -Og -fsanitize=undefined
    LDFLAGS += -fsanitize=undefined

The introduced TASK_PAIRHEAP macro's conditional (x ? &x->i : NULL) assembles (under amd64 gcc 8.3 -Os) to the same as &x->i, since i is the initial field of the struct. However, for the purposes of undefined behavior analysis the conditional is needed.
Signed-off-by: Jeff Epler <jepler@gmail.com>
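The canonical fix for the "left shift of negative value" class of report is to do the shift on the unsigned representation; whether each site above used exactly this pattern isn't shown here, but it illustrates the kind of change involved:

    #include <stdint.h>

    // Shifting a negative signed value left is undefined behaviour in C;
    // shifting its unsigned representation is well defined and yields the
    // same bit pattern on two's-complement machines.
    static inline int32_t shl_signed(int32_t x, unsigned n) {
        return (int32_t)((uint32_t)x << n);
    }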
2020-09-11  py/showbc: Pass in an mp_print_t struct to all bytecode-print functions.  (Damien George)
So the output can be redirected if needed.
Signed-off-by: Damien George <damien@micropython.org>
2020-02-28  all: Reformat C and Python source code with tools/codeformat.py.  (Damien George)
This is run with uncrustify 0.70.1 and black 19.10b0.
2019-10-01  py: Rework and compress second part of bytecode prelude.  (Damien George)
This patch compresses the second part of the bytecode prelude, which contains the source file name, function name, source-line-number mapping and cell closure information. This part of the prelude now begins with a single variable-length unsigned integer which encodes 2 numbers, being the byte-size of the following 2 sections in the header: the "source info section" and the "closure section". After decoding this variable unsigned integer it's possible to skip over one or both of these sections very easily.

This scheme saves about 2 bytes for most functions compared to the original format: one in the case that there are no closure cells, and one because padding was eliminated.
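A toy illustration of the "two lengths in one varuint" idea; the packing and the varuint format MicroPython actually uses differ, but it shows why a single decoded integer is enough to skip over either prelude section:

    #include <stdint.h>

    // Pack two section sizes into one number, then emit it as an
    // LEB128-style varuint (toy split: assumes closure_size < 32).
    static uint8_t *encode_two_sizes(uint8_t *out, uint32_t info_size,
                                     uint32_t closure_size) {
        uint32_t n = (info_size << 5) | (closure_size & 0x1f);
        do {
            uint8_t b = n & 0x7f;
            n >>= 7;
            *out++ = (uint8_t)(b | (n ? 0x80 : 0)); // high bit: more follows
        } while (n);
        return out;
    }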
2019-10-01  py: Compress first part of bytecode prelude.  (Damien George)
The start of the bytecode prelude contains 6 numbers telling the amount of stack needed for the Python values and exceptions, and the signature of the function. Prior to this patch these numbers were all encoded one after the other (2x variable unsigned integers, then 4x bytes), but using so many bytes is unnecessary.

An entropy analysis of around 150,000 bytecode functions from the CPython standard library showed that the optimal Shannon coding would need about 7.1 bits on average to encode these 6 numbers, compared to the existing 48 bits.

This patch attempts to get close to this optimal value by packing the 6 numbers into a single, variable-length unsigned integer via bit-wise interleaving. The interleaving scheme is chosen to minimise the average number of bytes needed, and at the same time keep the scheme simple enough so it can be implemented without too much overhead in code size or speed. The scheme requires about 10.5 bits on average to store the 6 numbers. As a result most functions which originally took 6 bytes to encode these 6 numbers now need only 1 byte (in 80% of cases). A toy illustration of why interleaving compresses well follows this entry.
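Why does bit-wise interleaving compress well? If all six numbers are small, their high bits are all zero, and an MSB-first interleave concentrates those zeros at the top of the combined value, which the varuint encoding then drops. A toy version (the interleaving order MicroPython chose is tuned to its measured distributions and is not reproduced here):

    #include <stdint.h>

    // Interleave `count` fields of `bits` bits each, MSB first,
    // round-robin across fields.  Assumes count * bits <= 32.
    static uint32_t interleave(const uint8_t nums[], int count, int bits) {
        uint32_t out = 0;
        for (int b = bits - 1; b >= 0; --b) {
            for (int i = 0; i < count; ++i) {
                out = (out << 1) | ((nums[i] >> b) & 1u);
            }
        }
        return out;
    }

For example, six 4-bit fields that are all 0 or 1 interleave to a value below 64, which fits in a single varuint byte; encoding the fields one after another would always take six bytes.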
2019-09-26  py: Split RAISE_VARARGS opcode into 3 separate ones.  (Damien George)
From the beginning of this project the RAISE_VARARGS opcode was named and implemented following CPython, where it has an argument (to the opcode) counting how many args the raise takes:

    raise                 # 0 args (re-raise previous exception)
    raise exc             # 1 arg
    raise exc from exc2   # 2 args (chained raise)

In the bytecode this operation therefore takes 2 bytes, one for RAISE_VARARGS and one for the number of args. This patch splits this opcode into 3, where each is now a single byte (see the sketch after this entry). This reduces bytecode size by 1 byte for each use of raise. Every byte counts! It also has the benefit of reducing code size (on all ports except nanbox).
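A sketch of the effect of the split; the opcode names follow the description above, while the numeric values and the helper are illustrative only:

    #include <stdint.h>

    enum { BC_RAISE_LAST = 0x5c, BC_RAISE_OBJ, BC_RAISE_FROM };

    // How many values the opcode pops from the stack: this is the
    // information that used to live in the separate argument byte.
    static int raise_nargs(uint8_t opcode) {
        switch (opcode) {
            case BC_RAISE_LAST: return 0;  // raise            (re-raise)
            case BC_RAISE_OBJ:  return 1;  // raise exc
            case BC_RAISE_FROM: return 2;  // raise exc from exc2
            default:            return -1; // not a raise opcode
        }
    }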
2019-08-06  py/showbc: Fix off-by-one when showing address of unknown opcode.  (Milan Rossa)
2019-03-05  py: Replace POP_BLOCK and POP_EXCEPT opcodes with POP_EXCEPT_JUMP.  (Damien George)
POP_BLOCK and POP_EXCEPT are now the same, and are always followed by a JUMP. So this optimisation reduces code size, and RAM usage of bytecode by two bytes for each try-except handler.
2017-10-05  py: Clean up unary and binary enum list to keep groups together.  (Damien George)
2 non-bytecode binary ops (NOT_IN and IN_NOT) are moved out of the bytecode group, so this change will change the bytecode format.
2017-09-25  py: Clarify which mp_unary_op_t's may appear in the bytecode.  (Paul Sokolovsky)
Not all can, so we don't need to reserve bytecodes for them, and can use free slots for something else later.
2017-07-31  all: Use the name MicroPython consistently in comments.  (Alexander Steffen)
There were several different spellings of MicroPython present in comments, when there should be only one.
2017-04-22  py: Add LOAD_SUPER_METHOD bytecode to allow heap-free super meth calls.  (Damien George)
This patch allows the following code to run without allocating on the heap:

    super().foo(...)

Before this patch such a call would allocate a super object on the heap and then load the foo method and call it right away. The super object is only needed to perform the lookup of the method and is not needed after that. This patch makes an optimisation to allocate the super object on the C stack and discard it right after use.

Changes in code size due to this patch are:

    bare-arm:    +128
    minimal:     +232
    unix x64:    +416
    unix nanbox: +364
    stmhal:      +184
    esp8266:     +340
    cc3200:      +128
2017-02-16  py: Allow bytecode/native to put iter_buf on stack for simple for loops.  (Damien George)
So that the "for x in it: ..." statement can now work without using the heap (so long as the iterator argument fits in an iter_buf structure).
2017-01-27  py/showbc: Make sure to set the const_table before printing bytecode.  (Damien George)
2016-09-20  py/showbc: Make printf's go to the platform print stream.  (Damien George)
The system printf is no longer used by the core uPy code. Instead, the platform print stream or DEBUG_printf is used. Using DEBUG_printf in the showbc functions would mean that the code can't be tested by the test suite, so use the normal output instead. This patch also fixes parsing of bytecode-line-number mappings.
2016-09-19  py: Combine 3 comprehension opcodes (list/dict/set) into 1.  (Damien George)
With the previous patch combining 3 emit functions into 1, it now makes sense to also combine the corresponding VM opcodes, which is what this patch does. This eliminates 2 opcodes which simplifies the VM and reduces code size, in bytes: bare-arm:44, minimal:64, unix(NDEBUG,x86-64):272, stmhal:92, esp8266:200. Profiling (with a simple script that creates many list/dict/set comprehensions) shows no measurable change in performance.
2015-12-10  py: Make UNARY_OP_NOT a first-class op, to agree with Py not semantics.  (Damien George)
Fixes #1684 and makes "not" match Python semantics. The code is also simplified (the separate MP_BC_NOT opcode is removed) and the patch saves 68 bytes for bare-arm/ and 52 bytes for minimal/.

Previously "not x" was implemented as !mp_unary_op(x, MP_UNARY_OP_BOOL), so any given object only needs to implement MP_UNARY_OP_BOOL (and the VM had a special opcode to do the ! bit). With this patch "not x" is implemented as mp_unary_op(x, MP_UNARY_OP_NOT), but this operation is caught at the start of mp_unary_op and dispatched as !mp_obj_is_true(x). mp_obj_is_true has special logic to test for truthiness, and is the correct way to handle the not operation. A sketch of this dispatch follows this entry.
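A simplified sketch of that dispatch, with stand-in types and names (the real functions are mp_unary_op, mp_obj_is_true and friends, as named above):

    #include <stdbool.h>

    typedef void *obj_t;
    typedef enum { UNARY_OP_BOOL, UNARY_OP_NOT, UNARY_OP_NEG } unary_op_t;

    extern bool obj_is_true(obj_t x);       // truth-testing, per object
    extern obj_t obj_new_bool(bool b);
    extern obj_t dispatch_to_type(unary_op_t op, obj_t x);

    obj_t unary_op(unary_op_t op, obj_t x) {
        if (op == UNARY_OP_NOT) {
            // "not x" handled generically: any object with working
            // truth-testing supports "not" for free, no VM opcode needed.
            return obj_new_bool(!obj_is_true(x));
        }
        return dispatch_to_type(op, x);     // everything else: type slot
    }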
2015-11-29  py: Wrap all obj-ptr conversions in MP_OBJ_TO_PTR/MP_OBJ_FROM_PTR.  (Damien George)
This allows the mp_obj_t type to be configured to something other than a pointer-sized primitive type. This patch also includes additional changes to allow the code to compile when sizeof(mp_uint_t) != sizeof(void*), such as using size_t instead of mp_uint_t, and various casts.
2015-11-13  py: Add MICROPY_PERSISTENT_CODE so code can persist beyond the runtime.  (Damien George)
Main changes when MICROPY_PERSISTENT_CODE is enabled are:
- qstrs are encoded as 2-byte fixed width in the bytecode
- all pointers are removed from bytecode and put in const_table (this includes const objects and raw code pointers)
Ultimately this option will enable persistence for not just bytecode but also native code.
2015-11-13  py: Add constant table to bytecode.  (Damien George)
Contains just argument names at the moment but makes it easy to add arbitrary constants.
2015-11-13  py: Put all bytecode state (arg count, etc) in bytecode.  (Damien George)
2015-11-13  py: Reorganise bytecode layout so it's more structured, easier to edit.  (Damien George)
2015-06-25  py: Remove mp_load_const_bytes and instead load precreated bytes object.  (Damien George)
Previous to this patch each time a bytes object was referenced a new instance (with the same data) was created. With this patch a single bytes object is created in the compiler and is loaded directly at execute time as a true constant (similar to loading bignum and float objects). This saves on allocating RAM and means that bytes objects can now be used when the memory manager is locked (eg in interrupts). The MP_BC_LOAD_CONST_BYTES bytecode was removed as part of this. Generated bytecode is slightly larger due to storing a pointer to the bytes object instead of the qstr identifier. Code size is reduced by about 60 bytes on Thumb2 architectures.
2015-06-18  py: Make showbc decode UNPACK_EX, and use correct range for unop/binop.  (Damien George)
2015-05-05  py: Remove LOAD_CONST_ELLIPSIS bytecode, use LOAD_CONST_OBJ instead.  (Damien George)
Ellipsis constant is rarely used so no point having an extra bytecode for it.
2015-04-07  py: Simplify bytecode prelude when encoding closed over variables.  (Damien George)
2015-03-20  py: Implement DELETE_GLOBAL in showbc.c.  (Damien George)
2015-02-08  py: Parse big-int/float/imag constants directly in parser.  (Damien George)
Previous to this patch, a big-int, float or imag constant was interned (made into a qstr) and then parsed at runtime to create an object each time it was needed. This is wasteful in RAM and not efficient. Now, these constants are parsed straight away in the parser and turned into objects. This allows constants with large numbers of digits (so addresses issue #1103) and takes us a step closer to #722.
2015-01-27  py: Specify unary/binary op name in TypeError error message.  (Damien George)
Eg, "() + 1" now tells you that __add__ is not supported for tuple and int types (before it just said the generic "binary operator"). We reuse the table of names for slot lookup because it would be a waste of code space to store the pretty name for each operator.
2015-01-20  py, unix, stmhal: Allow to compile with -Wshadow.  (Damien George)
See issue #699.
2015-01-16  py, unix: Allow to compile with -Wsign-compare.  (Damien George)
See issue #699.
2015-01-13  py/showbc.c: Handle new LOAD_CONST_OBJ opcode, and opcodes with cache.  (Damien George)
2015-01-07  showbc: Show conditional jump destination as unsigned value.  (Paul Sokolovsky)
This is consistent with how BC_JUMP was handled before. We never show jump destinations relative to the jump instruction itself, only relative to the beginning of the function. Another useful way is to show them as absolute (real memory) addresses, and this change makes the result expected and consistent with how BC_JUMP is shown.
2015-01-01  py: Move to guarded includes, everywhere in py/ core.  (Damien George)
Addresses issue #1022.
2014-12-28  showbc: Print operation mnemonic in BINARY_OP.  (Paul Sokolovsky)
2014-12-28  showbc: Make code object start pointer semi-public.  (Paul Sokolovsky)
This allows printing either absolute addresses or relative offsets in jumps and code references.
2014-12-27  showbc: Refactor to allow inline instruction printing.  (Paul Sokolovsky)
2014-12-12  py: Fix label printing in showbc; print sp in vm trace.  (Damien George)
2014-10-25  py: Compress load-int, load-fast, store-fast, unop, binop bytecodes.  (Damien George)
There is a lot of potential to compress bytecodes and make more use of the coding space. This patch introduces "multi" bytecodes which have their argument included in the bytecode (by addition).

UNARY_OP and BINARY_OP now no longer take a 1-byte argument for the opcode. Rather, the opcode is included in the first byte itself.

LOAD_FAST_[0,1,2] and STORE_FAST_[0,1,2] are removed in favour of their multi versions, which can take an argument between 0 and 15 inclusive. The majority of LOAD_FAST/STORE_FAST codes fit in this range and so this saves a byte for each of these.

LOAD_CONST_SMALL_INT_MULTI is used to load small ints between -16 and 47 inclusive. Such ints are quite common and now only need 1 byte to store, and now have much faster decoding. (A decode sketch follows this entry.)

In all this patch saves about 2% RAM for typical bytecode (1.8% on 64-bit test, 2.5% on pyboard test). It also reduces the binary size (because bytecodes are simplified) and doesn't harm performance.
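Decoding a "multi" opcode is just a subtraction, since the argument is folded into the opcode byte. The ranges below follow the commit text (-16..47 for small ints, 0..15 for fast locals); the base opcode values are made up for the sketch:

    #include <stdint.h>

    enum {
        BC_LOAD_CONST_SMALL_INT_MULTI = 0x70, // 64 codes: ints -16..47
        BC_LOAD_FAST_MULTI            = 0xb0, // 16 codes: locals 0..15
    };

    // The -16 bias recovers negative values from the unsigned code range.
    static int decode_small_int(uint8_t opcode) {
        return (int)opcode - BC_LOAD_CONST_SMALL_INT_MULTI - 16;
    }

    static unsigned decode_load_fast(uint8_t opcode) {
        return opcode - BC_LOAD_FAST_MULTI;   // local variable slot 0..15
    }

Because the argument never has to be fetched as a separate byte, decoding is a single arithmetic operation, which is where the faster decoding mentioned above comes from.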
2014-10-25  py: Store bytecode arg names in bytecode (were in own array).  (Damien George)
This saves a lot of RAM for 2 reasons:
1. For functions that don't have default values, var args or var kw args (which is a large number of functions in the general case), the mp_obj_fun_bc_t type now fits in 1 GC block (previously needed 2 because of the extra pointer to point to the arg_names array). So this saves 16 bytes per function (32 bytes on 64-bit machines).
2. Combining separate memory regions generally saves RAM because the unused bytes at the end of the GC block are saved for 1 of the blocks (since that block doesn't exist on its own anymore). So generally this saves 8 bytes per function.
Tested by importing lots of modules:
- 64-bit Linux gave about an 8% RAM saving for 86k of used RAM.
- pyboard gave about a 6% RAM saving for 31k of used RAM.
2014-10-24  py: Fix debug-printing of bytecode line numbers.  (Damien George)
Also move the raw bytecode printing code from emitglue to mp_bytecode_print.
2014-10-03  py: Use UINT_FMT instead of %d.  (Damien George)
2014-10-03  py: Convert [u]int to mp_[u]int_t where appropriate.  (Damien George)
Addressing issue #50.