
[INFO] Code execution speed considerations for developers #4206

Open
DedeHai opened this issue Oct 18, 2024 · 7 comments
Labels
discussion keep This issue will never become stale/closed automatically optimization re-working an existing feature to be faster, or use less memory

Comments

@DedeHai
Collaborator

DedeHai commented Oct 18, 2024

I want to collect some info here about things I have learned while writing code for the ESP32 family MCUs. Please feel free to add to this.

This is a work in progress.

Comparison of basic operations on the CPU architectures

| Operation | ESP32@240MHz (MOPS/s) | S3@240MHz (MOPS/s) | S2@240MHz (MOPS/s) | C3@160MHz (MOPS/s) |
|---|---:|---:|---:|---:|
| Integer Addition | 237.76 | 237.99 | 182.17 | 127.08 |
| Integer Multiply | 237.05 | 238.06 | 182.17 | 120.74 |
| Integer Division | 118.94 | 119.03 | 101.29 | 4.63 |
| Integer Multiply-Add | 158.49 | 158.66 | 136.63 | 127.22 |
| 64bit Integer Addition | 19.50 | 20.81 | 18.11 | 36.82 |
| 64bit Integer Multiply | 27.55 | 30.22 | 27.79 | 15.50 |
| 64bit Integer Division | 2.71 | 2.71 | 2.65 | 1.02 |
| 64bit Integer Multiply-Add | 19.80 | 21.88 | 19.16 | 20.30 |
| Float Addition | 237.55 | 238.04 | 7.77 | 1.93 |
| Float Multiply | 237.69 | 237.97 | 4.14 | 1.24 |
| Float Division | 1.42 | 4.47 | 0.86 | 0.79 |
| Float Multiply-Add | 474.85 | 475.91 | 6.43 | 1.76 |
| Double Addition | 6.50 | 6.18 | 6.51 | 1.51 |
| Double Multiply | 2.23 | 2.37 | 2.23 | 0.70 |
| Double Division | 0.48 | 0.54 | 0.30 | 0.41 |
| Double Multiply-Add | 5.65 | 5.61 | 5.65 | 1.40 |

This table was generated using code from https://esp32.com/viewtopic.php?p=82090#

Even though the ESP32 and the S3 have hardware floating point units, they still do floating point division in software, so it should be avoided in speed-critical functions.

Edit (softhack007): "Float Multiply-Add" uses a special CPU instruction that combines addition and multiplication. It's generated by the compiler for expressions like a = a + b * C;

Why integer division on the C3 is so slow is unknown; the datasheet clearly states that it can do 32-bit integer division in hardware.

Bit shifts vs. division

A bit shift is always faster than a division because it is a single instruction. The compiler will replace divisions with bit shifts wherever possible, so var / 256 is equivalent to var >> 8 if var is unsigned. For a signed integer the two are only equivalent if var is non-negative and the compiler can prove this at compile time. The reason: -200/256 = 0, but -200>>8 = -1. So when using signed integers and a bit shift is acceptable, it is better to write it explicitly instead of leaving it to the compiler. (please correct me if I am wrong here)

Fixed point vs. float

Fixed-point math is less accurate, but for most operations it is accurate enough, and it runs much faster, especially for divisions.
When mixing fixed-point and float math there is a pitfall: casting a negative float to an unsigned integer is undefined and leads to problems on some CPUs. https://embeddeduse.com/2013/08/25/casting-a-negative-float-to-an-unsigned-int/
To avoid this problem, explicitly cast the float to a signed int before assigning it to an unsigned integer.

Modulo Operator: %

The modulo operator compiles to several instructions. A modulo by a power of two 2^i can be replaced with a bitwise AND (&), which is a single instruction. The rule is n % 2^i = n & (2^i - 1). For example, n % 2048 = n & 2047.

@softhack007
Collaborator

softhack007 commented Oct 18, 2024

@DedeHai I was wondering if the tables from https://esp32.com/viewtopic.php?p=82090# are still correct, especially for the float multiply vs. float divide. The table comes from a time when FPU support for esp32 was broken.
espressif/esp-idf#96

It seems correct that "float divide" is a lot slower than multiply-by-inverse, and I think (please correct me) the compiler can generate this optimization automatically. However, the difference today should be more like "8-10 times slower", not a factor of almost 100x.

EDIT: there was a PR for esp-idf that corrected usage of FPU instructions in esp-idf v4.
Maybe it would be useful to add a column to the table, for comparing "esp32 esp-idf v3.x" vs. "esp32 esp-idf v4.x"

espressif/esp-idf@db6a30b

@softhack007
Collaborator

softhack007 commented Oct 18, 2024

There is an additional thing worth mentioning:

floating point "literals"

According to C++ semantics, in an expression like "if ( x > 1.0)" (with float x), x is first "promoted" to double before evaluation, which makes it SLOW. This can be avoided

  • by appending "f" to the literal --> if ( x > 1.0) --> if ( x > 1.0f), or
  • by casting constants to float x += M_PI --> x += float(M_PI), or
  • by creating your constants with "constexpr float" instead of "#define": #define MY_LIMIT 3.14 --> constexpr float MY_LIMIT = 3.14; (notice that appending "f" is not needed here).

You can check the code for such "double promotions" by adding -Wdouble-promotion to build_flags

https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#index-Wdouble-promotion

@softhack007
Collaborator

softhack007 commented Oct 18, 2024

use constexpr

Using constexpr is a nice way to optimize without obfuscating the code too much.

In contrast to const, which is often computed at runtime, constexpr is guaranteed to be evaluated by the compiler, so the calculation never becomes part of the binary.

https://en.cppreference.com/w/cpp/language/constexpr

examples

    constexpr float f = 23.0f;
    constexpr float g = 33.0f;
    constexpr float h = f / g; // computed by the compiler, so it needs ZERO cycles at runtime
    printf("%f\n", h);

You can even create functions that are constexpr

// C++11 constexpr functions use recursion rather than iteration
constexpr int factorial(int n)
{
    return n <= 1 ? 1 : (n * factorial(n - 1));
}
static constexpr unsigned getPaletteCount()  { return 13 + GRADIENT_PALETTE_COUNT; }  // zero-cost "getter" function

@softhack007
Collaborator

softhack007 commented Oct 18, 2024

...and the classical one:

avoid 8bit and 16bit integers for local variables

uint8_t, int16_t and friends are useful to save RAM for global data, however - in contrast to older 8bit Arduino processors like AVR - these types are slower than the native types int or unsigned.

Update: If 8bit math (with roll-over on 255) is needed, 8bit types should be used - it's still faster than manually checking and adjusting 8bit overflows.

The reason is that esp32 processors have 32bit registers and 32bit instructions, so any calculation on uint8_t requires some extra effort to correctly emulate 8bit behavior (especially the overflow). Technically uint8_t c = a + b; becomes something like uint8_t c = ((a & 0xFF) + (b & 0xFF)) & 0xFF; . And it's even more complicated for signed int8_t...

for more info: https://en.cppreference.com/w/cpp/types/integer

@softhack007 softhack007 added optimization re-working an existing feature to be faster, or use less memory keep This issue will never become stale/closed automatically labels Oct 18, 2024
@blazoncek
Collaborator

...and the classical one:

avoid 8bit and 16bit integers for local variables

This one is tricky. The code need not rely on overflows the way it does in WLED.

@DedeHai
Collaborator Author

DedeHai commented Oct 18, 2024

I was wondering if the tables from https://esp32.com/viewtopic.php?p=82090# are still correct.

They are for current WLED; I generated this yesterday by inserting the code into 0.15. I can add IDF 4 once we move there.

The 8bit/16bit story is a bit more elaborate. In general what you write is true, but the CPU has 8bit/16bit instructions too. So yes, avoid 8bit types in general, but manually checking and adjusting overflows is slower still. So if 8bit math (with wrap-around) is needed, 8bit types should be used.

@softhack007
Collaborator

softhack007 commented Oct 18, 2024

I can add IDF 4 once we move there

I'm really curious to see the numbers for the newer V4 framework 😀 . But yeah, it won't be better than the -S3 results.

You could use the esp32_wrover buildenv for measuring - I think it will also work with esp32 that does not have PSRAM.

[env:esp32_wrover]
