Hi, love the project!
I recently started implementing a ray tracer as an exercise to try out Codon. After a while, I was curious to make the code work with vanilla Python and PyPy as well, and found out that my renders are about twice as fast with PyPy compared to Codon.
After trying a few optimizations to no avail, I decided to profile the execution of the Codon-built binary:
It appears that more than half the time is spent in an internal gc.alloc_atomic function, and also on starting threads? (I have zero @par annotations in the whole codebase.)
I noticed that when using time, the user time is often about twice the real time (so some thread is doing work in the background), and the real time itself is still about twice PyPy's.
My suspicion is that creating a lot of Vec3 instances all the time is somehow bogging down the GC (a rough sketch of the pattern is below). Maybe I have a basic misconception of how to use Codon?
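To illustrate what I mean, here is a minimal sketch (not code from the repo; names like Vec3T and the loop are just illustrative) of the allocation pattern, together with the variant I assume would avoid heap allocation, since, if I understand the docs correctly, Codon's @tuple decorator turns a class into an immutable, by-value named tuple:

```python
# Regular Codon class: instances are heap-allocated and tracked by the GC,
# which is what I suspect shows up as gc.alloc_atomic in the flame graph.
class Vec3:
    x: float
    y: float
    z: float

    def __init__(self, x: float, y: float, z: float):
        self.x = x
        self.y = y
        self.z = z

    def __add__(self, other: Vec3) -> Vec3:
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

# The render loop creates millions of short-lived temporaries like this:
acc = Vec3(0.0, 0.0, 0.0)
for _ in range(1_000_000):
    acc = acc + Vec3(1.0, 2.0, 3.0)
print(acc.x, acc.y, acc.z)

# Variant I assume would sidestep the GC: @tuple makes the class an immutable,
# by-value named tuple (no __init__; the constructor is generated automatically).
@tuple
class Vec3T:
    x: float
    y: float
    z: float

    def __add__(self, other: Vec3T) -> Vec3T:
        return Vec3T(self.x + other.x, self.y + other.y, self.z + other.z)

acc_t = Vec3T(0.0, 0.0, 0.0)
for _ in range(1_000_000):
    acc_t = acc_t + Vec3T(1.0, 2.0, 3.0)
print(acc_t.x, acc_t.y, acc_t.z)
```

Is switching Vec3 to a @tuple value type the intended way to avoid this kind of GC pressure, or is there something else I'm missing?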
Here is an interactive version of the flame graph (unzip it and open the SVG in a browser). The code is available here: https://github.com/Tenchi2xh/RTOW-Codon (check out commit 379d5d0; the master branch now has other kinds of optimizations). The main entry point is rtow/__main__.py, but it's easier to run it via the run.sh script (a preprocessor has to strip the Python-specific code first). To make it run faster, reduce samples_per_pixel and max_depth on lines 52-53 (it runs even slower under the profiler).
(Sorry to link to a whole repo; it's not a big codebase, but it's big enough that it's hard to produce a minimal reproducible example for a GitHub issue.)
I am using the latest dev build of Codon, downloaded from a CI build.