Commit ab34848

Merge branch 'main' into checking

pablogsal authored May 29, 2024
2 parents 25189fa + 0e3673d commit ab34848

Showing 48 changed files with 3,658 additions and 2,343 deletions.
9 changes: 9 additions & 0 deletions .devcontainer/devcontainer.json
@@ -0,0 +1,9 @@
{
"name": "Memray development",
"build": {
"context": "..",
"dockerfile": "../Dockerfile"
},
"runArgs": ["--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined"],
"onCreateCommand": "pip install -e ."
}
19 changes: 19 additions & 0 deletions .devcontainer/tutorials/devcontainer.json
@@ -0,0 +1,19 @@
{
"name": "Memray tutorials",
"build": {
"context": "../../docs/tutorials",
"dockerfile": "../../docs/tutorials/Dockerfile"
},
"customizations": {
"vscode": {
"settings": {
"python.testing.pytestArgs": ["docs/tutorials/tests"],
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true,
"python.defaultInterpreterPath": "/venv/bin/python"
},
"extensions": ["ms-python.python"]
}
},
"runArgs": ["--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined"]
}
41 changes: 5 additions & 36 deletions .github/workflows/build_wheels.yml
@@ -105,7 +105,7 @@ jobs:
run: |
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
- name: Build wheels
uses: pypa/cibuildwheel@v2.17.0
uses: pypa/cibuildwheel@v2.18.1
env:
CIBW_BUILD: "cp3{7..12}-${{ matrix.wheel_type }}"
CIBW_ARCHS_LINUX: auto aarch64
@@ -143,7 +143,7 @@ jobs:
tar zxvf dist/*.tar.gz --strip-components=1
- name: Sets env vars for compilation
run: |
echo "LZ4_INSTALL_DIR=/tmp/lz4_install/usr/local/" >> $GITHUB_ENV
echo "LZ4_INSTALL_DIR=/tmp/lz4_install" >> $GITHUB_ENV
echo "CFLAGS=-arch ${{matrix.arch}}" >> $GITHUB_ENV
- name: Set x86_64-specific environment variables
if: matrix.arch == 'x86_64'
@@ -154,17 +154,17 @@
run: |
echo "MACOSX_DEPLOYMENT_TARGET=11.0" >> $GITHUB_ENV
- name: Build wheels
uses: pypa/cibuildwheel@v2.17.0
uses: pypa/cibuildwheel@v2.18.1
env:
CIBW_BUILD: "cp3{8..12}-*"
CIBW_PRERELEASE_PYTHONS: True
CIBW_TEST_EXTRAS: test
CIBW_TEST_COMMAND: pytest {package}/tests
CIBW_BUILD_VERBOSITY: 1
MACOSX_DEPLOYMENT_TARGET: "10.14"
CFLAGS: "${{env.CFLAGS}} -I${{env.LZ4_INSTALL_DIR}}/include"
LDFLAGS: "-L${{env.LZ4_INSTALL_DIR}}/lib -Wl,-rpath,${{env.LZ4_INSTALL_DIR}}/lib"
DYLD_LIBRARY_PATH: "${{env.LZ4_INSTALL_DIR}}/lib"
PKG_CONFIG_PATH: "${{env.LZ4_INSTALL_DIR}}/lib/pkgconfig"
CIBW_REPAIR_WHEEL_COMMAND_MACOS: "DYLD_LIBRARY_PATH=${{env.LZ4_INSTALL_DIR}}/lib delocate-wheel --require-archs {delocate_archs} -w {dest_dir} -v {wheel}"
- uses: actions/upload-artifact@v4
with:
name: macosx_${{ matrix.arch }}-wheels
@@ -200,34 +200,3 @@ jobs:
with:
skip_existing: true
password: ${{ secrets.PYPI_PASSWORD }}

publish_docs:
name: Publish docs
runs-on: ubuntu-latest
if: github.event_name == 'release' && github.event.action == 'published'
permissions:
contents: write
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.10"
- name: Set up dependencies
run: |
sudo apt-get update
sudo apt-get install -qy clang-format npm libunwind-dev liblz4-dev pkg-config
- name: Install Python dependencies
run: |
python3 -m pip install -r requirements-extra.txt
- name: Install Package
run: |
python3 -m pip install -e .
- name: Build docs
run: |
make docs
- name: Publish docs to GitHub Pages
uses: JamesIves/github-pages-deploy-action@v4
with:
folder: docs/_build/html
single-commit: true
38 changes: 38 additions & 0 deletions .github/workflows/docs.yml
@@ -0,0 +1,38 @@
name: Docs

on:
push:
branches:
- main

jobs:
publish_docs:
name: Publish docs
runs-on: ubuntu-latest
#if: github.event_name == 'release' && github.event.action == 'published'
permissions:
contents: write
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.10"
- name: Set up dependencies
run: |
sudo apt-get update
sudo apt-get install --no-install-recommends -qy libunwind-dev liblz4-dev pkg-config
- name: Install Python dependencies
run: |
python3 -m pip install -r requirements-extra.txt
- name: Install Package
run: |
python3 -m pip install -e .
- name: Build docs
run: |
make docs
- name: Publish docs to GitHub Pages
uses: JamesIves/github-pages-deploy-action@v4
with:
folder: docs/_build/html
single-commit: true
1 change: 0 additions & 1 deletion .gitignore
@@ -169,7 +169,6 @@ cython_debug/
memray-*

# VSCode
.devcontainer
.vscode

# NodeJS
13 changes: 6 additions & 7 deletions Dockerfile
@@ -1,4 +1,4 @@
FROM debian:bullseye-slim
FROM debian:bookworm-slim

ARG DEBIAN_FRONTEND=noninteractive

@@ -32,10 +32,11 @@ ENV VIRTUAL_ENV=/venv \
CC=gcc \
CXX=g++

RUN python3.9 -m venv "$VIRTUAL_ENV"
RUN python3 -m venv "$VIRTUAL_ENV"

ENV PATH="${VIRTUAL_ENV}/bin:${PATH}" \
PYTHON="${VIRTUAL_ENV}/bin/python"
ENV PATH="${VIRTUAL_ENV}/bin:/usr/lib/ccache:${PATH}" \
PYTHON="${VIRTUAL_ENV}/bin/python" \
MEMRAY_MINIMIZE_INLINING="1"

COPY requirements-test.txt requirements-extra.txt requirements-docs.txt /tmp/

@@ -44,12 +45,10 @@ RUN $PYTHON -m pip install -U \
-r /tmp/requirements-test.txt \
-r /tmp/requirements-docs.txt \
cython \
pkgconfig \
setuptools \
wheel

RUN npm install -g prettier

RUN ln -s /usr/bin/ccache /bin/g++ \
&& ln -s /usr/bin/ccache /bin/gcc

WORKDIR /src
2 changes: 1 addition & 1 deletion README.md
@@ -78,7 +78,7 @@ If you wish to build Memray from source you need the following binary dependencies
- libunwind (for Linux)
- liblz4

Check your package manager on how to install these dependencies (for example `apt-get install libunwind-dev liblz4-dev` in Debian-based systems
Check your package manager on how to install these dependencies (for example `apt-get install build-essential python3-dev libunwind-dev liblz4-dev` in Debian-based systems
or `brew install lz4` in MacOS). Note that you may need to teach the compiler where to find the header and library files of the dependencies. For
example, in MacOS with `brew` you may need to run:
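
(The exact commands are truncated in this view; the following is a plausible sketch that assumes Homebrew's default prefix for lz4, not necessarily the README's verbatim text.)

    # Tell the compiler and linker where Homebrew keeps lz4's headers and libraries.
    export CFLAGS="-I$(brew --prefix lz4)/include"
    export LDFLAGS="-L$(brew --prefix lz4)/lib -Wl,-rpath,$(brew --prefix lz4)/lib"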

Binary file added docs/_static/images/codespaces_testing_tab.png
Binary file added docs/_static/images/exercise1_flamegraph.png
Binary file added docs/_static/images/exercise2_flamegraph.png
Binary file added docs/_static/images/ports_tab.png
Binary file added docs/_static/images/pytest_cli_output.png
13 changes: 10 additions & 3 deletions docs/_templates/index.html
@@ -50,10 +50,17 @@
<a class="nav-link" href="{{ pathto('overview') }}">Documentation</a>
</li>
<li class="nav-item">
<a class="nav-link" href="{{ pathto('getting_started') }}">Installation</a>
<a class="nav-link" href="{{ pathto('getting_started') }}">Getting Started</a>
</li>
<li class="nav-item">
<a class="nav-link" href="{{ pathto('examples/README') }}">Examples</a>
<li class="nav-item dropdown">
<a class="nav-link dropdown-toggle" href="#" id="navbarDropdownMenuLink3" data-toggle="dropdown"
aria-haspopup="true" aria-expanded="false">
Try for yourself
</a>
<div class="dropdown-menu" aria-labelledby="navbarDropdownMenuLink3">
<a class="dropdown-item" href="{{ pathto('tutorials/index') }}">Tutorials</a>
<a class="dropdown-item" href="{{ pathto('examples/README') }}">Examples</a>
</div>
</li>
<li class="nav-item">
<a class="nav-link" href="https://github.com/bloomberg/pytest-memray">Pytest plugin</a>
8 changes: 5 additions & 3 deletions docs/flamegraph.rst
@@ -64,6 +64,8 @@ orientation with this toggle button:
Whichever of these modes you choose, the data shown in the table is the
same, just mirrored vertically.

.. _interpreting flame graphs:

Interpreting flame graphs
-------------------------

@@ -74,7 +76,7 @@ follows:

- For quickly identifying the functions that allocated more memory
directly, look for large plateaus along the bottom edge, as these show
directly, look for wide boxes along the bottom edge, as these show
a single stack trace was responsible for a large chunk of the total
memory of the snapshot that the graph represents.

@@ -104,7 +106,7 @@
graph by default.

And of course, if you switch from the "icicle" view to the "flame" view,
the root jumps to the bottom of the page, and call stacks grow upwards
the root drops to the bottom of the page, and call stacks grow upwards
from it instead of downwards.

Simple example
@@ -422,7 +424,7 @@ for understanding its memory usage patterns.
about allocations over time. They also can't be used for finding
:doc:`temporary allocations </temporary_allocations>`.

You can see an example of a temporal flamegraph
You can see an example of a temporal flame graph
`here <_static/flamegraphs/memray-flamegraph-fib.html>`_.
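
A hedged sketch of how such a report is typically produced (the ``--temporal`` flag and the file names here reflect recent Memray releases and are assumptions, not part of this diff):

.. code-block:: shell

    # Capture allocations, then render a flame graph with a time axis
    # showing when each allocation was made.
    memray run --output fib.bin fib.py
    memray flamegraph --temporal --output memray-flamegraph-fib.html fib.bin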

Conclusion
23 changes: 22 additions & 1 deletion docs/getting_started.rst
@@ -40,7 +40,7 @@ You can also invoke Memray without version-qualifying it:
The downside to the unqualified ``memray`` script is that it's not immediately
clear what Python interpreter will be used to execute Memray. If you're using
a virtualenv that's not a problem because you know exactly what interpreter is
a virtual environment that's not a problem because you know exactly what interpreter is
in use, but otherwise you need to be careful to ensure that ``memray`` is
running with the interpreter you meant to use.
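
For instance (the script name and interpreter version are illustrative, not taken from this diff):

.. code-block:: shell

    # Unqualified: runs with whichever interpreter the ``memray`` script is tied to.
    memray run example.py

    # Module invocation: leaves no doubt about which interpreter is being profiled.
    python3.11 -m memray run example.py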

@@ -76,3 +76,24 @@ the results file:
This will generate the ``memray-flamegraph-example.py.4131.html`` file in the current directory. See the :doc:`flamegraph`
documentation which explains how to interpret flame graphs.
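
The command that produces that file is truncated in this view; a plausible sketch, assuming the capture file from the run step is named ``memray-example.py.4131.bin``:

.. code-block:: shell

    # Turn the capture file into a self-contained HTML flame graph report
    # written to the current directory.
    memray flamegraph memray-example.py.4131.bin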

Next Steps
----------

The "Hands-on Tutorial" section of our sidebar includes :doc:`a set of lessons <tutorials/index>` you can use to
practice working with Memray by debugging example Python applications with surprising memory allocation behavior. You
can also try Memray out on our :doc:`example applications <examples/README>`.

If you instead want to jump directly into debugging one of your own applications, the "Concepts" section of our sidebar
gives background information to help you use Memray more effectively. Reading about :doc:`the run subcommand <run>` will
tell you what options to use for debugging memory leaks, or for seeing the native stack traces corresponding to
allocations. Interpreting the generated memory profiles will be much easier if you understand :doc:`the Python
allocators <python_allocators>` as well as :doc:`some general memory concepts <memory>`.
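
As a minimal sketch of those two use cases (the capture file name is illustrative; ``--native`` and ``--leaks`` are the flags current Memray releases use for native stack traces and leak hunting):

.. code-block:: shell

    # Record native (C/C++/Rust) frames alongside the Python frames.
    memray run --native example.py

    # Report only memory that was still allocated when tracking ended,
    # which is usually what you want when chasing a leak.
    memray flamegraph --leaks memray-example.py.1234.bin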

If you find any bugs, you can `file a bug report`_. If you aren't sure whether something is a bug or expected behavior,
or if you want to suggest an idea or discuss things with the maintainers, you should `start a discussion`_ instead.

Good luck, and happy debugging!

.. _file a bug report: https://github.com/bloomberg/memray/issues/new?&labels=bug&template=---bug-report.yaml
.. _start a discussion: https://github.com/bloomberg/memray/discussions/new/choose
15 changes: 15 additions & 0 deletions docs/index.rst
@@ -3,6 +3,21 @@

overview
getting_started

.. toctree::
:hidden:
:caption: Hands-on Tutorial

tutorials/index
tutorials/1
tutorials/2
tutorials/3
tutorials/additional_features

.. toctree::
:hidden:
:caption: Concepts

run
python_allocators
memory
36 changes: 36 additions & 0 deletions docs/run.rst
@@ -195,6 +195,42 @@ any other capture file, and can be fed into any reporter of your choosing.
``--follow-fork`` mode can only be used with an output file. It is incompatible with ``--live``
mode and ``--live-remote`` mode, since the TUI can't be attached to multiple processes at once.
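
A minimal compatible invocation might look like this sketch (the script and capture file names are illustrative):

.. code-block:: shell

    # ``--follow-fork`` requires writing to a capture file; it cannot be
    # combined with ``--live`` or ``--live-remote``.
    memray run --follow-fork --output forks.bin my_forking_app.py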

Losing capture files after OOM Errors
-------------------------------------

When a process runs out of memory, this commonly causes an Out Of Memory error,
or "OOM Error". That causes the process to be killed by its operating system.
Within orchestrations like Kubernetes the termination of the main process might
immediately lead to the destruction of the container and the loss of the files
that Memray uses to collect its results.

When running ``memray run myprogram.py`` a capture file gets created on the file
system, but the entire file system will be thrown away as soon as the
orchestration cleans up the container. If the program exits unexpectedly,
perhaps because the kernel kills it due to an OOM error, the orchestration might
throw away the capture file before you ever get a chance to use it. Since Memray
is often used to chase memory leaks, this condition might happen more often than
you'd like.

Since Memray is running in the same process as your application, it has no way
to prevent this data loss (by sending it over the network, for example) because
any work it does will be terminated when the process crashes.

Instead of directly calling ``memray run myprogram.py`` you can wrap it in
a script that will run Memray and run post-processing operations on the capture
file. That way, the container won't be destroyed by the orchestration until
after the wrapper script exits, rather than being destroyed as soon as the
Python script being tracked by Memray exits.

.. code-block:: shell

    memray run --output /tmp/capture.bin myprogram.py
    echo "Program finished"
    # Do your post-processing here. This example just logs a summary of what's
    # in the capture file, but you might want to generate reports from it and
    # copy them over the network to some persistent storage, for instance.
    memray summary /tmp/capture.bin
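
Extending that idea, here is a hedged sketch of a wrapper that also renders a report and copies everything to storage that outlives the container; the ``/mnt/reports`` mount point and file names are assumptions for illustration:

.. code-block:: shell

    #!/usr/bin/env bash
    set -uo pipefail

    # Keep going even if the program is OOM-killed so the post-processing
    # below still runs while the container is alive.
    memray run --output /tmp/capture.bin myprogram.py || echo "Program exited abnormally"

    # Render an HTML flame graph and copy both artifacts somewhere persistent,
    # for example a volume mounted into the container.
    memray flamegraph --output /tmp/flamegraph.html /tmp/capture.bin
    cp /tmp/capture.bin /tmp/flamegraph.html /mnt/reports/
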
.. _aggregated capture files:

Aggregated capture files