A collection of notebooks using the 8 TeV and 13 TeV ATLAS Open Data datasets.
For more complex needs, such as running multiple notebooks, review the Binder Documentation for guidance on setting up a more personalized Binder environment.
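As a rough sketch of what a personalized environment can look like: Binder reads standard configuration files, such as a requirements.txt, from the repository it launches. The file name is standard, but the packages below are purely illustrative and not the actual dependency list of the Open Data notebooks:

```text
# requirements.txt (illustrative only — list the packages your own notebooks need)
uproot
awkward
matplotlib
pandas
```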
Note: before running the code in the Jupyter notebooks, click the "Not Trusted" button at the top right so that it changes to "Trusted". This allows the JavaScript used to render interactive histograms to run. If that doesn't work, go to the top of the notebook, find the cell that contains the line of code %jsroot, and comment it out.
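For reference, the cell in question usually contains nothing but the JSROOT magic (typically written as %jsroot on); once it is commented out, ROOT falls back to static, non-interactive histogram images:

```python
# %jsroot on   <- magic commented out, so histograms are rendered as static images
```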
Much of the functionality is the same as in Binder, but with some subtle differences that are explained in the Colab Documentation.
Docker provides a robust platform for developing, sharing, and running applications within containers.
Steps to run ATLAS Open Data using Docker:
- Install Docker: Download and install Docker from the official website.
- Open Docker: Start Docker (e.g. Docker Desktop) and make sure it is running before continuing.
- Run the Docker Image: Open your terminal and run the Docker image, available from our GitHub Container Registry (the command's options are explained below the example terminal output):
docker run -it -p 8888:8888 -v my_volume:/home/jovyan/work ghcr.io/atlas-outreach-data-tools/notebooks-collection-opendata:latest
- Launch the Notebook: After running the Docker image, use the link provided in the terminal to access the Jupyter interface. The terminal should look like this:
To access the notebook interface, open this file in a browser:
file:///home/jovyan/.local/share/jupyter/runtime/nbserver-15-open.html
Or copy and paste one of these URLs:
http://4c61742ed77c:8888/?token=34b7f124f6783e047e796fea8061c3fca708a062a902c2f9
or http://127.0.0.1:8888/?token=34b7f124f6783e047e796fea8061c3fca708a062a902c2f9
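For reference, the options in the docker run command above do the following; my_volume is just an arbitrary name for a Docker volume and can be changed:

```bash
# -it                            : run interactively, attached to your terminal
# -p 8888:8888                   : publish the container's Jupyter port 8888 on localhost:8888
# -v my_volume:/home/jovyan/work : keep your notebooks in a named volume so they survive container restarts
docker run -it -p 8888:8888 -v my_volume:/home/jovyan/work \
  ghcr.io/atlas-outreach-data-tools/notebooks-collection-opendata:latest
```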
For more information, please go to: opendata.atlas.cern/docs/category/choose-your-enviroment
We appreciate your willingness to participate in the ATLAS Open Data project. Below you can find some steps to help you contribute successfully (inspired by The Turing Way contribution guidelines):
Before creating a pull request (PR), discuss the change you want to make. That's the easiest way for us to check that your work doesn't overlap with ongoing work and that it aligns with the goals of the Open Data project.
In this blog post you can find some reasons why putting in this work upfront is so useful to everyone involved.
When opening a new issue, be sure to fill in all the necessary information. The issue template should help you do so.
Make a fork of the repository using the Fork button in the upper right corner of the repository page on GitHub. In this new copy, your changes won't affect anyone else's work, and you can make as many changes as you want.
Make sure to keep the fork up to date (or develop from a branch based on the main repository's main branch) to avoid conflicts when merging. If you already have conflicts, check GitHub's guide to resolving a merge conflict.
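One common way to keep a fork in sync is to pull from the main repository before starting new work. This is only a sketch; the URL is a placeholder for whichever Open Data repository you forked:

```bash
# Add the original repository as a remote called "upstream" (URL is a placeholder)
git remote add upstream https://github.com/atlas-outreach-data-tools/<repository>.git

# Update your local main branch from the main repository
git fetch upstream
git checkout main
git merge upstream/main

# Push the updated main branch to your fork on GitHub
git push origin main
```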
Please keep the changes targeted to what was discussed. While making your changes, commit often and write explanatory commit messages.
Please do not rewrite history! That is, please do not use the rebase command to edit previous commit messages, combine multiple commits into one, or delete or revert commits that are no longer necessary.
Once you are done with your changes, open a PR. We encourage you to open a PR as early in your contributing process as possible so that everyone can see what is being worked on.
When submitting your PR, you will see a template. It will ask you:
- To describe the problem you are trying to fix and reference related issues.
- To list the changes you are proposing.
- To tell us what we should focus our review on.
Please give us all the details; this makes it easier for us to review the contribution!