# Introduction

**OpenMC Fusion Benchmarks** builds on a modular and schema-driven approach to radiation transport benchmarking, with a focus on reproducibility, extensibility, and automation. Each benchmark is fully defined by a standardized `specifications.yaml` file, which captures all aspects of the model, including CAD-based geometry, materials, source definitions, simulation parameters, and reference results.

This approach enables consistent validation, execution, and postprocessing of benchmarks across tools and workflows. The repository is also designed to facilitate the implementation and testing of new neutronics methods in a code-agnostic environment.
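
As a rough illustration of working with such a file, the snippet below loads a hypothetical `specifications.yaml` with PyYAML and checks for the kinds of top-level sections mentioned above. The file path and section names are illustrative assumptions, not the project's actual schema.

```python
# Minimal sketch (not the package's official API): load a benchmark's
# specifications.yaml and look for the kinds of top-level sections
# described above. The path and section names are illustrative assumptions.
from pathlib import Path

import yaml  # requires PyYAML

spec_path = Path("benchmarks/my_benchmark/specifications.yaml")  # hypothetical path
spec = yaml.safe_load(spec_path.read_text())

# Illustrative section names; the authoritative list lives in benchmark_schema.yaml
expected_sections = ["geometry", "materials", "source", "settings", "results"]
missing = [key for key in expected_sections if key not in spec]
if missing:
    print(f"specifications.yaml is missing sections: {missing}")
else:
    print("All expected top-level sections are present.")
```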

---

## Key Features

- **Standardized benchmark definitions** via `specifications.yaml`
- **Validation** of `specifications.yaml` against a strict `benchmark_schema.yaml` (see the sketch after this list)
- **CAD-based geometries** and automatic meshing tools
- **Automated workflow** for benchmark building, running, and analysis through Python APIs
- **Unified results format** for comparing experimental, historical, and simulated data
- **Embedded uncertainty quantification** for a _best estimate plus uncertainty_ approach
- **Benchmark and results libraries** with descriptions, `specifications`, and results
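
The sketch below shows one way the schema-validation step could be carried out with the `jsonschema` and PyYAML packages. The file locations are hypothetical, and the repository may provide its own validation helper through its Python API.

```python
# Hedged sketch of validating a specifications.yaml against benchmark_schema.yaml
# using the jsonschema package; illustrative only, not the official workflow.
import yaml                                        # requires PyYAML
from jsonschema import ValidationError, validate   # requires jsonschema

with open("benchmark_schema.yaml") as f:           # hypothetical location
    schema = yaml.safe_load(f)

with open("benchmarks/my_benchmark/specifications.yaml") as f:  # hypothetical path
    spec = yaml.safe_load(f)

try:
    validate(instance=spec, schema=schema)
    print("specifications.yaml conforms to benchmark_schema.yaml")
except ValidationError as err:
    print(f"Schema violation: {err.message}")
```

Keeping every benchmark schema-valid is what allows the automated workflow to build, run, and analyze them in a uniform way.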

---

## Get Started

- [Quickstart Guide](quickstart.md)
- [Benchmark Specifications Format](specifications/overview.md)
- [Available Benchmarks](benchmark_collection/index.md)
- [Workflows: from definition to analysis]()
- [Python API]()
- [Example Notebooks]()

---

## How to Contribute

We welcome contributions of new benchmarks, improvements to the schema, and extensions to the tools and analysis pipelines. See our [Contributing Guidelines](https://github.com/eepeterson/openmc_fusion_benchmarks/blob/main/CONTRIBUTING.md) for more.

For questions, ideas, or bug reports, please open an [issue](https://github.com/eepeterson/openmc_fusion_benchmarks/issues) or reach out to the maintainers.

---

## License and Citation

This project is open source under the [MIT License](https://github.com/eepeterson/openmc_fusion_benchmarks/blob/develop/LICENSE).

If you use this benchmark format or collection in your work, please cite: