Commit 012fb00 ("update readme"), 1 parent dcf8430

1 file changed: README.md, +111 −1 lines

@@ -1 +1,111 @@
# Permanent Boost
Efficient, parallelized computation of the matrix permanent and its gradient for photonic quantum computing simulations, supporting both CPU and GPU execution.
[![PyPI version](https://img.shields.io/pypi/v/permanentboost)](https://pypi.org/project/permanentboost/)
[![Build](https://github.com/0xSooki/permanent-boost/actions/workflows/tests.yml/badge.svg)](https://github.com/0xSooki/permanent-boost/actions)
[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
## Overview
**PermanentBoost** is a high-performance Python library for computing the matrix permanent and its gradient, a key component in simulating photonic quantum circuits such as Boson Sampling. The library provides optimized CPU and CUDA-enabled GPU backends and integrates seamlessly with machine learning frameworks like JAX.

It is implemented in C++/CUDA and exposed to Python via `pybind11`, with XLA FFI bindings for full interoperability with JAX.
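For readers new to the quantity itself: the permanent of an n×n matrix is defined like the determinant but without the alternating signs, which is what makes exact computation so expensive (#P-hard in general). A brute-force, stdlib-only sketch, purely for illustration and unrelated to this library's optimized algorithms:

```python
from itertools import permutations
from math import prod

def naive_permanent(A):
    """Brute-force permanent: sum over all permutations with no sign factor.

    O(n! * n) time, so only feasible for tiny matrices; shown here only to
    illustrate the quantity that PermanentBoost computes efficiently.
    """
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# perm([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10 (the determinant would be -2)
print(naive_permanent([[1, 2], [3, 4]]))
```

The factorial blow-up of this loop is precisely why optimized CPU/GPU backends matter for realistic Boson Sampling sizes.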
## Features
- 🔬 Efficient matrix permanent computation
- 🔁 Gradient support for use in quantum machine learning (QML)
- ⚡ CPU & GPU acceleration
- 🧪 Comprehensive test suite
- 🧩 PyQt5 GUI included for demonstration
- 📦 Easy installation via pip
- 🔄 JAX-compatible bindings for auto-diff workflows
## Installation
### Requirements

- Python 3.10 / 3.11 / 3.12
- pip
- OpenMP (Linux/macOS) or equivalent
- Optional: NVIDIA GPU with CUDA support
### Quickstart
```bash
git clone https://github.com/0xSooki/permanent-boost
cd permanent-boost
pip install .
```
To use the GPU backend, install a CUDA-enabled JAX build:

```bash
pip install "jax[cuda12]"
```
## Usage
### Python API

```python
import numpy as np

from permanent import perm

A = np.array([...], dtype=np.complex128)  # fill in a square complex matrix
rows = np.array([1, 1, 1], dtype=np.uint64)
cols = np.array([1, 1, 1], dtype=np.uint64)

# Compute the permanent
result = perm(A, rows, cols)
```
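The README does not spell out what `rows` and `cols` mean. In photonic-simulation libraries such arguments are typically photon occupation numbers that determine how often each row and column of `A` enters the computation; that interpretation is an assumption here, and `expand_matrix` below is a hypothetical helper, not part of the PermanentBoost API. A pure-Python sketch of that repetition convention:

```python
def expand_matrix(A, rows, cols):
    """Hypothetical illustration (assumed convention, not confirmed by this
    README): repeat row i rows[i] times and column j cols[j] times, the usual
    occupation-number convention in boson-sampling code."""
    picked_rows = [A[i] for i, r in enumerate(rows) for _ in range(r)]
    return [[row[j] for j, c in enumerate(cols) for _ in range(c)]
            for row in picked_rows]

# With rows = cols = [1, 1, 1] the matrix would be used as-is.
print(expand_matrix([[1, 2], [3, 4]], [2, 1], [1, 1]))
```

Under this reading, `rows = cols = [1, 1, 1]` in the example above simply selects every row and column of a 3×3 matrix once.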
### Gradient (with JAX)

```python
import jax

from permanent import perm

# A, rows, and cols as defined in the previous example.
# perm is complex-valued, so the gradient must be declared holomorphic.
grad_fn = jax.grad(perm, holomorphic=True)
gradient = grad_fn(A, rows, cols)  # gradient with respect to A
```
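A useful sanity check for any permanent-gradient backend: because the permanent is linear in each matrix entry, the partial derivative with respect to `A[i][j]` equals the permanent of the submatrix obtained by deleting row `i` and column `j`. A brute-force, stdlib-only sketch of that identity (redefined here so the snippet stands alone; it is an illustration, not the library's implementation):

```python
from itertools import permutations
from math import prod

def naive_permanent(A):
    # Brute-force permanent, feasible only for tiny matrices.
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def permanent_grad_entry(A, i, j):
    """d perm(A) / d A[i][j] = permanent of A with row i and column j removed."""
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
    return naive_permanent(minor)

A = [[1, 2], [3, 4]]
# The permanent is exactly linear in each entry, so bumping A[0][0] by h
# changes perm(A) by exactly h * permanent_grad_entry(A, 0, 0).
h = 1
B = [row[:] for row in A]
B[0][0] += h
assert naive_permanent(B) - naive_permanent(A) == h * permanent_grad_entry(A, 0, 0)
```

Comparing a JAX gradient from `grad_fn` against this identity on a small matrix is a quick way to validate an installation.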
## GUI (Optional)
Run the PyQt5 interface for demonstration:

```bash
python app/main.py
```
## Benchmark Results
![Benchmark Permanent](docs/permanent_benchmark.png)
![Benchmark Gradient](docs/gradient_benchmark.png)

- The GPU implementation outperforms the CPU backend and existing libraries at matrix sizes ≥ 11.
- Gradient computation shows a significant speedup on CUDA-enabled devices.
## Troubleshooting
## Development
```bash
pytest                 # Run tests
pytest --platform=gpu  # Run GPU-specific tests
```
### Build Wheels for Distribution
CI/CD pipelines are configured via GitHub Actions; wheels are built for macOS, Windows, and Linux.
## License
This project is licensed under the MIT License.
## Citation
If you use this library in academic work, please cite the corresponding bachelor's thesis:

> Bence Soóki-Tóth, "Efficient calculation of permanent function gradients in photonic quantum computing simulations," Eötvös Loránd University, 2025.
