docs/userguide/quickstart.md (2 additions & 2 deletions)

@@ -1,7 +1,7 @@
 # Quickstart

 ## 1. Install and import

-Install the library using `pip install pygrank` and import it. Construct a node ranking algorithm from a graph filter by incrementally applying postprocessors using >>. There are many components and parameters available. You can use [autotuning](autotuning.md) to find good configurations.
+Install the library using `pip install pygrank` and import it. Construct a node ranking algorithm from a graph filter by incrementally applying postprocessors using >>. There are many components and parameters available. Use [autotuning](autotuning.md) to find good configurations.

 ```python
 import pygrank as pg
 ```
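The `>>` chaining described above can be pictured with a minimal operator-overloading sketch. This is illustrative only and does not use pygrank's actual classes; `Filter`, `Chained`, and `normalize` are hypothetical names standing in for a graph filter and a postprocessor:

```python
class Filter:
    """Toy stand-in for a graph filter that produces node scores."""
    def rank(self, scores):
        return scores

    def __rshift__(self, postprocessor):
        # `filter >> postprocessor` wraps the filter, mirroring how
        # chained postprocessors refine the ranking step by step.
        return Chained(self, postprocessor)


class Chained(Filter):
    """A filter whose output is passed through a postprocessor."""
    def __init__(self, base, post):
        self.base, self.post = base, post

    def rank(self, scores):
        return self.post(self.base.rank(scores))


def normalize(scores):
    # Example postprocessor: divide every score by the maximum score.
    top = max(scores.values())
    return {node: value / top for node, value in scores.items()}


algorithm = Filter() >> normalize
print(algorithm.rank({"a": 2.0, "b": 1.0}))  # {'a': 1.0, 'b': 0.5}
```

Because `Chained` is itself a `Filter`, further `>> other_postprocessor` applications keep nesting, which is what makes the incremental construction style work.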
@@ -28,4 +28,4 @@ Evaluate the scores using a stochastic generalization of the unsupervised conductance measure

 measure = pg.Conductance() # an evaluation measure
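Plain (non-stochastic) conductance compares the number of edges leaving a node set to the set's total degree; lower values indicate a better-separated community. A minimal sketch on a toy adjacency-dict graph, not pygrank's implementation and not the stochastic generalization the docs mention:

```python
def conductance(graph, ranked):
    """Plain conductance: edges leaving `ranked` over its total degree.

    `graph` maps each node to a list of its neighbors; `ranked` is the
    node set whose quality we measure.
    """
    cut = sum(1 for node in ranked for nb in graph[node] if nb not in ranked)
    volume = sum(len(graph[node]) for node in ranked)
    return cut / volume if volume else 1.0


toy = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(conductance(toy, {"a", "b", "c"}))  # 1 cut edge / volume 7 ≈ 0.1428
```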
-f"[pygrank.backend.pytorch] Not enough memory to convert a scipy sparse matrix with shape {M.shape} to a numpy dense matrix before moving it to your device.\nWill create a torch.sparse_coo_tensor instead.\nAdd the option mode=\"sparse\" to the backend's initialization to hide this message,\nbut prefer switching to the torch_sparse backend for a performant implementation.")
+f"[pygrank.backend.pytorch] Not enough memory to convert a scipy sparse matrix with shape {M.shape} "
+f"to a numpy dense matrix before moving it to your device.\nWill create a torch.sparse_coo_tensor instead."
+f'\nAdd the option mode="sparse" to the backend\'s initialization to hide this message,'
+f"\nbut prefer switching to the torch_sparse backend for a performant implementation.")
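The message above accompanies a dense-first, sparse-fallback conversion. A hedged sketch of that pattern, assuming hypothetical `dense_convert`/`sparse_convert` callables in place of the real scipy-to-numpy and scipy-to-`torch.sparse_coo_tensor` conversions:

```python
import warnings


def to_device_tensor(matrix, dense_convert, sparse_convert, mode="dense"):
    """Try a dense conversion first; fall back to sparse on MemoryError.

    `dense_convert` and `sparse_convert` are hypothetical stand-ins for
    the actual backend conversion routines.
    """
    if mode != "sparse":
        try:
            return dense_convert(matrix)
        except MemoryError:
            warnings.warn(
                "Not enough memory for a dense conversion; "
                'creating a sparse tensor instead. Pass mode="sparse" '
                "to hide this message."
            )
    return sparse_convert(matrix)


def failing_dense(matrix):
    # Simulate a machine where the dense conversion runs out of memory.
    raise MemoryError


print(to_device_tensor("M", failing_dense, lambda m: ("sparse", m)))
# ('sparse', 'M')
```

Passing `mode="sparse"` skips the dense attempt entirely, which is why the real message suggests it as a way to silence the warning.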