@@ -45,7 +45,7 @@ We have created channels on various platforms:
- [PeriDEM on Gitter](https://gitter.im/PeriDEM/community?utm_source=share-link&utm_medium=link&utm_campaign=share-link)
* Gitter is fully open and easy to join.
- [PeriDEM on slack](peridem.slack.com)
- * Send us an email if interested in joining the workspace.
+ * Email us if interested in joining the workspace.
## Documentation
[Doxygen-generated documentation](https://prashjha.github.io/PeriDEM/) details the functions and objects in the library.
@@ -275,7 +275,7 @@ We are moving in following key directions:
- MPI parallelism for PeriDEM simulations. The key challenge is distributing particles to different
processors and performing the communication efficiently
- Asynchronous parallelism within MPI? Currently, we use `Taskflow` to perform
- parallel for loops in a non-mpi simulation. In future, we will be interested in using
+ parallel for loops in a non-MPI simulation. In the future, we would like to use
multithreading combined with MPI to further speed up the simulations
- GPU parallelism?
@@ -296,7 +296,7 @@ able to compile PeriDEM in ubuntu (>= 18.04) and mac.
Feel free to reach out or open an issue. For more open
discussion of issues and ideas, contact us via
[PeriDEM on Gitter](https://gitter.im/PeriDEM/community?utm_source=share-link&utm_medium=link&utm_campaign=share-link)
- or [PeriDEM on slack](peridem.slack.com) (for slack, send us an email to join).
+ or [PeriDEM on slack](peridem.slack.com) (for slack, email us to join).
If you would like some help, want to contribute, extend the code, or discuss new ideas,
please do reach out to us.