
Commit d3db219

Merge branch 'main' into hipct
2 parents dce63b7 + 11d69b4 commit d3db219

15 files changed (+322, -52 lines)

docs/CNAME

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
docs.lincbrain.org

docs/about.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
## Acknowledgements

-Thank you to the DANDI Archive project for setting up the documentation framework that is utilized here. See the [DANDI Handbook](https://www.dandiarchive.org/handbook/) for more information.
+Thank you to the DANDI Archive project for setting up the documentation framework that is utilized here. See the [DANDI Docs](https://docs.dandiarchive.org) for more information.

## License

docs/engaging.md

Lines changed: 53 additions & 0 deletions
@@ -0,0 +1,53 @@
# MIT Engaging High Performance Compute Cluster

The Engaging High Performance Compute Cluster is available to LINC team members to process their jobs at scale, including with the use of GPUs.

## Create an account

In order to access the Engaging Cluster, you will need an MIT Sponsored Account.

1. Please contact Kabi at [email protected] with your organization name, date of birth, and phone number.
2. Once the sponsored account is approved, you will receive an email to complete account registration and establish your MIT Kerberos identity.
3. Please send your Kerberos ID to Kabi so that he can add you to the WebMoira group (`orcd_ug_pg_linc_all`), which grants access to the Engaging Cluster.

## Documentation overview

The MIT Office of Research Computing and Data (ORCD) manages the Engaging Cluster. Most of the information you will need is in the first link below, but there are additional resources:

1. [Engaging Cluster docs](https://engaging-web.mit.edu/eofe-wiki/)
1. [ORCD Docs](https://orcd-docs.mit.edu/)
1. [MGHPCC OpenMind GitHub wiki](https://github.mit.edu/MGHPCC/OpenMind/wiki)
1. [Slurm docs](https://slurm.schedmd.com/overview.html)

## Access the cluster and run jobs

The Engaging Cluster has head/login nodes, which you use to access the cluster and submit jobs to the compute nodes that run your resource-intensive scripts. Job orchestration is performed with the Slurm Workload Manager. The [Engaging Cluster Documentation](https://engaging-web.mit.edu/eofe-wiki/) provides details on these operations, including:

1. [Logging into the cluster](https://engaging-web.mit.edu/eofe-wiki/logging_in/)
1. [Cluster architecture, including information on the head/login nodes versus compute nodes](https://engaging-web.mit.edu/eofe-wiki/slurm/cluster_workflow/)
1. [Common commands to interact with the Slurm Job Scheduler](https://engaging-web.mit.edu/eofe-wiki/slurm/slurm/)
1. [Run multiple jobs in parallel with `sbatch`](https://engaging-web.mit.edu/eofe-wiki/slurm/sbatch/)
1. [Run interactive jobs on a single compute node with `srun` or `salloc`](https://engaging-web.mit.edu/eofe-wiki/slurm/srun/)
1. [Access installed software](https://engaging-web.mit.edu/eofe-wiki/software/load_modules/)
1. [Determining resources for your job](https://engaging-web.mit.edu/eofe-wiki/slurm/resources/)
Slurm is a widely used workload manager, so you can also refer to the official [Slurm documentation](https://slurm.schedmd.com/overview.html).
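
For example, a quick way to get started is an interactive session on a compute node. The commands below are a minimal sketch based on standard Slurm usage; the partition name and resource values are placeholders, so substitute the partitions shown by `sinfo` for your group.

```bash
# Request an interactive allocation on a compute node
# (partition, CPU, memory, and time values are illustrative).
salloc --partition=<partition_name> --cpus-per-task=2 --mem=8G --time=01:00:00

# Once the allocation is granted, run a command on the allocated compute node.
srun hostname

# From a login node, check the status of your jobs.
squeue -u <username>
```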

## Compute nodes

The Engaging Cluster has several CPU-only compute nodes and GPU compute nodes. The nodes are organized into partitions, which control which groups have access to them.

See [Determining resources for your job](https://engaging-web.mit.edu/eofe-wiki/slurm/resources/) for details on selecting the nodes and resources for your jobs. Briefly, the `sinfo` command shows the partitions where you can submit jobs, and you can target a particular partition by including `#SBATCH --partition=<partition_name>` in your sbatch script.
The GPU nodes are available through the `ou_bcs_high` and `ou_bcs_low` partitions. For more details, see the [BCS computing resources on Engaging - Slurm configuration](https://github.mit.edu/MGHPCC/OpenMind/wiki/User-guide-for-BCS-computing-resources-on-Engaging#slurm-configuration) wiki.
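
As an illustration, an sbatch script that targets one of these GPU partitions might look like the sketch below. The job name, resource values, module name, and script name are placeholders, and the exact GPU request flag may differ on Engaging, so confirm the details in the BCS wiki linked above before using it.

```bash
#!/bin/bash
#SBATCH --job-name=linc-gpu-example   # illustrative job name
#SBATCH --partition=ou_bcs_low        # GPU partition mentioned above
#SBATCH --gres=gpu:1                  # generic Slurm GPU request; confirm the flag in the BCS wiki
#SBATCH --cpus-per-task=4             # example CPU count
#SBATCH --mem=32G                     # example memory request
#SBATCH --time=02:00:00               # example wall-time limit
#SBATCH --output=%x-%j.out            # log file named after the job name and job ID

# Load any required software (see the "Access installed software" page).
module load <module_name>

# Placeholder for your actual processing command.
python <your_script.py>
```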

## Data storage

Data can be stored under the following path: `/orcd/data/linc/`. We will be working to create an organization strategy for the LINC project data, but for now please store your data under a subdirectory (e.g. `/orcd/data/linc/<username>` or `/orcd/data/linc/<projectname>`). There are additional locations to store your data, including scratch space (`/orcd/scratch/bcs/001`, `/orcd/scratch/bcs/002`, `/pool001/<username>`); see the [Storage](https://engaging-web.mit.edu/eofe-wiki/storage/) page and the [BCS computing resources on Engaging](https://github.mit.edu/MGHPCC/OpenMind/wiki/User-guide-for-BCS-computing-resources-on-Engaging) wiki for details.

## Best practices

1. Please be respectful of these resources as they are used by many groups.
1. Only run resource-intensive scripts on the compute nodes, not on the login/head nodes.
1. Only run the steps in your script on a GPU compute node if those steps require a GPU. All other steps should be run on a CPU-only compute node.
1. Monitor your jobs frequently (`squeue -u <username>`).

docs/img/webknossos_add_dataset.png (9.83 KB)

docs/img/webknossos_asset.png (12.6 KB)

docs/img/webknossos_dataset_name.png (114 KB)

docs/img/webknossos_name_field.png (51.2 KB)

50.4 KB

docs/img/webknossos_uri.png (18.7 KB)

docs/index.md

Lines changed: 3 additions & 1 deletion
@@ -17,6 +17,8 @@ motor and psychiatric disorders.
The LINC documentation is meant to share information across the LINC project investigators. If you are new to the LINC project, you can start on the [Upload data](upload.md) page for a description of how to interact with the LINC data sharing platform.

+Since the LINC infrastructure is a fork of the DANDI Archive, please refer to the [DANDI Docs](https://docs.dandiarchive.org) for comprehensive documentation.

## Quick Links

- [LINC Homepage](https://connects.mgh.harvard.edu/)
@@ -29,4 +31,4 @@ For questions, bug reports, and feature requests, please:
- File an issue on the relevant [GitHub repository](https://github.com/lincbrain)
- Reach out on the [LINC Slack](https://mit-lincbrain.slack.com/)
-- Send an email to kabi@mit.edu
+- Send an email to lincbrain@mit.edu

docs/neuroglancer.md

Lines changed: 81 additions & 0 deletions
@@ -0,0 +1,81 @@
# Using Neuroglancer to Visualize Orientation Vectors in Zarr Format

## 1. Ensure the vector dimension is in a single chunk
Neuroglancer requires that the channel dimension (where your vector data resides) be contained within a single chunk. Specifically, the chunk size along the channel dimension must be **3**.
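
If your data does not already satisfy this, one way to rewrite the chunking is sketched below using `dask`. This is a minimal, hypothetical example: the input and output paths, the resolution level `"0"`, and the assumption that the channel axis is axis 0 all need to be adapted to your dataset, and OME-Zarr metadata (e.g. multiscale levels) is not copied by this snippet.

```python
import dask.array as da

# Open one resolution level of the OME-Zarr array (path and component are illustrative).
arr = da.from_zarr("sample.ome.zarr", component="0")

# Rechunk so the channel axis (assumed to be axis 0 here) sits in a single
# chunk of size 3; the remaining axes keep their existing chunking.
rechunked = arr.rechunk({0: 3})

# Write the rechunked array to a new Zarr store (OME metadata must be copied separately).
da.to_zarr(rechunked, "sample_rechunked.zarr", component="0", overwrite=True)
```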

## 2. Load the Zarr data into Neuroglancer

Once you have your data chunked correctly, you can load the Zarr dataset into Neuroglancer in one of two ways:

### 2.1 Start a local Neuroglancer server

1. Start Neuroglancer.
2. Open Neuroglancer in a browser.
3. If the Zarr dataset is available locally, load it with a source URL such as `zarr:///example.path.zarr`.

### 2.2 Start Neuroglancer on lincbrain

1. Find the dataset you want to visualize.
2. Under the `Open With` button, select Neuroglancer.

## 3. Rename the channel dimension from `c'` to `c^`

Neuroglancer may automatically label your channel dimension as `c'` or something else.

Renaming this dimension to `c^` helps Neuroglancer interpret the data as a vector field (three components per voxel).

## 4. Apply a custom shader for visualization

To actually see your vector data meaningfully rendered, you need a custom shader. Neuroglancer supports a small GLSL-like language for defining how each voxel is colored.

### 4.1 If the vectors already have unit norm

If you know that each voxel's vector is already normalized (i.e., \(\|\mathbf{v}\| = 1\)), you can use the following simple shader to visualize the absolute value of each component (as RGB channels):
```glsl
void main() {
  vec3 color;
  color.r = abs(getDataValue(0));
  color.g = abs(getDataValue(1));
  color.b = abs(getDataValue(2));
  emitRGB(color);
}
```

### 4.2 If the vectors do not have unit norm

If your vectors do not have unit norm and the magnitude of each vector encodes additional information (such as the reliability of the measurement), you can normalize the color in the shader and use the magnitude as the alpha channel:
```glsl
void main() {
  vec4 color;
  color.r = abs(getDataValue(0));
  color.g = abs(getDataValue(1));
  color.b = abs(getDataValue(2));

  // Compute the norm (magnitude) of the vector
  color.a = sqrt(color.r * color.r + color.g * color.g + color.b * color.b);

  // Normalize the color by the magnitude
  color.r = color.r / color.a;
  color.g = color.g / color.a;
  color.b = color.b / color.a;

  // Scale alpha by some maximum norm if desired. If you have a known maximum,
  // replace MAX_NORM with that value; otherwise you can leave the alpha
  // as-is or adjust as needed.
  float MAX_NORM = 1.0; // modify this as appropriate
  color.a = color.a / MAX_NORM;

  emitRGBA(color);
}
```

In Neuroglancer, you can paste this shader code into the layer's Shader Editor (usually found under the "Layer" panel).

## 5. Example

An example dataset can be found [here](https://lincbrain.org/dandiset/000010/draft/files?location=sourcedata%2Fderivatives&page=2) in the file `sample18_st_filtered.ome.zarr`. Once we rename the channel dimension to `c^` and apply the shader, we see the following result.

![](img/neuroglancer_orientation_vector_example.jpeg)
