move misspellings
codeallthethingz committed Jan 12, 2025
1 parent 9e04b53 commit 9e785a6
Showing 7 changed files with 13 additions and 17 deletions.
3 changes: 3 additions & 0 deletions .vale.ini
@@ -3,3 +3,6 @@ Vocab = TBP

[{docs/*.md,README.md}]
BasedOnStyles = Docs

+[*.md]
+BlockIgnores = (?s) *\[block:embed\].*?\[/block\]
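
For reference, the new `BlockIgnores` pattern tells Vale to skip embed sections of the form `[block:embed] … [/block]` when linting Markdown. Below is a minimal Python sketch of how that regex behaves, using a made-up Markdown snippet (not taken from the repository):

```python
import re

# The pattern added to .vale.ini above: the inline (?s) flag makes "." span
# newlines, so the whole [block:embed] ... [/block] section is matched.
BLOCK_EMBED = re.compile(r"(?s) *\[block:embed\].*?\[/block\]")

sample = """Some prose Vale should still lint.

[block:embed]
{
  "url": "https://example.com",
  "title": "Example embed"
}
[/block]

More prose after the embed."""

# Removing the match is roughly what BlockIgnores does from Vale's point of view.
print(BLOCK_EMBED.sub("", sample))
```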
13 changes: 3 additions & 10 deletions .vale/styles/config/vocabularies/TBP/accept.txt
@@ -1,6 +1,6 @@
-Bluesky
+\bBluesky\b
neocortex
-Mountcastle
+\bMountcastle\b
\bconfigs\b
\bConfig\b
\bconfig\b
@@ -12,7 +12,6 @@ hippocampal
\bheterarchy\b
saccading
\bHTM\b
-html
\bANNs*\b
\bCNNs*\b
\bGNNs*\b
@@ -69,20 +68,17 @@ fullscreen
favicon
href
dataclass
-specifid
Pretrained
pretrained
runtimes
neurophysiologist
mixin
-distingush
Conda
Miniconda
zsh
-wanbd
+wandb
utils
matplotlib
-programatically
timeseries
pretrain
misclassification
@@ -95,7 +91,6 @@ Leyman
Sampath
Shen
Neuromorphic
-succesive
interpretability
Walkthrough
presynaptic
@@ -143,9 +138,7 @@ prechecks
repo
rdme
cli
-Exapmle
semver
-attriubtes
callouts
discretized
discretize
2 changes: 1 addition & 1 deletion docs/contributing/documentation.md
@@ -52,7 +52,7 @@ title: 'New Placeholder Example Doc'
> 🚧 Quotes
>
>Please put the title in single quotes and, if applicable, escape any single quotes using two single quotes in a row.
-Exapmle: `title: 'My New Doc''s'`
+Example: `title: 'My New Doc''s'`

> 🚧 Your title must match the url-safe slug
>
2 changes: 1 addition & 1 deletion docs/contributing/style-guide.md
@@ -140,7 +140,7 @@ Images use `snake_case.ext`
Images should generally be `png` or `svg` formats. Use `jpg` if the file is actually a photograph.
-Upload high quality images as people can click on the image to see the larger version. You can add style attriubtes after the image path with `#width=300px` or similar.
+Upload high quality images as people can click on the image to see the larger version. You can add style attributes after the image path with `#width=300px` or similar.
For example, the following markdown creates the image below:
2 changes: 1 addition & 1 deletion docs/how-to-use-monty/customizing-monty.md
@@ -37,4 +37,4 @@ my_custom_config.update(
)
```

-For simplicity we inherit all other default values from the `base_config_10distinctobj_dist_agent` config in `benchmarks/configs/ycb_experiments.py` and use the `monty_config` specifid in the `PatchAndViewSOTAMontyConfig` dataclass.
+For simplicity we inherit all other default values from the `base_config_10distinctobj_dist_agent` config in `benchmarks/configs/ycb_experiments.py` and use the `monty_config` specified in the `PatchAndViewSOTAMontyConfig` dataclass.
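
As a rough illustration of the copy-then-override pattern the corrected sentence describes (the import paths below are assumptions for the sketch, not verbatim repository code):

```python
from copy import deepcopy

# Assumed import locations for the names mentioned in the sentence above.
from benchmarks.configs.ycb_experiments import base_config_10distinctobj_dist_agent
from tbp.monty.frameworks.config_utils.config_args import PatchAndViewSOTAMontyConfig

# Inherit every default from the benchmark config, then override only the
# monty_config with the dataclass named in the text.
my_custom_config = deepcopy(base_config_10distinctobj_dist_agent)
my_custom_config.update(
    dict(
        monty_config=PatchAndViewSOTAMontyConfig(),
    )
)
```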
6 changes: 3 additions & 3 deletions docs/how-to-use-monty/logging-and-analysis.md
@@ -14,13 +14,13 @@ To manage the logging for an experiment you can specify the handlers that should
| **BasicCSVStatsHandler** | Log a .csv file with one row per episode that contains the results and performance of this episode. |
| **ReproduceEpisodeHandler** | Logs action sequence and target such that an episode can be exactly reproduced. |
| **BasicWandbTableStatsHandler** | Logs a table similar to the .csv table to wandb. |
-| **BasicWandbChartStatsHandler** | Logs episode stats to wandb charts. When running in parallel this is done at the end of a run. Otherwise one can follow the run stats live on wanbd. |
+| **BasicWandbChartStatsHandler** | Logs episode stats to wandb charts. When running in parallel this is done at the end of a run. Otherwise one can follow the run stats live on wandb. |
| **DetailedWandbHandler** | Logs animations of raw observations to wandb. |
| **DetailedWandbMarkedObsHandler** | Same as previous but marks the view-finder observation with a square indicating where the patch is. |

## Logging to Wandb

-When logging to wanbd we recommend to run`export WANDB_DIR=~/tbp/results/monty/wandb`so the wandb logs are not stored in the repository folder.
+When logging to wandb we recommend to run`export WANDB_DIR=~/tbp/results/monty/wandb`so the wandb logs are not stored in the repository folder.

The first time you run experiments that log to wandb you will need to set your WANDB_API key using `export WANDB_API_KEY=your_key`
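
For anyone scripting this setup, the same environment variables can also be set from Python before calling `wandb.init()`; a minimal sketch (the project name is hypothetical):

```python
import os

import wandb

# Mirror the export commands above; wandb reads both variables at init time.
os.environ.setdefault("WANDB_DIR", os.path.expanduser("~/tbp/results/monty/wandb"))
os.environ.setdefault("WANDB_API_KEY", "your_key")  # or run `wandb login` once

run = wandb.init(project="monty-experiments")  # hypothetical project name
run.log({"episode": 0, "num_steps": 42})  # per-episode stats appear as wandb charts
run.finish()
```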

@@ -122,7 +122,7 @@ plt.show()

> 📘 Plotting in 3D
>
-> Most plots shown here use the 3D projection feature of matplotlib. The plots can be viewed interactively by dragging the mouse over them to zoom and rotate. When you want to save figures with 3D plots programatically, it can be useful to set the `rotation` parameter in the `plot_graph` function such that the POV provides a good view of the 3D structure of the object.
+> Most plots shown here use the 3D projection feature of matplotlib. The plots can be viewed interactively by dragging the mouse over them to zoom and rotate. When you want to save figures with 3D plots programmatically, it can be useful to set the `rotation` parameter in the `plot_graph` function such that the POV provides a good view of the 3D structure of the object.
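
For a purely illustrative, plain-matplotlib version of that tip (this does not use the repository's `plot_graph` helper):

```python
import matplotlib.pyplot as plt
import numpy as np

# Stand-in data: a random point cloud instead of a learned object model.
points = np.random.rand(100, 3)

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=5)

# Fix the point of view before saving programmatically, analogous to the
# rotation parameter mentioned in the callout above.
ax.view_init(elev=30, azim=-60)
fig.savefig("object_model_view.png", dpi=200)
```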
### Plotting Matching Animations

2 changes: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
title: Multiple Learning Modules
---
# Introduction
-Thus far, we have been working with models that use a single agent with a single sensor which connects to a single learning module. In the context of vision, this is analogous to a small patch of retina that picks up a small region of the visual field and relays its information to its downstream target--a single cortical column in the primary visual cortex (V1). In human terms, this is like looking through a straw. While sufficient to recognize objects, one would have to make many succesive eye movements to build up a picture of the environment. In reality, the retina contains many patches that tile the retinal surface, and they all send their information to their respective downstream target columns in V1. If, for example, a few neighboring retinal patches fall on different parts of the same object, then the object may be rapidly recognized once columns have communicated with each other about what they are seeing and where they are seeing it.
+Thus far, we have been working with models that use a single agent with a single sensor which connects to a single learning module. In the context of vision, this is analogous to a small patch of retina that picks up a small region of the visual field and relays its information to its downstream target--a single cortical column in the primary visual cortex (V1). In human terms, this is like looking through a straw. While sufficient to recognize objects, one would have to make many successive eye movements to build up a picture of the environment. In reality, the retina contains many patches that tile the retinal surface, and they all send their information to their respective downstream target columns in V1. If, for example, a few neighboring retinal patches fall on different parts of the same object, then the object may be rapidly recognized once columns have communicated with each other about what they are seeing and where they are seeing it.

In this tutorial, we will show how Monty can be used to learn and recognize objects in a multiple sensor, multiple learning module setting. In this regime, we can perform object recognition with fewer steps than single-LM systems by allowing learning modules to communicate with one another through a process called [voting](../../overview/architecture-overview/other-aspects.md#votingconsensus). We will also introduce the distant agent, Monty's sensorimotor system that is most analogous to the human eye. Unlike the surface agent, the distant agent cannot move all around the object like a finger. Rather, it swivels left/right/up/down at a fixed distance from the object.

