Commit ed61732

removed some misspellings, moved to apa
1 parent 0a9f11d commit ed61732

16 files changed, +27 -59 lines changed

+3 -24

@@ -1,28 +1,7 @@
 extends: capitalization
-message: "%s should follow AMA title case."
+message: "%s should follow APA title case. (https://apastyle.apa.org/style-grammar-guidelines/capitalization/title-case)"
 level: error
 scope: heading
 match: $title
-style: AMA
-vocab: false
-exceptions:
-- \ba\b
-- \ban\b
-- \band\b
-- \bas\b
-- \bat\b
-- \bfor\b
-- \bbut\b
-- \bby\b
-- \bif\b
-- \bin\b
-- \bdo\b
-- \bwe\b
-- \bof\b
-- \bon\b
-- \bor\b
-- \bto\b
-- \bup\b
-- \bloader\b
-- STM
-- LTM
+style: APA
+vocab: false

.vale/styles/config/vocabularies/TBP/accept.txt (+4 -16)

@@ -36,9 +36,7 @@ worldimages
 neocortical
 Initializer
 resample
-retreive
 actuations
-displacemnts
 Calli
 Omniglot
 iter
@@ -88,7 +86,6 @@ programatically
 timeseries
 pretrain
 misclassification
-succint
 _all_
 args
 eval
@@ -102,7 +99,6 @@ succesive
 interpretability
 Walkthrough
 presynaptic
-\befference\b
 subcortically
 Pytorch
 Guillery
@@ -136,11 +132,7 @@ Klukas
 Purdy
 xyz
 readme
-commande
-availabe
-triaged
 substep
-severities
 CLA
 unmerged
 github
@@ -153,9 +145,6 @@ rdme
 cli
 Exapmle
 semver
-serverity
-truely
-managable
 attriubtes
 callouts
 discretized
@@ -164,17 +153,13 @@ discretization
 profiler
 overconstrained
 loopdown
-acutation
 perceptrons
 bool
 gaussian
-aquired
-sequencce
 Cui
 learnable
 Eitan
 Azoff
-accross
 Sync'ing
 subdocuments
 sync'd
@@ -183,4 +168,7 @@ sync'd
 \bLeadholm\b
 \bKalman\b
 biofilm
-\bTolman's\b
+\bTolman's\b
+\befference\b
+\bEfference\b
+\bTriaged\b

docs/contributing/contributing.md (+1 -1)

@@ -7,7 +7,7 @@ Welcome, and thank you for your interest in contributing to Monty!
 We appreciate all of your contributions. Below, you will find a list of ways to get involved and help create AI based on principles of the neocortex.


-# Contribute to Our Code
+# Contribute To Our Code

 There are many ways in which you can contribute to the code. For some suggestions, see the [Contributing Code Guide](ways-to-contribute-to-code.md).


docs/contributing/guides-for-maintainers/triage.md (+2 -2)

@@ -23,7 +23,7 @@ The desired cadence for Issue Triage is at least once per business day.

 A **Maintainer** will check the Issue for validity.

-Do not assign priorities or severities to Issues (see: [RFC 2 PR and Issue Review](https://github.com/thousandbrainsproject/tbp.monty/blob/main/rfcs/0002_pr_and_issue_review.md#issue)).
+Do not assign priority or severity to Issues (see: [RFC 2 PR and Issue Review](https://github.com/thousandbrainsproject/tbp.monty/blob/main/rfcs/0002_pr_and_issue_review.md#issue)).

 Do not assign **Maintainers** to Issues. Issues remain unassigned so that anyone can work on them (see: [RFC 2 PR and Issue Review](https://github.com/thousandbrainsproject/tbp.monty/blob/main/rfcs/0002_pr_and_issue_review.md#feature-requests-1)).

@@ -81,7 +81,7 @@ First, check if the Pull Request CLA check is passing. If the check is not passi

 A **Maintainer** will check the Pull Request for validity.

-There are no priorities or severities applied to Pull Requests.
+There is no priority or severity applied to Pull Requests.

 A valid Pull Request is on-topic, well-formatted, contains expected information, does not depend on an unmerged Pull Request, and does not violate the code of conduct.


docs/contributing/style-guide.md (+2 -2)

@@ -113,11 +113,11 @@ In general we try and stick to native markdown syntax, if you find yourself need

 In a document your first level of headings should be the `#` , then `##` and so on. This is slightly confusing as usually `#` is reserved for the title, but on readme.com the `h1` tag is used for the actual title of the document.

-Use headings to split up long text block into managable chunks.
+Use headings to split up long text blocks into manageable chunks.

 Headings can be referenced in other documents using a hash link `[Headings](doc:style-guide#headings)`. For example [Style Guide - Headings](style-guide.md#headings)

-All headings should use capitalization following APA convention. For detailed guidelines see the [APA heading style guide](https://apastyle.apa.org/style-grammar-guidelines/capitalization/title-case).
+All headings should use capitalization following APA convention. For detailed guidelines see the [APA heading style guide](https://apastyle.apa.org/style-grammar-guidelines/capitalization/title-case) and this can be tested with the [Vale](https://vale.sh/) tool and running `vale .` in the root of the repo.

 ## Footnotes


docs/contributing/ways-to-contribute-to-code.md (+1 -1)

@@ -16,6 +16,6 @@ There are many ways in which you can contribute to the code. The list below is n
 - **Add to our Benchmarks**: If you have ideas on how to test more capabilities of the system we appreciate if you add to our [benchmark experiments](../overview/benchmark-experiments.md). This could be evaluating different aspects in our current environments or adding completely new environments. Please note that in order to allow us to frequently run all the benchmark experiments, we only add one experiment for each specific capability we test and try to keep the run times reasonable.
 - **Work on an open Issue**: If you came to our project and want to contribute code but are unsure of what, the [open Issues](https://github.com/thousandbrainsproject/tbp.monty/issues) are a good place to start. See our guide on [how to identify an issue to work on](ways-to-contribute-to-code/identify-an-issue-to-work-on.md) for more information.

-# How to Contribute Code
+# How To Contribute Code

 Monty integrates code changes using Github Pull Requests. To start contributing code to Monty, please consult the [Contributing Pull Requests](pull-requests.md) guide.

docs/contributing/why-contribute.md (+1 -1)

@@ -12,7 +12,7 @@ We are excited about all contributors and there may be a wide range of motivatio
 - You want to solve a task that requires quick, continuous learning and adaptation.
 - You want to better understand the brain and principles underlying our intelligence.
 - You want to work on the future of AI.
-- You want to be part of a truely unique and special project.
+- You want to be part of a truly unique and special project.

 Here is a list of concrete output you may get out of working on this project.


docs/future-work/project-roadmap.md (+1 -1)

@@ -19,7 +19,7 @@ Tasks are categorized in two ways:

 Tasks that are done have a check mark next to them and are shaded in green. When a task gets checked off, it will add progress to the corresponding capabilities on the right.

-# What the TBP Team is Working on
+# What the TBP Team is Working On

 Some of the tasks are under active development by our team or scheduled to be tackled by us soon. Those are shaded in color. Below the main table, you can find a **list of our past and current milestones** with more detailed descriptions, timeline, and who is working on it. The colors of the milestones correspond to the colors in the main table.


docs/how-monty-works/connecting-lms-into-a-heterarchy.md (+1 -1)

@@ -16,7 +16,7 @@ Lastly, each LM can send motor outputs directly to the motor system. Contrary to

 Due to those reasons we call Monty a heterarchical system instead of a hierarchical system. Despite that, we often use terminology as if we did have a conventional hierarchical organization, such as top-down and bottom-up input and lower-level and higher-level LMs.

-# Bottom-up Connections
+# Bottom-Up Connections

 Connection we refer to as bottom-up connections are connections from SMs to LMs and connections between LMs that communicate an LMs output (the current most likely object ID and pose) to the main input channel of another LM (the current sensed feature and pose). **The output object ID of the sending LM then becomes a feature in the models learned in the receiving LM.** For example, the sending LM might be modeling a tire. When the tire model is recognized, it outputs this and the recognized location and orientation of the tire relative to the body. The receiving LM would not get any information about the 3D structure of the tire from the sending LM. It would only receive the object ID (as a feature) and its pose. This LM could then model a car, composed of different parts. Each part, like the tire, is modeled in detail in a lower-level LM and then becomes a feature in the higher-level LMs' model of the car.

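To make the bottom-up message in this hunk concrete, here is a minimal sketch of what the sending LM's output carries and what the receiving LM sees. The class and field names are illustrative assumptions, not the actual tbp.monty interfaces.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class LMOutput:
    """Sketch of a sending LM's output: an object ID plus a pose."""

    object_id: str        # current most likely object, e.g. "tire"
    location: np.ndarray  # recognized location relative to the body
    rotation: np.ndarray  # recognized orientation, e.g. a quaternion


def as_feature_input(output: LMOutput) -> dict:
    """What the receiving LM gets: only the ID (as a feature) and the pose.

    None of the sender's 3D structure of the object is transmitted.
    """
    return {
        "features": {"object_id": output.object_id},
        "location": output.location,
        "rotation": output.rotation,
    }


tire = LMOutput("tire", np.zeros(3), np.array([1.0, 0.0, 0.0, 0.0]))
car_lm_input = as_feature_input(tire)  # "tire" is now a feature at a pose
```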

docs/how-monty-works/how-learning-modules-work.md (+1 -1)

@@ -43,7 +43,7 @@ Each learning module has a buffer class which could be compared to a short term

 # The Graph Memory (LTM)

-Each learning module has one graph memory class which it uses as a long term memory (LTM) of previously aquired knowledge. In the graph learning modules, the memory stores explicit object models in the form of graphs (represented in the ObjectModel class). The graph memory is responsible for storing, updating, and retrieving models from memory.
+Each learning module has one graph memory class which it uses as a long term memory (LTM) of previously acquired knowledge. In the graph learning modules, the memory stores explicit object models in the form of graphs (represented in the ObjectModel class). The graph memory is responsible for storing, updating, and retrieving models from memory.

 ![Graph memory classes and their relationships.](../figures/how-monty-works/gm_classes.png)

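The storing, updating, and retrieving responsibilities this hunk describes can be sketched as a minimal interface. The names and the list-of-points representation are illustrative assumptions, not the tbp.monty GraphMemory API.

```python
class GraphMemorySketch:
    """Minimal sketch of an LM's long term memory of object models."""

    def __init__(self):
        # object_id -> object model; here simply a list of observed points
        # (in Monty this is an explicit graph in the ObjectModel class).
        self.models = {}

    def store(self, object_id, points):
        self.models[object_id] = list(points)

    def update(self, object_id, new_points):
        # Extend the stored model with newly observed features at locations.
        self.models[object_id].extend(new_points)

    def retrieve(self, object_id):
        return self.models[object_id]
```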

docs/how-monty-works/learning-module-outputs.md (+1 -1)

@@ -30,7 +30,7 @@ Finally, the LM can also **suggest an action in the form of a goal state**. This

 There are currently **three flavors of graph matching** implemented: Matching using **displacements**, matching using **features at locations**, and matching using features at locations but with **continuous evidence values for each hypothesis** instead of a binary decision. They all have strengths and weaknesses but are generally successive improvements. They were introduced sequentially as listed above and each iteration was designed to solve problems of the previous one. Currently, we are using the evidence-based approach for all of our benchmark experiments.

-**Displacement matching** has the advantage that it **can easily deal with translated, rotated and scaled objects** and recognize them without additional computations for reference frame transforms. If we represent the displacement in a rotation-invariant way (for example as point pair features) the recognition performance is not influenced by the rotation of the object. For scale, we can simply use a scaling factor for the length of the displacements which we can calculate from the difference in length between the first sensed displacement and stored displacemnts of initial hypotheses (assuming we sample a displacement that is stored in the graph, which is a strong assumption). It is the only LM that can deal with scale at the moment. The major downside of this approach is that it **only works if we sample the same displacements that are stored in the graph model** of the object while the number of possible displacements grows explosively with the size of the graph.
+**Displacement matching** has the advantage that it **can easily deal with translated, rotated and scaled objects** and recognize them without additional computations for reference frame transforms. If we represent the displacement in a rotation-invariant way (for example as point pair features) the recognition performance is not influenced by the rotation of the object. For scale, we can simply use a scaling factor for the length of the displacements which we can calculate from the difference in length between the first sensed displacement and stored displacements of initial hypotheses (assuming we sample a displacement that is stored in the graph, which is a strong assumption). It is the only LM that can deal with scale at the moment. The major downside of this approach is that it **only works if we sample the same displacements that are stored in the graph model** of the object while the number of possible displacements grows explosively with the size of the graph.


 **Feature matching addresses this sampling issue** of displacement matching by instead matching features at nearby locations in the learned model. The problem with this approach is that locations are not invariant to the rotation of the reference frame of the model. We, therefore, have to cycle through different rotations during matching and apply them to the displacement that is used to query the model. This however is more computationally expensive.

docs/how-to-use-monty/running-benchmarks.md (+1 -1)

@@ -10,7 +10,7 @@ For more details on the current benchmark experiments, see [this page.](../overv

 **When merging a change that impacts the performance on the benchmark experiments, you need to update the table in our documentation [here](../overview/benchmark-experiments.md).**

-# How to run a Benchmark Experiment
+# How to Run a Benchmark Experiment

 To run a benchmark experiment, simply call


docs/how-to-use-monty/tutorials/running-your-first-experiment.md (+3 -2)

@@ -105,7 +105,8 @@ If you examine the `MontyExperiment` class, you will also notice that there are
 - Do post-epoch logging.
 - Do post-train logging.

-and **this is exactly the procedure that was executed when you ran `python run.py -e first_experiment`.** When we run Monty in evaluation mode, the same sequencce of calls is initiated by `MontyExperiment.evaluate` minus the model updating step in `MontyExperiment.post_episode`. See [here](../../how-monty-works/experiment.md) for more details on epochs, episodes, and steps.
+and **this is exactly the procedure that was executed when you ran `python run.py -e firs
+t_experiment`.** When we run Monty in evaluation mode, the same sequence of calls is initiated by `MontyExperiment.evaluate` minus the model updating step in `MontyExperiment.post_episode`. See [here](../../how-monty-works/experiment.md) for more details on epochs, episodes, and steps.

 ## Model

@@ -123,7 +124,7 @@ You can, of course, customize step types and when to switch between step types b

 **In this particular experiment, `n_train_epochs` was set to 1, and `max_train_steps` was set to 1. This means a single epoch was run, with one matching step per episode**. In the next section, we go up a level from the model step to understand episodes and epochs.

-## Data{set, loader}
+## `Data{set, loader}`

 In the config for first_experiment, there is a comment that marks the start of data configuration. Now we turn our attention to everything below that line, as this is where episode specifics are defined.

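The train/evaluate procedure this hunk documents can be summarized schematically. This is a paraphrase of the call sequence listed in the tutorial, not the actual `MontyExperiment` source; all helper names are illustrative.

```python
class ExperimentSketch:
    """Schematic of the procedure behind `python run.py -e first_experiment`."""

    def __init__(self, n_train_epochs: int = 1, episodes_per_epoch: int = 1):
        self.n_train_epochs = n_train_epochs
        self.episodes_per_epoch = episodes_per_epoch

    def run_episode(self, update_model: bool) -> None:
        # Pre-episode setup, model steps, and post-episode logging go here.
        if update_model:
            pass  # the model update that only happens during training

    def train(self) -> None:
        for _ in range(self.n_train_epochs):
            for _ in range(self.episodes_per_epoch):
                self.run_episode(update_model=True)
            # do post-epoch logging
        # do post-train logging

    def evaluate(self) -> None:
        # Same sequence of calls, minus the model-updating step.
        for _ in range(self.episodes_per_epoch):
            self.run_episode(update_model=False)


ExperimentSketch().train()
```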

docs/how-to-use-monty/tutorials/unsupervised-continual-learning.md (+1 -1)

@@ -147,7 +147,7 @@ surf_agent_2obj_unsupervised = dict(
 ```
 If you have read our previous tutorials on [pretraining](pretraining-a-model.md) or [running inference with a pretrained model](running-inference-with-a-pretrained-model.md), you may spot a few differences in this setup. For pretraining, we used the `MontySupervisedObjectPretrainingExperiment` class which also performs training (and not evaluation). While that was a training-only setup, it is different from our unsupervised continual learning config since it supplies object labels to learning modules. For running inference with a pretrained model, we used the `MontyObjectRecognitionExperiment` class but specified that we only wanted to perform evaluation (i.e., `do_train=False` and `do_eval=True`). In contrast, here we used the `MontyObjectRecognitionExperiment` with arguments `do_train=True` and `do_eval=False`. This combination of experiment class and `do_train`/`do_eval` arguments is specific to unsupervised continual learning. We have also increased `min_training_steps`, `object_evidence_threshold`, and `required_symmetry_evidence` to avoid early misclassification when there are fewer objects in memory.

-Besides these crucial changes, we have also made a few minor adjustments to simplify the rest of the configs. First, we did not explicitly define our sensor module or motor system configs. This is because we are using `SurfaceAndViewMontyConfig`'s default sensor modules, motor system, and matrices that define connectivity between agents, sensors, and learning modules. Second, we are using a `RandomRotationObjectInitializer` which randomly rotates an object at the beginning of each episode rather than rotating an object by a specific user-defined rotation. Third, we are using the `CSVLoggingConfig`. This is equivalent to setting up a base `LoggingConfig` and specifying that we only want a `BasicCSVStatsHandler`, but it's a bit more succint. Monty has many config classes provided for this kind of convenience.
+Besides these crucial changes, we have also made a few minor adjustments to simplify the rest of the configs. First, we did not explicitly define our sensor module or motor system configs. This is because we are using `SurfaceAndViewMontyConfig`'s default sensor modules, motor system, and matrices that define connectivity between agents, sensors, and learning modules. Second, we are using a `RandomRotationObjectInitializer` which randomly rotates an object at the beginning of each episode rather than rotating an object by a specific user-defined rotation. Third, we are using the `CSVLoggingConfig`. This is equivalent to setting up a base `LoggingConfig` and specifying that we only want a `BasicCSVStatsHandler`, but it's a bit more succinct. Monty has many config classes provided for this kind of convenience.

 # Running the Unsupervised Continual Learning Experiment

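The `CSVLoggingConfig` shorthand mentioned in this hunk amounts to the following equivalence. Import paths and the handler keyword are assumptions based on the tutorial's wording; check the tbp.monty source for the exact signatures.

```python
# Assumed import paths; verify against the tbp.monty source tree.
from tbp.monty.frameworks.config_utils.config_args import (
    CSVLoggingConfig,
    LoggingConfig,
)
from tbp.monty.frameworks.loggers.monty_handlers import BasicCSVStatsHandler

# Spelled out with the base config class (keyword name is an assumption)...
logging_config = LoggingConfig(monty_handlers=[BasicCSVStatsHandler])

# ...or, equivalently but more succinctly, with the convenience class.
logging_config = CSVLoggingConfig()
```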

docs/overview/benchmark-experiments.md (+2 -2)

@@ -4,7 +4,7 @@ description: Performance of current implementation on our benchmark test suite.
 ---
 # Object and Pose Recognition on the YCB Dataset

-## What do we Test?
+## What Do We Test?

 We split up the experiments into a short benchmark test suite and a long one. The short suite tests performance on a subset of 10 out of the 77 [YCB](https://www.ycbbenchmarks.com/) objects which allows us to assess performance under different conditions more quickly. Unless otherwise indicated, the 10 objects are chosen to be distinct in morphology and models are learned using the surface agent, which follows the object surface much like a finger.

@@ -115,7 +115,7 @@ An object is classified as detected correctly if the detected object ID is in th
 | surf_agent_unsupervised_10distinctobj_noise | 80.00% | 67.78% | 1.09 | 2.78 | 22m | 13s |
 | surf_agent_unsupervised_10simobj | 50.00% | 76.67% | 2.75 | 2.20 | 25m | 15s |

-To obtain these results use `print_unsupervised_stats(train_stats, epoch_len=10)` (wandb logging is currently not written for unsupervised stats). Unsupervised, continual learning can, by definition, not be parallelized accross epochs. Therefore these experiments were run without multiprocessing on the laptop (running on cloud CPUs works as well but since these are slower without parallelization these were run on the laptop).
+To obtain these results use `print_unsupervised_stats(train_stats, epoch_len=10)` (wandb logging is currently not written for unsupervised stats). Unsupervised, continual learning can, by definition, not be parallelized across epochs. Therefore these experiments were run without multiprocessing on the laptop (running on cloud CPUs works as well but since these are slower without parallelization these were run on the laptop).

 # Monty-Meets-World


docs/overview/vision-of-the-thousand-brains-project/capabilities-of-the-system.md (+2 -2)

@@ -7,7 +7,7 @@ Even though we cannot predict the ultimate use cases of the system, we want to t

 Following is a list of capabilities that we are always thinking about when designing and implementing the system. We are not looking for point solutions for each of these problems but a general algorithm that can solve them all. It is by no means a comprehensive list but should give an idea of the scope of the system.

-### Capabilities That our System Already Has (At Least to a Certain Extent):
+### Capabilities That Our System Already Has (At Least to a Certain Extent):

 - Recognizing objects independent of their location and orientation in the world.

@@ -21,7 +21,7 @@ Following is a list of capabilities that we are always thinking about when desig

 - Recognizing objects when they are partially occluded by other objects.

-### Further Capabilities That we are Currently Working on:
+### Further Capabilities That we Are Currently Working On:

 - Learning categories of objects and generalizing to new instances of a category.
