Draft of patch extraction and switch classification #170

Draft · wants to merge 16 commits into base: develop

52 changes: 30 additions & 22 deletions .devcontainer/analyst/devcontainer.json
@@ -11,37 +11,45 @@
"--cap-add=SYS_PTRACE",
"--security-opt=seccomp:unconfined",
"--security-opt=apparmor:unconfined",
"--volume=/tmp/.X11-unix:/tmp/.X11-unix"
"--volume=/tmp/.X11-unix:/tmp/.X11-unix",
"--gpus", "all"
],
"containerEnv": {
"DISPLAY": "${localEnv:DISPLAY}",
"LIBGL_ALWAYS_SOFTWARE": "1" // Needed for software rendering of opengl
},
// Set *default* container specific settings.json values on container create.
"settings": {
"terminal.integrated.profiles.linux": {
"zsh": {
"path": "/bin/zsh"
}
},
"terminal.integrated.defaultProfile.linux": "zsh"
"customizations": {
"vscode": {
"settings": {
"terminal.integrated.profiles.linux": {
"zsh": {
"path": "/bin/zsh"
}
},
"terminal.integrated.defaultProfile.linux": "zsh"
},
"extensions": [
// "althack.ament-task-provider",
"DotJoshJohnson.xml",
"ms-azuretools.vscode-docker",
"ms-python.python",
"ms-vscode.cpptools",
"redhat.vscode-yaml",
"smilerobotics.urdf",
"streetsidesoftware.code-spell-checker",
"twxs.cmake",
"yzhang.markdown-all-in-one",
"zachflower.uncrustify",
"ms-toolsai.jupyter"
]
}
},
"extensions": [
// "althack.ament-task-provider",
"DotJoshJohnson.xml",
"ms-azuretools.vscode-docker",
"ms-python.python",
"ms-vscode.cpptools",
"redhat.vscode-yaml",
"smilerobotics.urdf",
"streetsidesoftware.code-spell-checker",
"twxs.cmake",
"yzhang.markdown-all-in-one",
"zachflower.uncrustify"
],

"mounts": [
"source=${localWorkspaceFolder},target=/src/isaac/src,type=bind,consistency=cached",
"source=${localWorkspaceFolder}/../../astrobee/src,target=/src/astrobee/src,type=bind,consistency=cached"
"source=${localWorkspaceFolder}/../../astrobee/src,target=/src/astrobee/src,type=bind,consistency=cached",
"source=${localWorkspaceFolder}/../../data/Sock_example,target=/src/data/vent,type=bind,consistency=cached"
],
"workspaceFolder": "/src/isaac/src"
}
51 changes: 51 additions & 0 deletions analyst/readme.md
@@ -58,3 +58,54 @@ With the trained CNN we can run newly collected data through it, namely real image

Open the tutorial


# Image patch extraction and classification tutorials

## Training Pipeline

To gather training and validation datasets, we use the training pipeline.

The main pipeline, along with tutorials for its sub-features, is detailed below:

## 1) Import Bagfile data to database (optional if using remote database)

Open the tutorial [here](http://localhost:8888/lab/tree/1_import_bagfiles.ipynb).

This tutorial covers how to upload bag files to a local database. Upload all the bag files that contain the training data.
Be aware that uploading large bag files can take a long time. If possible, select only the time intervals and topic names required for analysis to speed up the process.
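
The interval/topic selection mentioned above comes down to a simple filter over bag messages before upload. A minimal sketch of the idea (the function and topic names here are hypothetical, not the notebook's actual API):

```python
def keep_message(topic, stamp, wanted_topics, t_start, t_end):
    """Decide whether a bag message is worth uploading to the database."""
    return topic in wanted_topics and t_start <= stamp <= t_end

def select_messages(messages, wanted_topics, t_start, t_end):
    """Filter (topic, timestamp, payload) tuples down to the relevant subset."""
    return [m for m in messages
            if keep_message(m[0], m[1], wanted_topics, t_start, t_end)]
```

Restricting both topic and time window keeps the upload, and later queries, small.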

## 2) Extract image patches and split into training and testing datasets for the CNN

Open the notebook [here](http://localhost:8888/lab/tree/gather_training_dataset.ipynb).

This is the main pipeline that extracts the image patches of your target for training the CNN. It also splits them into training and testing (validation) sets. If you want to look into or try some of the features used in the pipeline, you can do that in the following notebooks.
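
The split itself amounts to shuffling the extracted patches and holding out a fraction for testing. A minimal sketch under assumed defaults (the notebook may use a different ratio or tooling):

```python
import random

def split_patches(patches, test_fraction=0.2, seed=0):
    """Shuffle patches and hold out a fraction as the testing (validation) set."""
    shuffled = list(patches)
    random.Random(seed).shuffle(shuffled)   # fixed seed -> reproducible split
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)
```

Fixing the seed keeps the split reproducible across runs, which matters when comparing trained models.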

### Target selection with UI

Open the notebook [here](http://localhost:8888/lab/tree/select_target.ipynb).

This notebook provides a UI for the user to select a target in a specified image. It then outputs the coordinates of the selected points in that image.

### Query database for images with target in frame

Open the notebook [here](http://localhost:8888/lab/tree/query_images.ipynb).

This notebook queries the database for images that have the target in frame. Note that the target coordinates must be specified explicitly; they are not taken from the select_target notebook. It then filters the images using Astrobee's position together with the camera's FOV, intrinsics, and extrinsics.
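
The FOV/intrinsics filter boils down to a pinhole visibility test: transform the target into the camera frame with the extrinsics, project it with the intrinsics, and check that it lands inside the image. A minimal numpy sketch with made-up matrices (not the notebook's code):

```python
import numpy as np

def target_in_frame(p_world, R, t, K, width, height):
    """Pinhole visibility test: does the 3-D target project inside the image?"""
    p_cam = R @ p_world + t          # world -> camera frame (extrinsics)
    if p_cam[2] <= 0:                # target is behind the camera
        return False
    u, v, w = K @ p_cam              # project with intrinsics
    return 0 <= u / w < width and 0 <= v / w < height
```

Images whose pose fails this test can be discarded before any patch extraction is attempted.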

### Saving, warping and extracting images

This is done with a script; you can look at it [here](http://localhost:8888/lab/tree/scripts/save_patch.py).

For a simplified version, see the notebook [here](http://localhost:8888/lab/tree/scripts/warp_and_extract_one_patch.ipynb).
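
The warping step rests on estimating a homography from point correspondences and then resampling the patch. The estimation part can be sketched with a direct linear transform; the script itself likely uses OpenCV, so this only illustrates the underlying math:

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct linear transform: find H mapping each src (x, y) to its dst (u, v)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the last right-singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]               # normalize so H[2, 2] == 1
```

With H in hand, the patch is obtained by resampling the source image through the inverse mapping (e.g. `cv2.warpPerspective`).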

### Training CNN

To train the pretrained DenseNet121, use the notebook [here](http://localhost:8888/lab/tree/scripts/switch_classifying_CNN_training.ipynb).
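
Fine-tuning a pretrained backbone like DenseNet121 typically means training a small classification head on top of (initially frozen) backbone features. The head-training idea can be illustrated with plain numpy logistic regression; this is a toy stand-in, not the Keras code in the notebook:

```python
import numpy as np

def train_head(features, labels, lr=0.5, epochs=2000):
    """Gradient-descent training of a logistic-regression classification head."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))   # sigmoid activation
        grad = p - labels                               # gradient of the log loss w.r.t. z
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def predict(features, w, b):
    """Binary decision: 1 if the head's logit is positive, else 0."""
    return (features @ w + b > 0).astype(int)
```

In the real notebook, `features` would be the backbone's pooled output for each patch and the head would be a dense layer trained with Keras.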

## Classifying Pipeline

The classifying pipeline builds on many of the same principles as the training pipeline; see the notebooks above for more information on each feature.

If you want to use the classifier to test a single image, use the notebook [here](http://localhost:8888/lab/tree/scripts/evaluate_image_with_CNN.ipynb).

If you instead want to classify all images in one bag, use the notebook [here](http://localhost:8888/lab/tree/scripts/evaluate_bag_with_CNN.ipynb).
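
Evaluating a whole bag is essentially the single-image path in a loop, plus a tally of the per-image labels. A minimal sketch, where `classify` stands in for the trained CNN:

```python
def classify_bag(images, classify):
    """Run a per-image classifier over every image from a bag and tally labels."""
    labels = [classify(img) for img in images]
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return labels, counts
```

The per-label counts give a quick sanity check (e.g. how often the switch was seen "on") before inspecting individual frames.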