
Test time augmentations #81


Closed

wants to merge 15 commits into from

Conversation

edyoshikun
Contributor

This PR adds test time augmentations to the prediction step.
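For context, a minimal sketch of what a flip-ensemble test-time augmentation in a prediction step could look like; the function and the choice of flips are illustrative, not this PR's actual implementation:

```python
import torch


def predict_with_tta(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Average predictions over flipped views of the input (hypothetical TTA ensemble)."""
    # flip over the last two spatial axes; an empty list means "no augmentation"
    flip_sets = [[], [-1], [-2], [-1, -2]]
    predictions = []
    with torch.inference_mode():
        for dims in flip_sets:
            view = torch.flip(x, dims=dims) if dims else x
            out = model(view)
            # undo the flip so all predictions are aligned before averaging
            predictions.append(torch.flip(out, dims=dims) if dims else out)
    return torch.stack(predictions).mean(dim=0)
```

Averaging aligned predictions from several augmented views is what suppresses the high-frequency fluctuations mentioned in the linked issue.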

ziw-liu linked an issue Jun 3, 2024 that may be closed by this pull request
edyoshikun and others added 11 commits June 7, 2024 22:29
* refactor data loading into its own module

* update type annotations

* move the logging module out

* move old logging into utils

* rename tests to match module name

* bump torch

* draft fcmae encoder

* add stem to the encoder

* wip: masked stem layernorm

* wip: patchify masked features for linear

* use mlp from timm

* hack: POC training script for FCMAE

* fix mask for fitting

* remove training script

* default architecture

* fine-tuning options

* fix cli for finetuning

* draft combined data module

* fix import

* manual validation loss reduction
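A rough sketch of the manual validation loss reduction above, assuming Lightning 2.x hooks; `ExampleModule` and the `_compute_loss` helper are hypothetical names, not the module's actual API:

```python
import torch
from lightning.pytorch import LightningModule


class ExampleModule(LightningModule):
    def on_validation_epoch_start(self) -> None:
        self._val_losses: list[torch.Tensor] = []

    def validation_step(self, batch, batch_idx, dataloader_idx=0):
        loss = self._compute_loss(batch)  # hypothetical loss helper
        self._val_losses.append(loss.detach())
        return loss

    def on_validation_epoch_end(self) -> None:
        # reduce manually instead of relying on per-dataloader automatic averaging
        self.log("loss/validate", torch.stack(self._val_losses).mean(), sync_dist=True)
```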

* update linting
new black version has different rules

* update development guide

* update type hints

* bump iohub

* draft ctmc v1 dataset

* update tests

* move test_data

* remove path conversion

* configurable normalizations (#68)

* initial commit adding the normalization.

* adding dataset_statistics to each fov to facilitate the configurable augmentations

* fix indentation

* ruff

* test preprocessing

* remove redundant field

* cleanup

---------

Co-authored-by: Ziwen Liu <[email protected]>
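As a loose illustration of the configurable normalizations described above, built from per-FOV dataset statistics and reusable across stages; the registry keys and config layout here are hypothetical, not the module's actual API:

```python
from monai.transforms import Compose, NormalizeIntensityd, ScaleIntensityRangePercentilesd

# hypothetical registry mapping config names to MONAI dictionary transforms
_NORMALIZATIONS = {
    "normalize_intensity": NormalizeIntensityd,
    "scale_percentiles": ScaleIntensityRangePercentilesd,
}


def build_normalizations(entries: list[dict]) -> Compose:
    """Compose normalizations from config so the same list can be reused in fit, test, and predict."""
    return Compose([_NORMALIZATIONS[e["name"]](**e["kwargs"]) for e in entries])
```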

* fix ctmc dataloading

* add example ctmc v1 loading script

* changing the normalization and augmentation defaults from None to an empty list.

* invert intensity transform
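The invert intensity transform can be as simple as reflecting values about the image's own range; a minimal sketch, not necessarily the exact transform added here:

```python
import torch


def invert_intensity(img: torch.Tensor) -> torch.Tensor:
    # reflect intensities about the image's own min/max so bright structures become dark
    return img.max() + img.min() - img
```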

* concatenated data module

* subsample videos

* livecell dataset

* all sample fields are optional

* fix multi-dataloader validation

* lint

* fixing preprocessing for varying array shapes (i.e. the AICS dataset)

* update loading scripts

* fix CombineMode

* always use untrainable head for FCMAE

* move log values to GPU before syncing
Lightning-AI/pytorch-lightning#18803

* custom head

* ddp caching fixes

* fix caching when using combined loader

* compose normalizations for predict and test stages

* black

* fix normalization in example config

* fix normalization in example config

* prefetch more in validation

* fix collate when multi-sample transform is not used
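A sketch of the kind of collate function that has to handle both cases, assuming multi-sample transforms return a list of samples per dataset item; the name is illustrative:

```python
from torch.utils.data import default_collate


def flat_collate(batch: list):
    # multi-sample transforms yield a list of samples per dataset item;
    # flatten one level in that case, otherwise collate the batch as-is
    if batch and isinstance(batch[0], list):
        batch = [sample for item in batch for sample in item]
    return default_collate(batch)
```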

* ddp caching fixes

* fix caching when using combined loader

* typing fixes

* fix test dataset

* fix invert transform

* add ddp prepare flag for combined data module

* remove redundant operations

* filter empty detections

* pass trainer to underlying data modules in concatenated

* hack: add test dataloader for LiveCell dataset

* test datasets for livecell and ctmc

* fix merge error

* fix merge error

* fix mAP default for over 100 detections
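COCO-style mAP in torchmetrics only counts the top 100 detections per image by default, which under-reports crowded fields of view; a sketch of raising that limit (threshold values are illustrative):

```python
from torchmetrics.detection import MeanAveragePrecision

# raise the per-image detection cap from the COCO default of [1, 10, 100]
map_metric = MeanAveragePrecision(max_detection_thresholds=[1, 100, 1000])
```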

* bump torchmetrics

* fix combined loader training for virtual staining task

* fix non-combined data loader training

* add fcmae to graph script

* fix type hint

* format

* add back convolution option for fcmae head

---------

Co-authored-by: Eduardo Hirata-Miyasaki <[email protected]>
* rename file

* rename the architecture

* fix merge
* test on python 3.12

* black

* CI: only install CPU wheels for torch

* install torch first

* install torchvision together with torch

* bumping monai

---------

Co-authored-by: Eduardo Hirata-Miyasaki <[email protected]>
* add reference and minor edits

* add back abstracts
* fix architecture name

* add boxes to the diagram

* deleting some extra redundant lines

---------

Co-authored-by: Eduardo Hirata-Miyasaki <[email protected]>
* add the scale metadata from the input.

* add change to config file

* adding a try/except

* passing None for default behaviour and letting iohub handle the scale default.

* making the default metadata_store None and letting iohub handle the exceptions

* fix docstring

* fix type hint

* read input store directly

* revert change to the example config

---------

Co-authored-by: Ziwen Liu <[email protected]>
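A hedged sketch of the scale-metadata handling described in the commits above, assuming iohub's `open_ome_zarr` reader and the position-level `scale` property behave as in recent iohub releases; the path is illustrative:

```python
from iohub import open_ome_zarr

# read the physical scale (t, c, z, y, x) from the input position and pass it through,
# falling back to None so iohub applies its own default when metadata is missing
with open_ome_zarr("input.zarr/0/0/0", mode="r") as position:
    try:
        scale = position.scale
    except Exception:
        scale = None
```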
edyoshikun changed the base branch from main to chroma_tracers June 18, 2024 16:17
edyoshikun changed the base branch from chroma_tracers to main June 18, 2024 16:17
ziw-liu added 2 commits June 18, 2024 09:28
* rename file

* rename the architecture

* fix merge
* rename file

* rename the architecture

* fix merge
@edyoshikun
Contributor Author

Closing in favor of #91.
Merging to main seemed to be an issue here; diagnosing later.

edyoshikun closed this Jun 18, 2024
edyoshikun deleted the ttta branch June 18, 2024 23:11
Development

Successfully merging this pull request may close these issues.

Implement test time augmentations to avoid high frequency fluctuations
2 participants