
CARE Logo

Completeness-Aware Reconstruction Enhancement

Diffusion-based, anatomy-aware enhancement of sparse-view CT reconstruction.

CARE visualization

The Completeness-Aware Reconstruction Enhancement (CARE) framework addresses a critical gap in sparse-view CT reconstruction by shifting the evaluation from traditional pixel-wise metrics to anatomy-aware metrics derived from automated structural segmentation. By incorporating segmentation-informed losses into latent diffusion models, CARE significantly improves the reconstruction fidelity of clinically relevant anatomical structures, ensuring that critical diagnostic features are preserved under highly limited view conditions.
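
As a concrete illustration, an anatomy-aware metric can be as simple as the per-organ Dice overlap between masks segmented from the ground-truth CT and from the reconstructed CT, in contrast to volume-wide pixel metrics. The sketch below is only illustrative: the mask file names, organ list, and directory layout are assumptions, not the repository's actual evaluation code.

# Illustrative sketch of an anatomy-aware metric (per-organ Dice), assuming
# organ masks have already been produced by an automated segmentation model.
# File names and organ list are hypothetical, not this repository's API.
import numpy as np
import nibabel as nib

def dice(a, b):
    # Dice coefficient between two binary masks.
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def anatomy_aware_scores(gt_seg_dir, recon_seg_dir, organs=("liver", "pancreas", "kidney_left")):
    # Per-organ Dice between masks derived from the ground-truth CT and the reconstruction.
    scores = {}
    for organ in organs:
        gt = nib.load(f"{gt_seg_dir}/{organ}.nii.gz").get_fdata() > 0.5
        rc = nib.load(f"{recon_seg_dir}/{organ}.nii.gz").get_fdata() > 0.5
        scores[organ] = dice(gt, rc)
    return scores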

Paper

Are Pixel-Wise Metrics Reliable for Sparse-View Computed Tomography Reconstruction?
Tianyu Lin¹, Xinran Li¹, Chuntung Zhuang¹, Qi Chen¹, Yuanhao Cai¹, Kai Ding², Alan L. Yuille¹, and Zongwei Zhou¹,*
¹Johns Hopkins University, ²Johns Hopkins Medicine

We have documented common questions for the paper in Frequently Asked Questions (FAQ).

Installation

Create a conda environment via:

conda create -n care python=3.11 -y
conda activate care

Then install all requirements using:

pip install -r requirements.txt

We have documented detailed steps for downloading the model checkpoints.

CARE as a CT Reconstruction Enhancement Baseline

Pretrained Autoencoder Checkpoint
huggingface-cli download TianyuLin/CARE --include="autoencoder/*" --local-dir="./STEP1-AutoEncoderModel/klvae/"
Pretrained Diffusion Model Checkpoint
huggingface-cli download TianyuLin/CARE --include="diffusion/*" --local-dir="./STEP2-DiffusionModel/"
Pretrained CARE Model Checkpoints
huggingface-cli download TianyuLin/CARE --include="CARE/*" --local-dir="./STEP3-CAREModel/"

Note

The following script is designed for the nine reconstruction methods mentioned in the paper: three traditional reconstruction methods (FDK, SART, ASD-POCS), five NeRF-based reconstruction methods (InTomo, NeRF, TensoRF, NAF, SAX-NeRF) using the SAX-NeRF Repo, and one Gaussian-Splatting-based method (R2-GS) based on its own R2-GS Repo. Feel free to edit it to fit your needs.

First, based on the CT reconstruction results from the SAX-NeRF Repo and the R2-GS Repo, use the provided scripts to format the dataset (a reference sketch of the pixel-wise metrics appears after the dataset layout below):

cd ./ReconstructionPipeline/  # working directory
python -W ignore step1_softlink_BDMAP_O.py   # place the ground truth CT and segmentation
python -W ignore step2_extractAndpixelMetric.py # calculate pixel-wise metrics (SSIM and PSNR)

The resulting dataset format is:

└── BDMAP_O/                      # ground truth folder
    └── BDMAP_O0000001
        └── ct.nii.gz   # the ground truth CT scan of this case
└── BDMAP_O_methodName_numViews/  # reconstruction results folder
    └── BDMAP_O0000001
        └── ct.nii.gz   # the reconstructed CT from `methodName` method with `numViews` X-rays
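
For reference, the pixel-wise metrics reported by step2_extractAndpixelMetric.py correspond to standard PSNR and SSIM between the ground-truth and reconstructed volumes. A minimal sketch follows, assuming nibabel and scikit-image are available; the HU clipping window and normalization are assumptions, and the actual script may preprocess differently.

# Minimal sketch of pixel-wise metrics (PSNR, SSIM) between a ground-truth and
# a reconstructed CT volume; the preprocessing choices here are assumptions.
import numpy as np
import nibabel as nib
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt = nib.load("BDMAP_O/BDMAP_O0000001/ct.nii.gz").get_fdata()
rc = nib.load("BDMAP_O_nerf_50/BDMAP_O0000001/ct.nii.gz").get_fdata()

# Clip to a common HU window and rescale to [0, 1] before comparison (assumed window).
lo, hi = -1000.0, 1000.0
gt = (np.clip(gt, lo, hi) - lo) / (hi - lo)
rc = (np.clip(rc, lo, hi) - lo) / (hi - lo)

print("PSNR:", peak_signal_noise_ratio(gt, rc, data_range=1.0))
print("SSIM:", structural_similarity(gt, rc, data_range=1.0))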

Run inference with the CARE model via:

cd ./STEP3-CAREModel
bash inference.sh nerf_50
# e.g., enhance the NeRF baseline reconstructed from 50 views (argument format: methodName_numViews)

Acknowledgement

This work was supported by the Lustgarten Foundation for Pancreatic Cancer Research and the Patrick J. McGovern Foundation Award. We would like to thank the Johns Hopkins Research IT team in IT@JH for their support and infrastructure resources where some of these analyses were conducted, especially the DISCOVERY HPC. We thank Hamed Hooshangnejad, Heng Li, Wenxuan Li, and Guofeng Zhang for their helpful suggestions throughout the project.
