
kentsislab/proteomegenerator3


Introduction

kentsislab/proteomegenerator3 is a bioinformatics pipeline for creating sample-specific proteogenomics search databases from long-read RNAseq data. It takes a samplesheet and aligned long-read RNAseq data as input, performs guided, de novo transcript assembly and ORF prediction, and produces a protein fasta file suitable for use with computational proteomics search platforms (e.g., FragPipe, DIA-NN).

  1. Pre-processing of aligned reads to create transcript read classes with bambu, which can be re-used in future analyses. Optional filtering:
    1. Filtering on MAPQ and read length with samtools
  2. Transcript assembly, quantification, and filtering with bambu, with the option to merge multiple samples into a unified transcriptome.
  3. ORF prediction with TransDecoder.
  4. Formatting of ORFs into a UniProt-style fasta file that can be used for computational proteomics searches with FragPipe, DIA-NN, or Spectronaut.
  5. Concatenation of the sample-specific proteome fasta produced in step 4 with a UniProt proteome of the user's choice, allowing spectra to compete between non-canonical and canonical proteoforms.
  6. Deduplication of sequences and basic statistics with seqkit
  7. Collation of software versions used with MultiQC

Usage

Note

If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data. This profile runs a minimal test dataset that completes in 5-10 minutes on most modern laptops.
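
For example, a quick setup check could look like the following, assuming Docker is available (the combined test,docker profile follows the usual nf-core convention; substitute singularity or your institutional profile as needed):

nextflow run kentsislab/proteomegenerator3 -r 1.1.0 \
   -profile test,docker \
   --outdir <OUTDIR>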

First, prepare a samplesheet with your input data that looks as follows:

samplesheet.csv:

sample,bam,rcFile,fusion_tsv
CONTROL_REP1,AEG588A1_S1_L002_R1_001.bam,,fusion_predictions.tsv

Each row represents a long-read RNAseq sample. The columns are as follows:

  1. sample: Sample name (required)
  2. bam: Aligned, sorted long-read RNAseq BAM file (required)
  3. rcFile: Optional Bambu read class file (.rds) from previous runs; use with --skip_preprocessing flag to speed up runtime and re-analyze previous samples
  4. fusion_tsv: Optional fusion predictions TSV file from ctat-lr-fusion
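
For example, a samplesheet describing two samples, one re-using a previously generated read class file, might look like this (file names are purely illustrative):

sample,bam,rcFile,fusion_tsv
TUMOR_REP1,tumor_rep1.sorted.bam,,tumor_rep1_fusions.tsv
TUMOR_REP2,tumor_rep2.sorted.bam,tumor_rep2_readclasses.rds,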

To produce the necessary files, we recommend using the nf-core/nanoseq pipeline for alignment and ctat-lr-fusion for fusion calling.

Now, you can run the pipeline using:

nextflow run kentsislab/proteomegenerator3 -r 1.1.0 \
   -profile <docker/singularity/.../institute> \
   --input samplesheet.csv \
   --fasta <REF_GENOME> \
   --gtf <REF_GTF> \
   --outdir <OUTDIR>

Where REF_GENOME and REF_GTF are the reference genome fasta and annotation GTF, respectively. These can be from GENCODE or Ensembl, but should match the reference used to align the data.
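
As an illustration, matching GENCODE human reference files could be obtained as follows; the release number and URLs are illustrative, so check the GENCODE site for the current release and make sure the files match those used for alignment:

# download a GENCODE human genome fasta and annotation GTF (release shown is an example)
wget https://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_46/GRCh38.primary_assembly.genome.fa.gz
wget https://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_46/gencode.v46.annotation.gtf.gz
gunzip GRCh38.primary_assembly.genome.fa.gz gencode.v46.annotation.gtf.gz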

Warning

Please provide pipeline parameters via the CLI or Nextflow -params-file option. Custom config files including those provided by the -c Nextflow option can be used to provide any configuration except for parameters; see docs.
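
For example, the required parameters could instead be collected in a YAML file and passed with -params-file (paths are illustrative):

params.yaml:

input: samplesheet.csv
fasta: /path/to/reference_genome.fa
gtf: /path/to/reference_annotation.gtf
outdir: results

nextflow run kentsislab/proteomegenerator3 -r 1.1.0 \
   -profile <docker/singularity/.../institute> \
   -params-file params.yaml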

Additional parameters

To see all optional pipeline parameters and their explanations, use the help menu:

nextflow run kentsislab/proteomegenerator3 -r 1.1.0 --help

These options can be set as command-line flags. For example:

nextflow run kentsislab/proteomegenerator3 -r 1.1.0 \
   -profile <docker/singularity/.../institute> \
   --input samplesheet.csv \
   --fasta <REF_GENOME> \
   --gtf <REF_GTF> \
   --outdir <OUTDIR> \
   --filter_reads

This will pre-filter the BAM file on MAPQ and read length before transcript assembly is performed.
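
The filtering thresholds themselves can also be adjusted from their defaults (see the mapq and read_len options below; this assumes each parameter is exposed as a flag of the same name), for example:

nextflow run kentsislab/proteomegenerator3 -r 1.1.0 \
   -profile <docker/singularity/.../institute> \
   --input samplesheet.csv \
   --fasta <REF_GENOME> \
   --gtf <REF_GTF> \
   --outdir <OUTDIR> \
   --filter_reads \
   --mapq 10 \
   --read_len 300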

As another example, you can skip multi-sample transcript merging and process each sample independently:

nextflow run kentsislab/proteomegenerator3 -r 1.1.0 \
   -profile <docker/singularity/.../institute> \
   --input samplesheet.csv \
   --fasta <REF_GENOME> \
   --gtf <REF_GTF> \
   --outdir <OUTDIR> \
   --skip_multisample

To run the latest development version, which may not be stable, you can use the -r dev -latest flags:

nextflow run kentsislab/proteomegenerator3 -r dev -latest --help

Some key options are highlighted below (a combined usage example follows the list):

  1. filter_reads: use this flag to pre-filter reads using mapq and read length
  2. mapq: min mapq for read filtering [default: 20]
  3. read_len: min read length for read filtering [default: 500]
  4. filter_acc_reads: filter out reads mapped to accessory chromosomes, which sometimes cause issues for bambu
  5. skip_preprocessing: use previously generated bambu read classes
  6. NDR: modulate bambu's novel discovery rate [default: 0.1]
  7. recommended_NDR: run bambu with recommended NDR (as determined by bambu's algorithm)
  8. skip_multisample: skip multi-sample transcript merging and process samples individually
  9. single_best_only: select only the single best ORF per transcript [default: false]
  10. uniprot_proteome: local path to a UniProt proteome fasta used for (i) BLAST-based ORF validation in the TransDecoder subworkflow and (ii) concatenation with the sample-specific proteome fasta (step 5 above).
  11. UPID: UniProt proteome ID (UPID) for automated download if no local path was provided with option 10 [default: UP000005640]
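
For example, a run that supplies a local UniProt proteome and lets bambu choose its recommended NDR might look like the following (the proteome path is illustrative):

nextflow run kentsislab/proteomegenerator3 -r 1.1.0 \
   -profile <docker/singularity/.../institute> \
   --input samplesheet.csv \
   --fasta <REF_GENOME> \
   --gtf <REF_GTF> \
   --outdir <OUTDIR> \
   --recommended_NDR \
   --uniprot_proteome /path/to/UP000005640.fasta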

Credits

kentsislab/proteomegenerator3 was originally written by Asher Preska Steinberg.

We thank the following people for their extensive assistance in the development of this pipeline:

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

Citations

If you use kentsislab/proteomegenerator3 for your analysis, please cite our manuscript:

End-to-end proteogenomics for discovery of cryptic and non-canonical cancer proteoforms using long-read transcriptomics and multi-dimensional proteomics

Katarzyna Kulej, Asher Preska Steinberg, Jinxin Zhang, Gabriella Casalena, Eli Havasov, Sohrab P. Shah, Andrew McPherson, Alex Kentsis.

bioRxiv. 2025 Aug 28. doi: 10.1101/2025.08.23.671943.

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

This pipeline uses code and infrastructure developed and maintained by the nf-core community, reused here under the MIT license.

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.
