Clean up title structures etc.
jfy133 committed Oct 6, 2023
1 parent c55890c commit 292ad6c
Showing 12 changed files with 66 additions and 88 deletions.
1 change: 0 additions & 1 deletion _quarto.yml
Original file line number Diff line number Diff line change
@@ -50,7 +50,6 @@ book:
chapters:
- accessing-ancientmetagenomic-data.qmd
- ancient-metagenomic-pipelines.qmd
- summary.qmd
- part: appendices.qmd
chapters:
- resources.qmd
16 changes: 7 additions & 9 deletions authentication-decontamination.qmd
@@ -28,7 +28,7 @@ There are additional software requirements for this chapter
Please see the relevant chapter section in [Before you start](/before-you-start.qmd) before continuing with this chapter.
:::

# Introduction
## Introduction

In ancient metagenomics we typically try to answer two questions: "Who is there?" and "How ancient?", meaning we would like to detect an organism and investigate whether this organism is ancient. There are three typical ways to identify the presence of an organism in a metagenomic sample:

@@ -74,7 +74,7 @@ The chapter has the following outline:
- Similarity to expected microbiome source (microbial source tracking)


# Simulated ancient metagenomic data
## Simulated ancient metagenomic data

In this chapter, we will use 10 ancient metagenomic samples from @Pochon2022-hj, pre-simulated with [gargammel](https://academic.oup.com/bioinformatics/article/33/4/577/2608651).

@@ -106,11 +106,11 @@ Now, after the basic data pre-processing has been done, we can proceed with vali

Here you will see a range of directories, each representing a different part of this tutorial, including one set of trimmed 'simulated' reads from @Pochon2022-hj in `rawdata/`.

# Genomic hit confirmation
## Genomic hit confirmation

Once an organism has been detected in a sample (via alignment, classification, or *de novo* assembly), one needs to take a closer look at multiple quality metrics in order to reliably confirm that the organism is not a false-positive detection and is of ancient origin. The methods used for this purpose can be divided into modern and ancient-specific validation criteria. Below, we will cover both.

## Modern validation criteria
## Modern genomic hit validation criteria

The modern validation methods aim to confirm the presence of an organism regardless of its ancient status. The main approaches include computing the evenness and breadth of coverage, assessing alignment quality, and monitoring the affinity of the DNA reads to the reference genome of the potential host.
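As a minimal illustration of the breadth-of-coverage idea, the following sketch computes the fraction of covered positions from a `samtools depth`-style table (reference name, position, depth; assuming output of `samtools depth -a`, which includes zero-coverage positions). The toy table here is hypothetical, not data from this chapter:

```bash
# Toy per-position depth table in `samtools depth -a` format:
# reference name, position, depth (tab-separated)
printf 'contig1\t1\t3\n'  > toy.depth
printf 'contig1\t2\t0\n' >> toy.depth
printf 'contig1\t3\t5\n' >> toy.depth
printf 'contig1\t4\t2\n' >> toy.depth
printf 'contig1\t5\t0\n' >> toy.depth

# Breadth of coverage = fraction of reference positions covered by >= 1 read
breadth=$(awk '$3 > 0 {covered++} END {printf "%.2f", covered / NR}' toy.depth)
echo "Breadth of coverage: ${breadth}"   # 3 of 5 positions covered
```

For real data you would replace the toy table with the output of `samtools depth -a sample.sorted.bam`, and ideally also inspect how evenly the covered positions are spread along the reference rather than looking at the single summary number.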

@@ -160,8 +160,6 @@ done

:::



Taxonomic k-mer-based classification of the ancient metagenomic reads can be done via KrakenUniq. However, as this requires a very large database file, the results of running KrakenUniq on the 10 simulated genomes are provided for you.
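As a hypothetical sketch of how KrakenUniq-style results can be screened, the following filters taxa by their unique k-mer counts, the key metric KrakenUniq reports for distinguishing genuine hits from spurious ones. The three-column table and the 1,000 k-mer threshold are illustrative assumptions, not this chapter's data or a recommended cut-off:

```bash
# Hypothetical KrakenUniq-style summary: taxon name, read count, unique k-mer count
printf 'Yersinia pestis\t5200\t48000\n'  > toy_krakenuniq.tsv
printf 'Homo sapiens\t120\t300\n'       >> toy_krakenuniq.tsv
printf 'Escherichia coli\t900\t15000\n' >> toy_krakenuniq.tsv

# Hits supported by many unique k-mers are less likely to be spurious;
# keep taxa with at least 1,000 unique k-mers (illustrative threshold)
awk -F'\t' '$3 >= 1000 {print $1}' toy_krakenuniq.tsv > confident_taxa.txt
cat confident_taxa.txt
```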

```bash
@@ -359,11 +357,11 @@ Another important way to detect reads that cross-map between related species is

In contrast, a large number of multi-allelic sites indicates that the assigned reads originate from more than one species or strain, which can result in symmetric allele frequency distributions (e.g., if two species or strains are present in equal abundance) (panel g) or asymmetric distributions (e.g., if two species or strains are present in unequal abundance) (panel h). A large number of mis-assigned reads from closely related species can result in a large number of multi-allelic sites with low frequencies of the derived allele (panel i). The situations (g-i) correspond to incorrect assignment of the reads to the reference. Please also check the corresponding "Bad alignments" IGV visualization to the right in the figure above.
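To make the multi-allelic site idea concrete, here is a sketch that counts sites at which more than one allele is observed, using a simplified, hypothetical per-site allele-count table (position, reference allele count, alternative allele count); a real analysis would work from a pileup or VCF instead:

```bash
# Hypothetical per-site allele counts: position, ref allele count, alt allele count
printf '101\t10\t0\n'  > toy.alleles
printf '102\t7\t5\n'  >> toy.alleles
printf '103\t0\t9\n'  >> toy.alleles
printf '104\t6\t6\n'  >> toy.alleles

# A site is multi-allelic here if both the ref and an alt allele are observed
multi=$(awk '$2 > 0 && $3 > 0 {n++} END {print n + 0}' toy.alleles)
echo "Multi-allelic sites: ${multi}"
```

A large fraction of such sites (here 2 of 4) would be a warning sign that the assigned reads derive from more than one species or strain.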

## Ancient-specific validation criteria
## Ancient-specific genomic hit validation criteria

In contrast to modern genomic hit validation criteria, the ancient-specific validation methods concentrate on DNA degradation and damage patterns as hallmark signs of ancient DNA. Below, we will discuss the deamination profile, read length distribution, and post mortem damage (PMD) score metrics that provide good confirmation of the ancient origin of a detected organism.

### Ancient status
### Degradation patterns

Checking evenness of coverage and alignment quality can help us to make sure that the organism in question is really present in the metagenomic sample. However, we still need to address the question "How ancient?". For this purpose we need to compute the **deamination profile** and **read length distribution** of the aligned reads in order to demonstrate that they show damage patterns and are sufficiently fragmented, which would be good evidence of the ancient origin of the detected organisms.
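The read length distribution part can be sketched with standard shell tools. The following tabulates read lengths from a toy FASTQ file (hypothetical reads; for real data you would extract the lengths of the aligned reads from a BAM file, e.g. via `samtools`):

```bash
# Toy FASTQ (four lines per record); the sequence is line 2 of each record
cat > toy_lengths.fastq <<'EOF'
@r1
ACGTACGTAC
+
IIIIIIIIII
@r2
ACGTA
+
IIIII
@r3
ACGTACGTAC
+
IIIIIIIIII
EOF

# Tabulate the read length distribution as: length <TAB> count
awk 'NR % 4 == 2 {len[length($0)]++} END {for (l in len) print l "\t" len[l]}' \
    toy_lengths.fastq | sort -n > length_dist.tsv
cat length_dist.tsv
```

A distribution dominated by short fragments (roughly 30-80 bp) is what one would typically expect from degraded ancient DNA.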

@@ -437,7 +435,7 @@ pydamage analyze -w 30 -p 14 filtered.sorted.bam
```
:::

# Microbiome contamination correction
## Microbiome contamination correction

Modern contamination can severely bias ancient metagenomic analysis. Ancient contamination, i.e. contamination that entered *post mortem*, can also potentially lead to false biological interpretations. Therefore, much effort in the ancient metagenomics field is directed at establishing methodology for the identification of contaminants. Among these methods, the use of negative (blank) control samples is perhaps the most reliable and straightforward. Additionally, one often performs microbial source tracking to predict the environment of origin (including contamination environments) for ancient metagenomic samples.

2 changes: 1 addition & 1 deletion bare-bones-bash.qmd
@@ -745,7 +745,7 @@ To delete the conda environment
conda remove --name bare-bones-bash --all -y
```

### Conclusion
## Summary

You should now know the basics of working on the command line, like:

2 changes: 1 addition & 1 deletion citing-this-book.qmd
@@ -6,7 +6,7 @@ The source material for this book is located on GitHub:

If you wish to cite this book, please use the following bibliographic information

> James A. Fellows Yates, Christina Warinner, Alina Hiß, Arthur Kocher, Clemens Schmid, Irina Velsko, Maxime Borry, Megan Michel, Nikolay Oskolkov, Sebastian Duchene, Thiseas Lamnidis, Aida Andrades Valtueña, Alexander Herbig, & Alexander Hübner. (2023). Introduction to Ancient Metagenomics. In Introduction to Ancient Metagenomics (Version 2022). Zenodo. DOI: [10.5281/zenodo.8027281](https://doi.org/10.5281/zenodo.8027281)
> James A. Fellows Yates, Christina Warinner, Alina Hiß, Arthur Kocher, Clemens Schmid, Irina Velsko, Maxime Borry, Megan Michel, Nikolay Oskolkov, Sebastian Duchene, Thiseas Lamnidis, Aida Andrades Valtueña, Alexander Herbig, Alexander Hübner, Kevin Nota, Robin Warner, & Meriam Guellil. (2023). Introduction to Ancient Metagenomics (Edition 2023). Zenodo. DOI: [10.5281/zenodo.8027281](https://doi.org/10.5281/zenodo.8027281)
<!-- TODO Update authors after each summer school -->

32 changes: 11 additions & 21 deletions denovo-assembly.qmd
@@ -72,9 +72,7 @@ Around 2015, a technical revolution started when the first programs, e.g. MEGAHI

The technical advancement of being able to perform *de novo* assembly on metagenomic samples led to an explosion of studies analysing samples that had previously been considered almost impossible to study. For researchers working with ancient DNA, the immediate question arises: can we apply the same methods to ancient DNA data? In this practical course, we will walk through all the steps necessary to successfully perform _de novo_ assembly from ancient DNA metagenomic sequencing data and show you what you can do once you have obtained the data.

## Practical course

### Sample overview
## Sample overview

For this practical course, I selected a palaeofaeces sample from the study by @Maixner2021, who generated deep metagenomic sequencing data for four palaeofaeces samples that were excavated from an Austrian salt mine in Hallstatt and were associated with the Celtic Iron Age. We will focus on the youngest sample, **2612**, which was dated to be just a few hundred years old (@fig-denovoassembly-maixner).

@@ -113,7 +111,7 @@ to find this out.
There are about 3.25 million paired-end sequences in these files.
:::
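As a general recipe, read counts like the one above can be obtained with standard shell tools: a FASTQ record is always four lines, so the read count is the line count divided by four. A minimal sketch on a hypothetical toy file (for the real gzipped files, swap `cat` for `zcat`):

```bash
# Toy FASTQ with three reads (a FASTQ record is always four lines)
cat > toy.fastq <<'EOF'
@read1
ACGT
+
IIII
@read2
ACGTACGT
+
IIIIIIII
@read3
ACG
+
III
EOF

# Read count = line count / 4; use zcat instead of cat for .fastq.gz files
n_reads=$(( $(cat toy.fastq | wc -l) / 4 ))
echo "Number of reads: ${n_reads}"
```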

### Preparing the sequencing data for _de novo_ assembly
## Preparing the sequencing data for _de novo_ assembly

Before running the actual assembly, we need to pre-process our sequencing data. Typical pre-processing steps include the trimming of adapter sequences and barcodes from the sequencing data and the removal of host or contaminant sequences, such as the bacteriophage PhiX, which is commonly sequenced as a quality control.

@@ -162,7 +160,7 @@ The sequencing data for the sample **2612** were generated across eight differen
Overall, we have almost no short DNA molecules (< 50 bp) and most DNA molecules are longer than 80 bp. Additionally, there were > 200,000 read pairs that could not be overlapped. Therefore, we can conclude that sample **2612** is a moderately degraded ancient DNA sample with many long DNA molecules.
:::

### _De novo_ assembly
## _De novo_ assembly

Now, we will actually perform the _de novo_ assembly on the sequencing data. For this, we will use the program MEGAHIT [@LiMegahit2015], a _de Bruijn_-graph assembler.

@@ -258,7 +256,7 @@ We standardised this approach and added it to the Nextflow pipeline nf-core/mag
While MEGAHIT is able to assemble ancient metagenomic sequencing data with high amounts of ancient DNA damage, it tends to introduce damage-derived T and A alleles into the contig sequences instead of the true C and G alleles. This can lead to a higher number of nonsense mutations in coding sequences. We strongly advise you to correct such mutations, e.g. by using the ancient DNA workflow of the Nextflow pipeline [nf-core/mag](https://nf-co.re/mag).
:::

### Aligning the short-read data against the contigs
## Aligning the short-read data against the contigs

After the assembly, the next essential step required for many subsequent analyses is the alignment of the short-read sequencing data back to the assembled contigs.

@@ -303,7 +301,7 @@ samtools index alignment/2612.sorted.calmd.bam
```
:::

### Reconstructing metagenome-assembled genomes
## Reconstructing metagenome-assembled genomes

There are typically two major approaches to studying the biological diversity of samples using the results obtained from _de novo_ assembly. The first is to reconstruct metagenome-assembled genomes (MAGs) and study the species diversity.

@@ -340,9 +338,6 @@ Make sure you have followed the instructions for setting up the additional softw

To skip the first steps of metaWRAP and start straight with the binning, we need to create the folder structure and files that metaWRAP expects:


<!-- UP TI HERRE -->

```{bash, eval = F}
mkdir -p metawrap/INITIAL_BINNING/2612/work_files
ln -s $PWD/alignment/2612.sorted.calmd.bam \
@@ -420,7 +415,6 @@ tar xvf checkM/checkm_data_2015_01_16.tar.gz -C checkM
echo checkM | checkm data setRoot checkM
```


Afterwards, we can execute metaWRAP's bin refinement module:

```{bash, eval = F}
@@ -444,9 +438,7 @@ conda deactivate
The latter step will produce a summary file, `metawrap_50_10_bins.stats`, that lists all retained bins and some key characteristics, such as the genome size, the completeness estimate, and the contamination estimate. The latter two can be used to assign a quality score according to the Minimum Information for MAG (MIMAG; see info box).
:::


::: {.callout-note title="The Minimum Information for MAG (MIMAG)"}

The two most common metrics to evaluate the quality of MAGs are:

- the **completeness**: how many of the expected lineage-specific single-copy marker genes were present in the MAG?
@@ -455,13 +447,11 @@
These metrics are usually calculated using the marker-gene catalogue of checkM [@Parks2015], although there are also estimates from other tools such as BUSCO [@Manni2021], GUNC [@Orakov2021] or checkM2 [@Chklovski2022].

Depending on the estimates of completeness and contamination plus the presence of RNA genes, MAGs are assigned to a quality category following the Minimum Information for MAG criteria [@Bowers2017]. You can find the overview [here](https://www.nature.com/articles/nbt.3893/tables/1).

:::
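The category assignment described above can be sketched as a simple filter on a completeness/contamination table. The thresholds below follow the commonly cited MIMAG cut-offs (>90% / <5% for high quality, ≥50% / <10% for medium quality), but the table itself is hypothetical and the sketch deliberately ignores the additional RNA-gene requirements of the full criteria:

```bash
# Hypothetical bin statistics: bin name, completeness (%), contamination (%)
printf 'bin.1\t98.2\t1.3\n'  > toy_bins.tsv
printf 'bin.2\t73.5\t4.0\n' >> toy_bins.tsv
printf 'bin.3\t55.0\t12.0\n' >> toy_bins.tsv
printf 'bin.4\t40.1\t2.2\n' >> toy_bins.tsv

# Simplified MIMAG-style categories (RNA-gene requirements ignored):
#   high   : >  90% completeness and <  5% contamination
#   medium : >= 50% completeness and < 10% contamination
#   low    : everything else
awk 'BEGIN {FS = OFS = "\t"}
     $2 > 90 && $3 < 5   {print $1, "high-quality";   next}
     $2 >= 50 && $3 < 10 {print $1, "medium-quality"; next}
                         {print $1, "low-quality"}' toy_bins.tsv > bin_quality.tsv
cat bin_quality.tsv
```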

As these two steps run for a rather long time and need a large amount of memory and disk space, I have provided the results of metaWRAP's bin refinement. You can find the file here: `/<PATH>/<TO>/denovo-assembly/metawrap_50_10_bins.stats`. Be aware that these results are based on the bin refinement of the results of three binning tools and include CONCOCT.

::: {.callout-tip title="Question" appearance="simple"}

**How many bins were retained after the refinement with metaWRAP? How many high-quality and medium-quality MAGs did the refinement yield following the MIMAG criteria?**

Hint: You can more easily visualise tables on the terminal using the Python program `visidata`. You can open a table using `vd -f tsv /<PATH>/<TO>/denovo-assembly/metawrap_50_10_bins.stats` (press <kbd>ctrl</kbd>+<kbd>q</kbd> to exit). Besides separating the columns nicely, it allows you to conveniently perform many operations, like sorting. Check the cheat sheet [here](https://jsvine.github.io/visidata-cheat-sheet/en/).
@@ -474,7 +464,7 @@

:::

### Taxonomic assignment of contigs
## Taxonomic assignment of contigs

What should we do when we simply want to know to which taxon a certain contig most likely belongs?

@@ -494,7 +484,7 @@

As for any task that involves the alignment of sequences against a reference database, the chosen reference database should fit the sequences you are searching for. If your reference database does not capture the diversity of your samples, you will not be able to assign a subset of the contigs. There is also a trade-off between a large reference database that contains all sequences and its memory requirement. @Wright2023 elaborated on this quite extensively when comparing Kraken2 against MetaPhlAn.

While all of these tools can do the job, I typically prefer to use the program MMSeqs2 [@Steinegger2017] because it comes along with a very fast algorithm based on aminoacid sequence alignment and implements a lowest common ancestor (LCA) algorithm (@fig-denovoassembly-mmseqs2). Recently, they implemented a _taxonomy_ workflow [@Mirdita2021] that allows to efficiently assign contigs to taxons. Luckily, it comes with multiple pre-computed reference databases, such as the GTDB v207 reference database [@Parks2020], and therefore it is even more accessible for users.
While all of these tools can do the job, I typically prefer to use the program MMSeqs2 [@Steinegger2017] because it comes with a very fast algorithm based on amino acid sequence alignment and implements a lowest common ancestor (LCA) algorithm (@fig-denovoassembly-mmseqs2). Recently, a _taxonomy_ workflow [@Mirdita2021] was implemented that allows contigs to be efficiently assigned to taxa. Luckily, it comes with multiple pre-computed reference databases, such as the GTDB v207 reference database [@Parks2020], which makes it even more accessible for users.

![Scheme of the _taxonomy_ workflow implemented into MMSeqs2. Adapted from @Mirdita2021, Fig. 1.](assets/images/chapters/denovo-assembly/MMSeqs2_classify_Fig1.jpeg){#fig-denovoassembly-mmseqs2}

@@ -557,7 +547,7 @@ From the 3,523 assigned contigs, 2,013 were assigned to the rank "species", whil
Most contigs were assigned to the archaeal species _Halococcus morrhuae_ (n=386), followed by the bacterial species _Olsenella E sp003150175_ (n=298) and _Collinsella sp900768795_ (n=186).
:::

### Taxonomic assignment of MAGs
## Taxonomic assignment of MAGs

MMSeqs2's _taxonomy_ workflow is very useful for classifying all contigs taxonomically. However, how would we determine which species we have reconstructed by binning our contigs?

@@ -633,7 +623,7 @@ We would expect all five species to be present in our sample. All MAGs but `bin.

:::

### Evaluating the amount of ancient DNA damage
## Evaluating the amount of ancient DNA damage

One of the common questions that remain at this point of our analysis is whether the contigs that we assembled show evidence for the presence of ancient DNA damage. If yes, we could argue that these microbes are indeed ancient, particularly when their DNA fragment length distribution is rather short, too.

@@ -667,7 +657,7 @@ From the 3,606 contigs, pyDamage inferred a q-value, i.e. a p-value corrected fo
This is also reflected in the MAGs. Although four of the five MAGs were human gut microbiome taxa, they did not show strong evidence of ancient DNA damage. This suggests that the sample is too young and well preserved.
:::
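Filtering contigs by such a q-value can be sketched as follows; the column layout of the toy CSV is a simplified, hypothetical stand-in for pyDamage's real output, and the 0.05 cut-off is simply the conventional significance threshold:

```bash
# Hypothetical, simplified pyDamage-style results table
cat > toy_pydamage.csv <<'EOF'
reference,pred_accuracy,qvalue
contig_1,0.91,0.001
contig_2,0.45,0.430
contig_3,0.88,0.012
EOF

# Keep contigs whose damage model is significant at q < 0.05
awk -F',' 'NR > 1 && $3 < 0.05 {print $1}' toy_pydamage.csv > damaged_contigs.txt
n_damaged=$(wc -l < damaged_contigs.txt)
echo "Contigs with significant damage: ${n_damaged}"
```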

### Annotating genomes for function
## Annotating genomes for function

The second approach to studying the biological diversity of samples using the assembly results is to compare the reconstructed genes and their functions with each other.

Expand Down Expand Up @@ -753,7 +743,7 @@ To delete the conda environment
conda remove --name denovo-assembly --all -y
```

### Summary
## Summary

In this practical course you have gone through all the important steps that are necessary for _de novo_ assembling ancient metagenomic sequencing data to obtain contiguous DNA sequences with little error. Furthermore, you have learned how to cluster these sequences into bins without using any references and how to refine them based on lineage-specific marker genes. For these refined bins, you have evaluated their quality regarding common standards set by the scientific community and assigned the MAGs to its most likely taxon. Finally, we learned how to infer the presence of ancient DNA damage and annotate them for RNA genes and protein-coding sequences.

23 changes: 10 additions & 13 deletions functional-profiling.qmd
@@ -3,28 +3,27 @@ title: Functional Profiling
author: Irina Velsko, James A. Fellows Yates
---

::: {.callout-warning}
::: {.callout-important}
This chapter has not been updated since the 2022 edition of this book.
:::

::: {.callout-tip}
For this chapter's exercises, if not already performed, you will need to create the [conda environment](before-you-start.qmd#creating-a-conda-environment) from the `yml` file in the following [link](https://github.com/SPAAM-community/intro-to-ancient-metagenomics-book/raw/main/assets/envs/functional-profiling.yml) (right click and save as to download), and once created, activate the environment with:
For this chapter's exercises, if not already performed, you will need to download the chapter's dataset, decompress the archive, and create and activate the conda environment.

To do this, use `wget` (or right click and save) to download this Zenodo archive: [10.5281/zenodo.6983188](https://doi.org/10.5281/zenodo.6983188), and unpack it

```bash
conda activate functional-profiling
tar xvf 5c-functional-genomics.tar.gz
cd 5c-functional-genomics/
```

To download the data for this chapter, please download the following archive, extract the tar, and change into the directory.

For example
You can then create and subsequently activate the environment with

```bash
wget -P . -O functional-profiling.tar.gz https://zenodo.org/record/6983189/files/5c-functional-genomics.tar.gz
tar -xzf functional-profiling.tar.gz
cd functional-profiling/
conda env create -f day5.yml
conda activate phylogenomics-functional
```
:::

:::{.callout-note}
The above conda environment _does not_ include HUMAnN3 due to conflicts with the R packages in the environment.

@@ -39,8 +38,6 @@ conda create -n humann3 -c bioconda humann

## Preparation



Open R Studio from within the conda environment

```bash
@@ -72,7 +69,7 @@ Running HUMAnN3 module requires about 72 GB of memory because it has to load a
If you have sufficient computational memory resources, you can run the following steps yourself.

We will not run HUMAnN3 here, as it requires very large databases and takes a long time to run; we have already prepared the output for you.

:::
::: {.callout-warning title="Example commands - do not run!" collapse="true"}

```bash
2 changes: 1 addition & 1 deletion genome-mapping.qmd
@@ -452,7 +452,7 @@ To delete the conda environment
conda remove --name genome-mapping --all -y
```

### Conclusions
## Summary

- Mapping DNA sequencing reads to a reference genome is a complex procedure that requires multiple steps.
- Mapping results are the basis for genotyping, i.e. the detection of differences to the reference.