nf-core/magmap
Best-practice analysis pipeline for mapping reads to a (large) collection of genomes
Introduction
This document describes the output produced by the pipeline.
The directories listed below will be created in the results directory after the pipeline has finished. All paths are relative to the top-level results directory.
Pipeline overview
The pipeline is built using Nextflow and the results are organized as follows:
- Summary tables - Tab-separated tables ready for further analysis in tools like R and Python
- Module output
- Preprocessing
- FastQC - Read quality control
- Trim galore! - Primer trimming
- MultiQC - Aggregate report describing results
- BBduk - Filter out sequences from samples that match sequences in a user-provided fasta file (optional)
- Community composition - General analysis of the taxonomic composition of the communities in the samples using the k-mer-based Kraken2 tool.
- Filtering genomes - Generate a list of genomes that will be used for the mapping
- Sourmash - Output from Sourmash filtering of genomes.
- Prokka - Output from Prokka
- Genome fetching - Genomes fetched from remote sources
- Quantification of genome features
- BBmap - Output from BBmap
- FeatureCounts - Output from FeatureCounts
- Pipeline information - Report metrics generated during the workflow execution
Summary tables
Consistently named and formatted output tables in TSV format ready for further analysis.
Output files
summary_tables/
- magmap.overall_stats.tsv.gz: Overall statistics from the pipeline, e.g. number of reads, number of called ORFs, number of reads mapping back to contigs/ORFs etc.
- magmap.<FEATURE>.counts.tsv.gz: Read counts for FEATURE per ORF and sample.
- magmap.genome_metadata.tsv.gz: Genome metadata from GTDB, GTDB-Tk and CheckM/CheckM2 if provided by the user.
- magmap.genomes2orfs.tsv.gz: Translation table from ORF identifiers to genome identifiers.
- magmap.prokka-annotations.tsv.gz: Annotation details extracted from GFF files.
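These tables can be loaded directly into e.g. pandas for downstream analysis. A minimal sketch, assuming a feature called CDS; the column names orf, genome, sample and count are hypothetical placeholders, so check the headers of your own files:

```python
import pandas as pd

# Load a per-ORF count table and the ORF-to-genome translation table.
# pandas handles the gzip compression transparently.
counts = pd.read_csv("summary_tables/magmap.CDS.counts.tsv.gz", sep="\t")
orf2genome = pd.read_csv("summary_tables/magmap.genomes2orfs.tsv.gz", sep="\t")

# Column names ("orf", "genome", "sample", "count") are hypothetical;
# adjust them to the actual headers in your tables.
merged = counts.merge(orf2genome, on="orf")
per_genome = merged.groupby(["genome", "sample"], as_index=False)["count"].sum()
print(per_genome.head())
```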
Module output
Preprocessing
FastQC
FastQC gives general quality metrics about your sequenced reads. It provides information about the quality score distribution across your reads, per base sequence content (%A/T/G/C), adapter contamination and overrepresented sequences. For further reading and documentation see the FastQC help pages. FastQC is run as part of Trim galore!, so its output can be found in the Trim galore! folder.
Output files
trimgalore/fastqc/
- *_fastqc.html: FastQC report containing quality metrics for your untrimmed raw fastq files.
Trim galore!
Trim galore! trims primer sequences from sequencing reads. Primer sequences are non-biological sequences that often introduce point mutations that do not reflect the sample sequences. This is especially true for degenerate PCR primers.
Output files
trimgalore/: directory containing log files with retained reads, trimming percentage, etc. for each sample.
- *trimming_report.txt: report of read numbers that pass Trim galore!.
MultiQC
MultiQC is a visualization tool that generates a single HTML report summarising all samples in your project. Most of the pipeline QC results are visualised in the report and further statistics are available in the report data directory.
Results generated by MultiQC collate pipeline QC from supported tools e.g. FastQC. The pipeline has special steps which also allow the software versions to be reported in the MultiQC output for future traceability. For more information about how to use MultiQC reports, see http://multiqc.info.
Output files
multiqc/
- multiqc_report.html: a standalone HTML file that can be viewed in your web browser.
- multiqc_data/: directory containing parsed statistics from the different tools used in the pipeline.
- multiqc_plots/: directory containing static images from the report in various formats.
The FastQC plots displayed in the MultiQC report show untrimmed reads. They may contain adapter sequence and potentially regions with low quality.
BBduk
BBduk is a filtering tool that removes specific sequences from the samples using a reference fasta file. BBduk is a built-in tool from BBmap.
Output files
bbmap/
- *.bbduk.log: a text file with the results from the BBduk analysis. The number of filtered reads can be seen in this log.
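If you want to tabulate filtering across many samples, the logs can be parsed programmatically. A minimal sketch, assuming log lines of the form "Input: ... reads" and "Result: ... reads"; adjust the patterns to the actual wording in your *.bbduk.log files:

```python
import re

# Pull read counts out of a BBduk log. The exact wording of the log lines
# ("Input:", "Contaminants:", "Result:") is an assumption here; check your
# own *.bbduk.log files and adapt the pattern if needed.
def bbduk_read_counts(path):
    counts = {}
    pattern = re.compile(r"^(Input|Contaminants|Result):\s+(\d+)\s+reads")
    with open(path) as fh:
        for line in fh:
            m = pattern.match(line.strip())
            if m:
                counts[m.group(1).lower()] = int(m.group(2))
    return counts

# Example (hypothetical file name): fraction of reads removed by filtering.
# stats = bbduk_read_counts("bbmap/sample1.bbduk.log")
# removed = 1 - stats["result"] / stats["input"]
```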
Community composition
If not skipped with the --skip_kraken2 parameter, Kraken2 will be called to provide an overview of the taxonomic community composition of the samples.
In addition, the Taxburst program will be called to produce an HTML file with a "Krona" diagram.
Output files
kraken2/
- *.txt: Kraken2 output in text format
taxburst/
- *.html: Krona diagrams
Filtering genomes
The Sourmash program can be used to prefilter genomes so that only genomes likely to be represented among the reads are passed to mapping.
In addition, Sourmash can be used to fetch remote genomes, see usage docs.
No output from Sourmash is enabled by default; the output is only used to select genomes for further processing. Use --sourmash_save_sourmash to copy output files.
Output files
sourmash/
- *: Output from Sourmash
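The prefiltering rests on k-mer containment: a genome is kept when a large enough fraction of its k-mers also occurs in the reads. A toy pure-Python illustration of that idea (not Sourmash's actual FracMinHash implementation, which subsamples k-mers to stay fast at this scale; the 0.1 threshold is an arbitrary example):

```python
# Illustration of the k-mer containment idea behind genome prefiltering.
def kmers(seq, k=31):
    """All k-mers of a sequence as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def containment(genome_seq, read_kmers, k=31):
    """Fraction of the genome's k-mers that also occur in the reads."""
    gk = kmers(genome_seq, k)
    return len(gk & read_kmers) / len(gk) if gk else 0.0

# Genomes whose containment exceeds a threshold are passed on to mapping.
# read_kmers = set().union(*(kmers(r) for r in reads))
# selected = [g for g in genomes if containment(g.seq, read_kmers) > 0.1]
```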
Prokka
Prokka will be used to identify ORFs in any genomes for which a gff file is not provided.
In addition to calling ORFs (done with Prodigal), Prokka also functionally annotates the ORFs.
To make it easier to reuse already annotated genomes in other projects, output from Prokka is directed to subdirectories of the directory specified with the --prokka_store_dir parameter (by default prokka in the working directory for the pipeline run).
Genomes already found in the specified directory will be skipped by the Prokka step.
Output files
prokka/
<accno>/
- *.ffn: nucleotide fasta file output
- *.faa: amino acid fasta file output
- *.gff: genome feature file output
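The per-genome GFF files can be mined for annotations, similar in spirit to the magmap.prokka-annotations.tsv.gz summary table. A minimal sketch; GFF3 is a standard nine-column format, but the ID and product attribute keys and the example path are assumptions to verify against your own files:

```python
# Extract ORF identifiers and products from a Prokka GFF file.
def gff_annotations(path):
    with open(path) as fh:
        for line in fh:
            if line.startswith("##FASTA"):
                break  # Prokka appends the genome sequence after this marker
            if line.startswith("#"):
                continue
            fields = line.rstrip("\n").split("\t")
            if len(fields) != 9 or fields[2] != "CDS":
                continue
            # Attributes column: semicolon-separated key=value pairs.
            attrs = dict(kv.split("=", 1) for kv in fields[8].split(";") if "=" in kv)
            yield attrs.get("ID"), attrs.get("product")

# The path below is a hypothetical example:
# for orf, product in gff_annotations("prokka/GCA_000000000.1/PROKKA.gff"):
#     print(orf, product)
```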
Genome fetching
When the pipeline is run with --skip_sourmash false and one or more index files are passed to --indexes, remote genomes will be identified and downloaded to the directory specified by --genome_store_dir (by default genomes in the working directory for the pipeline run).
Quantification of genome features
BBmap
Only logs are saved by default from the BBmap step. To save the .bam files, use --bbmap_save_bam; to save the index, use --bbmap_save_index.
Output files
bbmap/
bam/
- <SAMPLE>.bam: bam file for SAMPLE
logs/
- <SAMPLE>.bbmap.log: BBmap log for SAMPLE
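If BAM files are saved, per-genome mapping counts can be extracted with standard tools. A minimal sketch using the third-party pysam package; it requires a BAM index (e.g. produced with samtools index), and the sample path is a hypothetical placeholder:

```python
import pysam  # third-party; pip install pysam

# Count mapped reads per reference sequence in a saved BAM file.
# pysam.idxstats() wraps `samtools idxstats` and returns one line per
# reference: name, length, mapped reads, unmapped reads.
def mapped_per_reference(bam_path):
    counts = {}
    for line in pysam.idxstats(bam_path).splitlines():
        ref, length, mapped, unmapped = line.split("\t")
        if ref != "*":  # skip the unplaced-reads pseudo-reference
            counts[ref] = int(mapped)
    return counts

# print(mapped_per_reference("bbmap/bam/sample1.bam"))
```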
FeatureCounts
Output files
featurecounts/
- <SAMPLE>.<FEATURE>.featureCounts.tsv: counts for SAMPLE and FEATURE
- <SAMPLE>.<FEATURE>.featureCounts.tsv.summary: summary of counts for SAMPLE and FEATURE
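featureCounts tables start with a comment line followed by a header (Geneid, Chr, Start, End, Strand, Length, then one count column per alignment file), so they load cleanly into pandas. A minimal sketch with a hypothetical file name:

```python
import pandas as pd

# Read a featureCounts table; comment="#" skips the leading program line.
fc = pd.read_csv(
    "featurecounts/sample1.CDS.featureCounts.tsv",
    sep="\t",
    comment="#",
)

# Drop the annotation columns (Chr..Length) and keep only the count columns.
counts = fc.set_index("Geneid").iloc[:, 5:]
print(counts.sum())  # total assigned reads per count column
```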
Pipeline information
Output files
pipeline_info/
- Reports generated by Nextflow: execution_report.html, execution_timeline.html, execution_trace.txt and pipeline_dag.dot/pipeline_dag.svg.
- Reports generated by the pipeline: pipeline_report.html, pipeline_report.txt and software_versions.yml. The pipeline_report* files will only be present if the --email/--email_on_fail parameters are used when running the pipeline.
- Reformatted samplesheet files used as input to the pipeline: samplesheet.valid.csv.
- Parameters used by the pipeline run: params.json.
Nextflow provides excellent functionality for generating various reports relevant to the running and execution of the pipeline. This will allow you to troubleshoot errors with the running of the pipeline, and also provide you with other information such as launch commands, run times and resource usage.
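For traceability, software_versions.yml can also be read programmatically and recorded alongside your analyses. A minimal sketch using the third-party PyYAML package, assuming the usual nf-core layout of module names mapping to tool: version entries:

```python
import yaml  # third-party; pip install pyyaml

# Print one line per tool version recorded for the run.
with open("pipeline_info/software_versions.yml") as fh:
    versions = yaml.safe_load(fh)

for module, tools in versions.items():
    for tool, version in tools.items():
        print(f"{module}\t{tool}\t{version}")
```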