Labels: enhancement (New feature or request)
Description
We need to create a single-docker-image pipeline for spodgi.
We'll then have component_segmentation and schematize add to that triple store and use it as the only data source. This won't be performant for gigabase-scale genomes, but it is entirely sufficient for COVID, and it is simpler to develop RDF logic against a pure TTL store.
Input: GFA
- Run odgi & odgi bin
- Load the results into spodgi
- Run Schematize against the RDF server
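The steps above can be sketched as a small driver script. This is a hedged sketch: the odgi subcommands exist, but the exact flags, the bin width, and the spodgi conversion script name (`odgi_to_rdf.py`) used here are assumptions to be checked against the current CLIs.

```python
import subprocess

# Hypothetical pipeline: odgi flags, bin width, and the spodgi script name
# are assumptions, not verified CLI signatures.
STEPS = [
    ["odgi", "build", "-g", "input.gfa", "-o", "graph.og"],       # GFA -> odgi graph
    ["odgi", "sort", "-i", "graph.og", "-o", "graph.sorted.og"],  # sort before binning
    ["odgi", "bin", "-i", "graph.sorted.og", "-w", "1000"],       # bin the sorted graph
    ["python", "odgi_to_rdf.py", "graph.sorted.og", "graph.ttl"], # convert to TTL for spodgi
]

def run_pipeline(steps, runner=subprocess.run):
    """Run each step in order, stopping on the first failure."""
    for cmd in steps:
        runner(cmd, check=True)
```

Injecting the runner keeps the step ordering testable without odgi or spodgi installed.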
What we need to do:
- Update the TTL model to store sorting/binning information: write an ontology for Pantograph concepts extending vg (graph-genome.github.io#15)
- Update component_segmentation to generate RDF with bins: Turtle output (component_segmentation#34)
- Update Schematize to load graph-genome TTL data: data loading based on Pantograph RDF via SPARQL (Schematize#53)
- Create a Docker version of spodgi: Docker images (pangenome/spodgi#13)
- Merge these procedures into one pipeline (this issue)
Optional
- Expose spodgi as a server and query it over HTTP: SPARQL endpoint over HTTP? (pangenome/spodgi#12)
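If spodgi is exposed as an HTTP SPARQL endpoint, Schematize's data loading reduces to a plain HTTP GET returning SPARQL JSON results. Below is a minimal sketch of that client, exercised against a stub endpoint; the query, the `:Bin`/`:forPath` predicate names, and the `/sparql` path are placeholders, not the real Pantograph ontology or spodgi API.

```python
import json
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder query; the real Pantograph ontology terms would come from
# graph-genome.github.io#15, not from this sketch.
QUERY = "SELECT ?bin ?path WHERE { ?bin a :Bin ; :forPath ?path }"

# Canned SPARQL JSON results standing in for a spodgi endpoint.
STUB_RESULTS = {
    "head": {"vars": ["bin", "path"]},
    "results": {"bindings": [
        {"bin": {"type": "uri", "value": "urn:bin/1"},
         "path": {"type": "uri", "value": "urn:path/example-covid-accession"}},
    ]},
}

class StubSparqlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(STUB_RESULTS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/sparql-results+json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the stub quiet
        pass

def query_endpoint(url, sparql):
    """GET a SPARQL query against an endpoint; return the result bindings."""
    full = url + "?" + urllib.parse.urlencode({"query": sparql})
    with urllib.request.urlopen(full) as resp:
        return json.load(resp)["results"]["bindings"]

server = HTTPServer(("127.0.0.1", 0), StubSparqlHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
rows = query_endpoint(f"http://127.0.0.1:{server.server_port}/sparql", QUERY)
server.shutdown()
```

The stub server exists only so the client shape can be run end to end without a live spodgi instance.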