
Meta v0 SummRt #8

@kaitejohnson

Outlining tasks for a v0 of an R(t) evaluation package

I'm thinking each of these points can become a separate issue that individuals can create and assign to themselves or their small group:

  • Create and document/describe canonical dataset(s) as package data, starting with the data used in the RtEval repo (see the package-data sketch after this list).
  • Write S3 methods that take the package-specific arguments needed to fit a dataset and generate a standardized output format across multiple packages (see the fitting sketch after this list).
    To discuss: which methods do we want (e.g. fit, plot, score), and are they specific to a dataset or general across datasets? What standardized output format is conducive to evaluation across all the packages? E.g. Chad has created plot data for all packages (just R(t) by day: Rt_median, lb, ub), but should we have additional outputs such as predicted observations?
    • EpiNow2
    • EpiEstim
    • rtestim
    • EpiLPS
  • Write a function to evaluate the standardized outputs for each dataset (see the scoring sketch after this list). The onus is on the dataset team to tell the eval team how to evaluate: for real data this will be forecast/nowcast-based evaluation of generated observations; for simulated data it could be that plus direct R(t) evaluation.
  • For each canonical dataset, a vignette that combines the fits and standardized outputs, runs the evaluation, and plots the outputs for comparison.
  • README.md with sections describing the purpose of the repo; what package developers can do to contribute (e.g. check their implementation, create a vignette in their package that fits the canonical dataset and link to it, add a dataset that their package is intended for, open a PR to add their package); and what this is intended to provide to users/evaluators (a roadmap for which R(t) packages to use when).
  • Automate updating of package versions in CI at some agreed-upon cadence and rerun the vignettes (vignettes could also be rerun on merge to main, but this might be prohibitively slow). See epinowcast for an example of automated vignette updating.
  • Set up GitHub Actions for building the pkgdown site, running tests, and checking pre-commit, and set these up locally (see the setup sketch after this list).
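
For the package-data bullet, a minimal sketch of the usual data-raw/ workflow. The object name `rteval_cases` and the file paths are illustrative assumptions, not names taken from the RtEval repo:

```r
# data-raw/rteval_cases.R -- hypothetical script; the object name and file
# path are illustrative, not taken from the RtEval repo
rteval_cases <- read.csv("data-raw/rteval_cases.csv")
usethis::use_data(rteval_cases, overwrite = TRUE)

# R/data.R -- roxygen2 documentation for the exported dataset
#' Canonical case count dataset for R(t) estimation
#'
#' Daily case counts used to benchmark R(t) estimation packages against
#' a common input.
#' @format A data frame with columns `date` and `cases`.
"rteval_cases"
```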
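
For the fitting bullet, a minimal sketch of what the standardized S3 interface could look like, assuming a generic that dispatches on a lightweight per-package class and returns one common tibble format (date, Rt_median, lb, ub). The generic name `fit_rt`, the class names, and the serial interval defaults are assumptions for illustration, not a decided API:

```r
library(tibble)

# Generic that dispatches on a per-package wrapper class
fit_rt <- function(data, ...) UseMethod("fit_rt")

# Example method for EpiEstim; EpiNow2, rtestim, and EpiLPS would each get
# their own method mapping onto the same output columns. The serial interval
# parameters here are placeholders.
fit_rt.rt_epiestim <- function(data, mean_si = 4, std_si = 2, ...) {
  res <- EpiEstim::estimate_R(
    incid  = data$cases,
    method = "parametric_si",
    config = EpiEstim::make_config(list(mean_si = mean_si, std_si = std_si))
  )
  # Map the package-specific output onto the standardized format
  tibble(
    package   = "EpiEstim",
    date      = data$date[res$R$t_end],
    Rt_median = res$R$`Median(R)`,
    lb        = res$R$`Quantile.0.025(R)`,
    ub        = res$R$`Quantile.0.975(R)`
  )
}

# Usage: tag a dataset with the wrapper class, then fit
dat <- tibble(date = as.Date("2024-01-01") + 0:29, cases = rpois(30, 20))
class(dat) <- c("rt_epiestim", class(dat))
std_out <- fit_rt(dat)
```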
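
For the evaluation bullet, a sketch of a dataset-agnostic scoring helper that consumes the standardized output above plus a vector of truth values (simulated R(t), or held-out observations for real data). The function name is hypothetical; the interval score formula is the standard one for a central (1 - alpha) interval:

```r
# Hypothetical scoring helper; assumes the standardized columns
# package, Rt_median, lb, ub from the fitting sketch above
score_rt <- function(std_output, truth, alpha = 0.05) {
  stopifnot(nrow(std_output) == length(truth))
  lb <- std_output$lb
  ub <- std_output$ub
  # Interval score: width plus penalties for truth falling outside the interval
  interval_score <- (ub - lb) +
    (2 / alpha) * (lb - truth) * (truth < lb) +
    (2 / alpha) * (truth - ub) * (truth > ub)
  data.frame(
    package             = std_output$package[1],
    coverage_95         = mean(truth >= lb & truth <= ub),
    mean_interval_score = mean(interval_score),
    mean_abs_error      = mean(abs(std_output$Rt_median - truth))
  )
}
```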
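
For the setup bullet, the usual one-time usethis / {precommit} calls might look like the sketch below; the exact workflows (and the cadence for the automated version-update job) are still to be decided:

```r
# One-time setup, assuming the standard usethis and {precommit} workflows
usethis::use_github_action("check-standard")  # R CMD check on push/PR
usethis::use_pkgdown_github_pages()           # build and deploy the pkgdown site
precommit::use_precommit()                    # install pre-commit hooks locally
```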

A lot of this functionality already exists in https://github.com/cmilando/RtEval, so I think the main goal should be to reuse as much of that infrastructure as possible.
