2 changes: 2 additions & 0 deletions NEWS.md
Original file line number Diff line number Diff line change
@@ -10,6 +10,8 @@

* Added documentation pages for each metric type (e.g., `?class-metrics`, `?numeric-metrics`) that list all available metrics with their direction and range. (#547, #540)

* For metrics with alternate argument values that will be used in a metric set, the documentation pages now emphasize creating the tweaked metric via `metric_tweak()`. (#626)

* `get_metrics()` was added to return a `metric_set()` containing all metrics of a specified type. (#534)

* All class metrics and probability metrics now include mathematical formulas in their documentation. (#605)
27 changes: 6 additions & 21 deletions R/aaa-metric_set.R
@@ -103,25 +103,10 @@
#'
#' # ---------------------------------------------------------------------------
#'
#' # If you need to set options for certain metrics,
#' # do so by wrapping the metric and setting the options inside the wrapper,
#' # passing along truth and estimate as quoted arguments.
#' # Then add on the function class of the underlying wrapped function,
#' # and the direction of optimization.
#' ccc_with_bias <- function(data, truth, estimate, na_rm = TRUE, ...) {
#' ccc(
#' data = data,
#' truth = !!rlang::enquo(truth),
#' estimate = !!rlang::enquo(estimate),
#' # set bias = TRUE
#' bias = TRUE,
#' na_rm = na_rm,
#' ...
#' )
#' }
#'
#' # Use `new_numeric_metric()` to formalize this new metric function
#' ccc_with_bias <- new_numeric_metric(ccc_with_bias, "maximize")
#' # If you need to set options for certain metrics, do so by using
#' # `metric_tweak()`. Here's an example where we use the `bias` option of
#' # the `ccc()` metric:
#' ccc_with_bias <- metric_tweak("ccc_with_bias", ccc, bias = TRUE)
#'
#' multi_metric2 <- metric_set(rmse, rsq, ccc_with_bias)
#'
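The roxygen example above defines `multi_metric2` but does not show it being evaluated. A sketch of what calling the tweaked set looks like, assuming yardstick's bundled `solubility_test` data with its `solubility` and `prediction` columns (the column names follow the package's other numeric-metric examples):

```r
library(yardstick)

# ccc_with_bias behaves exactly like ccc(), but with bias = TRUE pre-applied
ccc_with_bias <- metric_tweak("ccc_with_bias", ccc, bias = TRUE)
multi_metric2 <- metric_set(rmse, rsq, ccc_with_bias)

# Returns a tibble with one row per metric: rmse, rsq, and the tweaked ccc,
# identified by the name given as metric_tweak()'s first argument
multi_metric2(solubility_test, truth = solubility, estimate = prediction)
```

Because `metric_tweak()` only pins argument values and re-labels the metric, the result slots into `metric_set()` with no hand-written wrapper, function class, or optimization direction needed.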
@@ -324,7 +309,7 @@ make_prob_class_metric_function <- function(fns) {
if (!is_empty(class_fns) && missing(estimate) && dots_not_empty) {
cli::cli_abort(
c(
"!" = "{.arg estimate} is required for class metrics but was not
provided.",
"i" = "In a metric set, the {.arg estimate} argument must be named.",
"i" = "Example: {.code my_metrics(data, truth, estimate = my_column)}"
@@ -803,7 +788,7 @@ validate_estimate_static_linear_pred <- function(
) {
if (length(estimate_eval) != 2L) {
cli::cli_abort(
"{.arg estimate} must select exactly 2 columns from {.arg data},
not {length(estimate_eval)}.",
call = call
)
9 changes: 9 additions & 0 deletions R/class-f_meas.R
@@ -53,6 +53,15 @@
#' @author Max Kuhn
#'
#' @template examples-class
#' @examples
#'
#' # Using a different value of 'beta'... if you are adding the metric to a
#' # metric set, you can create a new metric function with the updated argument
#' # value:
#'
#' f2_meas <- metric_tweak("f2_meas", f_meas, beta = 2)
#' multi_metrics <- metric_set(f_meas, f2_meas)
#' multi_metrics(two_class_example, truth, estimate = predicted)
#'
#' @export
f_meas <- function(data, ...) {
10 changes: 10 additions & 0 deletions R/class-kap.R
@@ -78,6 +78,16 @@
#' hpc_cv |>
#' group_by(Resample) |>
#' kap(obs, pred)
#'
#' # Using a different value of 'weighting'... if you are adding the metric to a
#' # metric set, you can create a new metric function with the updated argument
#' # value:
#'
#' kap_lin <- metric_tweak("kap_lin", kap, weighting = "linear")
#' kap_quad <- metric_tweak("kap_quad", kap, weighting = "quadratic")
#' multi_metrics <- metric_set(kap, kap_lin, kap_quad)
#' multi_metrics(hpc_cv, obs, estimate = pred)

kap <- function(data, ...) {
UseMethod("kap")
}
9 changes: 8 additions & 1 deletion R/class-npv.R
@@ -48,7 +48,14 @@
#' 102.
#'
#' @template examples-class
#'
#' @examples
#' # Using a different value of 'prevalence'... if you are adding the metric to a
#' # metric set, you can create a new metric function with the updated argument
#' # value:
#'
#' npv_alt_prev <- metric_tweak("npv_alt_prev", npv, prevalence = 0.40)
#' multi_metrics <- metric_set(npv, npv_alt_prev)
#' multi_metrics(two_class_example, truth, estimate = predicted)
#' @export
npv <- function(data, ...) {
UseMethod("npv")
9 changes: 9 additions & 0 deletions R/class-ppv.R
@@ -53,6 +53,15 @@
#'
#' @template examples-class
#' @examples
#' # Using a different value of 'prevalence'... if you are adding the metric to a
#' # metric set, you can create a new metric function with the updated argument
#' # value:
#'
#' ppv_alt_prev <- metric_tweak("ppv_alt_prev", ppv, prevalence = 0.40)
#' multi_metrics <- metric_set(ppv, ppv_alt_prev)
#' multi_metrics(two_class_example, truth, estimate = predicted)
#'
#' @examples
#' # But what if we think that Class 1 only occurs 40% of the time?
#' ppv(two_class_example, truth, predicted, prevalence = 0.40)
#'
8 changes: 8 additions & 0 deletions R/num-ccc.R
@@ -41,6 +41,14 @@
#'
#'
#' @template examples-numeric
#' @examples
#' # Using a different value of 'bias'... if you are adding the metric to a
#' # metric set, you can create a new metric function with the updated argument
#' # value:
#'
#' ccc_bias <- metric_tweak("ccc_bias", ccc, bias = TRUE)
#' multi_metrics <- metric_set(ccc, ccc_bias)
#' multi_metrics(solubility_test, solubility, prediction)
#'
#' @export
#'
8 changes: 8 additions & 0 deletions R/num-huber_loss.R
@@ -35,6 +35,14 @@
#' _Annals of Statistics_, 53 (1), 73-101.
#'
#' @template examples-numeric
#' @examples
#' # Using a different value of 'delta'... if you are adding the metric to a
#' # metric set, you can create a new metric function with the updated argument
#' # value:
#'
#' huber_loss_2 <- metric_tweak("huber_loss_2", huber_loss, delta = 2)
#' multi_metrics <- metric_set(huber_loss, huber_loss_2)
#' multi_metrics(solubility_test, solubility, prediction)
#'
#' @export
huber_loss <- function(data, ...) {
7 changes: 7 additions & 0 deletions R/num-huber_loss_pseudo.R
@@ -33,7 +33,14 @@
#' (Second Edition). Page 619.
#'
#' @template examples-numeric
#' @examples
#' # Using a different value of 'delta'... if you are adding the metric to a
#' # metric set, you can create a new metric function with the updated argument
#' # value:
#'
#' huber_loss_pseudo_2 <- metric_tweak("huber_loss_pseudo_2", huber_loss_pseudo, delta = 2)
#' multi_metrics <- metric_set(huber_loss_pseudo, huber_loss_pseudo_2)
#' multi_metrics(solubility_test, solubility, prediction)
#' @export
huber_loss_pseudo <- function(data, ...) {
UseMethod("huber_loss_pseudo")
8 changes: 8 additions & 0 deletions R/prob-classification_cost.R
@@ -117,6 +117,14 @@
#' hpc_cv |>
#' group_by(Resample) |>
#' classification_cost(obs, VF:L, costs = hpc_costs)
#'
#' # If this metric will be used in a metric set, you can create a new metric
#' # function with the updated argument value:
#'
#' class_costs <- metric_tweak("class_costs", classification_cost, costs = hpc_costs)
#' multi_metrics <- metric_set(class_costs, roc_auc)
#' multi_metrics(hpc_cv, obs, VF:L)
#'
classification_cost <- function(data, ...) {
UseMethod("classification_cost")
}
8 changes: 8 additions & 0 deletions man/ccc.Rd

Some generated files are not rendered by default.

8 changes: 8 additions & 0 deletions man/classification_cost.Rd


9 changes: 9 additions & 0 deletions man/f_meas.Rd


8 changes: 8 additions & 0 deletions man/huber_loss.Rd


7 changes: 7 additions & 0 deletions man/huber_loss_pseudo.Rd


9 changes: 9 additions & 0 deletions man/kap.Rd


23 changes: 4 additions & 19 deletions man/metric_set.Rd


7 changes: 7 additions & 0 deletions man/npv.Rd


8 changes: 8 additions & 0 deletions man/ppv.Rd

