accu contains current accuracy information
returned by the corresponding generating function
comp_accu_prob.
Details
Current metrics include:
acc: Overall accuracy as the probability (or proportion) of correctly classifying cases (i.e., of dec_cor cases). See acc for definition and explanations. acc values range from 0 (no correct prediction) to 1 (perfect prediction).

wacc: Weighted accuracy, as a weighted average of the sensitivity sens (aka. hit rate HR, TPR, power, or recall) and the specificity spec (aka. TNR), in which sens is multiplied by a weighting parameter w (ranging from 0 to 1) and spec is multiplied by w's complement (1 - w):

wacc = (w * sens) + ((1 - w) * spec)

If w = .50, wacc becomes balanced accuracy bacc.

mcc: The Matthews correlation coefficient (with values ranging from -1 to +1):

mcc = ((hi * cr) - (fa * mi)) / sqrt((hi + fa) * (hi + mi) * (cr + fa) * (cr + mi))

A value of mcc = 0 implies random performance; mcc = 1 implies perfect performance. See Wikipedia: Matthews correlation coefficient for additional information.

f1s: The harmonic mean of the positive predictive value PPV (aka. precision) and the sensitivity sens (aka. hit rate HR, TPR, power, or recall):

f1s = 2 * (PPV * sens) / (PPV + sens)

See Wikipedia: F1 score for additional information.
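For illustration, the following is a minimal sketch that applies the formulas above directly to the four frequencies hi, mi, fa, and cr (the function name accu_sketch is hypothetical; in practice, use comp_accu_freq or comp_accu_prob):

# Sketch: computing the accuracy metrics from the 4 essential frequencies
# hi, mi, fa, cr, using the formulas listed above:
accu_sketch <- function(hi, mi, fa, cr, w = .50) {
  N    <- hi + mi + fa + cr   # total number of cases
  sens <- hi / (hi + mi)      # sensitivity (hit rate, TPR)
  spec <- cr / (cr + fa)      # specificity (TNR)
  PPV  <- hi / (hi + fa)      # positive predictive value (precision)
  list(acc  = (hi + cr) / N,                   # overall accuracy
       w    = w,                               # weighting parameter
       wacc = (w * sens) + ((1 - w) * spec),   # weighted (or balanced) accuracy
       mcc  = ((hi * cr) - (fa * mi)) /
              sqrt((hi + fa) * (hi + mi) * (cr + fa) * (cr + mi)),  # Matthews correlation coefficient
       f1s  = 2 * (PPV * sens) / (PPV + sens)) # F1 score
}
# accu_sketch(hi = 3, mi = 2, fa = 1, cr = 4)  # => list of metric values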
Notes:
Accuracy metrics describe the correspondence of decisions (or predictions) to actual conditions (or truth).
There are several possible interpretations of accuracy (see the documentation of acc for details).
Computing exact accuracy values based on probabilities (by comp_accu_prob) may differ from accuracy values computed from (possibly rounded) frequencies (by comp_accu_freq).

When frequencies are rounded to integers (see the default of round = TRUE in comp_freq and comp_freq_prob), the accuracy metrics computed by comp_accu_freq correspond to these rounded values. Use comp_accu_prob to obtain exact accuracy metrics from probabilities.
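As a brief sketch of where such differences originate (using the same parameter values as in the Examples section below), the frequencies themselves can be compared with and without rounding:

# Compare frequencies computed with rounding (default) and without rounding:
freq_rounded <- comp_freq(N = 10, prev = 1/3, sens = 2/3, spec = 3/4)                 # round = TRUE (default)
freq_exact   <- comp_freq(N = 10, prev = 1/3, sens = 2/3, spec = 3/4, round = FALSE)  # exact values
c(freq_rounded$hi, freq_rounded$mi, freq_rounded$fa, freq_rounded$cr)  # rounded (integer) frequencies
c(freq_exact$hi,   freq_exact$mi,   freq_exact$fa,   freq_exact$cr)    # exact (non-integer) frequencies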
See also
The corresponding generating function comp_accu_prob computes exact accuracy metrics from probabilities;
acc defines accuracy as a probability;
comp_accu_freq computes accuracy metrics from frequencies;
num for basic numeric parameters;
freq for current frequency information;
prob for current probability information;
txt for current text settings.
Other lists containing current scenario information:
freq,
num,
pal,
pal_bw,
pal_bwp,
pal_kn,
pal_mbw,
pal_mod,
pal_org,
pal_rgb,
pal_unikn,
pal_vir,
prob,
txt,
txt_TF,
txt_org
Other metrics:
acc,
comp_acc(),
comp_accu_freq(),
comp_accu_prob(),
comp_err(),
err
Examples
accu <- comp_accu_prob() # => compute exact accuracy metrics (from probabilities)
accu # => current accuracy information
#> $acc
#> [1] 0.775
#>
#> $w
#> [1] 0.5
#>
#> $wacc
#> [1] 0.8
#>
#> $mcc
#> [1] 0.5303301
#>
#> $f1s
#> [1] 0.6538462
#>
## Contrasting comp_accu_freq and comp_accu_prob:
# (a) comp_accu_freq (based on rounded frequencies):
freq1 <- comp_freq(N = 10, prev = 1/3, sens = 2/3, spec = 3/4) # => rounded frequencies!
accu1 <- comp_accu_freq(freq1$hi, freq1$mi, freq1$fa, freq1$cr) # => accu1 (based on rounded freq).
# accu1
#
# (b) comp_accu_prob (based on probabilities):
accu2 <- comp_accu_prob(prev = 1/3, sens = 2/3, spec = 3/4) # => exact accu (based on prob).
# accu2
all.equal(accu1, accu2) # => 4 differences!
#> [1] "Component “acc”: Mean relative difference: 0.03174603"
#> [2] "Component “wacc”: Mean relative difference: 0.02586207"
#> [3] "Component “mcc”: Mean relative difference: 0.1306675"
#> [4] "Component “f1s”: Mean relative difference: 0.07692308"
#
# (c) comp_accu_freq (exact values, i.e., without rounding):
freq3 <- comp_freq(N = 10, prev = 1/3, sens = 2/3, spec = 3/4, round = FALSE)
accu3 <- comp_accu_freq(freq3$hi, freq3$mi, freq3$fa, freq3$cr) # => accu3 (based on EXACT freq).
# accu3
all.equal(accu2, accu3) # => TRUE (qed).
#> [1] TRUE
