CRAN Package Check Results for Package scoringutils

Last updated on 2026-01-14 17:50:15 CET.

Flavor                             Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  2.1.2       11.34   269.99  281.33  OK
r-devel-linux-x86_64-debian-gcc    2.1.2        6.15   178.29  184.44  OK
r-devel-linux-x86_64-fedora-clang  2.1.2       18.00   406.00  424.00  OK
r-devel-linux-x86_64-fedora-gcc    2.1.2       20.00   424.29  444.29  OK
r-devel-windows-x86_64             2.1.2       14.00   232.00  246.00  OK
r-patched-linux-x86_64             2.1.2       12.96   236.09  249.05  OK
r-release-linux-x86_64             2.1.2       10.10   234.98  245.08  OK
r-release-macos-arm64              2.1.2                               OK
r-release-macos-x86_64             2.1.2        7.00   168.00  175.00  OK
r-release-windows-x86_64           2.1.2       15.00   224.00  239.00  OK
r-oldrel-macos-arm64               2.1.2                               OK
r-oldrel-macos-x86_64              2.1.2        8.00   203.00  211.00  OK
r-oldrel-windows-x86_64            2.1.2       20.00   280.00  300.00  ERROR

Check Details

Version: 2.1.2
Check: tests
Result: ERROR Running 'testthat.R' [63s] Running the tests in 'tests/testthat.R' failed. Complete output: > library(testthat) > library(scoringutils) > > test_check("scoringutils") Saving _problems/test-metrics-binary-17.R Saving _problems/test-metrics-binary-29.R Saving _problems/test-metrics-binary-33.R Saving _problems/test-metrics-binary-51.R Saving _problems/test-metrics-binary-123.R Saving _problems/test-metrics-binary-137.R Saving _problems/test-metrics-binary-156.R Saving _problems/test-metrics-binary-174.R i Some rows containing NA values may be removed. This is fine if not unexpected. [ FAIL 8 | WARN 0 | SKIP 13 | PASS 587 ] ══ Skipped tests (13) ══════════════════════════════════════════════════════════ • On CRAN (13): 'test-class-forecast.R:116:1', 'test-get-correlations.R:50:3', 'test-get-coverage.R:67:3', 'test-get-coverage.R:90:3', 'test-get-forecast-counts.R:63:3', 'test-helper-quantile-interval-range.R:97:1', 'test-helper-quantile-interval-range.R:149:1', 'test-pairwise_comparison.R:537:3', 'test-pairwise_comparison.R:545:3', 'test-plot_heatmap.R:9:3', 'test-plot_wis.R:24:3', 'test-plot_wis.R:35:3', 'test-plot_wis.R:47:3' ══ Failed tests ════════════════════════════════════════════════════════════════ ── Failure ('test-metrics-binary.R:17:3'): correct input works ───────────────── Expected `assert_input_binary(observed, predicted)` not to throw any conditions. Actually got a <simpleError> with message: Assertion on 'observed' failed: Must have exactly 2 levels. ── Failure ('test-metrics-binary.R:29:3'): correct input works ───────────────── Expected `assert_input_binary(observed, predicted = 0.2)` not to throw any conditions. Actually got a <simpleError> with message: Assertion on 'observed' failed: Must have exactly 2 levels. ── Failure ('test-metrics-binary.R:33:3'): correct input works ───────────────── Expected `assert_input_binary(observed, matrix(predicted))` not to throw any conditions. 
Actually got a <simpleError> with message: Assertion on 'observed' failed: Must have exactly 2 levels. ── Error ('test-metrics-binary.R:48:3'): function throws an error for wrong input formats ── Error in `assert_input_binary(observed = observed, predicted = as.list(predicted))`: Assertion on 'observed' failed: Must have exactly 2 levels. Backtrace: ▆ 1. ├─testthat::expect_error(...) at test-metrics-binary.R:48:3 2. │ └─testthat:::expect_condition_matching_(...) 3. │ └─testthat:::quasi_capture(...) 4. │ ├─testthat (local) .capture(...) 5. │ │ └─base::withCallingHandlers(...) 6. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo)) 7. └─scoringutils:::assert_input_binary(observed = observed, predicted = as.list(predicted)) 8. └─checkmate::assert_factor(observed, n.levels = 2, min.len = 1) 9. └─checkmate::makeAssertion(x, res, .var.name, add) 10. └─checkmate:::mstop(...) ── Error ('test-metrics-binary.R:120:3'): function throws an error when missing observed or predicted ── Error in `assert_input_binary(observed, predicted)`: Assertion on 'observed' failed: Must have exactly 2 levels. Backtrace: ▆ 1. ├─testthat::expect_error(brier_score(observed = observed), "argument \"predicted\" is missing, with no default") at test-metrics-binary.R:120:3 2. │ └─testthat:::expect_condition_matching_(...) 3. │ └─testthat:::quasi_capture(...) 4. │ ├─testthat (local) .capture(...) 5. │ │ └─base::withCallingHandlers(...) 6. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo)) 7. └─scoringutils::brier_score(observed = observed) 8. └─scoringutils:::assert_input_binary(observed, predicted) 9. └─checkmate::assert_factor(observed, n.levels = 2, min.len = 1) 10. └─checkmate::makeAssertion(x, res, .var.name, add) 11. └─checkmate:::mstop(...) ── Error ('test-metrics-binary.R:134:3'): Brier score works with different inputs ── Error in `assert_input_binary(observed, predicted)`: Assertion on 'observed' failed: Must have exactly 2 levels. Backtrace: ▆ 1. 
├─testthat::expect_equal(...) at test-metrics-binary.R:134:3 2. │ └─testthat::quasi_label(enquo(object), label) 3. │ └─rlang::eval_bare(expr, quo_get_env(quo)) 4. └─scoringutils::brier_score(observed, predicted = 0.2) 5. └─scoringutils:::assert_input_binary(observed, predicted) 6. └─checkmate::assert_factor(observed, n.levels = 2, min.len = 1) 7. └─checkmate::makeAssertion(x, res, .var.name, add) 8. └─checkmate:::mstop(...) ── Error ('test-metrics-binary.R:156:3'): Binary metrics work within and outside of `score()` ── Error in `assert_forecast(data)`: ! Checking `forecast`: Input looks like a binary forecast, but found the following issue: Assertion on 'observed' failed: Must have exactly 2 levels. Backtrace: ▆ 1. ├─scoringutils::score(as_forecast_binary(df)) at test-metrics-binary.R:156:3 2. ├─scoringutils::as_forecast_binary(df) 3. └─scoringutils:::as_forecast_binary.default(df) 4. ├─scoringutils::assert_forecast(data) 5. └─scoringutils:::assert_forecast.forecast_binary(data) 6. └─cli::cli_abort(c(`!` = "Checking `forecast`: Input looks like a binary forecast, but\n found the following issue: {input_check}")) 7. └─rlang::abort(...) ── Error ('test-metrics-binary.R:171:3'): `logs_binary()` works as expected ──── Error in `assert_input_binary(observed, predicted)`: Assertion on 'observed' failed: Must have exactly 2 levels. Backtrace: ▆ 1. ├─testthat::expect_equal(...) at test-metrics-binary.R:171:3 2. │ └─testthat::quasi_label(enquo(object), label) 3. │ └─rlang::eval_bare(expr, quo_get_env(quo)) 4. └─scoringutils::logs_binary(observed, predicted) 5. └─scoringutils:::assert_input_binary(observed, predicted) 6. └─checkmate::assert_factor(observed, n.levels = 2, min.len = 1) 7. └─checkmate::makeAssertion(x, res, .var.name, add) 8. └─checkmate:::mstop(...) 
[ FAIL 8 | WARN 0 | SKIP 13 | PASS 587 ] Deleting unused snapshots: 'get-correlations/plot-correlation.svg', 'get-coverage/plot-interval-coverage.svg', 'get-coverage/plot-quantile-coverage.svg', 'get-forecast-counts/plot-available-forecasts.svg', 'pairwise_comparison/plot-pairwise-comparison-pval.svg', 'pairwise_comparison/plot-pairwise-comparison.svg', 'plot_heatmap/plot-heatmap.svg', 'plot_wis/plot-wis-flip.svg', 'plot_wis/plot-wis-no-relative.svg', and 'plot_wis/plot-wis.svg' Error: ! Test failures. Execution halted Flavor: r-oldrel-windows-x86_64
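Every backtrace above bottoms out in the same call, `checkmate::assert_factor(observed, n.levels = 2, min.len = 1)`, which requires the `observed` factor to carry exactly two levels. A minimal sketch of why that assertion fires, using `checkmate` directly outside the package (the variable names here are illustrative, not taken from the test files):

```r
library(checkmate)

# A factor with exactly two levels satisfies the assertion.
observed_ok <- factor(c(0, 1, 1, 0), levels = c(0, 1))
assert_factor(observed_ok, n.levels = 2, min.len = 1)  # passes, returns invisibly

# If the levels collapse to one (for example, all observations identical
# and no explicit `levels` argument), the same error as above is raised.
observed_bad <- factor(c(1, 1, 1))
try(assert_factor(observed_bad, n.levels = 2, min.len = 1))
# -> Assertion failed: Must have exactly 2 levels.

# Passing `levels = c(0, 1)` explicitly keeps both levels even when only
# one value occurs in the data, so the assertion still passes.
observed_one_value <- factor(c(1, 1, 1), levels = c(0, 1))
assert_factor(observed_one_value, n.levels = 2, min.len = 1)
```

This reproduces only the assertion itself; whether the failing tests construct their `observed` factors without explicit levels is an assumption that would need to be confirmed against `tests/testthat/test-metrics-binary.R`.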