* Moved pareto_rank() to its own documentation page, separate from
  is_nondominated().

* r2_exact() implements the exact computation of the R2 indicator for
  bi-objective solution sets.

* hypervolume() is significantly faster for more than four dimensions
  (Andreia P. Guerreiro).

* hypervolume() now handles 1D inputs and provides a clear error for 0D
  inputs (#58).

* Added GPGame and targeted packages to Benchmarks.

* is_nondominated(), filter_dominated() and pareto_rank() use a
  significantly faster algorithm by Kung et al. for dimensions larger
  than 3.

* generate_ndset(): New shapes "inverted-simplex" and "concave-simplex".
* generate_ndset(): Shape "convex-simplex" is now equivalent to
  generate_ndset(..., method="simplex")^2, which is slightly more
  uniform than the previous approach.

* The default method of hv_approx() is now "Rphi-FWE+", which is
  typically as accurate as the other methods, but significantly faster.

* pareto_rank() is faster in 3D.

* igd(), igd_plus() and avg_hausdorff_dist() are faster.

* Fixed epsilon_mult() when mixing minimization and maximization.

* epsilon_additive() (@leandrolanzieri).

* hypervolume() uses the inclusion-exclusion algorithm for small inputs
  of up to 15 points, which is significantly faster.

* moocore now requires R >= 4.1.

* is_nondominated(): Fix wrong assert (#38).

* hv_contributions() ignores dominated points by default.
  Set ignore_dominated=FALSE to restore the previous behavior. The 3D
  case uses the HVC3D algorithm.

* New function any_dominated().

* New function generate_ndset() to generate random nondominated sets
  with different shapes.

* is_nondominated(), any_dominated() and pareto_rank() now handle
  single-objective inputs correctly (#27) (#29).

* is_nondominated() and filter_dominated() are faster for dimensions
  larger than 3.

* is_nondominated() and filter_dominated() are now stable in 2D and 3D
  with keep_weakly=FALSE, that is, only the first of duplicated points
  is marked as nondominated.

* New function hv_approx().

* hv_contributions() is much faster for 2D inputs.

* New datasets DTLZLinearShape.8d.front.60pts.10 and ran.10pts.9d.10.

* hypervolume() now uses the HV3D+ algorithm for the 3D case and the
  HV4D+ algorithm for the 4D case. For dimensions larger than 4, the
  recursive algorithm uses HV4D+ as the base case, which is
  significantly faster.
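As a quick illustration of the hypervolume() call discussed above, here is a
minimal sketch; the data values and the reference point are invented for this
example, and all objectives are assumed to be minimized (the default):

```r
library(moocore)

# Three mutually nondominated points in 2D (minimization).
x <- matrix(c(1, 3,
              2, 2,
              3, 1), ncol = 2, byrow = TRUE)

# Hypervolume dominated by x with respect to the reference point (4, 4):
# the union of the three dominated rectangles has area 3 + 2 + 1 = 6.
hypervolume(x, reference = c(4, 4))  # → 6
```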
* read_datasets() is significantly faster for large files.
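A small usage sketch of read_datasets(); the data values are invented, and the
file layout (solution sets separated by blank lines, with an extra column in
the result identifying the set) is an assumption about the input format rather
than something stated in this entry:

```r
library(moocore)

# Write a tiny file with two solution sets separated by a blank line.
path <- tempfile(fileext = ".txt")
writeLines(c("1 2", "2 1", "", "3 4", "4 3"), path)

# Read all sets into a single matrix; assuming the eaf-style format,
# the last column identifies which set each row belongs to.
z <- read_datasets(path)
z
```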
* is_nondominated() and filter_dominated() are faster for 3D inputs.
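To illustrate the dominance helpers mentioned in these entries, here is a
minimal sketch with invented data; minimization is assumed (the default), and
the duplicate handling shown follows the keep_weakly=FALSE behavior described
above:

```r
library(moocore)

x <- matrix(c(1, 2,
              1, 2,   # duplicate of the first point
              2, 1,
              3, 3), ncol = 2, byrow = TRUE)

# With keep_weakly = FALSE (the default), only the first copy of a
# duplicated point is marked as nondominated; (3, 3) is dominated.
is_nondominated(x)   # → TRUE FALSE TRUE FALSE

# filter_dominated() keeps only the nondominated rows.
filter_dominated(x)
```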
* Renamed vorobT() and vorobDev() to vorob_t() and vorob_dev() to be
  consistent with other function names.