Package evaluation to test MLJTuning on Julia 1.14.0-DEV.1563 (14ca1abc72*) started at 2026-01-15T23:31:50.346 ################################################################################ # Set-up # Installing PkgEval dependencies (TestEnv)... Activating project at `~/.julia/environments/v1.14` Set-up completed after 9.97s ################################################################################ # Installation # Installing MLJTuning... Resolving package versions... Installed CompositionsBase ──────────── v0.1.2 Installed NameResolution ────────────── v0.1.5 Installed MicroCollections ──────────── v0.2.0 Installed Rmath_jll ─────────────────── v0.5.1+0 Installed MacroTools ────────────────── v0.5.16 Installed DefineSingletons ──────────── v0.1.2 Installed FillArrays ────────────────── v1.15.0 Installed DataStructures ────────────── v0.19.3 Installed Adapt ─────────────────────── v4.4.0 Installed DelimitedFiles ────────────── v1.9.1 Installed ContextVariablesX ─────────── v0.1.3 Installed ConstructionBase ──────────── v1.6.0 Installed HypergeometricFunctions ───── v0.3.28 Installed AliasTables ───────────────── v1.1.3 Installed ScientificTypes ───────────── v3.1.2 Installed FLoops ────────────────────── v0.2.2 Installed OrderedCollections ────────── v1.8.1 Installed JuliaVariables ────────────── v0.2.4 Installed RecipesBase ───────────────── v1.3.4 Installed HashArrayMappedTries ──────── v0.2.0 Installed IteratorInterfaceExtensions ─ v1.0.0 Installed Compat ────────────────────── v4.18.1 Installed StatsBase ─────────────────── v0.34.10 Installed DataAPI ───────────────────── v1.16.0 Installed InvertedIndices ───────────── v1.3.1 Installed Statistics ────────────────── v1.11.1 Installed InitialValues ─────────────── v0.3.1 Installed MLJTuning ─────────────────── v0.8.9 Installed NNlib ─────────────────────── v0.9.33 Installed Atomix ────────────────────── v1.1.2 Installed PrecompileTools ───────────── v1.3.3 Installed DataValueInterfaces ───────── v1.0.0 Installed StatsAPI ──────────────────── v1.8.0 Installed ProgressMeter ─────────────── v1.11.0 Installed MLCore ────────────────────── v1.0.0 Installed ArgCheck ──────────────────── v2.5.0 Installed StaticArraysCore ──────────── v1.4.4 Installed MLUtils ───────────────────── v0.4.8 Installed StaticArrays ──────────────── v1.9.16 Installed PrettyPrint ───────────────── v0.2.0 Installed IrrationalConstants ───────── v0.2.6 Installed GPUArraysCore ─────────────── v0.2.0 Installed QuadGK ────────────────────── v2.11.2 Installed ColorTypes ────────────────── v0.12.1 Installed ChainRulesCore ────────────── v1.26.0 Installed BangBang ──────────────────── v0.4.6 Installed StringManipulation ────────── v0.4.2 Installed Requires ──────────────────── v1.3.1 Installed OpenSpecFun_jll ───────────── v0.5.6+0 Installed SimpleTraits ──────────────── v0.9.5 Installed StableRNGs ────────────────── v1.0.4 Installed StatisticalMeasuresBase ───── v0.1.3 Installed LogExpFunctions ───────────── v0.3.29 Installed FLoopsBase ────────────────── v0.1.1 Installed ComputationalResources ────── v0.3.2 Installed Rmath ─────────────────────── v0.9.0 Installed CategoricalDistributions ──── v0.2.1 Installed KernelAbstractions ────────── v0.9.39 Installed Distributions ─────────────── v0.25.123 Installed ScopedValues ──────────────── v1.5.0 Installed PrettyTables ──────────────── v3.1.2 Installed MLJBase ───────────────────── v1.12.1 Installed Parameters ────────────────── v0.12.3 Installed PtrArrays ─────────────────── v1.3.0 Installed SplittablesBase ───────────── 
v0.1.15 Installed StatsFuns ─────────────────── v1.5.2 Installed Tables ────────────────────── v1.12.1 Installed UnsafeAtomics ─────────────── v0.3.0 Installed SpecialFunctions ──────────── v2.6.1 Installed Transducers ───────────────── v0.4.85 Installed InverseFunctions ──────────── v0.1.17 Installed Reexport ──────────────────── v1.2.2 Installed FixedPointNumbers ─────────── v0.8.5 Installed Setfield ──────────────────── v1.1.2 Installed Preferences ───────────────── v1.5.1 Installed Missings ──────────────────── v1.2.0 Installed Baselet ───────────────────── v0.1.1 Installed TableTraits ───────────────── v1.0.1 Installed JLLWrappers ───────────────── v1.7.1 Installed ShowCases ─────────────────── v0.1.0 Installed LaTeXStrings ──────────────── v1.4.0 Installed ScientificTypesBase ───────── v3.0.0 Installed PDMats ────────────────────── v0.11.37 Installed SortingAlgorithms ─────────── v1.2.2 Installed Crayons ───────────────────── v4.1.1 Installed UnPack ────────────────────── v1.0.2 Installed DocStringExtensions ───────── v0.9.5 Installed LearnAPI ──────────────────── v2.0.1 Installed StatisticalTraits ─────────── v3.5.0 Installed CategoricalArrays ─────────── v1.0.2 Installed LatinHypercubeSampling ────── v1.9.0 Installed MLStyle ───────────────────── v0.4.17 Installed MLJModelInterface ─────────── v1.12.1 Installed Accessors ─────────────────── v0.1.43 Installing 2 artifacts Installed artifact Rmath 121.9 KiB Installed artifact OpenSpecFun 194.9 KiB Updating `~/.julia/environments/v1.14/Project.toml` [03970b2e] + MLJTuning v0.8.9 Updating `~/.julia/environments/v1.14/Manifest.toml` [7d9f7c33] + Accessors v0.1.43 [79e6a3ab] + Adapt v4.4.0 [66dad0bd] + AliasTables v1.1.3 [dce04be8] + ArgCheck v2.5.0 [a9b6321e] + Atomix v1.1.2 [198e06fe] + BangBang v0.4.6 [9718e550] + Baselet v0.1.1 [324d7699] + CategoricalArrays v1.0.2 [af321ab8] + CategoricalDistributions v0.2.1 [d360d2e6] + ChainRulesCore v1.26.0 [3da002f7] + ColorTypes v0.12.1 [34da2185] + Compat v4.18.1 [a33af91c] + CompositionsBase v0.1.2 [ed09eef8] + ComputationalResources v0.3.2 [187b0558] + ConstructionBase v1.6.0 [6add18c4] + ContextVariablesX v0.1.3 [a8cc5b0e] + Crayons v4.1.1 [9a962f9c] + DataAPI v1.16.0 [864edb3b] + DataStructures v0.19.3 [e2d170a0] + DataValueInterfaces v1.0.0 [244e2a9f] + DefineSingletons v0.1.2 [8bb1440f] + DelimitedFiles v1.9.1 [31c24e10] + Distributions v0.25.123 [ffbed154] + DocStringExtensions v0.9.5 [cc61a311] + FLoops v0.2.2 [b9860ae5] + FLoopsBase v0.1.1 [1a297f60] + FillArrays v1.15.0 [53c48c17] + FixedPointNumbers v0.8.5 [46192b85] + GPUArraysCore v0.2.0 [076d061b] + HashArrayMappedTries v0.2.0 [34004b35] + HypergeometricFunctions v0.3.28 [22cec73e] + InitialValues v0.3.1 [3587e190] + InverseFunctions v0.1.17 [41ab1584] + InvertedIndices v1.3.1 [92d709cd] + IrrationalConstants v0.2.6 [82899510] + IteratorInterfaceExtensions v1.0.0 [692b3bcd] + JLLWrappers v1.7.1 [b14d175d] + JuliaVariables v0.2.4 [63c18a36] + KernelAbstractions v0.9.39 [b964fa9f] + LaTeXStrings v1.4.0 [a5e1c1ea] + LatinHypercubeSampling v1.9.0 [92ad9a40] + LearnAPI v2.0.1 [2ab3a3ac] + LogExpFunctions v0.3.29 [c2834f40] + MLCore v1.0.0 [a7f614a8] + MLJBase v1.12.1 [e80e1ace] + MLJModelInterface v1.12.1 [03970b2e] + MLJTuning v0.8.9 [d8e11817] + MLStyle v0.4.17 [f1d291b0] + MLUtils v0.4.8 [1914dd2f] + MacroTools v0.5.16 [128add7d] + MicroCollections v0.2.0 [e1d29d7a] + Missings v1.2.0 [872c559c] + NNlib v0.9.33 [71a1bf82] + NameResolution v0.1.5 [bac558e1] + OrderedCollections v1.8.1 [90014a1f] + PDMats v0.11.37 [d96e819e] + 
Parameters v0.12.3 [aea7be01] + PrecompileTools v1.3.3 [21216c6a] + Preferences v1.5.1 [8162dcfd] + PrettyPrint v0.2.0 [08abe8d2] + PrettyTables v3.1.2 [92933f4c] + ProgressMeter v1.11.0 [43287f4e] + PtrArrays v1.3.0 [1fd47b50] + QuadGK v2.11.2 [3cdcf5f2] + RecipesBase v1.3.4 [189a3867] + Reexport v1.2.2 [ae029012] + Requires v1.3.1 [79098fc4] + Rmath v0.9.0 [321657f4] + ScientificTypes v3.1.2 [30f210dd] + ScientificTypesBase v3.0.0 [7e506255] + ScopedValues v1.5.0 [efcf1570] + Setfield v1.1.2 [605ecd9f] + ShowCases v0.1.0 [699a6c99] + SimpleTraits v0.9.5 [a2af1166] + SortingAlgorithms v1.2.2 [276daf66] + SpecialFunctions v2.6.1 [171d559e] + SplittablesBase v0.1.15 [860ef19b] + StableRNGs v1.0.4 [90137ffa] + StaticArrays v1.9.16 [1e83bf80] + StaticArraysCore v1.4.4 [c062fc1d] + StatisticalMeasuresBase v0.1.3 [64bff920] + StatisticalTraits v3.5.0 [10745b16] + Statistics v1.11.1 [82ae8749] + StatsAPI v1.8.0 [2913bbd2] + StatsBase v0.34.10 [4c63d2b9] + StatsFuns v1.5.2 [892a3eda] + StringManipulation v0.4.2 [3783bdb8] + TableTraits v1.0.1 [bd369af6] + Tables v1.12.1 [28d57a85] + Transducers v0.4.85 [3a884ed6] + UnPack v1.0.2 [013be700] + UnsafeAtomics v0.3.0 [efe28fd5] + OpenSpecFun_jll v0.5.6+0 [f50d1b31] + Rmath_jll v0.5.1+0 [56f22d72] + Artifacts v1.11.0 [2a0f44e3] + Base64 v1.11.0 [ade2ca70] + Dates v1.11.0 [8ba89e20] + Distributed v1.11.0 [7b1f6079] + FileWatching v1.11.0 [9fa8497b] + Future v1.11.0 [b77e0a4c] + InteractiveUtils v1.11.0 [ac6e5ff7] + JuliaSyntaxHighlighting v1.13.0 [8f399da3] + Libdl v1.11.0 [37e2e46d] + LinearAlgebra v1.13.0 [56ddb016] + Logging v1.11.0 [d6f4376e] + Markdown v1.11.0 [a63ad114] + Mmap v1.11.0 [de0858da] + Printf v1.11.0 [3fa0cd96] + REPL v1.11.0 [9a3f8284] + Random v1.11.0 [ea8e919c] + SHA v1.0.0 [9e88b42a] + Serialization v1.11.0 [6462fe0b] + Sockets v1.11.0 [2f01184e] + SparseArrays v1.13.0 [f489334b] + StyledStrings v1.13.0 [4607b0f0] + SuiteSparse [fa267f1f] + TOML v1.0.3 [8dfed614] + Test v1.11.0 [cf7118a7] + UUIDs v1.11.0 [4ec0a83e] + Unicode v1.11.0 [e66e0078] + CompilerSupportLibraries_jll v1.3.0+1 [4536629a] + OpenBLAS_jll v0.3.29+0 [05823500] + OpenLibm_jll v0.8.7+0 [bea87d4a] + SuiteSparse_jll v7.10.1+0 [8e850b90] + libblastrampoline_jll v5.15.0+0 Installation completed after 9.74s ################################################################################ # Precompilation # Precompiling PkgEval dependencies... Precompiling packages... 4681.4 ms ✓ TestEnv 1 dependency successfully precompiled in 5 seconds. 27 already precompiled. Precompiling package dependencies... Precompiling packages... 
2178.8 ms ✓ Baselet 4248.8 ms ✓ MacroTools 830.3 ms ✓ Reexport 1146.2 ms ✓ Statistics 919.8 ms ✓ DataAPI 1105.9 ms ✓ ConstructionBase 1996.0 ms ✓ IrrationalConstants 821.7 ms ✓ UnPack 932.7 ms ✓ PrettyPrint 798.9 ms ✓ DataValueInterfaces 969.2 ms ✓ StaticArraysCore 867.5 ms ✓ StatsAPI 1202.8 ms ✓ Requires 3405.1 ms ✓ UnsafeAtomics 1338.3 ms ✓ OrderedCollections 2391.6 ms ✓ ShowCases 883.7 ms ✓ InvertedIndices 25282.8 ms ✓ MLStyle 950.2 ms ✓ StableRNGs 939.8 ms ✓ InverseFunctions 952.0 ms ✓ HashArrayMappedTries 861.0 ms ✓ CompositionsBase 1140.7 ms ✓ DocStringExtensions 1194.2 ms ✓ AbstractTrees 1594.5 ms ✓ InitialValues 815.4 ms ✓ DefineSingletons 1017.2 ms ✓ ComputationalResources 1002.8 ms ✓ ArgCheck 1909.7 ms ✓ FillArrays 850.9 ms ✓ PtrArrays 888.5 ms ✓ ScientificTypesBase 854.3 ms ✓ IteratorInterfaceExtensions 847.2 ms ✓ LaTeXStrings 1801.3 ms ✓ Crayons 1841.4 ms ✓ PDMats 997.9 ms ✓ DelimitedFiles 1813.7 ms ✓ ProgressMeter 1136.2 ms ✓ Compat 1141.5 ms ✓ Preferences 3380.1 ms ✓ SimpleTraits 3697.4 ms ✓ FixedPointNumbers 1319.3 ms ✓ Statistics → SparseArraysExt 962.0 ms ✓ ScikitLearnBase 969.7 ms ✓ Missings 806.6 ms ✓ ConstructionBase → ConstructionBaseLinearAlgebraExt 833.3 ms ✓ NameResolution 1507.8 ms ✓ Distances 936.4 ms ✓ Adapt 1022.0 ms ✓ Atomix 997.6 ms ✓ Parameters 3514.1 ms ✓ DataStructures 1674.2 ms ✓ InverseFunctions → InverseFunctionsTestExt 847.2 ms ✓ InverseFunctions → InverseFunctionsDatesExt 892.2 ms ✓ ScopedValues 808.7 ms ✓ CompositionsBase → CompositionsBaseInverseFunctionsExt 1442.1 ms ✓ LogExpFunctions 1622.9 ms ✓ FillArrays → FillArraysSparseArraysExt 950.7 ms ✓ FillArrays → FillArraysStatisticsExt 1063.8 ms ✓ AliasTables 883.0 ms ✓ StatisticalTraits 781.6 ms ✓ TableTraits 1485.3 ms ✓ FillArrays → FillArraysPDMatsExt 819.5 ms ✓ Compat → CompatLinearAlgebraExt 1169.8 ms ✓ JLLWrappers 1543.8 ms ✓ LearnAPI 927.4 ms ✓ PrecompileTools 2849.7 ms ✓ ColorTypes 2450.3 ms ✓ DecisionTree 3002.4 ms ✓ Setfield 11162.3 ms ✓ JuliaVariables 1296.9 ms ✓ Distances → DistancesSparseArraysExt 1255.1 ms ✓ GPUArraysCore 1355.8 ms ✓ Adapt → AdaptSparseArraysExt 1185.4 ms ✓ SortingAlgorithms 2269.7 ms ✓ QuadGK 5110.0 ms ✓ Accessors 966.7 ms ✓ LogExpFunctions → LogExpFunctionsInverseFunctionsExt 3375.6 ms ✓ MLJModelInterface 1752.7 ms ✓ Tables 3092.9 ms ✓ ChainRulesCore 1262.0 ms ✓ ContextVariablesX 3162.4 ms ✓ CategoricalArrays 1581.7 ms ✓ Rmath_jll 1560.9 ms ✓ OpenSpecFun_jll 1526.9 ms ✓ Arpack_jll 2722.7 ms ✓ RecipesBase 13250.7 ms ✓ StaticArrays 3025.6 ms ✓ StringManipulation 1256.5 ms ✓ ColorTypes → StyledStringsExt 3348.9 ms ✓ SplittablesBase 4428.5 ms ✓ StatsBase 2271.9 ms ✓ Accessors → LinearAlgebraExt 1820.5 ms ✓ Accessors → TestExt 3022.5 ms ✓ MLCore 1346.4 ms ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt 907.4 ms ✓ Distances → DistancesChainRulesCoreExt 3410.2 ms ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt 1105.1 ms ✓ FLoopsBase 1909.1 ms ✓ Rmath 5230.6 ms ✓ SpecialFunctions 1713.4 ms ✓ Arpack 1694.8 ms ✓ CategoricalArrays → CategoricalArraysRecipesBaseExt 1573.2 ms ✓ StaticArrays → StaticArraysStatisticsExt 1756.4 ms ✓ StaticArrays → StaticArraysChainRulesCoreExt 1652.2 ms ✓ ConstructionBase → ConstructionBaseStaticArraysExt 1626.4 ms ✓ Adapt → AdaptStaticArraysExt 1954.7 ms ✓ Accessors → StaticArraysExt 34387.1 ms ✓ PrettyTables 1990.9 ms ✓ LatinHypercubeSampling 1389.6 ms ✓ PDMats → StatsBaseExt 1533.9 ms ✓ CategoricalArrays → CategoricalArraysStatsBaseExt 1644.2 ms ✓ BangBang 3775.9 ms ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt 2035.1 ms ✓ 
HypergeometricFunctions 7671.6 ms ✓ NearestNeighbors 7966.3 ms ✓ KernelAbstractions 1096.1 ms ✓ BangBang → BangBangTablesExt 1163.5 ms ✓ BangBang → BangBangChainRulesCoreExt 1690.2 ms ✓ BangBang → BangBangStaticArraysExt 2558.5 ms ✓ MicroCollections 2838.2 ms ✓ StatsFuns 1959.7 ms ✓ KernelAbstractions → LinearAlgebraExt 2864.4 ms ✓ KernelAbstractions → SparseArraysExt 7615.7 ms ✓ Transducers 1045.3 ms ✓ StatsFuns → StatsFunsInverseFunctionsExt 3557.5 ms ✓ StatsFuns → StatsFunsChainRulesCoreExt 9385.7 ms ✓ Distributions 12728.6 ms ✓ NNlib 2114.6 ms ✓ Transducers → TransducersAdaptExt 14227.0 ms ✓ FLoops 2973.1 ms ✓ Distributions → DistributionsChainRulesCoreExt 3414.3 ms ✓ Distributions → DistributionsTestExt 6193.7 ms ✓ ScientificTypes 3676.5 ms ✓ MultivariateStats 2596.5 ms ✓ NNlib → NNlibSpecialFunctionsExt 21502.7 ms ✓ MLUtils 7724.7 ms ✓ CategoricalDistributions 17864.7 ms ✓ StatisticalMeasuresBase 18109.7 ms ✓ MLJBase 41572.8 ms ✓ StatisticalMeasures 13289.1 ms ✓ MLJTuning 10178.9 ms ✓ StatisticalMeasures → ScientificTypesExt 10244.1 ms ✓ MLJBase → DefaultMeasuresExt 143 dependencies successfully precompiled in 511 seconds. 21 already precompiled. Precompilation completed after 526.49s ################################################################################ # Testing # Testing MLJTuning Status `/tmp/jl_vpC22H/Project.toml` [324d7699] CategoricalArrays v1.0.2 [ed09eef8] ComputationalResources v0.3.2 [7806a523] DecisionTree v0.12.4 [b4f34e82] Distances v0.10.12 [31c24e10] Distributions v0.25.123 [a5e1c1ea] LatinHypercubeSampling v1.9.0 [a7f614a8] MLJBase v1.12.1 [e80e1ace] MLJModelInterface v1.12.1 [03970b2e] MLJTuning v0.8.9 [6f286f6a] MultivariateStats v0.10.3 [b8a86587] NearestNeighbors v0.4.26 [92933f4c] ProgressMeter v1.11.0 [3cdcf5f2] RecipesBase v1.3.4 [321657f4] ScientificTypes v3.1.2 [860ef19b] StableRNGs v1.0.4 [a19d573c] StatisticalMeasures v0.3.3 [c062fc1d] StatisticalMeasuresBase v0.1.3 [10745b16] Statistics v1.11.1 [2913bbd2] StatsBase v0.34.10 [bd369af6] Tables v1.12.1 [8ba89e20] Distributed v1.11.0 [37e2e46d] LinearAlgebra v1.13.0 [9a3f8284] Random v1.11.0 [9e88b42a] Serialization v1.11.0 [8dfed614] Test v1.11.0 Status `/tmp/jl_vpC22H/Manifest.toml` [1520ce14] AbstractTrees v0.4.5 [7d9f7c33] Accessors v0.1.43 [79e6a3ab] Adapt v4.4.0 [66dad0bd] AliasTables v1.1.3 [dce04be8] ArgCheck v2.5.0 [7d9fca2a] Arpack v0.5.4 [a9b6321e] Atomix v1.1.2 [198e06fe] BangBang v0.4.6 [9718e550] Baselet v0.1.1 [324d7699] CategoricalArrays v1.0.2 [af321ab8] CategoricalDistributions v0.2.1 [d360d2e6] ChainRulesCore v1.26.0 [3da002f7] ColorTypes v0.12.1 [34da2185] Compat v4.18.1 [a33af91c] CompositionsBase v0.1.2 [ed09eef8] ComputationalResources v0.3.2 [187b0558] ConstructionBase v1.6.0 [6add18c4] ContextVariablesX v0.1.3 [a8cc5b0e] Crayons v4.1.1 [9a962f9c] DataAPI v1.16.0 [864edb3b] DataStructures v0.19.3 [e2d170a0] DataValueInterfaces v1.0.0 [7806a523] DecisionTree v0.12.4 [244e2a9f] DefineSingletons v0.1.2 [8bb1440f] DelimitedFiles v1.9.1 [b4f34e82] Distances v0.10.12 [31c24e10] Distributions v0.25.123 [ffbed154] DocStringExtensions v0.9.5 [cc61a311] FLoops v0.2.2 [b9860ae5] FLoopsBase v0.1.1 [1a297f60] FillArrays v1.15.0 [53c48c17] FixedPointNumbers v0.8.5 [46192b85] GPUArraysCore v0.2.0 [076d061b] HashArrayMappedTries v0.2.0 [34004b35] HypergeometricFunctions v0.3.28 [22cec73e] InitialValues v0.3.1 [3587e190] InverseFunctions v0.1.17 [41ab1584] InvertedIndices v1.3.1 [92d709cd] IrrationalConstants v0.2.6 [82899510] IteratorInterfaceExtensions v1.0.0 [692b3bcd] JLLWrappers v1.7.1 
[b14d175d] JuliaVariables v0.2.4 [63c18a36] KernelAbstractions v0.9.39 [b964fa9f] LaTeXStrings v1.4.0 [a5e1c1ea] LatinHypercubeSampling v1.9.0 [92ad9a40] LearnAPI v2.0.1 [2ab3a3ac] LogExpFunctions v0.3.29 [c2834f40] MLCore v1.0.0 [a7f614a8] MLJBase v1.12.1 [e80e1ace] MLJModelInterface v1.12.1 [03970b2e] MLJTuning v0.8.9 [d8e11817] MLStyle v0.4.17 [f1d291b0] MLUtils v0.4.8 [1914dd2f] MacroTools v0.5.16 [128add7d] MicroCollections v0.2.0 [e1d29d7a] Missings v1.2.0 [6f286f6a] MultivariateStats v0.10.3 [872c559c] NNlib v0.9.33 [71a1bf82] NameResolution v0.1.5 [b8a86587] NearestNeighbors v0.4.26 [bac558e1] OrderedCollections v1.8.1 [90014a1f] PDMats v0.11.37 [d96e819e] Parameters v0.12.3 [aea7be01] PrecompileTools v1.3.3 [21216c6a] Preferences v1.5.1 [8162dcfd] PrettyPrint v0.2.0 [08abe8d2] PrettyTables v3.1.2 [92933f4c] ProgressMeter v1.11.0 [43287f4e] PtrArrays v1.3.0 [1fd47b50] QuadGK v2.11.2 [3cdcf5f2] RecipesBase v1.3.4 [189a3867] Reexport v1.2.2 [ae029012] Requires v1.3.1 [79098fc4] Rmath v0.9.0 [321657f4] ScientificTypes v3.1.2 [30f210dd] ScientificTypesBase v3.0.0 [6e75b9c4] ScikitLearnBase v0.5.0 [7e506255] ScopedValues v1.5.0 [efcf1570] Setfield v1.1.2 [605ecd9f] ShowCases v0.1.0 [699a6c99] SimpleTraits v0.9.5 [a2af1166] SortingAlgorithms v1.2.2 [276daf66] SpecialFunctions v2.6.1 [171d559e] SplittablesBase v0.1.15 [860ef19b] StableRNGs v1.0.4 [90137ffa] StaticArrays v1.9.16 [1e83bf80] StaticArraysCore v1.4.4 [a19d573c] StatisticalMeasures v0.3.3 [c062fc1d] StatisticalMeasuresBase v0.1.3 [64bff920] StatisticalTraits v3.5.0 [10745b16] Statistics v1.11.1 [82ae8749] StatsAPI v1.8.0 [2913bbd2] StatsBase v0.34.10 [4c63d2b9] StatsFuns v1.5.2 [892a3eda] StringManipulation v0.4.2 [3783bdb8] TableTraits v1.0.1 [bd369af6] Tables v1.12.1 [28d57a85] Transducers v0.4.85 [3a884ed6] UnPack v1.0.2 [013be700] UnsafeAtomics v0.3.0 ⌅ [68821587] Arpack_jll v3.5.2+0 [efe28fd5] OpenSpecFun_jll v0.5.6+0 [f50d1b31] Rmath_jll v0.5.1+0 [56f22d72] Artifacts v1.11.0 [2a0f44e3] Base64 v1.11.0 [ade2ca70] Dates v1.11.0 [8ba89e20] Distributed v1.11.0 [7b1f6079] FileWatching v1.11.0 [9fa8497b] Future v1.11.0 [b77e0a4c] InteractiveUtils v1.11.0 [ac6e5ff7] JuliaSyntaxHighlighting v1.13.0 [8f399da3] Libdl v1.11.0 [37e2e46d] LinearAlgebra v1.13.0 [56ddb016] Logging v1.11.0 [d6f4376e] Markdown v1.11.0 [a63ad114] Mmap v1.11.0 [de0858da] Printf v1.11.0 [3fa0cd96] REPL v1.11.0 [9a3f8284] Random v1.11.0 [ea8e919c] SHA v1.0.0 [9e88b42a] Serialization v1.11.0 [6462fe0b] Sockets v1.11.0 [2f01184e] SparseArrays v1.13.0 [f489334b] StyledStrings v1.13.0 [4607b0f0] SuiteSparse [fa267f1f] TOML v1.0.3 [8dfed614] Test v1.11.0 [cf7118a7] UUIDs v1.11.0 [4ec0a83e] Unicode v1.11.0 [e66e0078] CompilerSupportLibraries_jll v1.3.0+1 [4536629a] OpenBLAS_jll v0.3.29+0 [05823500] OpenLibm_jll v0.8.7+0 [bea87d4a] SuiteSparse_jll v7.10.1+0 [8e850b90] libblastrampoline_jll v5.15.0+0 Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. Testing Running tests... [ Info: nworkers: 2 [ Info: nthreads: 1 Loading some models for testing... Test Summary: | Pass Total Time utilities | 11 11 4.3s Test Summary: | Pass Total Time selection heuristics | 7 7 2.6s Testing progressmeter basic fit with CPU1{Nothing}(nothing) and CPU1 resampling [ Info: Attempting to evaluate 12 models. 
measurement: 1.903643502106285
measurement: 1.8313682528233075
measurement: 1.725824054837585
measurement: 1.5876920544899495
measurement: 1.461278306396784
measurement: 1.3224538242874866
measurement: 1.2736828159099107
measurement: 1.1333245517941333
measurement: 1.050032852142519
measurement: 0.9515984846885978
measurement: 0.9657853181057472
measurement: 0.979226007963803
Evaluating over 12 metamodels: 100%[=========================] Time: 0:00:03
[ Info: Training machine(KNNRegressor(K = 4, …), …).
Testing progressmeter basic fit with CPUProcesses{Nothing}(nothing) and CPU1 resampling
[ Info: Attempting to evaluate 12 models.
Evaluating over 12 metamodels: 100%[=========================] Time: 0:01:11
Testing progressmeter basic fit with CPUThreads{Nothing}(nothing) and CPU1 resampling
[ Info: Attempting to evaluate 12 models.
Evaluating over 12 metamodels: 100%[=========================] Time: 0:00:00
Testing progressmeter basic fit with CPU1{Nothing}(nothing) and CPUThreads resampling
[ Info: Attempting to evaluate 12 models.
Evaluating over 12 metamodels: 100%[=========================] Time: 0:00:00
Testing progressmeter basic fit with CPUProcesses{Nothing}(nothing) and CPUThreads resampling
┌ Info: The combination acceleration=CPUThreads{Nothing}(nothing) and acceleration_resampling=CPUProcesses{Nothing}(nothing) isn't supported.
└ Resetting to `acceleration = CPUProcesses()` and `acceleration_resampling = CPUThreads()`.
[ Info: Attempting to evaluate 12 models.
Evaluating over 12 metamodels: 100%[=========================] Time: 0:00:02
Testing progressmeter basic fit with CPUThreads{Nothing}(nothing) and CPUThreads resampling
[ Info: Attempting to evaluate 12 models.
Evaluating over 12 metamodels: 100%[=========================] Time: 0:00:00
Testing progressmeter basic fit with CPU1{Nothing}(nothing) and CPUProcesses resampling
[ Info: Attempting to evaluate 12 models.
Evaluating over 12 metamodels: 100%[=========================] Time: 0:00:00
Testing progressmeter basic fit with CPUProcesses{Nothing}(nothing) and CPUProcesses resampling
[ Info: The combination acceleration=CPUProcesses{Nothing}(nothing) and acceleration_resampling=CPUProcesses{Nothing}(nothing) is not generally optimal. You may want to consider setting `acceleration = CPUProcesses()` and `acceleration_resampling = CPUThreads()`.
[ Info: Attempting to evaluate 12 models.
Evaluating over 12 metamodels: 100%[=========================] Time: 0:00:07
Testing progressmeter basic fit with CPUThreads{Nothing}(nothing) and CPUProcesses resampling
[ Info: Attempting to evaluate 12 models.
Evaluating over 12 metamodels: 100%[=========================] Time: 0:00:00
Evaluating over 2 metamodels: 100%[=========================] Time: 0:00:00
Evaluating over 2 metamodels: 100%[=========================] Time: 0:00:00
Evaluating over 2 metamodels: 100%[=========================] Time: 0:00:00
[ Info: No measure specified. Setting measure=LogLoss(tol = 2.22045e-16).
Test Summary:   | Pass  Total     Time
tuned_models.jl |  119    119  6m49.7s
Test Summary: | Pass  Total  Time
range_methods |   33     33   9.8s
[ Info: Training machine(ProbabilisticTunedModel(model = KNNClassifier(K = 5, …), …), …).
[ Info: Attempting to evaluate 12 models.
Evaluating over 12 metamodels: 100%[=========================] Time: 0:00:04
Evaluating over 3 metamodels: 100%[=========================] Time: 0:00:00
Evaluating over 7 metamodels: 100%[=========================] Time: 0:00:00
Test Summary: | Pass  Total      Time
grid          |   36     36  1m16.2s
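The grid testset above exercises MLJTuning's basic workflow: wrap a model in a TunedModel, give it a hyperparameter range and a Grid strategy, bind it to data with machine, and call fit!. The following is a minimal illustrative sketch only, not code from the test suite; ToyRegressor is a stand-in model invented here (the KNNRegressor, KNNClassifier and DummyModel seen in the log are models the tests load themselves), and rms is taken from StatisticalMeasures.

    using MLJBase, MLJTuning, StatisticalMeasures, Statistics
    import MLJModelInterface as MMI

    # Stand-in deterministic regressor with one numeric hyperparameter `lambda`;
    # it predicts `lambda * mean(y)` for every row, which is enough to tune over.
    mutable struct ToyRegressor <: MMI.Deterministic
        lambda::Float64
    end
    ToyRegressor(; lambda=1.0) = ToyRegressor(lambda)
    MMI.fit(::ToyRegressor, verbosity, X, y) = (mean(y), nothing, NamedTuple())
    MMI.predict(model::ToyRegressor, fitresult, Xnew) =
        fill(model.lambda * fitresult, MMI.nrows(Xnew))

    X, y = make_regression(100, 3)                            # synthetic table + continuous target
    r = range(ToyRegressor(), :lambda, lower=0.1, upper=2.0)  # bounded numeric range
    tuned = TunedModel(model=ToyRegressor(),
                       tuning=Grid(resolution=12),            # 1 range x resolution 12 => 12 candidates
                       resampling=CV(nfolds=3),
                       range=r,
                       measure=rms)
    mach = machine(tuned, X, y)
    fit!(mach)                                                # logs an "Attempting to evaluate ..." message
    fitted_params(mach).best_model                            # best hyperparameter setting found

With one bounded numeric range and resolution=12, Grid generates 12 candidate models, the same count as the 12-model runs in this log, though the tests' own configuration is not shown here.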
[ Info: Training machine(DeterministicTunedModel(model = DummyModel(lambda = 1, …), …), …).
[ Info: Attempting to evaluate 1000 models.
Evaluating over 1000 metamodels:   0%[>                        ]  ETA: N/A
Evaluating over 1000 metamodels:  63%[===============>         ]  ETA: 0:00:00
metamodels: 63%[===============> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 63%[===============> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 63%[===============> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 63%[===============> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 63%[===============> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 63%[===============> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 64%[===============> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 64%[===============> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 64%[===============> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 64%[===============> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 64%[===============> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 64%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 64%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 64%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 64%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 64%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 65%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 65%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 65%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 65%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 65%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 65%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 65%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 65%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 66%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 66%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 66%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 66%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 66%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 66%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 66%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 66%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 66%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 66%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 67%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 67%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 67%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 67%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 67%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 67%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 67%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 67%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 67%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 68%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 68%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 68%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 68%[================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 68%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 68%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 68%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 68%[=================> ] ETA: 0:00:00 Evaluating 
over 1000 metamodels: 68%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 69%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 69%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 69%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 69%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 69%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 69%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 70%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 70%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 70%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 70%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 70%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 70%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 70%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 70%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 70%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 71%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 71%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 71%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 71%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 71%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 71%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 71%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 71%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 72%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 72%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 72%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 72%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 72%[=================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 72%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 72%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 72%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 72%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 72%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 72%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 73%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 73%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 73%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 73%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 73%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 73%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 73%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 73%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 74%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 74%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 74%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 74%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 74%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 74%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 74%[==================> ] ETA: 
0:00:00 Evaluating over 1000 metamodels: 74%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 74%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 75%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 75%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 75%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 75%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 75%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 75%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 75%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 75%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 75%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 76%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 76%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 76%[==================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 76%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 76%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 76%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 76%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 76%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 76%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 77%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 77%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 77%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 77%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 77%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 78%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 78%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 78%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 78%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 78%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 78%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 78%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 78%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 78%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 79%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 79%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 79%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 79%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 79%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 79%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 79%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 79%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 80%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 80%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 80%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 80%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 80%[===================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 80%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 
80%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 80%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 80%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 80%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 80%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 81%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 81%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 81%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 81%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 81%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 81%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 81%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 82%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 82%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 82%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 82%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 82%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 82%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 82%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 82%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 82%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 83%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 83%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 83%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 83%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 83%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 83%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 83%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 83%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 83%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 84%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 84%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 84%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 84%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 84%[====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 84%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 84%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 84%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 84%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 84%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 84%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 85%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 85%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 85%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 85%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 85%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 85%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 85%[=====================> ] ETA: 0:00:00 Evaluating over 1000 
metamodels: 85%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 85%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 86%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 86%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 86%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 86%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 86%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 86%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 86%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 86%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 86%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 86%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 86%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 87%[=====================> ] ETA: 0:00:00 Evaluating over 1000 metamodels: 100%[=========================] Time: 0:00:01 Test Summary: | Pass Total Time random search | 19 19 13.6s ┌ Info: Only 19 (of 100) models evaluated. └ Model supply exhausted. Test Summary: | Pass Total Time Latin hypercube | 28 28 35.9s Test Summary: | Pass Total Time Explicit | 17 17 1m07.3s Testing progressmeter rngs option with CPU1{Nothing}(nothing) and CPU1 grid Evaluating over 30 metamodels: 0%[> ] ETA: N/A Evaluating over 30 metamodels: 3%[> ] ETA: 0:01:40 Evaluating over 30 metamodels: 7%[=> ] ETA: 0:01:03 Evaluating over 30 metamodels: 10%[==> ] ETA: 0:00:40 Evaluating over 30 metamodels: 13%[===> ] ETA: 0:00:29 Evaluating over 30 metamodels: 17%[====> ] ETA: 0:00:22 Evaluating over 30 metamodels: 20%[=====> ] ETA: 0:00:18 Evaluating over 30 metamodels: 23%[=====> ] ETA: 0:00:15 Evaluating over 30 metamodels: 27%[======> ] ETA: 0:00:12 Evaluating over 30 metamodels: 30%[=======> ] ETA: 0:00:10 Evaluating over 30 metamodels: 33%[========> ] ETA: 0:00:09 Evaluating over 30 metamodels: 37%[=========> ] ETA: 0:00:08 Evaluating over 30 metamodels: 40%[==========> ] ETA: 0:00:07 Evaluating over 30 metamodels: 43%[==========> ] ETA: 0:00:06 Evaluating over 30 metamodels: 47%[===========> ] ETA: 0:00:05 Evaluating over 30 metamodels: 50%[============> ] ETA: 0:00:04 Evaluating over 30 metamodels: 53%[=============> ] ETA: 0:00:04 Evaluating over 30 metamodels: 57%[==============> ] ETA: 0:00:03 Evaluating over 30 metamodels: 60%[===============> ] ETA: 0:00:03 Evaluating over 30 metamodels: 63%[===============> ] ETA: 0:00:03 Evaluating over 30 metamodels: 67%[================> ] ETA: 0:00:02 Evaluating over 30 metamodels: 70%[=================> ] ETA: 0:00:02 Evaluating over 30 metamodels: 73%[==================> ] ETA: 0:00:02 Evaluating over 30 metamodels: 77%[===================> ] ETA: 0:00:01 Evaluating over 30 metamodels: 80%[====================> ] ETA: 0:00:01 Evaluating over 30 metamodels: 83%[====================> ] ETA: 0:00:01 Evaluating over 30 metamodels: 87%[=====================> ] ETA: 0:00:01 Evaluating over 30 metamodels: 90%[======================> ] ETA: 0:00:00 Evaluating over 30 metamodels: 93%[=======================> ] ETA: 0:00:00 Evaluating over 30 metamodels: 97%[========================>] ETA: 0:00:00 Evaluating over 30 metamodels: 100%[=========================] Time: 0:00:04 [ Info: No measure specified. Setting measure=LPLoss(p = 2). 
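Editor's note: the repeated "[ Info: No measure specified. Setting measure=LPLoss(p = 2)." notices are MLJ falling back to a default loss for deterministic regressors when no measure is given. For reference, a minimal sketch of passing a measure explicitly to TunedModel so this fallback never triggers. Nothing here is taken from the test suite: ToyRidge, its lambda hyperparameter, the synthetic data and the search budget are invented for illustration, and the StatisticalMeasures import is assumed to be available alongside MLJBase.

using LinearAlgebra
using MLJBase, MLJTuning
using StatisticalMeasures   # provides LPLoss (assumed available in this environment)

# Toy deterministic regressor standing in for the test suite's FooBarRegressor;
# name, field and implementation are illustrative only.
mutable struct ToyRidge <: Deterministic
    lambda::Float64
end
ToyRidge(; lambda=0.1) = ToyRidge(lambda)

function MLJBase.fit(model::ToyRidge, verbosity, X, y)
    x = MLJBase.matrix(X)
    coefficients = (x'x + model.lambda * I) \ (x'y)   # ridge solution
    return coefficients, nothing, nothing
end
MLJBase.predict(::ToyRidge, coefficients, Xnew) = MLJBase.matrix(Xnew) * coefficients

X, y = make_regression(100, 3)   # synthetic regression data
r = range(ToyRidge(), :lambda, lower=0.01, upper=10.0, scale=:log)
self_tuning = TunedModel(model=ToyRidge(),
                         tuning=RandomSearch(),
                         range=r,
                         resampling=CV(nfolds=3),
                         n=25,
                         measure=LPLoss(p=2))   # explicit measure: no Info fallback
mach = machine(self_tuning, X, y)
fit!(mach, verbosity=0)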
Evaluating Learning curve with 3 rngs: 100%[==================] Time: 0:00:02
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Testing progressmeter rngs option with CPUProcesses{Nothing}(nothing) and CPU1 grid
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: Training machine(DeterministicTunedModel(model = DeterministicEnsembleModel(atom = FooBarRegressor(lambda = 0.0), …), …), …).
[ Info: Attempting to evaluate 30 models.
Evaluating over 30 metamodels: 100%[=========================] Time: 0:00:00
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Evaluating Learning curve with 3 rngs: 100%[==================] Time: 0:00:43
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Testing progressmeter rngs option with CPUThreads{Nothing}(nothing) and CPU1 grid
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: Training machine(DeterministicTunedModel(model = DeterministicEnsembleModel(atom = FooBarRegressor(lambda = 0.0), …), …), …).
[ Info: Attempting to evaluate 30 models.
Evaluating over 30 metamodels: 100%[=========================] Time: 0:00:00
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Evaluating Learning curve with 3 rngs: 100%[==================] Time: 0:00:00
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Testing progressmeter rngs option with CPU1{Nothing}(nothing) and CPUThreads grid
Evaluating over 30 metamodels: 100%[=========================] Time: 0:00:00
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Evaluating Learning curve with 3 rngs: 100%[==================] Time: 0:00:00
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Testing progressmeter rngs option with CPUProcesses{Nothing}(nothing) and CPUThreads grid
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: Training machine(DeterministicTunedModel(model = DeterministicEnsembleModel(atom = FooBarRegressor(lambda = 0.0), …), …), …).
[ Info: Attempting to evaluate 30 models.
Evaluating over 30 metamodels: 100%[=========================] Time: 0:00:00
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Evaluating Learning curve with 3 rngs: 100%[==================] Time: 0:00:01
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Testing progressmeter rngs option with CPUThreads{Nothing}(nothing) and CPUThreads grid
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: Training machine(DeterministicTunedModel(model = DeterministicEnsembleModel(atom = FooBarRegressor(lambda = 0.0), …), …), …).
[ Info: Attempting to evaluate 30 models.
Evaluating over 30 metamodels: 100%[=========================] Time: 0:00:00
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Evaluating Learning curve with 3 rngs: 100%[==================] Time: 0:00:00
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Testing progressmeter rngs option with CPU1{Nothing}(nothing) and CPUProcesses grid
Evaluating over 30 metamodels: 100%[=========================] Time: 0:00:03
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Evaluating Learning curve with 3 rngs: 100%[==================] Time: 0:00:01
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Testing progressmeter rngs option with CPUProcesses{Nothing}(nothing) and CPUProcesses grid
┌ Warning: The combination acceleration=CPUProcesses{Nothing}(nothing) and acceleration_grid=CPUProcesses{Nothing}(nothing) is not generally optimal. You may want to consider setting `acceleration = CPUProcesses()` and `acceleration_grid = CPUThreads()`.
└ @ MLJTuning ~/.julia/packages/MLJTuning/xiLEY/src/learning_curves.jl:137
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: Training machine(DeterministicTunedModel(model = DeterministicEnsembleModel(atom = FooBarRegressor(lambda = 0.0), …), …), …).
[ Info: Attempting to evaluate 30 models.
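Editor's note: the warning above recommends acceleration = CPUProcesses() together with acceleration_grid = CPUThreads() for learning curves. A hedged sketch of passing those keywords to learning_curve follows, reusing the hypothetical ToyRidge, X, y and r from the earlier sketch; the keyword names are taken from the warning text and the learning_curve docstring, and defaults may differ across MLJTuning versions.

using Distributed
addprocs(2)                           # spawn workers so CPUProcesses() has somewhere to run
@everywhere using MLJBase, MLJTuning  # packages must be loaded on the workers too
# (in a real session the ToyRidge definition above would also need @everywhere)

curve = learning_curve(machine(ToyRidge(), X, y);
                       range=r,
                       resampling=CV(nfolds=3),
                       measure=LPLoss(p=2),
                       acceleration=CPUProcesses(),     # distribute resampling across workers
                       acceleration_grid=CPUThreads())  # multithread over the lambda grid

curve.parameter_values   # the lambda values tried
curve.measurements       # out-of-sample LPLoss at each value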
Evaluating over 30 metamodels: 100%[=========================] Time: 0:00:00
┌ Warning: The combination acceleration=CPUProcesses{Nothing}(nothing) and acceleration_grid=CPUProcesses{Nothing}(nothing) is not generally optimal. You may want to consider setting `acceleration = CPUProcesses()` and `acceleration_grid = CPUThreads()`.
└ @ MLJTuning ~/.julia/packages/MLJTuning/xiLEY/src/learning_curves.jl:137
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Evaluating Learning curve with 3 rngs: 100%[==================] Time: 0:00:03
┌ Warning: The combination acceleration=CPUProcesses{Nothing}(nothing) and acceleration_grid=CPUProcesses{Nothing}(nothing) is not generally optimal. You may want to consider setting `acceleration = CPUProcesses()` and `acceleration_grid = CPUThreads()`.
└ @ MLJTuning ~/.julia/packages/MLJTuning/xiLEY/src/learning_curves.jl:137
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
┌ Warning: The combination acceleration=CPUProcesses{Nothing}(nothing) and acceleration_grid=CPUProcesses{Nothing}(nothing) is not generally optimal. You may want to consider setting `acceleration = CPUProcesses()` and `acceleration_grid = CPUThreads()`.
└ @ MLJTuning ~/.julia/packages/MLJTuning/xiLEY/src/learning_curves.jl:137
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
┌ Warning: The combination acceleration=CPUProcesses{Nothing}(nothing) and acceleration_grid=CPUProcesses{Nothing}(nothing) is not generally optimal. You may want to consider setting `acceleration = CPUProcesses()` and `acceleration_grid = CPUThreads()`.
└ @ MLJTuning ~/.julia/packages/MLJTuning/xiLEY/src/learning_curves.jl:137
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Testing progressmeter rngs option with CPUThreads{Nothing}(nothing) and CPUProcesses grid
┌ Warning: The combination acceleration=CPUThreads{Nothing}(nothing) and acceleration_grid=CPUProcesses{Nothing}(nothing) isn't supported.
│ Resetting to `acceleration = CPUProcesses()` and `acceleration_grid = CPUThreads()`.
└ @ MLJTuning ~/.julia/packages/MLJTuning/xiLEY/src/learning_curves.jl:147
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
[ Info: Training machine(DeterministicTunedModel(model = DeterministicEnsembleModel(atom = FooBarRegressor(lambda = 0.0), …), …), …).
[ Info: Attempting to evaluate 30 models.
Evaluating over 30 metamodels: 100%[=========================] Time: 0:00:00
┌ Warning: The combination acceleration=CPUThreads{Nothing}(nothing) and acceleration_grid=CPUProcesses{Nothing}(nothing) isn't supported.
│ Resetting to `acceleration = CPUProcesses()` and `acceleration_grid = CPUThreads()`.
└ @ MLJTuning ~/.julia/packages/MLJTuning/xiLEY/src/learning_curves.jl:147
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Evaluating Learning curve with 3 rngs: 100%[==================] Time: 0:00:00
┌ Warning: The combination acceleration=CPUThreads{Nothing}(nothing) and acceleration_grid=CPUProcesses{Nothing}(nothing) isn't supported.
│ Resetting to `acceleration = CPUProcesses()` and `acceleration_grid = CPUThreads()`.
└ @ MLJTuning ~/.julia/packages/MLJTuning/xiLEY/src/learning_curves.jl:147
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
┌ Warning: The combination acceleration=CPUThreads{Nothing}(nothing) and acceleration_grid=CPUProcesses{Nothing}(nothing) isn't supported.
│ Resetting to `acceleration = CPUProcesses()` and `acceleration_grid = CPUThreads()`.
└ @ MLJTuning ~/.julia/packages/MLJTuning/xiLEY/src/learning_curves.jl:147
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
┌ Warning: The combination acceleration=CPUThreads{Nothing}(nothing) and acceleration_grid=CPUProcesses{Nothing}(nothing) isn't supported.
│ Resetting to `acceleration = CPUProcesses()` and `acceleration_grid = CPUThreads()`.
└ @ MLJTuning ~/.julia/packages/MLJTuning/xiLEY/src/learning_curves.jl:147
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Test Summary:   | Pass  Total     Time
learning curves |   85     85  1m20.8s
[ Info: No measure specified. Setting measure=LPLoss(p = 2).
Test Summary: | Pass  Total  Time
Serialization |   13     13  8.2s
Test Summary:        | Pass  Total   Time
density estimatation |    3      3  12.2s
Testing MLJTuning tests passed
Testing completed after 1215.23s
PkgEval succeeded after 1800.79s