Package evaluation of CalibrateEmulateSample on Julia 1.13.0-DEV.860 (6ddb3d6410*) started at 2025-07-16T04:08:30.460

################################################################################
# Set-up
#

Installing PkgEval dependencies (TestEnv)...
Set-up completed after 8.54s

################################################################################
# Installation
#

Installing CalibrateEmulateSample...
Resolving package versions...
Installed Conda ────────────────── v1.10.2
Installed PyCall ───────────────── v1.96.4
Installed CalibrateEmulateSample ─ v0.7.0
Updating `~/.julia/environments/v1.13/Project.toml`
[95e48a1f] + CalibrateEmulateSample v0.7.0
Updating `~/.julia/environments/v1.13/Manifest.toml`
[47edcb42] + ADTypes v1.15.0 [14f7f29c] + AMD v0.5.3 [621f4979] + AbstractFFTs v1.5.0 [99985d1d] + AbstractGPs v0.5.24 ⌅ [80f14c24] + AbstractMCMC v4.4.2 [1520ce14] + AbstractTrees v0.4.5 [79e6a3ab] + Adapt v4.3.0 ⌃ [5b7e9947] + AdvancedMH v0.7.5 [66dad0bd] + AliasTables v1.1.3 [dce04be8] + ArgCheck v2.5.0 [7d9fca2a] + Arpack v0.5.4 [4fba245c] + ArrayInterface v7.19.0 [13072b0f] + AxisAlgorithms v1.1.0 [39de3d68] + AxisArrays v0.4.7 ⌅ [198e06fe] + BangBang v0.3.40 [9718e550] + Baselet v0.1.1 [6e4b80f9] + BenchmarkTools v1.6.0 [62783981] + BitTwiddlingConvenienceFunctions v0.1.6 [2a0fbf3d] + CPUSummary v0.2.6 [95e48a1f] + CalibrateEmulateSample v0.7.0 [d360d2e6] + ChainRulesCore v1.25.2 [ae650224] + ChunkSplitters v3.1.2 [fb6a15b2] + CloseOpenIntervals v0.1.13 [523fee87] + CodecBzip2 v0.8.5 [944b1d66] + CodecZlib v0.7.8 [bbf7d656] + CommonSubexpressions v0.3.1 [f70d9fcc] + CommonWorldInvalidations v1.0.0 [34da2185] + Compat v4.17.0 [a33af91c] + CompositionsBase v0.1.2 [8f4d0f93] + Conda v1.10.2 [88cd18e8] + ConsoleProgressMonitor v0.1.2 [187b0558] + ConstructionBase v1.6.0 [f65535da] + Convex v0.16.4 [adafc99b] + CpuId v0.3.1 [a8cc5b0e] + Crayons v4.1.1 [9a962f9c] + DataAPI v1.16.0 [a93c6f00] + DataFrames v1.7.0 [864edb3b] + DataStructures v0.18.22 [e2d170a0] + DataValueInterfaces v1.0.0 [244e2a9f] + DefineSingletons v0.1.2 [163ba53b] + DiffResults v1.1.0 [b552c78f] + DiffRules v1.15.1 [a0c0ee7d] + DifferentiationInterface v0.7.2 [b4f34e82] + Distances v0.10.12 [31c24e10] + Distributions v0.25.120 [ffbed154] + DocStringExtensions v0.9.5 [fdbdab4c] + ElasticArrays v1.2.12 [2904ab23] + ElasticPDMats v0.2.3 ⌃ [aa8a2aa5] + EnsembleKalmanProcesses v2.4.0 [4e289a0a] + EnumX v1.0.5 [c87230d0] + FFMPEG v0.4.2 [7a1cc6ca] + FFTW v1.9.0 ⌅ [442a2c76] + FastGaussQuadrature v0.4.9 [1a297f60] + FillArrays v1.13.0 [6a86dc24] + FiniteDiff v2.27.0 [59287772] + Formatting v0.4.3 ⌅ [f6369f11] + ForwardDiff v0.10.38 [069b7b12] + FunctionWrappers v1.1.3 [d9f16b24] + Functors v0.5.2 [891a1506] + GaussianProcesses v0.12.5 ⌃ [e4b2fa32] + GaussianRandomFields v2.1.6 [3e5b6fbb] + HostCPUFeatures v0.1.17 [615f187c] + IfElse v0.1.1 [22cec73e] + InitialValues v0.3.1 [842dd82b] + InlineStrings v1.4.4 ⌅ [a98d9a8b] + Interpolations v0.15.1 [8197267c] + IntervalSets v0.7.11 [3587e190] + InverseFunctions v0.1.17 [41ab1584] + InvertedIndices v1.3.1 ⌅ [92d709cd] + IrrationalConstants v0.1.1 [c8e1da08] + IterTools v1.10.0 [82899510] + IteratorInterfaceExtensions v1.0.0 [692b3bcd] + JLLWrappers v1.7.0 [682c06a0] + JSON v0.21.4 [0f8b85d8] + JSON3 v1.14.3 [5ab0869b] + KernelDensity v0.6.10 [ec8451be] + KernelFunctions v0.10.65 [40e66cde] + LDLFactorizations v0.10.1 [b964fa9f] + LaTeXStrings v1.4.0 [10f19ff3] + LayoutPointers v0.1.17 [1d6d02ad] + LeftChildRightSiblingTrees v0.2.0 [d3d80556] + LineSearches v7.4.0
[6fdf6af0] + LogDensityProblems v2.1.2 [2ab3a3ac] + LogExpFunctions v0.3.29 [e6f89c97] + LoggingExtras v1.1.0 [bdcacae8] + LoopVectorization v0.12.172 ⌅ [c7f686f2] + MCMCChains v5.7.1 ⌅ [be115224] + MCMCDiagnosticTools v0.2.1 [e80e1ace] + MLJModelInterface v1.11.1 [1914dd2f] + MacroTools v0.5.16 [d125e4d3] + ManualMemory v0.1.8 [b8f27783] + MathOptInterface v1.42.0 ⌅ [128add7d] + MicroCollections v0.1.4 [e1d29d7a] + Missings v1.2.0 [d8a4904e] + MutableArithmetics v1.6.4 [d41bc354] + NLSolversBase v7.10.0 [77ba4419] + NaNMath v1.1.3 [c020b1a1] + NaturalSort v1.0.0 [6fe1bfb0] + OffsetArrays v1.17.0 [429524aa] + Optim v1.13.2 [bac558e1] + OrderedCollections v1.8.1 [90014a1f] + PDMats v0.11.35 [d96e819e] + Parameters v0.12.3 [69de0a69] + Parsers v2.8.3 [1d0040c9] + PolyesterWeave v0.2.2 [2dfb63ee] + PooledArrays v1.4.3 [85a6dd25] + PositiveFactorizations v0.2.4 [aea7be01] + PrecompileTools v1.3.2 [21216c6a] + Preferences v1.4.3 [08abe8d2] + PrettyTables v2.4.0 [49802e3a] + ProgressBars v1.5.1 [33c8b6b6] + ProgressLogging v0.1.5 [92933f4c] + ProgressMeter v1.10.4 [43287f4e] + PtrArrays v1.3.0 [438e738f] + PyCall v1.96.4 [1fd47b50] + QuadGK v2.11.2 [36c3bae2] + RandomFeatures v0.3.4 [b3c3ace0] + RangeArrays v0.3.2 [c84ed2f1] + Ratios v0.4.5 [3cdcf5f2] + RecipesBase v1.3.4 [189a3867] + Reexport v1.2.2 [ae029012] + Requires v1.3.1 [37e2e3b7] + ReverseDiff v1.16.1 ⌅ [79098fc4] + Rmath v0.7.1 [c946c3f1] + SCS v2.1.0 [94e857df] + SIMDTypes v0.1.0 [476501e8] + SLEEFPirates v0.6.43 [30f210dd] + ScientificTypesBase v3.0.0 [3646fa90] + ScikitLearn v0.7.0 [6e75b9c4] + ScikitLearnBase v0.5.0 [91c51154] + SentinelArrays v1.4.8 [efcf1570] + Setfield v1.1.2 [a2af1166] + SortingAlgorithms v1.2.1 [276daf66] + SpecialFunctions v2.5.1 [171d559e] + SplittablesBase v0.1.15 [860ef19b] + StableRNGs v1.0.3 [aedffcd0] + Static v1.2.0 [0d7ed370] + StaticArrayInterface v1.8.0 [90137ffa] + StaticArrays v1.9.13 [1e83bf80] + StaticArraysCore v1.4.3 [64bff920] + StatisticalTraits v3.5.0 [10745b16] + Statistics v1.11.1 [82ae8749] + StatsAPI v1.7.1 ⌅ [2913bbd2] + StatsBase v0.33.21 ⌅ [4c63d2b9] + StatsFuns v0.9.18 [892a3eda] + StringManipulation v0.4.1 [856f2bd8] + StructTypes v1.11.0 [9449cd9e] + TSVD v0.4.4 [3783bdb8] + TableTraits v1.0.1 [bd369af6] + Tables v1.12.1 [62fd8b95] + TensorCore v0.1.1 [5d786b92] + TerminalLoggers v0.1.7 [8290d209] + ThreadingUtilities v0.5.5 [3bb67fe8] + TranscodingStreams v0.11.3 ⌃ [28d57a85] + Transducers v0.4.80 [bc48ee85] + Tullio v0.3.8 [3a884ed6] + UnPack v1.0.2 [3d5dd08c] + VectorizationBase v0.21.71 [81def892] + VersionParsing v1.3.0 [efce3f68] + WoodburyMatrices v1.0.0 [700de1a5] + ZygoteRules v0.2.7 ⌅ [68821587] + Arpack_jll v3.5.1+1 [6e34b625] + Bzip2_jll v1.0.9+0 [83423d85] + Cairo_jll v1.18.5+0 [2e619515] + Expat_jll v2.6.5+0 ⌅ [b22a6f82] + FFMPEG_jll v4.4.4+1 [f5851436] + FFTW_jll v3.3.11+0 [a3f928ae] + Fontconfig_jll v2.16.0+0 [d7e528f0] + FreeType2_jll v2.13.4+0 [559328eb] + FriBidi_jll v1.0.17+0 [b0724c58] + GettextRuntime_jll v0.22.4+0 [7746bdde] + Glib_jll v2.84.3+0 [3b182d85] + Graphite2_jll v1.3.15+0 [2e76f6c2] + HarfBuzz_jll v8.5.1+0 [1d5cc7b8] + IntelOpenMP_jll v2025.0.4+0 [c1c5ebd0] + LAME_jll v3.100.3+0 [1d63c593] + LLVMOpenMP_jll v18.1.8+0 [dd4b983a] + LZO_jll v2.10.3+0 [e9f186c6] + Libffi_jll v3.4.7+0 [94ce4f54] + Libiconv_jll v1.18.0+0 [4b2f31a3] + Libmount_jll v2.41.0+0 [38a345b3] + Libuuid_jll v2.41.0+0 [856f044c] + MKL_jll v2025.0.1+1 [e7412a2a] + Ogg_jll v1.3.6+0 [656ef2d0] + OpenBLAS32_jll v0.3.29+0 [efe28fd5] + OpenSpecFun_jll v0.5.6+0 [91d4177d] + Opus_jll 
v1.5.2+0 ⌅ [30392449] + Pixman_jll v0.44.2+0 ⌅ [f50d1b31] + Rmath_jll v0.4.3+0 [f4f2fc5b] + SCS_jll v3.2.7+0 [4f6342f7] + Xorg_libX11_jll v1.8.12+0 [0c0b7dd1] + Xorg_libXau_jll v1.0.13+0 [a3789734] + Xorg_libXdmcp_jll v1.1.6+0 [1082639a] + Xorg_libXext_jll v1.3.7+0 [ea2f1a96] + Xorg_libXrender_jll v0.9.12+0 [c7cfdc94] + Xorg_libxcb_jll v1.17.1+0 [c5fb5394] + Xorg_xtrans_jll v1.6.0+0 [a4ae2306] + libaom_jll v3.11.0+0 ⌅ [0ac62f75] + libass_jll v0.15.2+0 [f638f0a6] + libfdk_aac_jll v2.0.4+0 [b53b4c65] + libpng_jll v1.6.50+0 [f27f6e37] + libvorbis_jll v1.3.8+0 [1317d2d5] + oneTBB_jll v2022.0.0+0 ⌅ [1270edf5] + x264_jll v2021.5.5+0 ⌅ [dfaa095f] + x265_jll v3.5.0+0 [0dad84c5] + ArgTools v1.1.2 [56f22d72] + Artifacts v1.11.0 [2a0f44e3] + Base64 v1.11.0 [ade2ca70] + Dates v1.11.0 [8ba89e20] + Distributed v1.11.0 [f43a241f] + Downloads v1.7.0 [7b1f6079] + FileWatching v1.11.0 [9fa8497b] + Future v1.11.0 [b77e0a4c] + InteractiveUtils v1.11.0 [ac6e5ff7] + JuliaSyntaxHighlighting v1.12.0 [4af54fe1] + LazyArtifacts v1.11.0 [b27032c2] + LibCURL v0.6.4 [76f85450] + LibGit2 v1.11.0 [8f399da3] + Libdl v1.11.0 [37e2e46d] + LinearAlgebra v1.12.0 [56ddb016] + Logging v1.11.0 [d6f4376e] + Markdown v1.11.0 [a63ad114] + Mmap v1.11.0 [ca575930] + NetworkOptions v1.3.0 [44cfe95a] + Pkg v1.13.0 [de0858da] + Printf v1.11.0 [9abbd945] + Profile v1.11.0 [3fa0cd96] + REPL v1.11.0 [9a3f8284] + Random v1.11.0 [ea8e919c] + SHA v0.7.0 [9e88b42a] + Serialization v1.11.0 [1a1011a3] + SharedArrays v1.11.0 [6462fe0b] + Sockets v1.11.0 [2f01184e] + SparseArrays v1.12.0 [f489334b] + StyledStrings v1.11.0 [4607b0f0] + SuiteSparse [fa267f1f] + TOML v1.0.3 [a4e569a6] + Tar v1.10.0 [8dfed614] + Test v1.11.0 [cf7118a7] + UUIDs v1.11.0 [4ec0a83e] + Unicode v1.11.0 [e66e0078] + CompilerSupportLibraries_jll v1.3.0+1 [deac9b47] + LibCURL_jll v8.14.1+1 [e37daf67] + LibGit2_jll v1.9.1+0 [29816b5a] + LibSSH2_jll v1.11.3+1 [14a3606d] + MozillaCACerts_jll v2025.5.20 [4536629a] + OpenBLAS_jll v0.3.29+0 [05823500] + OpenLibm_jll v0.8.5+0 [458c3c95] + OpenSSL_jll v3.5.1+0 [efcefdf7] + PCRE2_jll v10.45.0+0 [bea87d4a] + SuiteSparse_jll v7.10.1+0 [83775a58] + Zlib_jll v1.3.1+2 [8e850b90] + libblastrampoline_jll v5.13.1+0 [8e850ede] + nghttp2_jll v1.65.0+0 [3f19e933] + p7zip_jll v17.5.0+2
Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated -m`
Building Conda ─────────────────→ `~/.julia/scratchspaces/44cfe95a-1eb2-52ea-b672-e2afdf69b78f/b19db3927f0db4151cb86d073689f2428e524576/build.log`
Building PyCall ────────────────→ `~/.julia/scratchspaces/44cfe95a-1eb2-52ea-b672-e2afdf69b78f/9816a3826b0ebf49ab4926e2b18842ad8b5c8f04/build.log`
Building CalibrateEmulateSample → `~/.julia/scratchspaces/44cfe95a-1eb2-52ea-b672-e2afdf69b78f/f58547feedb27247426c2a1b4c3ba1a881596722/build.log`
Installation completed after 121.81s

################################################################################
# Precompilation
#

Precompiling PkgEval dependencies...
Precompiling package dependencies...
Precompilation completed after 1416.36s

################################################################################
# Testing
#

Testing CalibrateEmulateSample
Status `/tmp/jl_m4er9E/Project.toml`
[99985d1d] AbstractGPs v0.5.24 ⌅ [80f14c24] AbstractMCMC v4.4.2 ⌃ [5b7e9947] AdvancedMH v0.7.5 [95e48a1f] CalibrateEmulateSample v0.7.0 [ae650224] ChunkSplitters v3.1.2 [8f4d0f93] Conda v1.10.2 [31c24e10] Distributions v0.25.120 [ffbed154] DocStringExtensions v0.9.5 ⌃ [aa8a2aa5] EnsembleKalmanProcesses v2.4.0 ⌅ [f6369f11] ForwardDiff v0.10.38 [891a1506] GaussianProcesses v0.12.5 [ec8451be] KernelFunctions v0.10.65 ⌅ [c7f686f2] MCMCChains v5.7.1 [49802e3a] ProgressBars v1.5.1 [438e738f] PyCall v1.96.4 [36c3bae2] RandomFeatures v0.3.4 [37e2e3b7] ReverseDiff v1.16.1 [3646fa90] ScikitLearn v0.7.0 [860ef19b] StableRNGs v1.0.3 [10745b16] Statistics v1.11.1 ⌅ [2913bbd2] StatsBase v0.33.21 [37e2e46d] LinearAlgebra v1.12.0 [44cfe95a] Pkg v1.13.0 [de0858da] Printf v1.11.0 [9a3f8284] Random v1.11.0 [8dfed614] Test v1.11.0
Status `/tmp/jl_m4er9E/Manifest.toml`
[47edcb42] ADTypes v1.15.0 [14f7f29c] AMD v0.5.3 [621f4979] AbstractFFTs v1.5.0 [99985d1d] AbstractGPs v0.5.24 ⌅ [80f14c24] AbstractMCMC v4.4.2 [1520ce14] AbstractTrees v0.4.5 [79e6a3ab] Adapt v4.3.0 ⌃ [5b7e9947] AdvancedMH v0.7.5 [66dad0bd] AliasTables v1.1.3 [dce04be8] ArgCheck v2.5.0 [7d9fca2a] Arpack v0.5.4 [4fba245c] ArrayInterface v7.19.0 [13072b0f] AxisAlgorithms v1.1.0 [39de3d68] AxisArrays v0.4.7 ⌅ [198e06fe] BangBang v0.3.40 [9718e550] Baselet v0.1.1 [6e4b80f9] BenchmarkTools v1.6.0 [62783981] BitTwiddlingConvenienceFunctions v0.1.6 [2a0fbf3d] CPUSummary v0.2.6 [95e48a1f] CalibrateEmulateSample v0.7.0 [d360d2e6] ChainRulesCore v1.25.2 [ae650224] ChunkSplitters v3.1.2 [fb6a15b2] CloseOpenIntervals v0.1.13 [523fee87] CodecBzip2 v0.8.5 [944b1d66] CodecZlib v0.7.8 [bbf7d656] CommonSubexpressions v0.3.1 [f70d9fcc] CommonWorldInvalidations v1.0.0 [34da2185] Compat v4.17.0 [a33af91c] CompositionsBase v0.1.2 [8f4d0f93] Conda v1.10.2 [88cd18e8] ConsoleProgressMonitor v0.1.2 [187b0558] ConstructionBase v1.6.0 [f65535da] Convex v0.16.4 [adafc99b] CpuId v0.3.1 [a8cc5b0e] Crayons v4.1.1 [9a962f9c] DataAPI v1.16.0 [a93c6f00] DataFrames v1.7.0 [864edb3b] DataStructures v0.18.22 [e2d170a0] DataValueInterfaces v1.0.0 [244e2a9f] DefineSingletons v0.1.2 [163ba53b] DiffResults v1.1.0 [b552c78f] DiffRules v1.15.1 [a0c0ee7d] DifferentiationInterface v0.7.2 [b4f34e82] Distances v0.10.12 [31c24e10] Distributions v0.25.120 [ffbed154] DocStringExtensions v0.9.5 [fdbdab4c] ElasticArrays v1.2.12 [2904ab23] ElasticPDMats v0.2.3 ⌃ [aa8a2aa5] EnsembleKalmanProcesses v2.4.0 [4e289a0a] EnumX v1.0.5 [c87230d0] FFMPEG v0.4.2 [7a1cc6ca] FFTW v1.9.0 ⌅ [442a2c76] FastGaussQuadrature v0.4.9 [1a297f60] FillArrays v1.13.0 [6a86dc24] FiniteDiff v2.27.0 [59287772] Formatting v0.4.3 ⌅ [f6369f11] ForwardDiff v0.10.38 [069b7b12] FunctionWrappers v1.1.3 [d9f16b24] Functors v0.5.2 [891a1506] GaussianProcesses v0.12.5 ⌃ [e4b2fa32] GaussianRandomFields v2.1.6 [3e5b6fbb] HostCPUFeatures v0.1.17 [615f187c] IfElse v0.1.1 [22cec73e] InitialValues v0.3.1 [842dd82b] InlineStrings v1.4.4 ⌅ [a98d9a8b] Interpolations v0.15.1 [8197267c] IntervalSets v0.7.11 [3587e190] InverseFunctions v0.1.17 [41ab1584] InvertedIndices v1.3.1 ⌅ [92d709cd] IrrationalConstants v0.1.1 [c8e1da08] IterTools v1.10.0 [82899510] IteratorInterfaceExtensions v1.0.0 [692b3bcd] JLLWrappers v1.7.0 [682c06a0] JSON v0.21.4 [0f8b85d8] JSON3 v1.14.3 [5ab0869b] KernelDensity v0.6.10 [ec8451be] KernelFunctions v0.10.65 [40e66cde]
LDLFactorizations v0.10.1 [b964fa9f] LaTeXStrings v1.4.0 [10f19ff3] LayoutPointers v0.1.17 [1d6d02ad] LeftChildRightSiblingTrees v0.2.0 [d3d80556] LineSearches v7.4.0 [6fdf6af0] LogDensityProblems v2.1.2 [2ab3a3ac] LogExpFunctions v0.3.29 [e6f89c97] LoggingExtras v1.1.0 [bdcacae8] LoopVectorization v0.12.172 ⌅ [c7f686f2] MCMCChains v5.7.1 ⌅ [be115224] MCMCDiagnosticTools v0.2.1 [e80e1ace] MLJModelInterface v1.11.1 [1914dd2f] MacroTools v0.5.16 [d125e4d3] ManualMemory v0.1.8 [b8f27783] MathOptInterface v1.42.0 ⌅ [128add7d] MicroCollections v0.1.4 [e1d29d7a] Missings v1.2.0 [d8a4904e] MutableArithmetics v1.6.4 [d41bc354] NLSolversBase v7.10.0 [77ba4419] NaNMath v1.1.3 [c020b1a1] NaturalSort v1.0.0 [6fe1bfb0] OffsetArrays v1.17.0 [429524aa] Optim v1.13.2 [bac558e1] OrderedCollections v1.8.1 [90014a1f] PDMats v0.11.35 [d96e819e] Parameters v0.12.3 [69de0a69] Parsers v2.8.3 [1d0040c9] PolyesterWeave v0.2.2 [2dfb63ee] PooledArrays v1.4.3 [85a6dd25] PositiveFactorizations v0.2.4 [aea7be01] PrecompileTools v1.3.2 [21216c6a] Preferences v1.4.3 [08abe8d2] PrettyTables v2.4.0 [49802e3a] ProgressBars v1.5.1 [33c8b6b6] ProgressLogging v0.1.5 [92933f4c] ProgressMeter v1.10.4 [43287f4e] PtrArrays v1.3.0 [438e738f] PyCall v1.96.4 [1fd47b50] QuadGK v2.11.2 [36c3bae2] RandomFeatures v0.3.4 [b3c3ace0] RangeArrays v0.3.2 [c84ed2f1] Ratios v0.4.5 [3cdcf5f2] RecipesBase v1.3.4 [189a3867] Reexport v1.2.2 [ae029012] Requires v1.3.1 [37e2e3b7] ReverseDiff v1.16.1 ⌅ [79098fc4] Rmath v0.7.1 [c946c3f1] SCS v2.1.0 [94e857df] SIMDTypes v0.1.0 [476501e8] SLEEFPirates v0.6.43 [30f210dd] ScientificTypesBase v3.0.0 [3646fa90] ScikitLearn v0.7.0 [6e75b9c4] ScikitLearnBase v0.5.0 [91c51154] SentinelArrays v1.4.8 [efcf1570] Setfield v1.1.2 [a2af1166] SortingAlgorithms v1.2.1 [276daf66] SpecialFunctions v2.5.1 [171d559e] SplittablesBase v0.1.15 [860ef19b] StableRNGs v1.0.3 [aedffcd0] Static v1.2.0 [0d7ed370] StaticArrayInterface v1.8.0 [90137ffa] StaticArrays v1.9.13 [1e83bf80] StaticArraysCore v1.4.3 [64bff920] StatisticalTraits v3.5.0 [10745b16] Statistics v1.11.1 [82ae8749] StatsAPI v1.7.1 ⌅ [2913bbd2] StatsBase v0.33.21 ⌅ [4c63d2b9] StatsFuns v0.9.18 [892a3eda] StringManipulation v0.4.1 [856f2bd8] StructTypes v1.11.0 [9449cd9e] TSVD v0.4.4 [3783bdb8] TableTraits v1.0.1 [bd369af6] Tables v1.12.1 [62fd8b95] TensorCore v0.1.1 [5d786b92] TerminalLoggers v0.1.7 [8290d209] ThreadingUtilities v0.5.5 [3bb67fe8] TranscodingStreams v0.11.3 ⌃ [28d57a85] Transducers v0.4.80 [bc48ee85] Tullio v0.3.8 [3a884ed6] UnPack v1.0.2 [3d5dd08c] VectorizationBase v0.21.71 [81def892] VersionParsing v1.3.0 [efce3f68] WoodburyMatrices v1.0.0 [700de1a5] ZygoteRules v0.2.7 ⌅ [68821587] Arpack_jll v3.5.1+1 [6e34b625] Bzip2_jll v1.0.9+0 [83423d85] Cairo_jll v1.18.5+0 [2e619515] Expat_jll v2.6.5+0 ⌅ [b22a6f82] FFMPEG_jll v4.4.4+1 [f5851436] FFTW_jll v3.3.11+0 [a3f928ae] Fontconfig_jll v2.16.0+0 [d7e528f0] FreeType2_jll v2.13.4+0 [559328eb] FriBidi_jll v1.0.17+0 [b0724c58] GettextRuntime_jll v0.22.4+0 [7746bdde] Glib_jll v2.84.3+0 [3b182d85] Graphite2_jll v1.3.15+0 [2e76f6c2] HarfBuzz_jll v8.5.1+0 [1d5cc7b8] IntelOpenMP_jll v2025.0.4+0 [c1c5ebd0] LAME_jll v3.100.3+0 [1d63c593] LLVMOpenMP_jll v18.1.8+0 [dd4b983a] LZO_jll v2.10.3+0 [e9f186c6] Libffi_jll v3.4.7+0 [94ce4f54] Libiconv_jll v1.18.0+0 [4b2f31a3] Libmount_jll v2.41.0+0 [38a345b3] Libuuid_jll v2.41.0+0 [856f044c] MKL_jll v2025.0.1+1 [e7412a2a] Ogg_jll v1.3.6+0 [656ef2d0] OpenBLAS32_jll v0.3.29+0 [efe28fd5] OpenSpecFun_jll v0.5.6+0 [91d4177d] Opus_jll v1.5.2+0 ⌅ [30392449] Pixman_jll v0.44.2+0 ⌅ 
[f50d1b31] Rmath_jll v0.4.3+0 [f4f2fc5b] SCS_jll v3.2.7+0 [4f6342f7] Xorg_libX11_jll v1.8.12+0 [0c0b7dd1] Xorg_libXau_jll v1.0.13+0 [a3789734] Xorg_libXdmcp_jll v1.1.6+0 [1082639a] Xorg_libXext_jll v1.3.7+0 [ea2f1a96] Xorg_libXrender_jll v0.9.12+0 [c7cfdc94] Xorg_libxcb_jll v1.17.1+0 [c5fb5394] Xorg_xtrans_jll v1.6.0+0 [a4ae2306] libaom_jll v3.11.0+0 ⌅ [0ac62f75] libass_jll v0.15.2+0 [f638f0a6] libfdk_aac_jll v2.0.4+0 [b53b4c65] libpng_jll v1.6.50+0 [f27f6e37] libvorbis_jll v1.3.8+0 [1317d2d5] oneTBB_jll v2022.0.0+0 ⌅ [1270edf5] x264_jll v2021.5.5+0 ⌅ [dfaa095f] x265_jll v3.5.0+0 [0dad84c5] ArgTools v1.1.2 [56f22d72] Artifacts v1.11.0 [2a0f44e3] Base64 v1.11.0 [ade2ca70] Dates v1.11.0 [8ba89e20] Distributed v1.11.0 [f43a241f] Downloads v1.7.0 [7b1f6079] FileWatching v1.11.0 [9fa8497b] Future v1.11.0 [b77e0a4c] InteractiveUtils v1.11.0 [ac6e5ff7] JuliaSyntaxHighlighting v1.12.0 [4af54fe1] LazyArtifacts v1.11.0 [b27032c2] LibCURL v0.6.4 [76f85450] LibGit2 v1.11.0 [8f399da3] Libdl v1.11.0 [37e2e46d] LinearAlgebra v1.12.0 [56ddb016] Logging v1.11.0 [d6f4376e] Markdown v1.11.0 [a63ad114] Mmap v1.11.0 [ca575930] NetworkOptions v1.3.0 [44cfe95a] Pkg v1.13.0 [de0858da] Printf v1.11.0 [9abbd945] Profile v1.11.0 [3fa0cd96] REPL v1.11.0 [9a3f8284] Random v1.11.0 [ea8e919c] SHA v0.7.0 [9e88b42a] Serialization v1.11.0 [1a1011a3] SharedArrays v1.11.0 [6462fe0b] Sockets v1.11.0 [2f01184e] SparseArrays v1.12.0 [f489334b] StyledStrings v1.11.0 [4607b0f0] SuiteSparse [fa267f1f] TOML v1.0.3 [a4e569a6] Tar v1.10.0 [8dfed614] Test v1.11.0 [cf7118a7] UUIDs v1.11.0 [4ec0a83e] Unicode v1.11.0 [e66e0078] CompilerSupportLibraries_jll v1.3.0+1 [deac9b47] LibCURL_jll v8.14.1+1 [e37daf67] LibGit2_jll v1.9.1+0 [29816b5a] LibSSH2_jll v1.11.3+1 [14a3606d] MozillaCACerts_jll v2025.5.20 [4536629a] OpenBLAS_jll v0.3.29+0 [05823500] OpenLibm_jll v0.8.5+0 [458c3c95] OpenSSL_jll v3.5.1+0 [efcefdf7] PCRE2_jll v10.45.0+0 [bea87d4a] SuiteSparse_jll v7.10.1+0 [83775a58] Zlib_jll v1.3.1+2 [8e850b90] libblastrampoline_jll v5.13.1+0 [8e850ede] nghttp2_jll v1.65.0+0 [3f19e933] p7zip_jll v17.5.0+2
Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading.
Testing Running tests...
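The environment recorded above can be reproduced outside of PkgEval. A minimal sketch, assuming a registered CalibrateEmulateSample release and a Python/Conda stack that the PyCall build step can use (the package name comes from the listing above; the rest is standard Pkg usage):

    using Pkg

    Pkg.add("CalibrateEmulateSample")     # installs the dependency tree listed above
    Pkg.build("CalibrateEmulateSample")   # reruns the Conda/PyCall build steps if needed
    Pkg.test("CalibrateEmulateSample")    # runs the test suite whose output follows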
Starting tests for Emulator [ Info: fit successful SVD truncated at k: 2/6 [ Info: reducing input dimension from 10 to rank(input_cov) during low rank in normalization Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: 
GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 SVD truncated at k: 2/6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 
-0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] 
created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, 
GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: 
GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6
Completed tests for Emulator, 220 seconds elapsed

Starting tests for GaussianProcess
┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov.
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121
Using user-defined kernel
Type: SEIso{Float64}, Params: [0.0, 0.0]
Learning additive white noise
kernel in GaussianProcess: Type: SumKernel{SEIso{Float64}, Noise{Float64}} Type: SEIso{Float64}, Params: [0.0, 0.0] Type: Noise{Float64}, Params: [0.0]
created GP: 1
┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov.
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121
┌ Warning: GaussianProcess already built. skipping...
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/GaussianProcess.jl:151
optimized hyperparameters of GP: 1
Type: SumKernel{SEIso{Float64}, Noise{Float64}}
Type: SEIso{Float64}, Params: [0.4671112501723779, -0.11637219097092133]
Type: Noise{Float64}, Params: [-2.7795647959619263]
┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov.
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121
┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov.
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121
┌ Warning: implicit `obsdim=2` argument is deprecated and now has to be passed explicitly to specify that each column corresponds to one observation
│   caller = #_#1 at finite_gp_projection.jl:36 [inlined]
└ @ Core ~/.julia/packages/AbstractGPs/lWdNB/src/finite_gp_projection.jl:36
optimised GP: 1
Sum of 2 kernels:
  Squared Exponential Kernel (metric = Distances.Euclidean(0.0))
  - ARD Transform (dims: 1)
  - σ² = 0.7923560881646375
  White Kernel
  - σ² = 0.0038521278620295635
[ Info: AbstractGP already built. Continuing...
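The repeated obs_noise_cov warnings above are emitted when an Emulator is constructed without an observational noise covariance. A minimal sketch of the corresponding setup, assuming the documented CalibrateEmulateSample.Emulators interface (GaussianProcess with the GPJL backend, the obs_noise_cov keyword named in the warning, optimize_hyperparameters!) and EnsembleKalmanProcesses.DataContainers.PairedDataContainer; the toy data and dimensions are hypothetical:

    using CalibrateEmulateSample.Emulators
    using EnsembleKalmanProcesses.DataContainers
    using GaussianProcesses: SEIso

    # hypothetical toy data: 1-d inputs and outputs stored as columns
    n = 20
    x = reshape(collect(range(0.0, 2π; length = n)), 1, n)
    y = reshape(sin.(vec(x)) .+ 0.05 .* randn(n), 1, n)
    Γ = 0.05^2 * ones(1, 1)                      # observational noise covariance

    iopairs = PairedDataContainer(x, y; data_are_columns = true)

    # user-defined SEIso kernel with log-parameters [0.0, 0.0], as printed in the log
    gp = GaussianProcess(GPJL(); kernel = SEIso(0.0, 0.0), noise_learn = true)

    # providing obs_noise_cov avoids the warning repeated above
    emulator = Emulator(gp, iopairs; obs_noise_cov = Γ)
    optimize_hyperparameters!(emulator)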
┌ Warning: implicit `obsdim=2` argument is deprecated and now has to be passed explicitly to specify that each column corresponds to one observation
│   caller = #_#1 at finite_gp_projection.jl:36 [inlined]
└ @ Core ~/.julia/packages/AbstractGPs/lWdNB/src/finite_gp_projection.jl:36
┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov.
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121
Using user-defined kernel
Type: SEIso{Float64}, Params: [0.0, 0.0]
Learning additive white noise
kernel in GaussianProcess: Type: SumKernel{SEIso{Float64}, Noise{Float64}} Type: SEIso{Float64}, Params: [0.0, 0.0] Type: Noise{Float64}, Params: [0.0]
created GP: 1
optimized hyperparameters of GP: 1
Type: SumKernel{SEIso{Float64}, Noise{Float64}}
Type: SEIso{Float64}, Params: [0.467111250159053, -0.11637219100329507]
Type: Noise{Float64}, Params: [-2.912614529741828]
┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov.
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121
Using user-defined kernel
PyObject 1**2 * RBF(length_scale=1)
Learning additive white noise
[ Info: Training kernel 1,
[ Info: PyObject 1**2 * RBF(length_scale=1) + WhiteKernel(noise_level=1)
┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov.
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121
┌ Warning: GaussianProcess already built. skipping...
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/GaussianProcess.jl:271
SKlearn, already trained. continuing...
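The scikit-learn backend exercised above reports its kernel as `1**2 * RBF(length_scale=1) + WhiteKernel(noise_level=1)`. A sketch of building that kernel object through PyCall (assuming scikit-learn is available in the Conda environment built during installation; how the object is then handed to the SKLJL GaussianProcess is not shown here):

    using PyCall

    pykernels = pyimport("sklearn.gaussian_process.kernels")

    # 1**2 * RBF(length_scale=1), as printed in the log
    kernel = pykernels.ConstantKernel(constant_value = 1.0) * pykernels.RBF(length_scale = 1.0)

    # plus the additive white noise term that the log reports learning
    kernel_plus_noise = kernel + pykernels.WhiteKernel(noise_level = 1.0)

    println(kernel_plus_noise)   # PyObject 1**2 * RBF(length_scale=1) + WhiteKernel(noise_level=1)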
Using default squared exponential kernel, learning length scale and variance parameters
Using default squared exponential kernel:
Type: SEArd{Float64}, Params: [-0.0, -0.0, 0.0]
Learning additive white noise
kernel in GaussianProcess: Type: SumKernel{SEArd{Float64}, Noise{Float64}} Type: SEArd{Float64}, Params: [-0.0, -0.0, 0.0] Type: Noise{Float64}, Params: [0.0]
created GP: 1
kernel in GaussianProcess: Type: SumKernel{SEArd{Float64}, Noise{Float64}} Type: SEArd{Float64}, Params: [-0.0, -0.0, 0.0] Type: Noise{Float64}, Params: [0.0]
created GP: 2
Using default squared exponential kernel, learning length scale and variance parameters
Using default squared exponential kernel:
Type: SEArd{Float64}, Params: [-0.0, -0.0, 0.0]
kernel in GaussianProcess: Type: SEArd{Float64}, Params: [-0.0, -0.0, 0.0]
created GP: 1
kernel in GaussianProcess: Type: SEArd{Float64}, Params: [-0.0, -0.0, 0.0]
created GP: 2
optimized hyperparameters of GP: 1
Type: SumKernel{SEArd{Float64}, Noise{Float64}}
Type: SEArd{Float64}, Params: [-0.034010033457988934, 2.6947372695174536, 1.9374032687140703]
Type: Noise{Float64}, Params: [-0.19545576083347674]
optimized hyperparameters of GP: 2
Type: SumKernel{SEArd{Float64}, Noise{Float64}}
Type: SEArd{Float64}, Params: [2.040064043166128, -0.263116528583071, 2.0697093362932244]
Type: Noise{Float64}, Params: [-0.08245764941529067]
optimized hyperparameters of GP: 1
Type: SEArd{Float64}, Params: [-0.070755506795137, 2.7805790822912098, 1.879857339393294]
optimized hyperparameters of GP: 2
Type: SEArd{Float64}, Params: [2.0918473623519653, -0.15767169342096962, 2.145456577252389]
optimised GP: 1
Sum of 2 kernels:
  Squared Exponential Kernel (metric = Distances.Euclidean(0.0))
  - ARD Transform (dims: 2)
  - σ² = 48.17337764399554
  White Kernel
  - σ² = 0.6764400036755718
optimised GP: 2
Sum of 2 kernels:
  Squared Exponential Kernel (metric = Distances.Euclidean(0.0))
  - ARD Transform (dims: 2)
  - σ² = 62.76632305723065
  White Kernel
  - σ² = 0.8479655247177975
Completed tests for GaussianProcess, 69 seconds elapsed

Starting tests for RandomFeature
┌ Info: Shrinkage scale: 0.8879081219518875, (0 = none, 1 = revert to scaled Identity)
└ shrinkage covariance condition number: 2.300239629530913
[ Info: NICE-adjusted covariance condition number: 4.65930913095833
[ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("n_cross_val_sets" => 2, "train_fraction" => 0.8, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "cov_correction" => "shrinkage", "multithread" => "ensemble", "inflation" => 0.0001, "n_iteration" => 10, "n_ensemble" => 40, "verbose" => false, "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "n_features_opt" => 200, "cov_sample_multiplier" => 10.0)
[ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("n_cross_val_sets" => 2, "train_fraction" => 0.8, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "cov_correction" => "shrinkage", "multithread" => "ensemble", "inflation" => 0.0001, "n_iteration" => 10, "n_ensemble" => 70, "verbose" => false, "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "n_features_opt" => 200, "cov_sample_multiplier" => 10.0)
[ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("n_cross_val_sets" => 2, "train_fraction" => 0.8, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "cov_correction" => "shrinkage", "multithread" => "ensemble", "tikhonov" => 0, "inflation" => 0.0001, "n_iteration" => 10, "n_ensemble" => 90, "verbose" => false, "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "n_features_opt" => 200, "cov_sample_multiplier" => 10.0)
[ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("n_cross_val_sets" => 2, "train_fraction" => 0.8, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "cov_correction" => "shrinkage", "multithread" => "ensemble", "tikhonov" => 0, "inflation" => 0.0001, "n_iteration" => 10, "n_ensemble" => 100, "verbose" => false, "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "n_features_opt" => 200, "cov_sample_multiplier" => 10.0)
[ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("n_cross_val_sets" => 0, "train_fraction" => 0.8, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "cov_correction" => "shrinkage", "multithread" => "ensemble", "inflation" => 0.0001, "n_iteration" => 10, "n_ensemble" => 20, "verbose" => false, "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "n_features_opt" => 100, "cov_sample_multiplier" => 10.0)
[ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("n_cross_val_sets" => 0, "train_fraction" => 0.8, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "cov_correction" => "shrinkage", "multithread" => "ensemble", "tikhonov" => 0, "inflation" => 0.0001, "n_iteration" => 10, "n_ensemble" => 30, "verbose" => false, "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "n_features_opt" => 100, "cov_sample_multiplier" => 10.0)
[ Info: hyperparameter learning for 1 models using 50 training points, 50 validation points and 100 features
estimate cov with 520 iterations...
WARNING: llvmcall with integer pointers is deprecated. Use actual pointers instead, replacing i32 or i64 with i8* or ptr
in tile_halves(F, Type{T}, Tuple, Tuple, Tuple, Any, Any) where {F<:Function, T} at /home/pkgeval/.julia/packages/Tullio/2zyFP/src/threads.jl
WARNING: llvmcall with integer pointers is deprecated. Use actual pointers instead, replacing i32 or i64 with i8* or ptr
in _turbo_!(Base.Val{var"#UNROLL#"}, Base.Val{var"#OPS#"}, Base.Val{var"#ARF#"}, Base.Val{var"#AM#"}, Base.Val{var"#LPSYM#"}, Base.Val{Tuple{var"#LB#", var"#V#"}}, Vararg{Any, var"#num#vargs#"}) where {var"#UNROLL#", var"#OPS#", var"#ARF#", var"#AM#", var"#LPSYM#", var"#LB#", var"#V#", var"#num#vargs#"} at /home/pkgeval/.julia/packages/LoopVectorization/ImqiY/src/reconstruct_loopset.jl
┌ Info: Shrinkage scale: 0.006562067115524485, (0 = none, 1 = revert to scaled Identity)
└ shrinkage covariance condition number: 4904.333599088277
calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members...
[ Info: hyperparameter learning using 50 training points, 50 validation points and 100 features
estimate cov with 520 iterations...
┌ Info: Shrinkage scale: 0.007293714733505262, (0 = none, 1 = revert to scaled Identity)
└ shrinkage covariance condition number: 4621.038037853869
calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members...
[ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("n_cross_val_sets" => 2, "train_fraction" => 0.8, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "cov_correction" => "shrinkage", "multithread" => "bad_option", "inflation" => 0.0001, "n_iteration" => 10, "n_ensemble" => 20, "verbose" => false, "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "n_features_opt" => 100, "cov_sample_multiplier" => 10.0)
┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov.
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121
[ Info: hyperparameter learning for 1 models using 40 training points, 10 validation points and 100 features
┌ Warning: ScalarRandomFeatureInterface already built. skipping...
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/ScalarRandomFeature.jl:326
┌ Warning: VectorRandomFeatureInterface already built. skipping...
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/VectorRandomFeature.jl:374
[ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("n_cross_val_sets" => 2, "train_fraction" => 0.8, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "cov_correction" => "shrinkage", "multithread" => "ensemble", "inflation" => 0.0001, "n_iteration" => 10, "n_ensemble" => 30, "verbose" => false, "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "n_features_opt" => 150, "cov_sample_multiplier" => 10.0)
[ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("n_cross_val_sets" => 2, "train_fraction" => 0.8, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "cov_correction" => "shrinkage", "multithread" => "ensemble", "inflation" => 0.0001, "n_iteration" => 10, "n_ensemble" => 40, "verbose" => false, "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "n_features_opt" => 150, "cov_sample_multiplier" => 10.0)
[ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("n_cross_val_sets" => 2, "train_fraction" => 0.8, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "cov_correction" => "shrinkage", "multithread" => "ensemble", "tikhonov" => 0, "inflation" => 0.0001, "n_iteration" => 10, "n_ensemble" => 60, "verbose" => false, "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "n_features_opt" => 150, "cov_sample_multiplier" => 10.0)
[ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("n_cross_val_sets" => 2, "train_fraction" => 0.8, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "cov_correction" => "shrinkage", "multithread" => "ensemble", "tikhonov" => 0, "inflation" => 0.0001, "n_iteration" => 10, "n_ensemble" => 70, "verbose" => false, "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "n_features_opt" => 150, "cov_sample_multiplier" => 10.0)
[ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("n_cross_val_sets" => 2, "train_fraction" => 0.8, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "cov_correction" => "shrinkage", "multithread" => "ensemble", "tikhonov" => 0, "inflation" => 0.0001, "n_iteration" => 10, "n_ensemble" => 100, "verbose" => false, "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "n_features_opt" => 150, "cov_sample_multiplier" => 10.0)
[ Info: hyperparameter learning for 2 models using 80 training points, 20 validation points and 150 features
estimate cov with 220 iterations...
┌ Info: Shrinkage scale: 0.017415757720506662, (0 = none, 1 = revert to scaled Identity)
└ shrinkage covariance condition number: 832.8361158438271
estimate cov with 220 iterations...
┌ Info: Shrinkage scale: 0.016080608883696453, (0 = none, 1 = revert to scaled Identity)
└ shrinkage covariance condition number: 975.9399126375708
calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members...
calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... estimate cov with 220 iterations... ┌ Info: Shrinkage scale: 0.009743409344824946, (0 = none, 1 = revert to scaled Identity) └ shrinkage covariance condition number: 1915.9829311270205 estimate cov with 220 iterations... ┌ Info: Shrinkage scale: 0.010166593289854052, (0 = none, 1 = revert to scaled Identity) └ shrinkage covariance condition number: 1854.977181916463 calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... calculating 30 ensemble members... [ Info: hyperparameter learning for 2 models using 80 training points, 20 validation points and 150 features estimate cov with 220 iterations... ┌ Info: Shrinkage scale: 0.011104069117210277, (0 = none, 1 = revert to scaled Identity) └ shrinkage covariance condition number: 1959.0507555953618 estimate cov with 220 iterations... ┌ Info: Shrinkage scale: 0.010096742393127583, (0 = none, 1 = revert to scaled Identity) └ shrinkage covariance condition number: 2155.837563816166 calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... [ Info: Termination condition of scheduler `DataMisfitController` will be exceeded during the next iteration. calculating 40 ensemble members... calculating 40 ensemble members... ┌ Warning: Termination condition of scheduler `DataMisfitController` has been exceeded, returning `true` from `update_ensemble!` and preventing futher updates │ Set on_terminate="continue" in `DataMisfitController` to ignore termination └ @ EnsembleKalmanProcesses ~/.julia/packages/EnsembleKalmanProcesses/trJai/src/LearningRateSchedulers.jl:293 estimate cov with 220 iterations... ┌ Info: Shrinkage scale: 0.007935615203581705, (0 = none, 1 = revert to scaled Identity) └ shrinkage covariance condition number: 2749.6202336873075 estimate cov with 220 iterations... 
┌ Info: Shrinkage scale: 0.009648236224030458, (0 = none, 1 = revert to scaled Identity)
└ shrinkage covariance condition number: 2256.9166769120397
calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members... calculating 40 ensemble members...
[ Info: Termination condition of scheduler `DataMisfitController` will be exceeded during the next iteration.
calculating 40 ensemble members... calculating 40 ensemble members...
┌ Warning: Termination condition of scheduler `DataMisfitController` has been exceeded, returning `true` from `update_ensemble!` and preventing futher updates
│ Set on_terminate="continue" in `DataMisfitController` to ignore termination
└ @ EnsembleKalmanProcesses ~/.julia/packages/EnsembleKalmanProcesses/trJai/src/LearningRateSchedulers.jl:293
[ Info: hyperparameter learning using 80 training points, 20 validation points and 150 features
estimate cov with 220 iterations...
┌ Info: Shrinkage scale: 0.01268952597933114, (0 = none, 1 = revert to scaled Identity)
└ shrinkage covariance condition number: 2073.500307172599
approx_σ2 not posdef
estimate cov with 220 iterations...
┌ Info: Shrinkage scale: 0.012337725138515634, (0 = none, 1 = revert to scaled Identity)
└ shrinkage covariance condition number: 2376.45938830322
approx_σ2 not posdef
calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef calculating 60 ensemble members... blockcovmat not posdef
[ Info: hyperparameter learning using 80 training points, 20 validation points and 150 features
estimate cov with 420 iterations...
┌ Info: Shrinkage scale: 0.014140281011587022, (0 = none, 1 = revert to scaled Identity)
└ shrinkage covariance condition number: 1121.831074088369
approx_σ2 not posdef
estimate cov with 420 iterations...
┌ Info: Shrinkage scale: 0.01460824787609219, (0 = none, 1 = revert to scaled Identity)
└ shrinkage covariance condition number: 1068.9879331160064
approx_σ2 not posdef
calculating 70 ensemble members... blockcovmat not posdef calculating 70 ensemble members... blockcovmat not posdef calculating 70 ensemble members... blockcovmat not posdef calculating 70 ensemble members... blockcovmat not posdef calculating 70 ensemble members... blockcovmat not posdef calculating 70 ensemble members...
blockcovmat not posdef
calculating 70 ensemble members... blockcovmat not posdef calculating 70 ensemble members... blockcovmat not posdef calculating 70 ensemble members... blockcovmat not posdef calculating 70 ensemble members... blockcovmat not posdef calculating 70 ensemble members... blockcovmat not posdef calculating 70 ensemble members... blockcovmat not posdef calculating 70 ensemble members... blockcovmat not posdef calculating 70 ensemble members... blockcovmat not posdef
[ Info: Termination condition of scheduler `DataMisfitController` will be exceeded during the next iteration.
calculating 70 ensemble members... blockcovmat not posdef calculating 70 ensemble members... blockcovmat not posdef
┌ Warning: Termination condition of scheduler `DataMisfitController` has been exceeded, returning `true` from `update_ensemble!` and preventing futher updates
│ Set on_terminate="continue" in `DataMisfitController` to ignore termination
└ @ EnsembleKalmanProcesses ~/.julia/packages/EnsembleKalmanProcesses/trJai/src/LearningRateSchedulers.jl:293
┌ Warning: VectorRandomFeatureInterface already built. skipping...
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/VectorRandomFeature.jl:374
[ Info: hyperparameter learning using 80 training points, 20 validation points and 150 features
estimate cov with 420 iterations...
┌ Info: Shrinkage scale: 0.005048417063170467, (0 = none, 1 = revert to scaled Identity)
└ shrinkage covariance condition number: 7492.281919263645
approx_σ2 not posdef
estimate cov with 420 iterations...
┌ Info: Shrinkage scale: 0.005251988240003603, (0 = none, 1 = revert to scaled Identity)
└ shrinkage covariance condition number: 7342.963742116548
approx_σ2 not posdef
calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members... blockcovmat not posdef calculating 100 ensemble members...
blockcovmat not posdef
[ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("n_cross_val_sets" => 2, "train_fraction" => 0.7, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "cov_correction" => "shrinkage", "multithread" => "tullio", "tikhonov" => 0, "inflation" => 0.0001, "n_iteration" => 10, "n_ensemble" => 70, "verbose" => false, "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "n_features_opt" => 150, "cov_sample_multiplier" => 10.0)
[ Info: hyperparameter learning using 70 training points, 30 validation points and 150 features
estimate cov with 620 iterations...
0.0%┣ ┫ 0/620 [00:00<00:00, -0s/it]  0.2%┣ ┫ 1/620 [00:03
)::]: Assertion `havelock' failed.
[567] signal 6 (-6): Aborted
in expression starting at /home/pkgeval/.julia/packages/CalibrateEmulateSample/H2455/test/MarkovChainMonteCarlo/runtests.jl:184
unknown function (ip: 0x7c3a94e86ebc) at /lib/x86_64-linux-gnu/libc.so.6
gsignal at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
abort at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
unknown function (ip: 0x7c3a94e22394) at /lib/x86_64-linux-gnu/libc.so.6
__assert_fail at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
_ZN20JITDebugInfoRegistry17registerJITObjectERKN4llvm6object10ObjectFileESt8functionIFmRKNS0_9StringRefEEE at /opt/julia/bin/../lib/julia/libjulia-codegen.so.1.13 (unknown line)
_Z22jl_register_jit_objectRKN4llvm6object10ObjectFileESt8functionIFmRKNS_9StringRefEEE at /opt/julia/bin/../lib/julia/libjulia-codegen.so.1.13 (unknown line)
_ZL23registerRTDyldJITObjectRN4llvm3orc29MaterializationResponsibilityERKNS_6object10ObjectFileERKNS_11RuntimeDyld16LoadedObjectInfoE at /opt/julia/bin/../lib/julia/libjulia-codegen.so.1.13 (unknown line)
_ZN4llvm3orc24RTDyldObjectLinkingLayer9onObjLoadERNS0_29MaterializationResponsibilityERKNS_6object10ObjectFileERNS_11RuntimeDyld13MemoryManagerERNS8_16LoadedObjectInfoESt3mapINS_9StringRefENS_18JITEvaluatedSymbolESt4lessISE_ESaISt4pairIKSE_SF_EEERSt3setISE_SH_SaISE_EE at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN4llvm6detail18UniqueFunctionBaseINS_5ErrorEJRKNS_6object10ObjectFileERNS_11RuntimeDyld16LoadedObjectInfoESt3mapINS_9StringRefENS_18JITEvaluatedSymbolESt4lessISB_ESaISt4pairIKSB_SC_EEEEE8CallImplIZNS_3orc24RTDyldObjectLinkingLayer4emitESt10unique_ptrINSM_29MaterializationResponsibilityESt14default_deleteISP_EESO_INS_12MemoryBufferESQ_IST_EEEUlS6_S9_SJ_E_EES2_PvS6_S9_RSJ_ at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN4llvm13jitLinkForORCENS_6object12OwningBinaryINS0_10ObjectFileEEERNS_11RuntimeDyld13MemoryManagerERNS_17JITSymbolResolverEbNS_15unique_functionIFNS_5ErrorERKS2_RNS4_16LoadedObjectInfoESt3mapINS_9StringRefENS_18JITEvaluatedSymbolESt4lessISG_ESaISt4pairIKSG_SH_EEEEEENS9_IFvS3_St10unique_ptrISD_St14default_deleteISD_EESA_EEE at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN4llvm3orc24RTDyldObjectLinkingLayer4emitESt10unique_ptrINS0_29MaterializationResponsibilityESt14default_deleteIS3_EES2_INS_12MemoryBufferES4_IS7_EE at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN9JuliaOJIT10LockLayerT4emitESt10unique_ptrIN4llvm3orc29MaterializationResponsibilityESt14default_deleteIS4_EES1_INS2_12MemoryBufferES5_IS8_EE at /opt/julia/bin/../lib/julia/libjulia-codegen.so.1.13 (unknown line)
_ZN4llvm3orc35BasicObjectLayerMaterializationUnit11materializeESt10unique_ptrINS0_29MaterializationResponsibilityESt14default_deleteIS3_EE at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN4llvm3orc19MaterializationTask3runEv at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN4llvm3orc16ExecutionSession22dispatchOutstandingMUsEv at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN4llvm3orc16ExecutionSession17OL_completeLookupESt10unique_ptrINS0_21InProgressLookupStateESt14default_deleteIS3_EESt10shared_ptrINS0_23AsynchronousSymbolQueryEESt8functionIFvRKNS_8DenseMapIPNS0_8JITDylibENS_8DenseSetINS0_15SymbolStringPtrENS_12DenseMapInfoISF_vEEEENSG_ISD_vEENS_6detail12DenseMapPairISD_SI_EEEEEE at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN4llvm3orc25InProgressFullLookupState8completeESt10unique_ptrINS0_21InProgressLookupStateESt14default_deleteIS3_EE at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN4llvm3orc16ExecutionSession19OL_applyQueryPhase1ESt10unique_ptrINS0_21InProgressLookupStateESt14default_deleteIS3_EENS_5ErrorE at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN4llvm3orc16ExecutionSession6lookupENS0_10LookupKindERKSt6vectorISt4pairIPNS0_8JITDylibENS0_19JITDylibLookupFlagsEESaIS8_EENS0_15SymbolLookupSetENS0_11SymbolStateENS_15unique_functionIFvNS_8ExpectedINS_8DenseMapINS0_15SymbolStringPtrENS0_17ExecutorSymbolDefENS_12DenseMapInfoISI_vEENS_6detail12DenseMapPairISI_SJ_EEEEEEEEESt8functionIFvRKNSH_IS6_NS_8DenseSetISI_SL_EENSK_IS6_vEENSN_IS6_SV_EEEEEE at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN4llvm3orc16ExecutionSession6lookupERKSt6vectorISt4pairIPNS0_8JITDylibENS0_19JITDylibLookupFlagsEESaIS7_EENS0_15SymbolLookupSetENS0_10LookupKindENS0_11SymbolStateESt8functionIFvRKNS_8DenseMapIS5_NS_8DenseSetINS0_15SymbolStringPtrENS_12DenseMapInfoISI_vEEEENSJ_IS5_vEENS_6detail12DenseMapPairIS5_SL_EEEEEE at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN9JuliaOJIT11findSymbolsEN4llvm8ArrayRefINS0_9StringRefEEE at /opt/julia/bin/../lib/julia/libjulia-codegen.so.1.13 (unknown line)
_ZL23jl_compile_codeinst_nowP19_jl_code_instance_t at /opt/julia/bin/../lib/julia/libjulia-codegen.so.1.13 (unknown line)
jl_compile_codeinst_impl at /opt/julia/bin/../lib/julia/libjulia-codegen.so.1.13 (unknown line)
jl_compile_method_internal at /opt/julia/bin/../lib/julia/libjulia-internal.so.1.13 (unknown line)
ijl_apply_generic at /opt/julia/bin/../lib/julia/libjulia-internal.so.1.13 (unknown line)
ijl_atexit_hook at /opt/julia/bin/../lib/julia/libjulia-internal.so.1.13 (unknown line)
jl_exit_thread0_cb at /opt/julia/bin/../lib/julia/libjulia-internal.so.1.13 (unknown line)
Allocations: 1234903441 (Pool: 1234900285; Big: 3156); GC: 814
PkgEval terminated after 2733.12s: test duration exceeded the time limit
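For context on the two diagnostics that recur throughout this run: the `Shrinkage scale: ..., (0 = none, 1 = revert to scaled Identity)` lines report a shrinkage covariance estimate, a convex blend of the sample covariance with a scaled identity, and the `approx_σ2 not posdef` / `blockcovmat not posdef` messages flag matrices that fail a positive-definiteness check. The sketch below is a generic Julia illustration of both ideas (Ledoit-Wolf style shrinkage plus a diagonal-jitter repair); it is not CalibrateEmulateSample's internal routine, and the function names are invented for the example.

using LinearAlgebra, Statistics

# Shrinkage estimate: blend the sample covariance with a scaled identity.
# gamma = 0 keeps the raw sample estimate; gamma = 1 reverts to the scaled identity,
# matching the "(0 = none, 1 = revert to scaled Identity)" annotation in the log.
function shrink_cov(X::AbstractMatrix, gamma::Real)
    S = cov(X; dims = 1)                 # sample covariance; rows of X are samples
    mu = tr(S) / size(S, 1)              # average variance sets the identity scale
    return (1 - gamma) * S + gamma * mu * I(size(S, 1))
end

# A common remedy for "not posdef": add diagonal jitter until isposdef succeeds.
function make_posdef(C::AbstractMatrix; jitter = 1e-8, maxtries = 10)
    A = Symmetric(Matrix(C))
    for _ in 1:maxtries
        isposdef(A) && return A
        A = Symmetric(Matrix(A) + jitter * I(size(A, 1)))
        jitter *= 10
    end
    return A
end

X = randn(220, 5)                          # 220 samples, as in "estimate cov with 220 iterations"
Sigma = make_posdef(shrink_cov(X, 0.017))  # shrinkage scale of the order reported above
@show cond(Sigma)                          # compare with the logged condition numbers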