Package evaluation of CalibrateEmulateSample on Julia 1.12.0-beta2.30 (9e913d7251*) started at 2025-05-05T11:52:43.572 ################################################################################ # Set-up # Installing PkgEval dependencies (TestEnv)... Set-up completed after 7.93s ################################################################################ # Installation # Installing CalibrateEmulateSample... Resolving package versions... Installed Conda ────────────────── v1.10.2 Installed PyCall ───────────────── v1.96.4 Installed CalibrateEmulateSample ─ v0.7.0 Updating `~/.julia/environments/v1.12/Project.toml` [95e48a1f] + CalibrateEmulateSample v0.7.0 Updating `~/.julia/environments/v1.12/Manifest.toml` [47edcb42] + ADTypes v1.14.0 [14f7f29c] + AMD v0.5.3 [621f4979] + AbstractFFTs v1.5.0 [99985d1d] + AbstractGPs v0.5.24 ⌅ [80f14c24] + AbstractMCMC v4.4.2 [1520ce14] + AbstractTrees v0.4.5 [79e6a3ab] + Adapt v4.3.0 ⌃ [5b7e9947] + AdvancedMH v0.7.5 [66dad0bd] + AliasTables v1.1.3 [dce04be8] + ArgCheck v2.5.0 [7d9fca2a] + Arpack v0.5.4 [4fba245c] + ArrayInterface v7.18.0 [13072b0f] + AxisAlgorithms v1.1.0 [39de3d68] + AxisArrays v0.4.7 ⌅ [198e06fe] + BangBang v0.3.40 [9718e550] + Baselet v0.1.1 [6e4b80f9] + BenchmarkTools v1.6.0 [62783981] + BitTwiddlingConvenienceFunctions v0.1.6 [2a0fbf3d] + CPUSummary v0.2.6 [95e48a1f] + CalibrateEmulateSample v0.7.0 [d360d2e6] + ChainRulesCore v1.25.1 [ae650224] + ChunkSplitters v3.1.2 [fb6a15b2] + CloseOpenIntervals v0.1.13 [523fee87] + CodecBzip2 v0.8.5 [944b1d66] + CodecZlib v0.7.8 [bbf7d656] + CommonSubexpressions v0.3.1 [f70d9fcc] + CommonWorldInvalidations v1.0.0 [34da2185] + Compat v4.16.0 [a33af91c] + CompositionsBase v0.1.2 [8f4d0f93] + Conda v1.10.2 [88cd18e8] + ConsoleProgressMonitor v0.1.2 [187b0558] + ConstructionBase v1.5.8 [f65535da] + Convex v0.16.4 [adafc99b] + CpuId v0.3.1 [a8cc5b0e] + Crayons v4.1.1 [9a962f9c] + DataAPI v1.16.0 [a93c6f00] + DataFrames v1.7.0 [864edb3b] + DataStructures v0.18.22 [e2d170a0] + DataValueInterfaces v1.0.0 [244e2a9f] + DefineSingletons v0.1.2 [163ba53b] + DiffResults v1.1.0 [b552c78f] + DiffRules v1.15.1 [a0c0ee7d] + DifferentiationInterface v0.6.52 [b4f34e82] + Distances v0.10.12 [31c24e10] + Distributions v0.25.119 [ffbed154] + DocStringExtensions v0.9.4 [fdbdab4c] + ElasticArrays v1.2.12 [2904ab23] + ElasticPDMats v0.2.3 ⌃ [aa8a2aa5] + EnsembleKalmanProcesses v2.4.0 [4e289a0a] + EnumX v1.0.5 [c87230d0] + FFMPEG v0.4.2 [7a1cc6ca] + FFTW v1.8.1 ⌅ [442a2c76] + FastGaussQuadrature v0.4.9 [1a297f60] + FillArrays v1.13.0 [6a86dc24] + FiniteDiff v2.27.0 [59287772] + Formatting v0.4.3 ⌅ [f6369f11] + ForwardDiff v0.10.38 [069b7b12] + FunctionWrappers v1.1.3 [d9f16b24] + Functors v0.5.2 [891a1506] + GaussianProcesses v0.12.5 ⌃ [e4b2fa32] + GaussianRandomFields v2.1.6 [3e5b6fbb] + HostCPUFeatures v0.1.17 [615f187c] + IfElse v0.1.1 [22cec73e] + InitialValues v0.3.1 [842dd82b] + InlineStrings v1.4.3 [a98d9a8b] + Interpolations v0.15.1 [8197267c] + IntervalSets v0.7.11 [3587e190] + InverseFunctions v0.1.17 [41ab1584] + InvertedIndices v1.3.1 ⌅ [92d709cd] + IrrationalConstants v0.1.1 [c8e1da08] + IterTools v1.10.0 [82899510] + IteratorInterfaceExtensions v1.0.0 [692b3bcd] + JLLWrappers v1.7.0 [682c06a0] + JSON v0.21.4 [0f8b85d8] + JSON3 v1.14.2 [5ab0869b] + KernelDensity v0.6.9 [ec8451be] + KernelFunctions v0.10.65 [40e66cde] + LDLFactorizations v0.10.1 [b964fa9f] + LaTeXStrings v1.4.0 [10f19ff3] + LayoutPointers v0.1.17 [1d6d02ad] + LeftChildRightSiblingTrees v0.2.0 [d3d80556] + LineSearches v7.3.0 
[6fdf6af0] + LogDensityProblems v2.1.2 [2ab3a3ac] + LogExpFunctions v0.3.29 [e6f89c97] + LoggingExtras v1.1.0 [bdcacae8] + LoopVectorization v0.12.172 ⌃ [c7f686f2] + MCMCChains v5.7.1 ⌅ [be115224] + MCMCDiagnosticTools v0.2.1 [e80e1ace] + MLJModelInterface v1.11.1 [1914dd2f] + MacroTools v0.5.16 [d125e4d3] + ManualMemory v0.1.8 [b8f27783] + MathOptInterface v1.40.0 ⌅ [128add7d] + MicroCollections v0.1.4 [e1d29d7a] + Missings v1.2.0 [d8a4904e] + MutableArithmetics v1.6.4 [d41bc354] + NLSolversBase v7.9.1 [77ba4419] + NaNMath v1.1.3 [c020b1a1] + NaturalSort v1.0.0 [6fe1bfb0] + OffsetArrays v1.17.0 [429524aa] + Optim v1.12.0 [bac558e1] + OrderedCollections v1.8.0 [90014a1f] + PDMats v0.11.34 [d96e819e] + Parameters v0.12.3 [69de0a69] + Parsers v2.8.3 [1d0040c9] + PolyesterWeave v0.2.2 [2dfb63ee] + PooledArrays v1.4.3 [85a6dd25] + PositiveFactorizations v0.2.4 [aea7be01] + PrecompileTools v1.3.2 [21216c6a] + Preferences v1.4.3 [08abe8d2] + PrettyTables v2.4.0 [49802e3a] + ProgressBars v1.5.1 [33c8b6b6] + ProgressLogging v0.1.4 [92933f4c] + ProgressMeter v1.10.4 [43287f4e] + PtrArrays v1.3.0 [438e738f] + PyCall v1.96.4 [1fd47b50] + QuadGK v2.11.2 [36c3bae2] + RandomFeatures v0.3.4 [b3c3ace0] + RangeArrays v0.3.2 [c84ed2f1] + Ratios v0.4.5 [3cdcf5f2] + RecipesBase v1.3.4 [189a3867] + Reexport v1.2.2 [ae029012] + Requires v1.3.1 [37e2e3b7] + ReverseDiff v1.16.1 ⌅ [79098fc4] + Rmath v0.7.1 [c946c3f1] + SCS v2.1.0 [94e857df] + SIMDTypes v0.1.0 [476501e8] + SLEEFPirates v0.6.43 [30f210dd] + ScientificTypesBase v3.0.0 [3646fa90] + ScikitLearn v0.7.0 [6e75b9c4] + ScikitLearnBase v0.5.0 [91c51154] + SentinelArrays v1.4.8 [efcf1570] + Setfield v1.1.2 [a2af1166] + SortingAlgorithms v1.2.1 [276daf66] + SpecialFunctions v2.5.1 [171d559e] + SplittablesBase v0.1.15 [860ef19b] + StableRNGs v1.0.2 [aedffcd0] + Static v1.2.0 [0d7ed370] + StaticArrayInterface v1.8.0 [90137ffa] + StaticArrays v1.9.13 [1e83bf80] + StaticArraysCore v1.4.3 [64bff920] + StatisticalTraits v3.4.0 [10745b16] + Statistics v1.11.1 [82ae8749] + StatsAPI v1.7.0 ⌅ [2913bbd2] + StatsBase v0.33.21 ⌅ [4c63d2b9] + StatsFuns v0.9.18 [892a3eda] + StringManipulation v0.4.1 [856f2bd8] + StructTypes v1.11.0 [9449cd9e] + TSVD v0.4.4 [3783bdb8] + TableTraits v1.0.1 [bd369af6] + Tables v1.12.0 [62fd8b95] + TensorCore v0.1.1 [5d786b92] + TerminalLoggers v0.1.7 [8290d209] + ThreadingUtilities v0.5.3 [3bb67fe8] + TranscodingStreams v0.11.3 ⌃ [28d57a85] + Transducers v0.4.80 [bc48ee85] + Tullio v0.3.8 [3a884ed6] + UnPack v1.0.2 [3d5dd08c] + VectorizationBase v0.21.71 [81def892] + VersionParsing v1.3.0 [efce3f68] + WoodburyMatrices v1.0.0 [700de1a5] + ZygoteRules v0.2.7 ⌅ [68821587] + Arpack_jll v3.5.1+1 [6e34b625] + Bzip2_jll v1.0.9+0 ⌃ [83423d85] + Cairo_jll v1.18.4+0 [2e619515] + Expat_jll v2.6.5+0 ⌅ [b22a6f82] + FFMPEG_jll v4.4.4+1 [f5851436] + FFTW_jll v3.3.11+0 [a3f928ae] + Fontconfig_jll v2.16.0+0 [d7e528f0] + FreeType2_jll v2.13.4+0 [559328eb] + FriBidi_jll v1.0.17+0 [78b55507] + Gettext_jll v0.21.0+0 ⌃ [7746bdde] + Glib_jll v2.82.4+0 [3b182d85] + Graphite2_jll v1.3.15+0 [2e76f6c2] + HarfBuzz_jll v8.5.0+0 [1d5cc7b8] + IntelOpenMP_jll v2025.0.4+0 [c1c5ebd0] + LAME_jll v3.100.2+0 [1d63c593] + LLVMOpenMP_jll v18.1.8+0 [dd4b983a] + LZO_jll v2.10.3+0 ⌅ [e9f186c6] + Libffi_jll v3.2.2+2 [94ce4f54] + Libiconv_jll v1.18.0+0 [4b2f31a3] + Libmount_jll v2.41.0+0 [38a345b3] + Libuuid_jll v2.41.0+0 [856f044c] + MKL_jll v2025.0.1+1 [e7412a2a] + Ogg_jll v1.3.5+1 [656ef2d0] + OpenBLAS32_jll v0.3.29+0 [efe28fd5] + OpenSpecFun_jll v0.5.6+0 [91d4177d] + Opus_jll 
v1.3.3+0 [30392449] + Pixman_jll v0.44.2+0 ⌅ [f50d1b31] + Rmath_jll v0.4.3+0 [f4f2fc5b] + SCS_jll v3.2.7+0 ⌅ [02c8fc9c] + XML2_jll v2.13.6+1 [4f6342f7] + Xorg_libX11_jll v1.8.12+0 [0c0b7dd1] + Xorg_libXau_jll v1.0.13+0 [a3789734] + Xorg_libXdmcp_jll v1.1.6+0 [1082639a] + Xorg_libXext_jll v1.3.7+0 [ea2f1a96] + Xorg_libXrender_jll v0.9.12+0 [c7cfdc94] + Xorg_libxcb_jll v1.17.1+0 [c5fb5394] + Xorg_xtrans_jll v1.6.0+0 [a4ae2306] + libaom_jll v3.11.0+0 [0ac62f75] + libass_jll v0.15.2+0 [f638f0a6] + libfdk_aac_jll v2.0.3+0 [b53b4c65] + libpng_jll v1.6.47+0 [f27f6e37] + libvorbis_jll v1.3.7+2 [1317d2d5] + oneTBB_jll v2022.0.0+0 ⌅ [1270edf5] + x264_jll v2021.5.5+0 ⌅ [dfaa095f] + x265_jll v3.5.0+0 [0dad84c5] + ArgTools v1.1.2 [56f22d72] + Artifacts v1.11.0 [2a0f44e3] + Base64 v1.11.0 [ade2ca70] + Dates v1.11.0 [8ba89e20] + Distributed v1.11.0 [f43a241f] + Downloads v1.6.0 [7b1f6079] + FileWatching v1.11.0 [9fa8497b] + Future v1.11.0 [b77e0a4c] + InteractiveUtils v1.11.0 [ac6e5ff7] + JuliaSyntaxHighlighting v1.12.0 [4af54fe1] + LazyArtifacts v1.11.0 [b27032c2] + LibCURL v0.6.4 [76f85450] + LibGit2 v1.11.0 [8f399da3] + Libdl v1.11.0 [37e2e46d] + LinearAlgebra v1.12.0 [56ddb016] + Logging v1.11.0 [d6f4376e] + Markdown v1.11.0 [a63ad114] + Mmap v1.11.0 [ca575930] + NetworkOptions v1.3.0 [44cfe95a] + Pkg v1.12.0 [de0858da] + Printf v1.11.0 [9abbd945] + Profile v1.11.0 [3fa0cd96] + REPL v1.11.0 [9a3f8284] + Random v1.11.0 [ea8e919c] + SHA v0.7.0 [9e88b42a] + Serialization v1.11.0 [1a1011a3] + SharedArrays v1.11.0 [6462fe0b] + Sockets v1.11.0 [2f01184e] + SparseArrays v1.12.0 [f489334b] + StyledStrings v1.11.0 [4607b0f0] + SuiteSparse [fa267f1f] + TOML v1.0.3 [a4e569a6] + Tar v1.10.0 [8dfed614] + Test v1.11.0 [cf7118a7] + UUIDs v1.11.0 [4ec0a83e] + Unicode v1.11.0 [e66e0078] + CompilerSupportLibraries_jll v1.3.0+1 [deac9b47] + LibCURL_jll v8.11.1+1 [e37daf67] + LibGit2_jll v1.9.0+0 [29816b5a] + LibSSH2_jll v1.11.3+1 [14a3606d] + MozillaCACerts_jll v2025.2.25 [4536629a] + OpenBLAS_jll v0.3.29+0 [05823500] + OpenLibm_jll v0.8.5+0 [458c3c95] + OpenSSL_jll v3.5.0+0 [efcefdf7] + PCRE2_jll v10.44.0+1 [bea87d4a] + SuiteSparse_jll v7.8.3+2 [83775a58] + Zlib_jll v1.3.1+2 [8e850b90] + libblastrampoline_jll v5.12.0+0 [8e850ede] + nghttp2_jll v1.64.0+1 [3f19e933] + p7zip_jll v17.5.0+2 Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated -m` Building Conda ─────────────────→ `~/.julia/scratchspaces/44cfe95a-1eb2-52ea-b672-e2afdf69b78f/b19db3927f0db4151cb86d073689f2428e524576/build.log` Building PyCall ────────────────→ `~/.julia/scratchspaces/44cfe95a-1eb2-52ea-b672-e2afdf69b78f/9816a3826b0ebf49ab4926e2b18842ad8b5c8f04/build.log` Building CalibrateEmulateSample → `~/.julia/scratchspaces/44cfe95a-1eb2-52ea-b672-e2afdf69b78f/f58547feedb27247426c2a1b4c3ba1a881596722/build.log` Installation completed after 110.36s ################################################################################ # Precompilation # Precompiling PkgEval dependencies... Precompiling package dependencies... 
Precompilation completed after 2016.17s ################################################################################ # Testing # Testing CalibrateEmulateSample Status `/tmp/jl_KnMu56/Project.toml` [99985d1d] AbstractGPs v0.5.24 ⌅ [80f14c24] AbstractMCMC v4.4.2 ⌃ [5b7e9947] AdvancedMH v0.7.5 [95e48a1f] CalibrateEmulateSample v0.7.0 [ae650224] ChunkSplitters v3.1.2 [8f4d0f93] Conda v1.10.2 [31c24e10] Distributions v0.25.119 [ffbed154] DocStringExtensions v0.9.4 ⌃ [aa8a2aa5] EnsembleKalmanProcesses v2.4.0 ⌅ [f6369f11] ForwardDiff v0.10.38 [891a1506] GaussianProcesses v0.12.5 [ec8451be] KernelFunctions v0.10.65 ⌃ [c7f686f2] MCMCChains v5.7.1 [49802e3a] ProgressBars v1.5.1 [438e738f] PyCall v1.96.4 [36c3bae2] RandomFeatures v0.3.4 [37e2e3b7] ReverseDiff v1.16.1 [3646fa90] ScikitLearn v0.7.0 [860ef19b] StableRNGs v1.0.2 [10745b16] Statistics v1.11.1 ⌅ [2913bbd2] StatsBase v0.33.21 [37e2e46d] LinearAlgebra v1.12.0 [44cfe95a] Pkg v1.12.0 [de0858da] Printf v1.11.0 [9a3f8284] Random v1.11.0 [8dfed614] Test v1.11.0 Status `/tmp/jl_KnMu56/Manifest.toml` [47edcb42] ADTypes v1.14.0 [14f7f29c] AMD v0.5.3 [621f4979] AbstractFFTs v1.5.0 [99985d1d] AbstractGPs v0.5.24 ⌅ [80f14c24] AbstractMCMC v4.4.2 [1520ce14] AbstractTrees v0.4.5 [79e6a3ab] Adapt v4.3.0 ⌃ [5b7e9947] AdvancedMH v0.7.5 [66dad0bd] AliasTables v1.1.3 [dce04be8] ArgCheck v2.5.0 [7d9fca2a] Arpack v0.5.4 [4fba245c] ArrayInterface v7.18.0 [13072b0f] AxisAlgorithms v1.1.0 [39de3d68] AxisArrays v0.4.7 ⌅ [198e06fe] BangBang v0.3.40 [9718e550] Baselet v0.1.1 [6e4b80f9] BenchmarkTools v1.6.0 [62783981] BitTwiddlingConvenienceFunctions v0.1.6 [2a0fbf3d] CPUSummary v0.2.6 [95e48a1f] CalibrateEmulateSample v0.7.0 [d360d2e6] ChainRulesCore v1.25.1 [ae650224] ChunkSplitters v3.1.2 [fb6a15b2] CloseOpenIntervals v0.1.13 [523fee87] CodecBzip2 v0.8.5 [944b1d66] CodecZlib v0.7.8 [bbf7d656] CommonSubexpressions v0.3.1 [f70d9fcc] CommonWorldInvalidations v1.0.0 [34da2185] Compat v4.16.0 [a33af91c] CompositionsBase v0.1.2 [8f4d0f93] Conda v1.10.2 [88cd18e8] ConsoleProgressMonitor v0.1.2 [187b0558] ConstructionBase v1.5.8 [f65535da] Convex v0.16.4 [adafc99b] CpuId v0.3.1 [a8cc5b0e] Crayons v4.1.1 [9a962f9c] DataAPI v1.16.0 [a93c6f00] DataFrames v1.7.0 [864edb3b] DataStructures v0.18.22 [e2d170a0] DataValueInterfaces v1.0.0 [244e2a9f] DefineSingletons v0.1.2 [163ba53b] DiffResults v1.1.0 [b552c78f] DiffRules v1.15.1 [a0c0ee7d] DifferentiationInterface v0.6.52 [b4f34e82] Distances v0.10.12 [31c24e10] Distributions v0.25.119 [ffbed154] DocStringExtensions v0.9.4 [fdbdab4c] ElasticArrays v1.2.12 [2904ab23] ElasticPDMats v0.2.3 ⌃ [aa8a2aa5] EnsembleKalmanProcesses v2.4.0 [4e289a0a] EnumX v1.0.5 [c87230d0] FFMPEG v0.4.2 [7a1cc6ca] FFTW v1.8.1 ⌅ [442a2c76] FastGaussQuadrature v0.4.9 [1a297f60] FillArrays v1.13.0 [6a86dc24] FiniteDiff v2.27.0 [59287772] Formatting v0.4.3 ⌅ [f6369f11] ForwardDiff v0.10.38 [069b7b12] FunctionWrappers v1.1.3 [d9f16b24] Functors v0.5.2 [891a1506] GaussianProcesses v0.12.5 ⌃ [e4b2fa32] GaussianRandomFields v2.1.6 [3e5b6fbb] HostCPUFeatures v0.1.17 [615f187c] IfElse v0.1.1 [22cec73e] InitialValues v0.3.1 [842dd82b] InlineStrings v1.4.3 [a98d9a8b] Interpolations v0.15.1 [8197267c] IntervalSets v0.7.11 [3587e190] InverseFunctions v0.1.17 [41ab1584] InvertedIndices v1.3.1 ⌅ [92d709cd] IrrationalConstants v0.1.1 [c8e1da08] IterTools v1.10.0 [82899510] IteratorInterfaceExtensions v1.0.0 [692b3bcd] JLLWrappers v1.7.0 [682c06a0] JSON v0.21.4 [0f8b85d8] JSON3 v1.14.2 [5ab0869b] KernelDensity v0.6.9 [ec8451be] KernelFunctions v0.10.65 [40e66cde] 
LDLFactorizations v0.10.1 [b964fa9f] LaTeXStrings v1.4.0 [10f19ff3] LayoutPointers v0.1.17 [1d6d02ad] LeftChildRightSiblingTrees v0.2.0 [d3d80556] LineSearches v7.3.0 [6fdf6af0] LogDensityProblems v2.1.2 [2ab3a3ac] LogExpFunctions v0.3.29 [e6f89c97] LoggingExtras v1.1.0 [bdcacae8] LoopVectorization v0.12.172 ⌃ [c7f686f2] MCMCChains v5.7.1 ⌅ [be115224] MCMCDiagnosticTools v0.2.1 [e80e1ace] MLJModelInterface v1.11.1 [1914dd2f] MacroTools v0.5.16 [d125e4d3] ManualMemory v0.1.8 [b8f27783] MathOptInterface v1.40.0 ⌅ [128add7d] MicroCollections v0.1.4 [e1d29d7a] Missings v1.2.0 [d8a4904e] MutableArithmetics v1.6.4 [d41bc354] NLSolversBase v7.9.1 [77ba4419] NaNMath v1.1.3 [c020b1a1] NaturalSort v1.0.0 [6fe1bfb0] OffsetArrays v1.17.0 [429524aa] Optim v1.12.0 [bac558e1] OrderedCollections v1.8.0 [90014a1f] PDMats v0.11.34 [d96e819e] Parameters v0.12.3 [69de0a69] Parsers v2.8.3 [1d0040c9] PolyesterWeave v0.2.2 [2dfb63ee] PooledArrays v1.4.3 [85a6dd25] PositiveFactorizations v0.2.4 [aea7be01] PrecompileTools v1.3.2 [21216c6a] Preferences v1.4.3 [08abe8d2] PrettyTables v2.4.0 [49802e3a] ProgressBars v1.5.1 [33c8b6b6] ProgressLogging v0.1.4 [92933f4c] ProgressMeter v1.10.4 [43287f4e] PtrArrays v1.3.0 [438e738f] PyCall v1.96.4 [1fd47b50] QuadGK v2.11.2 [36c3bae2] RandomFeatures v0.3.4 [b3c3ace0] RangeArrays v0.3.2 [c84ed2f1] Ratios v0.4.5 [3cdcf5f2] RecipesBase v1.3.4 [189a3867] Reexport v1.2.2 [ae029012] Requires v1.3.1 [37e2e3b7] ReverseDiff v1.16.1 ⌅ [79098fc4] Rmath v0.7.1 [c946c3f1] SCS v2.1.0 [94e857df] SIMDTypes v0.1.0 [476501e8] SLEEFPirates v0.6.43 [30f210dd] ScientificTypesBase v3.0.0 [3646fa90] ScikitLearn v0.7.0 [6e75b9c4] ScikitLearnBase v0.5.0 [91c51154] SentinelArrays v1.4.8 [efcf1570] Setfield v1.1.2 [a2af1166] SortingAlgorithms v1.2.1 [276daf66] SpecialFunctions v2.5.1 [171d559e] SplittablesBase v0.1.15 [860ef19b] StableRNGs v1.0.2 [aedffcd0] Static v1.2.0 [0d7ed370] StaticArrayInterface v1.8.0 [90137ffa] StaticArrays v1.9.13 [1e83bf80] StaticArraysCore v1.4.3 [64bff920] StatisticalTraits v3.4.0 [10745b16] Statistics v1.11.1 [82ae8749] StatsAPI v1.7.0 ⌅ [2913bbd2] StatsBase v0.33.21 ⌅ [4c63d2b9] StatsFuns v0.9.18 [892a3eda] StringManipulation v0.4.1 [856f2bd8] StructTypes v1.11.0 [9449cd9e] TSVD v0.4.4 [3783bdb8] TableTraits v1.0.1 [bd369af6] Tables v1.12.0 [62fd8b95] TensorCore v0.1.1 [5d786b92] TerminalLoggers v0.1.7 [8290d209] ThreadingUtilities v0.5.3 [3bb67fe8] TranscodingStreams v0.11.3 ⌃ [28d57a85] Transducers v0.4.80 [bc48ee85] Tullio v0.3.8 [3a884ed6] UnPack v1.0.2 [3d5dd08c] VectorizationBase v0.21.71 [81def892] VersionParsing v1.3.0 [efce3f68] WoodburyMatrices v1.0.0 [700de1a5] ZygoteRules v0.2.7 ⌅ [68821587] Arpack_jll v3.5.1+1 [6e34b625] Bzip2_jll v1.0.9+0 ⌃ [83423d85] Cairo_jll v1.18.4+0 [2e619515] Expat_jll v2.6.5+0 ⌅ [b22a6f82] FFMPEG_jll v4.4.4+1 [f5851436] FFTW_jll v3.3.11+0 [a3f928ae] Fontconfig_jll v2.16.0+0 [d7e528f0] FreeType2_jll v2.13.4+0 [559328eb] FriBidi_jll v1.0.17+0 [78b55507] Gettext_jll v0.21.0+0 ⌃ [7746bdde] Glib_jll v2.82.4+0 [3b182d85] Graphite2_jll v1.3.15+0 [2e76f6c2] HarfBuzz_jll v8.5.0+0 [1d5cc7b8] IntelOpenMP_jll v2025.0.4+0 [c1c5ebd0] LAME_jll v3.100.2+0 [1d63c593] LLVMOpenMP_jll v18.1.8+0 [dd4b983a] LZO_jll v2.10.3+0 ⌅ [e9f186c6] Libffi_jll v3.2.2+2 [94ce4f54] Libiconv_jll v1.18.0+0 [4b2f31a3] Libmount_jll v2.41.0+0 [38a345b3] Libuuid_jll v2.41.0+0 [856f044c] MKL_jll v2025.0.1+1 [e7412a2a] Ogg_jll v1.3.5+1 [656ef2d0] OpenBLAS32_jll v0.3.29+0 [efe28fd5] OpenSpecFun_jll v0.5.6+0 [91d4177d] Opus_jll v1.3.3+0 [30392449] Pixman_jll v0.44.2+0 ⌅ 
[f50d1b31] Rmath_jll v0.4.3+0 [f4f2fc5b] SCS_jll v3.2.7+0 ⌅ [02c8fc9c] XML2_jll v2.13.6+1 [4f6342f7] Xorg_libX11_jll v1.8.12+0 [0c0b7dd1] Xorg_libXau_jll v1.0.13+0 [a3789734] Xorg_libXdmcp_jll v1.1.6+0 [1082639a] Xorg_libXext_jll v1.3.7+0 [ea2f1a96] Xorg_libXrender_jll v0.9.12+0 [c7cfdc94] Xorg_libxcb_jll v1.17.1+0 [c5fb5394] Xorg_xtrans_jll v1.6.0+0 [a4ae2306] libaom_jll v3.11.0+0 [0ac62f75] libass_jll v0.15.2+0 [f638f0a6] libfdk_aac_jll v2.0.3+0 [b53b4c65] libpng_jll v1.6.47+0 [f27f6e37] libvorbis_jll v1.3.7+2 [1317d2d5] oneTBB_jll v2022.0.0+0 ⌅ [1270edf5] x264_jll v2021.5.5+0 ⌅ [dfaa095f] x265_jll v3.5.0+0 [0dad84c5] ArgTools v1.1.2 [56f22d72] Artifacts v1.11.0 [2a0f44e3] Base64 v1.11.0 [ade2ca70] Dates v1.11.0 [8ba89e20] Distributed v1.11.0 [f43a241f] Downloads v1.6.0 [7b1f6079] FileWatching v1.11.0 [9fa8497b] Future v1.11.0 [b77e0a4c] InteractiveUtils v1.11.0 [ac6e5ff7] JuliaSyntaxHighlighting v1.12.0 [4af54fe1] LazyArtifacts v1.11.0 [b27032c2] LibCURL v0.6.4 [76f85450] LibGit2 v1.11.0 [8f399da3] Libdl v1.11.0 [37e2e46d] LinearAlgebra v1.12.0 [56ddb016] Logging v1.11.0 [d6f4376e] Markdown v1.11.0 [a63ad114] Mmap v1.11.0 [ca575930] NetworkOptions v1.3.0 [44cfe95a] Pkg v1.12.0 [de0858da] Printf v1.11.0 [9abbd945] Profile v1.11.0 [3fa0cd96] REPL v1.11.0 [9a3f8284] Random v1.11.0 [ea8e919c] SHA v0.7.0 [9e88b42a] Serialization v1.11.0 [1a1011a3] SharedArrays v1.11.0 [6462fe0b] Sockets v1.11.0 [2f01184e] SparseArrays v1.12.0 [f489334b] StyledStrings v1.11.0 [4607b0f0] SuiteSparse [fa267f1f] TOML v1.0.3 [a4e569a6] Tar v1.10.0 [8dfed614] Test v1.11.0 [cf7118a7] UUIDs v1.11.0 [4ec0a83e] Unicode v1.11.0 [e66e0078] CompilerSupportLibraries_jll v1.3.0+1 [deac9b47] LibCURL_jll v8.11.1+1 [e37daf67] LibGit2_jll v1.9.0+0 [29816b5a] LibSSH2_jll v1.11.3+1 [14a3606d] MozillaCACerts_jll v2025.2.25 [4536629a] OpenBLAS_jll v0.3.29+0 [05823500] OpenLibm_jll v0.8.5+0 [458c3c95] OpenSSL_jll v3.5.0+0 [efcefdf7] PCRE2_jll v10.44.0+1 [bea87d4a] SuiteSparse_jll v7.8.3+2 [83775a58] Zlib_jll v1.3.1+2 [8e850b90] libblastrampoline_jll v5.12.0+0 [8e850ede] nghttp2_jll v1.64.0+1 [3f19e933] p7zip_jll v17.5.0+2 Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. Testing Running tests... Starting tests for Emulator [ Info: fit successful WARNING: llvmcall with integer pointers is deprecated. 
Use actual pointers instead, replacing i32 or i64 with i8* or ptr in initialize_task(Any) at /home/pkgeval/.julia/packages/ThreadingUtilities/nn4y1/src/ThreadingUtilities.jl SVD truncated at k: 3/6 [ Info: reducing input dimension from 10 to rank(input_cov) during low rank in normalization Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 
-0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 SVD truncated at k: 2/6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, 
GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: 
GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 
3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 2 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} 
Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 3 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 4 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 5 kernel in GaussianProcess: Type: GaussianProcesses.SumKernel{GaussianProcesses.SEArd{Float64}, GaussianProcesses.Noise{Float64}} Type: GaussianProcesses.SEArd{Float64}, Params: [-0.0, -0.0, -0.0, 0.0] Type: GaussianProcesses.Noise{Float64}, Params: [0.0] created GP: 6 Completed tests for Emulator, 114 seconds elapsed Starting tests for GaussianProcess ┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov. └ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121 Using user-defined kernelType: SEIso{Float64}, Params: [0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: SumKernel{SEIso{Float64}, Noise{Float64}} Type: SEIso{Float64}, Params: [0.0, 0.0] Type: Noise{Float64}, Params: [0.0] created GP: 1 ┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov. └ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121 ┌ Warning: GaussianProcess already built. skipping... └ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/GaussianProcess.jl:151 optimized hyperparameters of GP: 1 Type: SumKernel{SEIso{Float64}, Noise{Float64}} Type: SEIso{Float64}, Params: [0.4671112501723779, -0.11637219097092133] Type: Noise{Float64}, Params: [-2.7795647959619263] ┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov. └ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121 ┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov. └ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121 ┌ Warning: implicit `obsdim=2` argument is deprecated and now has to be passed explicitly to specify that each column corresponds to one observation │ caller = #_#1 at finite_gp_projection.jl:36 [inlined] └ @ Core ~/.julia/packages/AbstractGPs/lWdNB/src/finite_gp_projection.jl:36 optimised GP: 1 Sum of 2 kernels: Squared Exponential Kernel (metric = Distances.Euclidean(0.0)) - ARD Transform (dims: 1) - σ² = 0.7923560881646375 White Kernel - σ² = 0.0038521278620295635 [ Info: AbstractGP already built. Continuing... 
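Editor's note: the repeated "obs_noise_cov" warning in this test section comes from constructing an Emulator without the observational-noise covariance. Below is a minimal sketch of how an emulator over the GaussianProcesses.jl backend might be built with that keyword supplied; it assumes the CalibrateEmulateSample v0.7 API (GPJL, GaussianProcess, Emulator, optimize_hyperparameters!) and uses a hypothetical 2-input / 1-output training set, so exact constructor details may differ from the package's own tests.

using LinearAlgebra, Random
using CalibrateEmulateSample.Emulators
using EnsembleKalmanProcesses.DataContainers   # provides PairedDataContainer (assumed keyword `data_are_columns`)

# Hypothetical training data: columns are samples (2 inputs, 1 output, 20 points).
rng = MersenneTwister(42)
inputs  = rand(rng, 2, 20)
outputs = reshape(sin.(inputs[1, :]) .+ 0.1 .* randn(rng, 20), 1, 20)
iopairs = PairedDataContainer(inputs, outputs, data_are_columns = true)

# GaussianProcesses.jl backend with a learned white-noise kernel, as reported in the log.
gp = GaussianProcess(GPJL(); noise_learn = true)

# Supplying obs_noise_cov (here a 1x1 covariance) is what the warning asks for, so the
# emulator can whiten/decorrelate the outputs instead of assuming unit noise.
obs_noise_cov = 0.1^2 * Matrix(I, 1, 1)
emulator = Emulator(gp, iopairs; obs_noise_cov = obs_noise_cov)
optimize_hyperparameters!(emulator)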
┌ Warning: implicit `obsdim=2` argument is deprecated and now has to be passed explicitly to specify that each column corresponds to one observation
│ caller = #_#1 at finite_gp_projection.jl:36 [inlined]
└ @ Core ~/.julia/packages/AbstractGPs/lWdNB/src/finite_gp_projection.jl:36
┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov.
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121
Using user-defined kernel
Type: SEIso{Float64}, Params: [0.0, 0.0]
Learning additive white noise
kernel in GaussianProcess:
Type: SumKernel{SEIso{Float64}, Noise{Float64}}
Type: SEIso{Float64}, Params: [0.0, 0.0]
Type: Noise{Float64}, Params: [0.0]
created GP: 1
optimized hyperparameters of GP: 1
Type: SumKernel{SEIso{Float64}, Noise{Float64}}
Type: SEIso{Float64}, Params: [0.467111250159053, -0.11637219100329507]
Type: Noise{Float64}, Params: [-2.912614529741828]
┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov.
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121
Using user-defined kernel
PyObject 1**2 * RBF(length_scale=1)
Learning additive white noise
[ Info: Training kernel 1,
[ Info: PyObject 1**2 * RBF(length_scale=1) + WhiteKernel(noise_level=1)
┌ Warning: The covariance of the observational noise (a.k.a obs_noise_cov) is useful for data processing. Large approximation errors can occur without it. If possible, please provide it using the keyword obs_noise_cov.
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/Emulator.jl:121
┌ Warning: GaussianProcess already built. skipping...
└ @ CalibrateEmulateSample.Emulators ~/.julia/packages/CalibrateEmulateSample/H2455/src/GaussianProcess.jl:271
SKlearn, already trained. continuing...
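Editor's note: the "Using user-defined kernel" lines above correspond to kernels passed via the `kernel` keyword of `GaussianProcess`. The sketch below mirrors the kernels printed in the log (an SEIso kernel with log-parameters [0.0, 0.0] for the GaussianProcesses.jl backend, and 1**2 * RBF(length_scale=1) for the scikit-learn backend); the backend type names GPJL/SKLJL and the keyword wiring are assumed from the v0.7 API.

using GaussianProcesses                    # provides SE(ll, lσ), the isotropic squared-exponential kernel
using PyCall
using CalibrateEmulateSample.Emulators

# GaussianProcesses.jl backend: SEIso with log length-scale 0.0 and log signal std 0.0,
# matching "Type: SEIso{Float64}, Params: [0.0, 0.0]". noise_learn = true adds the
# white-noise term reported as "Learning additive white noise".
gpjl_kernel = SE(0.0, 0.0)
gp_jl = GaussianProcess(GPJL(); kernel = gpjl_kernel, noise_learn = true)

# scikit-learn backend (assumption: a PyObject kernel is accepted the same way),
# reproducing the printed kernel 1**2 * RBF(length_scale=1).
skl_kernels = pyimport("sklearn.gaussian_process.kernels")
skl_kernel  = skl_kernels.ConstantKernel(constant_value = 1.0) * skl_kernels.RBF(length_scale = 1.0)
gp_skl = GaussianProcess(SKLJL(); kernel = skl_kernel, noise_learn = true)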
Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: SEArd{Float64}, Params: [-0.0, -0.0, 0.0] Learning additive white noise kernel in GaussianProcess: Type: SumKernel{SEArd{Float64}, Noise{Float64}} Type: SEArd{Float64}, Params: [-0.0, -0.0, 0.0] Type: Noise{Float64}, Params: [0.0] created GP: 1 kernel in GaussianProcess: Type: SumKernel{SEArd{Float64}, Noise{Float64}} Type: SEArd{Float64}, Params: [-0.0, -0.0, 0.0] Type: Noise{Float64}, Params: [0.0] created GP: 2 Using default squared exponential kernel, learning length scale and variance parameters Using default squared exponential kernel: Type: SEArd{Float64}, Params: [-0.0, -0.0, 0.0] kernel in GaussianProcess: Type: SEArd{Float64}, Params: [-0.0, -0.0, 0.0] created GP: 1 kernel in GaussianProcess: Type: SEArd{Float64}, Params: [-0.0, -0.0, 0.0] created GP: 2 optimized hyperparameters of GP: 1 Type: SumKernel{SEArd{Float64}, Noise{Float64}} Type: SEArd{Float64}, Params: [-0.034010033457988934, 2.6947372695174536, 1.9374032687140703] Type: Noise{Float64}, Params: [-0.19545576083347674] optimized hyperparameters of GP: 2 Type: SumKernel{SEArd{Float64}, Noise{Float64}} Type: SEArd{Float64}, Params: [2.040064043166128, -0.263116528583071, 2.0697093362932244] Type: Noise{Float64}, Params: [-0.08245764941529067] optimized hyperparameters of GP: 1 Type: SEArd{Float64}, Params: [-0.070755506795137, 2.7805790822912098, 1.879857339393294] optimized hyperparameters of GP: 2 Type: SEArd{Float64}, Params: [2.0918473623519653, -0.15767169342096962, 2.145456577252389] optimised GP: 1 Sum of 2 kernels: Squared Exponential Kernel (metric = Distances.Euclidean(0.0)) - ARD Transform (dims: 2) - σ² = 48.17337764399554 White Kernel - σ² = 0.6764400036755718 optimised GP: 2 Sum of 2 kernels: Squared Exponential Kernel (metric = Distances.Euclidean(0.0)) - ARD Transform (dims: 2) - σ² = 62.76632305723065 White Kernel - σ² = 0.8479655247177975 Completed tests for GaussianProcess, 43 seconds elapsed Starting tests for RandomFeature ┌ Info: Shrinkage scale: 0.9185497035002698, (0 = none, 1 = revert to scaled Identity) └ shrinkage covariance condition number: 1.8594255907781603 [ Info: NICE-adjusted covariance condition number: 3.9256544515833234 [ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("inflation" => 0.0001, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "cov_correction" => "shrinkage", "verbose" => false, "multithread" => "ensemble", "n_ensemble" => 40, "cov_sample_multiplier" => 10.0, "n_features_opt" => 200, "train_fraction" => 0.8, "n_cross_val_sets" => 2, "n_iteration" => 10) [ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("inflation" => 0.0001, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "cov_correction" => "shrinkage", "verbose" => false, "multithread" => "ensemble", "n_ensemble" => 70, "cov_sample_multiplier" => 10.0, "n_features_opt" => 200, "train_fraction" => 0.8, "n_cross_val_sets" => 2, "n_iteration" => 10) [ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("inflation" => 0.0001, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), 
"accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "cov_correction" => "shrinkage", "verbose" => false, "multithread" => "ensemble", "n_ensemble" => 90, "cov_sample_multiplier" => 10.0, "n_features_opt" => 200, "train_fraction" => 0.8, "n_cross_val_sets" => 2, "n_iteration" => 10, "tikhonov" => 0) [ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("inflation" => 0.0001, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "cov_correction" => "shrinkage", "verbose" => false, "multithread" => "ensemble", "n_ensemble" => 100, "cov_sample_multiplier" => 10.0, "n_features_opt" => 200, "train_fraction" => 0.8, "n_cross_val_sets" => 2, "n_iteration" => 10, "tikhonov" => 0) [ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("inflation" => 0.0001, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "cov_correction" => "shrinkage", "verbose" => false, "multithread" => "ensemble", "n_ensemble" => 20, "cov_sample_multiplier" => 10.0, "n_features_opt" => 100, "train_fraction" => 0.8, "n_cross_val_sets" => 0, "n_iteration" => 10) [ Info: hyperparameter optimization with EKI configured with Dict{Any, Any}("inflation" => 0.0001, "localization" => EnsembleKalmanProcesses.Localizers.NoLocalization(), "accelerator" => NesterovAccelerator{Float64}(Float64[], 1.0), "scheduler" => DataMisfitController{Float64, String}(Int64[], 1000.0, "stop"), "cov_correction" => "shrinkage", "verbose" => false, "multithread" => "ensemble", "n_ensemble" => 30, "cov_sample_multiplier" => 10.0, "n_features_opt" => 100, "train_fraction" => 0.8, "n_cross_val_sets" => 0, "n_iteration" => 10, "tikhonov" => 0) [ Info: hyperparameter learning for 1 models using 50 training points, 50 validation points and 100 features estimate cov with 520 iterations... WARNING: llvmcall with integer pointers is deprecated. Use actual pointers instead, replacing i32 or i64 with i8* or ptr in tile_halves(F, Type{T}, Tuple, Tuple, Tuple, Any, Any) where {F<:Function, T} at /home/pkgeval/.julia/packages/Tullio/2zyFP/src/threads.jl WARNING: llvmcall with integer pointers is deprecated. Use actual pointers instead, replacing i32 or i64 with i8* or ptr in _turbo_!(Base.Val{var"#UNROLL#"}, Base.Val{var"#OPS#"}, Base.Val{var"#ARF#"}, Base.Val{var"#AM#"}, Base.Val{var"#LPSYM#"}, Base.Val{Tuple{var"#LB#", var"#V#"}}, Vararg{Any, var"#num#vargs#"}) where {var"#UNROLL#", var"#OPS#", var"#ARF#", var"#AM#", var"#LPSYM#", var"#LB#", var"#V#", var"#num#vargs#"} at /home/pkgeval/.julia/packages/LoopVectorization/ImqiY/src/reconstruct_loopset.jl ┌ Info: Shrinkage scale: 0.006708995328631771, (0 = none, 1 = revert to scaled Identity) └ shrinkage covariance condition number: 4643.107708706032 calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... calculating 20 ensemble members... 
[1041] signal 11 (1): Segmentation fault
in expression starting at /home/pkgeval/.julia/packages/CalibrateEmulateSample/H2455/test/RandomFeature/runtests.jl:14
_PyInterpreterState_GET at /usr/local/src/conda/python-3.12.9/Include/internal/pycore_pystate.h:133 [inlined]
dict_dealloc at /usr/local/src/conda/python-3.12.9/Objects/dictobject.c:2353
_Py_Dealloc at /usr/local/src/conda/python-3.12.9/Objects/object.c:2640 [inlined]
Py_DECREF at /usr/local/src/conda/python-3.12.9/Include/object.h:705 [inlined]
Py_XDECREF at /usr/local/src/conda/python-3.12.9/Include/object.h:798 [inlined]
subtype_dealloc at /usr/local/src/conda/python-3.12.9/Objects/typeobject.c:2026
_Py_Dealloc at /usr/local/src/conda/python-3.12.9/Objects/object.c:2640 [inlined]
Py_DECREF at /usr/local/src/conda/python-3.12.9/Include/object.h:705 [inlined]
Py_XDECREF at /usr/local/src/conda/python-3.12.9/Include/object.h:798 [inlined]
_PyObject_FreeInstanceAttributes at /usr/local/src/conda/python-3.12.9/Objects/dictobject.c:5576 [inlined]
subtype_dealloc at /usr/local/src/conda/python-3.12.9/Objects/typeobject.c:2023
pydecref_ at /home/pkgeval/.julia/packages/PyCall/1gn3u/src/PyCall.jl:118 [inlined]
pydecref at /home/pkgeval/.julia/packages/PyCall/1gn3u/src/PyCall.jl:123
unknown function (ip: 0x7dafd7116b85) at (unknown file)
_jl_invoke at /source/src/gf.c:3503 [inlined]
ijl_apply_generic at /source/src/gf.c:3703
run_finalizer at /source/src/gc-common.c:180
jl_gc_run_finalizers_in_list at /source/src/gc-common.c:270
run_finalizers at /source/src/gc-common.c:316
ijl_gc_collect at /source/src/gc-stock.c:3481
maybe_collect at /source/src/gc-stock.c:349 [inlined]
jl_gc_small_alloc_inner at /source/src/gc-stock.c:725
jl_gc_small_alloc_noinline at /source/src/gc-stock.c:783 [inlined]
jl_gc_alloc_ at /source/src/gc-stock.c:797
jl_alloc_genericmemory_unchecked at /source/src/genericmemory.c:41
GenericMemory at ./boot.jl:588 [inlined]
new_as_memoryref at ./boot.jl:604 [inlined]
Array at ./boot.jl:651 [inlined]
Array at ./boot.jl:661 [inlined]
similar at ./array.jl:377 [inlined]
eigencopy_oftype at /source/usr/share/julia/stdlib/v1.12/LinearAlgebra/src/symmetriceigen.jl:5 [inlined]
cholcopy at /source/usr/share/julia/stdlib/v1.12/LinearAlgebra/src/cholesky.jl:182 [inlined]
#cholesky#170 at /source/usr/share/julia/stdlib/v1.12/LinearAlgebra/src/cholesky.jl:542 [inlined]
cholesky at /source/usr/share/julia/stdlib/v1.12/LinearAlgebra/src/cholesky.jl:542 [inlined]
cholesky at /source/usr/share/julia/stdlib/v1.12/LinearAlgebra/src/cholesky.jl:542 [inlined]
isposdef at /source/usr/share/julia/stdlib/v1.12/LinearAlgebra/src/dense.jl:93
#Decomposition#9 at /home/pkgeval/.julia/packages/RandomFeatures/d156W/src/Utilities.jl:119
Decomposition at /home/pkgeval/.julia/packages/RandomFeatures/d156W/src/Utilities.jl:103
unknown function (ip: 0x7dafbc105839) at (unknown file)
_jl_invoke at /source/src/gf.c:3503 [inlined]
ijl_apply_generic at /source/src/gf.c:3703
#fit#5 at /home/pkgeval/.julia/packages/RandomFeatures/d156W/src/Methods.jl:241
fit at /home/pkgeval/.julia/packages/RandomFeatures/d156W/src/Methods.jl:190
unknown function (ip: 0x7daf7cbb5f70) at (unknown file)
_jl_invoke at /source/src/gf.c:3503 [inlined]
ijl_apply_generic at /source/src/gf.c:3703
calculate_mean_cov_and_coeffs at /home/pkgeval/.julia/packages/CalibrateEmulateSample/H2455/src/RandomFeature.jl:477
unknown function (ip: 0x7daf7cb6f95e) at (unknown file)
_jl_invoke at /source/src/gf.c:3503 [inlined]
ijl_apply_generic at /source/src/gf.c:3703
macro expansion at /home/pkgeval/.julia/packages/CalibrateEmulateSample/H2455/src/RandomFeature.jl:977 [inlined]
#53 at ./threadingconstructs.jl:276
#51 at ./threadingconstructs.jl:243 [inlined]
#threading_run##0 at ./threadingconstructs.jl:177
unknown function (ip: 0x7dafbc18d67d) at (unknown file)
_jl_invoke at /source/src/gf.c:3503 [inlined]
ijl_apply_generic at /source/src/gf.c:3703
jl_apply at /source/src/julia.h:2350 [inlined]
start_task at /source/src/task.c:1249
Allocations: 233201744 (Pool: 233199466; Big: 2278); GC: 134
Testing failed after 304.32s
ERROR: LoadError: Package CalibrateEmulateSample errored during testing (received signal: 11)
Stacktrace:
  [1] pkgerror(msg::String)
    @ Pkg.Types /opt/julia/share/julia/stdlib/v1.12/Pkg/src/Types.jl:68
  [2] test(ctx::Pkg.Types.Context, pkgs::Vector{PackageSpec}; coverage::Bool, julia_args::Cmd, test_args::Cmd, test_fn::Nothing, force_latest_compatible_version::Bool, allow_earlier_backwards_compatible_versions::Bool, allow_reresolve::Bool)
    @ Pkg.Operations /opt/julia/share/julia/stdlib/v1.12/Pkg/src/Operations.jl:2365
  [3] test
    @ /opt/julia/share/julia/stdlib/v1.12/Pkg/src/Operations.jl:2220 [inlined]
  [4] test(ctx::Pkg.Types.Context, pkgs::Vector{PackageSpec}; coverage::Bool, test_fn::Nothing, julia_args::Cmd, test_args::Cmd, force_latest_compatible_version::Bool, allow_earlier_backwards_compatible_versions::Bool, allow_reresolve::Bool, kwargs::@Kwargs{io::IOContext{IO}})
    @ Pkg.API /opt/julia/share/julia/stdlib/v1.12/Pkg/src/API.jl:486
  [5] test(pkgs::Vector{PackageSpec}; io::IOContext{IO}, kwargs::@Kwargs{julia_args::Cmd})
    @ Pkg.API /opt/julia/share/julia/stdlib/v1.12/Pkg/src/API.jl:164
  [6] test(pkgs::Vector{String}; kwargs::@Kwargs{julia_args::Cmd})
    @ Pkg.API /opt/julia/share/julia/stdlib/v1.12/Pkg/src/API.jl:152
  [7] test
    @ /opt/julia/share/julia/stdlib/v1.12/Pkg/src/API.jl:152 [inlined]
  [8] #test#81
    @ /opt/julia/share/julia/stdlib/v1.12/Pkg/src/API.jl:151 [inlined]
  [9] top-level scope
    @ /PkgEval.jl/scripts/evaluate.jl:219
 [10] include(mod::Module, _path::String)
    @ Base ./Base.jl:303
 [11] exec_options(opts::Base.JLOptions)
    @ Base ./client.jl:328
 [12] _start()
    @ Base ./client.jl:561
in expression starting at /PkgEval.jl/scripts/evaluate.jl:210
PkgEval crashed after 2478.92s: GC corruption was detected
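Editor's note: the crash occurs while Python objects are finalized by PyCall during a Julia garbage-collection pass inside the RandomFeature tests. To attempt a local reproduction outside PkgEval, the same test suite can be run directly with Pkg on the same Julia build; a minimal sketch (the package name and version come from the log above, and `julia_args` is simply the standard Pkg.test option visible in the stack trace, shown here forwarding a deprecation-warning flag):

using Pkg

# Install the exact version that was evaluated.
Pkg.add(name = "CalibrateEmulateSample", version = "0.7.0")

# Run the same test suite that crashed above; `julia_args` forwards flags to the
# spawned test process, mirroring the Pkg.test call in the stack trace.
Pkg.test("CalibrateEmulateSample"; julia_args = `--depwarn=yes`)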