Package evaluation to test Finch on Julia 1.14.0-DEV.1372 (893635dc59*) started at 2025-12-16T16:32:21.623

################################################################################
# Set-up
#

Installing PkgEval dependencies (TestEnv)...
Activating project at `~/.julia/environments/v1.14`
Set-up completed after 9.54s

################################################################################
# Installation
#

Installing Finch...
   Resolving package versions...
    Updating `~/.julia/environments/v1.14/Project.toml`
  [9177782c] + Finch v1.2.12
    Updating `~/.julia/environments/v1.14/Manifest.toml`
  [1520ce14] + AbstractTrees v0.4.5
  [66dad0bd] + AliasTables v1.1.3
  [15f4f7f2] + AutoHashEquals v2.2.0
  [34da2185] + Compat v4.18.1
  [9a962f9c] + DataAPI v1.16.0
  [864edb3b] + DataStructures v0.19.3
  [31c24e10] + Distributions v0.25.122
  [ffbed154] + DocStringExtensions v0.9.5
  [1a297f60] + FillArrays v1.15.0
  [9177782c] + Finch v1.2.12
  [34004b35] + HypergeometricFunctions v0.3.28
  [92d709cd] + IrrationalConstants v0.2.6
  [c8e1da08] + IterTools v1.10.0
  [692b3bcd] + JLLWrappers v1.7.1
  [682c06a0] + JSON v1.3.0
  [2ab3a3ac] + LogExpFunctions v0.3.29
  [e1d29d7a] + Missings v1.2.0
  [bac558e1] + OrderedCollections v1.8.1
  [90014a1f] + PDMats v0.11.36
  [69de0a69] + Parsers v2.8.3
  [aea7be01] + PrecompileTools v1.3.3
  [21216c6a] + Preferences v1.5.0
  [43287f4e] + PtrArrays v1.3.0
  [1fd47b50] + QuadGK v2.11.2
  [189a3867] + Reexport v1.2.2
  [ae029012] + Requires v1.3.1
  [5969e224] + RewriteTools v0.2.4
  [79098fc4] + Rmath v0.9.0
  [a2af1166] + SortingAlgorithms v1.2.2
  [276daf66] + SpecialFunctions v2.6.1
  [10745b16] + Statistics v1.11.1
  [82ae8749] + StatsAPI v1.8.0
  [2913bbd2] + StatsBase v0.34.9
  [4c63d2b9] + StatsFuns v1.5.2
  [ec057cc2] + StructUtils v2.6.0
  [b33eeca9] + SyntaxInterface v0.2.1
  [013be700] + UnsafeAtomics v0.3.0
  [efe28fd5] + OpenSpecFun_jll v0.5.6+0
  [f50d1b31] + Rmath_jll v0.5.1+0
  [56f22d72] + Artifacts v1.11.0
  [ade2ca70] + Dates v1.11.0
  [8f399da3] + Libdl v1.11.0
  [37e2e46d] + LinearAlgebra v1.13.0
  [56ddb016] + Logging v1.11.0
  [de0858da] + Printf v1.11.0
  [9a3f8284] + Random v1.11.0
  [ea8e919c] + SHA v1.0.0
  [9e88b42a] + Serialization v1.11.0
  [2f01184e] + SparseArrays v1.13.0
  [4607b0f0] + SuiteSparse
  [fa267f1f] + TOML v1.0.3
  [cf7118a7] + UUIDs v1.11.0
  [4ec0a83e] + Unicode v1.11.0
  [e66e0078] + CompilerSupportLibraries_jll v1.3.0+1
  [4536629a] + OpenBLAS_jll v0.3.29+0
  [05823500] + OpenLibm_jll v0.8.7+0
  [bea87d4a] + SuiteSparse_jll v7.10.1+0
  [8e850b90] + libblastrampoline_jll v5.15.0+0
Installation completed after 4.75s

################################################################################
# Precompilation
#

ERROR: LoadError: MethodError: no method matching setindex!(::Base.ScopedValues.ScopedValue{IO}, ::Nothing)
The function `setindex!` exists, but no method is defined for this combination of argument types.
Stacktrace:
 [1] top-level scope @ /PkgEval.jl/scripts/precompile.jl:10
 [2] include(mod::Module, _path::String) @ Base ./Base.jl:309
 [3] exec_options(opts::Base.JLOptions) @ Base ./client.jl:344
 [4] _start() @ Base ./client.jl:577
in expression starting at /PkgEval.jl/scripts/precompile.jl:6
caused by: MethodError: no method matching setindex!(::Base.ScopedValues.ScopedValue{IO}, ::Base.DevNull)
The function `setindex!` exists, but no method is defined for this combination of argument types.
Stacktrace:
 [1] top-level scope @ /PkgEval.jl/scripts/precompile.jl:7
 [2] include(mod::Module, _path::String) @ Base ./Base.jl:309
 [3] exec_options(opts::Base.JLOptions) @ Base ./client.jl:344
 [4] _start() @ Base ./client.jl:577
Precompilation failed after 13.13s
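Both MethodErrors originate in PkgEval's own precompile script, not in Finch: on this nightly the script evidently assigns into a `Base.ScopedValues.ScopedValue{IO}` with `setindex!`, which `ScopedValue` does not support (it is read-only; values are rebound for a dynamic scope, not mutated). A minimal sketch of the supported API, using a hypothetical `log_io` value rather than whatever name the script actually uses:

```julia
using Base.ScopedValues  # ScopedValue, with, @with (Julia 1.11+)

# Hypothetical scoped IO handle; `log_io[]` reads it, but there is no setindex!.
const log_io = ScopedValue{IO}(stderr)

# log_io[] = devnull        # MethodError, matching the failure above
with(log_io => devnull) do  # instead, bind a new value for a dynamic extent
    @assert log_io[] === devnull
end
@assert log_io[] === stderr  # the original binding is restored outside the scope
```

Since the failure is inside /PkgEval.jl/scripts/precompile.jl, it presumably affects this PkgEval run as a whole rather than Finch specifically; Finch's own precompilation is attempted again during testing below.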
################################################################################
# Testing
#

     Testing Finch
        Test Could not use exact versions of packages in manifest, re-resolving.
             Note: if you do not check your manifest file into source control, then you can probably ignore this message. However, if you do check your manifest file into source control, then you probably want to pass the `allow_reresolve = false` kwarg when calling the `Pkg.test` function.
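For reference, the kwarg that notice mentions is passed to `Pkg.test` directly; a sketch of the strict variant the message suggests (not what this run does, since PkgEval deliberately allows re-resolving):

```julia
using Pkg

# Error out instead of re-resolving when the checked-in manifest
# cannot be honored on this Julia version:
Pkg.test("Finch"; allow_reresolve=false)
```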
    Updating `/tmp/jl_nMFQ9D/Project.toml`
  [c7e460c6] + ArgParse v1.2.0
  [6e4b80f9] + BenchmarkTools v1.6.3
  [5a98b6c4] + CIndices v0.3.0
⌅ [864edb3b] ↓ DataStructures v0.19.3 ⇒ v0.18.22
  [e30172f5] + Documenter v1.16.1
⌃ [86223c79] + Graphs v1.13.1
  [f67ccb44] + HDF5 v0.17.2
⌅ [98e50ef6] + JuliaFormatter v1.0.62
  [b51810bb] + MatrixDepot v1.0.13
  [4d4711f2] + MatrixMarket v0.5.2
  [15e1cf62] + NPZ v0.4.3
  [817f1d60] + ReTestItems v1.35.0
  [47aef6b3] + SimpleWeightedGraphs v1.5.1
  [860ef19b] + StableRNGs v1.0.4
  [8b7d4fe7] + TensorMarket v0.2.2
  [7b1f6079] ~ FileWatching ⇒ v1.11.0
  [44cfe95a] ~ Pkg ⇒ v1.14.0
  [8dfed614] ~ Test ⇒ v1.11.0
    Updating `/tmp/jl_nMFQ9D/Manifest.toml`
  [a4c015fc] + ANSIColoredPrinters v0.0.1
  [c7e460c6] + ArgParse v1.2.0
  [ec485272] + ArnoldiMethod v0.4.0
  [6e4b80f9] + BenchmarkTools v1.6.3
  [e1450e63] + BufferedStreams v1.2.2
  [5a98b6c4] + CIndices v0.3.0
  [00ebfdb7] + CSTParser v3.4.3
  [79a69506] + ChannelBuffers v0.4.2
  [944b1d66] + CodecZlib v0.7.8
⌅ [a80b9123] + CommonMark v0.8.16
  [a8cc5b0e] + Crayons v4.1.1
  [a93c6f00] + DataFrames v1.8.1
⌅ [864edb3b] ↓ DataStructures v0.19.3 ⇒ v0.18.22
  [e2d170a0] + DataValueInterfaces v1.0.0
  [e30172f5] + Documenter v1.16.1
  [5789e2e9] + FileIO v1.17.1
  [d7ba0133] + Git v1.5.0
  [c27321d9] + Glob v1.4.0
⌃ [86223c79] + Graphs v1.13.1
  [f67ccb44] + HDF5 v0.17.2
  [b5f81e59] + IOCapture v1.0.0
  [d25df0c9] + Inflate v0.1.5
  [842dd82b] + InlineStrings v1.4.5
  [41ab1584] + InvertedIndices v1.3.1
  [82899510] + IteratorInterfaceExtensions v1.0.0
⌅ [98e50ef6] + JuliaFormatter v1.0.62
  [b964fa9f] + LaTeXStrings v1.4.0
  [0e77f7df] + LazilyInitializedFields v1.3.0
⌅ [23992714] + MAT v0.10.7
  [3da0fdf6] + MPIPreferences v0.1.11
  [1914dd2f] + MacroTools v0.5.16
  [d0879d2d] + MarkdownAST v0.1.2
  [b51810bb] + MatrixDepot v1.0.13
  [4d4711f2] + MatrixMarket v0.5.2
  [15e1cf62] + NPZ v0.4.3
  [2dfb63ee] + PooledArrays v1.4.3
  [08abe8d2] + PrettyTables v3.1.2
  [817f1d60] + ReTestItems v1.35.0
  [2792f1a3] + RegistryInstances v0.1.0
  [6c6a2e73] + Scratch v1.3.0
  [91c51154] + SentinelArrays v1.4.8
  [699a6c99] + SimpleTraits v0.9.5
  [47aef6b3] + SimpleWeightedGraphs v1.5.1
  [860ef19b] + StableRNGs v1.0.4
  [90137ffa] + StaticArrays v1.9.15
  [1e83bf80] + StaticArraysCore v1.4.4
  [892a3eda] + StringManipulation v0.4.2
  [3783bdb8] + TableTraits v1.0.1
  [bd369af6] + Tables v1.12.1
  [8b7d4fe7] + TensorMarket v0.2.2
  [1e6cf692] + TestEnv v1.103.0
  [b718987f] + TextWrap v1.0.2
  [0796e94c] + Tokenize v0.5.29
  [3bb67fe8] + TranscodingStreams v0.11.3
  [a5390f91] + ZipFile v0.10.1
  [2e619515] + Expat_jll v2.7.3+0
  [020c3dae] + Git_LFS_jll v3.7.0+0
  [f8c6e375] + Git_jll v2.52.0+0
⌅ [0234f1f7] + HDF5_jll v1.14.6+0
  [e33a78d0] + Hwloc_jll v2.12.2+0
  [94ce4f54] + Libiconv_jll v1.18.0+0
  [7cb0a576] + MPICH_jll v4.3.2+0
  [f1f71cc9] + MPItrampoline_jll v5.5.4+0
  [9237b28f] + MicrosoftMPI_jll v10.1.4+3
  [fe0851c0] + OpenMPI_jll v5.0.9+0
  [9bd350c2] + OpenSSH_jll v10.2.1+0
⌅ [02c8fc9c] + XML2_jll v2.13.9+0
  [a65dc6b1] + Xorg_libpciaccess_jll v0.18.1+0
  [477f73a3] + libaec_jll v1.1.4+0
  [0dad84c5] + ArgTools v1.1.2
  [2a0f44e3] + Base64 v1.11.0
  [8ba89e20] + Distributed v1.11.0
  [f43a241f] + Downloads v1.7.0
  [7b1f6079] ~ FileWatching ⇒ v1.11.0
  [9fa8497b] + Future v1.11.0
  [b77e0a4c] + InteractiveUtils v1.11.0
  [ac6e5ff7] + JuliaSyntaxHighlighting v1.13.0
  [4af54fe1] + LazyArtifacts v1.11.0
  [b27032c2] + LibCURL v1.0.0
  [76f85450] + LibGit2 v1.11.0
  [d6f4376e] + Markdown v1.11.0
  [a63ad114] + Mmap v1.11.0
  [ca575930] + NetworkOptions v1.3.0
  [44cfe95a] ~ Pkg ⇒ v1.14.0
  [9abbd945] + Profile v1.11.0
  [3fa0cd96] + REPL v1.11.0
  [1a1011a3] + SharedArrays v1.11.0
  [6462fe0b] + Sockets v1.11.0
  [f489334b] + StyledStrings v1.13.0
  [a4e569a6] + Tar v1.10.0
  [8dfed614] ~ Test ⇒ v1.11.0
  [deac9b47] + LibCURL_jll v8.17.0+0
  [e37daf67] + LibGit2_jll v1.9.2+0
  [29816b5a] + LibSSH2_jll v1.11.3+1
  [14a3606d] + MozillaCACerts_jll v2025.12.2
  [458c3c95] + OpenSSL_jll v3.5.4+0
  [efcefdf7] + PCRE2_jll v10.47.0+0
  [83775a58] + Zlib_jll v1.3.1+2
  [3161d3a3] + Zstd_jll v1.5.7+1
  [8e850ede] + nghttp2_jll v1.68.0+1
  [3f19e933] + p7zip_jll v17.7.0+0
        Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated -m`
        Test Successfully re-resolved
      Status `/tmp/jl_nMFQ9D/Project.toml`
  [1520ce14] AbstractTrees v0.4.5
  [c7e460c6] ArgParse v1.2.0
  [6e4b80f9] BenchmarkTools v1.6.3
  [5a98b6c4] CIndices v0.3.0
  [34da2185] Compat v4.18.1
⌅ [864edb3b] DataStructures v0.18.22
  [e30172f5] Documenter v1.16.1
  [9177782c] Finch v1.2.12
⌃ [86223c79] Graphs v1.13.1
  [f67ccb44] HDF5 v0.17.2
  [682c06a0] JSON v1.3.0
⌅ [98e50ef6] JuliaFormatter v1.0.62
  [b51810bb] MatrixDepot v1.0.13
  [4d4711f2] MatrixMarket v0.5.2
  [15e1cf62] NPZ v0.4.3
  [aea7be01] PrecompileTools v1.3.3
  [817f1d60] ReTestItems v1.35.0
  [ae029012] Requires v1.3.1
  [5969e224] RewriteTools v0.2.4
  [47aef6b3] SimpleWeightedGraphs v1.5.1
  [860ef19b] StableRNGs v1.0.4
  [10745b16] Statistics v1.11.1
  [2913bbd2] StatsBase v0.34.9
  [b33eeca9] SyntaxInterface v0.2.1
  [8b7d4fe7] TensorMarket v0.2.2
  [7b1f6079] FileWatching v1.11.0
  [37e2e46d] LinearAlgebra v1.13.0
  [44cfe95a] Pkg v1.14.0
  [9a3f8284] Random v1.11.0
  [2f01184e] SparseArrays v1.13.0
  [8dfed614] Test v1.11.0
      Status `/tmp/jl_nMFQ9D/Manifest.toml`
  [a4c015fc] ANSIColoredPrinters v0.0.1
  [1520ce14] AbstractTrees v0.4.5
  [66dad0bd] AliasTables v1.1.3
  [c7e460c6] ArgParse v1.2.0
  [ec485272] ArnoldiMethod v0.4.0
  [15f4f7f2] AutoHashEquals v2.2.0
  [6e4b80f9] BenchmarkTools v1.6.3
  [e1450e63] BufferedStreams v1.2.2
  [5a98b6c4] CIndices v0.3.0
  [00ebfdb7] CSTParser v3.4.3
  [79a69506] ChannelBuffers v0.4.2
  [944b1d66] CodecZlib v0.7.8
⌅ [a80b9123] CommonMark v0.8.16
  [34da2185] Compat v4.18.1
  [a8cc5b0e] Crayons v4.1.1
  [9a962f9c] DataAPI v1.16.0
  [a93c6f00] DataFrames v1.8.1
⌅ [864edb3b] DataStructures v0.18.22
  [e2d170a0] DataValueInterfaces v1.0.0
  [31c24e10] Distributions v0.25.122
  [ffbed154] DocStringExtensions v0.9.5
  [e30172f5] Documenter v1.16.1
  [5789e2e9] FileIO v1.17.1
  [1a297f60] FillArrays v1.15.0
  [9177782c] Finch v1.2.12
  [d7ba0133] Git v1.5.0
  [c27321d9] Glob v1.4.0
⌃ [86223c79] Graphs v1.13.1
  [f67ccb44] HDF5 v0.17.2
  [34004b35] HypergeometricFunctions v0.3.28
  [b5f81e59] IOCapture v1.0.0
  [d25df0c9] Inflate v0.1.5
  [842dd82b] InlineStrings v1.4.5
  [41ab1584] InvertedIndices v1.3.1
  [92d709cd] IrrationalConstants v0.2.6
  [c8e1da08] IterTools v1.10.0
  [82899510] IteratorInterfaceExtensions v1.0.0
  [692b3bcd] JLLWrappers v1.7.1
  [682c06a0] JSON v1.3.0
⌅ [98e50ef6] JuliaFormatter v1.0.62
  [b964fa9f] LaTeXStrings v1.4.0
  [0e77f7df] LazilyInitializedFields v1.3.0
  [2ab3a3ac] LogExpFunctions v0.3.29
⌅ [23992714] MAT v0.10.7
  [3da0fdf6] MPIPreferences v0.1.11
  [1914dd2f] MacroTools v0.5.16
  [d0879d2d] MarkdownAST v0.1.2
  [b51810bb] MatrixDepot v1.0.13
  [4d4711f2] MatrixMarket v0.5.2
  [e1d29d7a] Missings v1.2.0
  [15e1cf62] NPZ v0.4.3
  [bac558e1] OrderedCollections v1.8.1
  [90014a1f] PDMats v0.11.36
  [69de0a69] Parsers v2.8.3
  [2dfb63ee] PooledArrays v1.4.3
  [aea7be01] PrecompileTools v1.3.3
  [21216c6a] Preferences v1.5.0
  [08abe8d2] PrettyTables v3.1.2
  [43287f4e] PtrArrays v1.3.0
  [1fd47b50] QuadGK v2.11.2
  [817f1d60] ReTestItems v1.35.0
  [189a3867] Reexport v1.2.2
  [2792f1a3] RegistryInstances v0.1.0
  [ae029012] Requires v1.3.1
  [5969e224] RewriteTools v0.2.4
  [79098fc4] Rmath v0.9.0
  [6c6a2e73] Scratch v1.3.0
  [91c51154] SentinelArrays v1.4.8
  [699a6c99] SimpleTraits v0.9.5
  [47aef6b3] SimpleWeightedGraphs v1.5.1
  [a2af1166] SortingAlgorithms v1.2.2
  [276daf66] SpecialFunctions v2.6.1
  [860ef19b] StableRNGs v1.0.4
  [90137ffa] StaticArrays v1.9.15
  [1e83bf80] StaticArraysCore v1.4.4
  [10745b16] Statistics v1.11.1
  [82ae8749] StatsAPI v1.8.0
  [2913bbd2] StatsBase v0.34.9
  [4c63d2b9] StatsFuns v1.5.2
  [892a3eda] StringManipulation v0.4.2
  [ec057cc2] StructUtils v2.6.0
  [b33eeca9] SyntaxInterface v0.2.1
  [3783bdb8] TableTraits v1.0.1
  [bd369af6] Tables v1.12.1
  [8b7d4fe7] TensorMarket v0.2.2
  [1e6cf692] TestEnv v1.103.0
  [b718987f] TextWrap v1.0.2
  [0796e94c] Tokenize v0.5.29
  [3bb67fe8] TranscodingStreams v0.11.3
  [013be700] UnsafeAtomics v0.3.0
  [a5390f91] ZipFile v0.10.1
  [2e619515] Expat_jll v2.7.3+0
  [020c3dae] Git_LFS_jll v3.7.0+0
  [f8c6e375] Git_jll v2.52.0+0
⌅ [0234f1f7] HDF5_jll v1.14.6+0
  [e33a78d0] Hwloc_jll v2.12.2+0
  [94ce4f54] Libiconv_jll v1.18.0+0
  [7cb0a576] MPICH_jll v4.3.2+0
  [f1f71cc9] MPItrampoline_jll v5.5.4+0
  [9237b28f] MicrosoftMPI_jll v10.1.4+3
  [fe0851c0] OpenMPI_jll v5.0.9+0
  [9bd350c2] OpenSSH_jll v10.2.1+0
  [efe28fd5] OpenSpecFun_jll v0.5.6+0
  [f50d1b31] Rmath_jll v0.5.1+0
⌅ [02c8fc9c] XML2_jll v2.13.9+0
  [a65dc6b1] Xorg_libpciaccess_jll v0.18.1+0
  [477f73a3] libaec_jll v1.1.4+0
  [0dad84c5] ArgTools v1.1.2
  [56f22d72] Artifacts v1.11.0
  [2a0f44e3] Base64 v1.11.0
  [ade2ca70] Dates v1.11.0
  [8ba89e20] Distributed v1.11.0
  [f43a241f] Downloads v1.7.0
  [7b1f6079] FileWatching v1.11.0
  [9fa8497b] Future v1.11.0
  [b77e0a4c] InteractiveUtils v1.11.0
  [ac6e5ff7] JuliaSyntaxHighlighting v1.13.0
  [4af54fe1] LazyArtifacts v1.11.0
  [b27032c2] LibCURL v1.0.0
  [76f85450] LibGit2 v1.11.0
  [8f399da3] Libdl v1.11.0
  [37e2e46d] LinearAlgebra v1.13.0
  [56ddb016] Logging v1.11.0
  [d6f4376e] Markdown v1.11.0
  [a63ad114] Mmap v1.11.0
  [ca575930] NetworkOptions v1.3.0
  [44cfe95a] Pkg v1.14.0
  [de0858da] Printf v1.11.0
  [9abbd945] Profile v1.11.0
  [3fa0cd96] REPL v1.11.0
  [9a3f8284] Random v1.11.0
  [ea8e919c] SHA v1.0.0
  [9e88b42a] Serialization v1.11.0
  [1a1011a3] SharedArrays v1.11.0
  [6462fe0b] Sockets v1.11.0
  [2f01184e] SparseArrays v1.13.0
  [f489334b] StyledStrings v1.13.0
  [4607b0f0] SuiteSparse
  [fa267f1f] TOML v1.0.3
  [a4e569a6] Tar v1.10.0
  [8dfed614] Test v1.11.0
  [cf7118a7] UUIDs v1.11.0
  [4ec0a83e] Unicode v1.11.0
  [e66e0078] CompilerSupportLibraries_jll v1.3.0+1
  [deac9b47] LibCURL_jll v8.17.0+0
  [e37daf67] LibGit2_jll v1.9.2+0
  [29816b5a] LibSSH2_jll v1.11.3+1
  [14a3606d] MozillaCACerts_jll v2025.12.2
  [4536629a] OpenBLAS_jll v0.3.29+0
  [05823500] OpenLibm_jll v0.8.7+0
  [458c3c95] OpenSSL_jll v3.5.4+0
  [efcefdf7] PCRE2_jll v10.47.0+0
  [bea87d4a] SuiteSparse_jll v7.10.1+0
  [83775a58] Zlib_jll v1.3.1+2
  [3161d3a3] Zstd_jll v1.5.7+1
  [8e850b90] libblastrampoline_jll v5.15.0+0
  [8e850ede] nghttp2_jll v1.68.0+1
  [3f19e933] p7zip_jll v17.7.0+0
        Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading.
     Testing Running tests...
[ Info: Running enhanced Finch.jl precompilation... (to disable, run `using Preferences; Preferences.set_preferences!("Finch", "precompile_workload" => false; force=true)`). See https://julialang.github.io/PrecompileTools.jl/stable/#Package-developers:-reducing-the-cost-of-precompilation-during-development for more.
[ Info: Precompiling common tensor formats...
[ Info: Precompiling common tensor formats... (estimated: 0 minutes and 8 seconds)
[ Info: Precompiling common tensor operations...
[ Info: Precompiling common tensor operations... (estimated: 6 minutes and 50 seconds)
[ Info: Precompiling common tensor operations... (estimated: 3 minutes and 58 seconds)
[ Info: Precompiling common tensor operations... (estimated: 3 minutes and 27 seconds)
[ Info: Precompiling common tensor operations... (estimated: 2 minutes and 33 seconds)
[ Info: Precompiling common tensor operations... (estimated: 2 minutes and 0 seconds)
[ Info: Precompiling common tensor operations... (estimated: 1 minutes and 25 seconds)
[ Info: Precompiling common tensor operations... (estimated: 0 minutes and 59 seconds)
[ Info: Precompiling common tensor operations... (estimated: 0 minutes and 37 seconds)
[ Info: Precompiling common tensor operations... (estimated: 0 minutes and 18 seconds)
[ Info: Done!
1 dependency had output during precompilation:
┌ Finch
│  [Output was shown above]
└
┌ Warning: The call to compilecache failed to create a usable precompiled cache file for Finch [9177782c-1635-4eb9-9bfb-d9dfa25e6bce]
│   exception = Required dependency Base.PkgId(Base.UUID("864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"), "DataStructures") failed to load from a cache file.
└ @ Base loading.jl:2891
[ Info: Running enhanced Finch.jl precompilation... (to disable, run `using Preferences; Preferences.set_preferences!("Finch", "precompile_workload" => false; force=true)`). See https://julialang.github.io/PrecompileTools.jl/stable/#Package-developers:-reducing-the-cost-of-precompilation-during-development for more.
[ Info: Precompiling common tensor formats...
[ Info: Precompiling common tensor formats... (estimated: 0 minutes and 8 seconds)
[ Info: Precompiling common tensor operations...
[ Info: Precompiling common tensor operations... (estimated: 6 minutes and 57 seconds)
[ Info: Precompiling common tensor operations... (estimated: 4 minutes and 1 seconds)
[ Info: Precompiling common tensor operations... (estimated: 3 minutes and 28 seconds)
[ Info: Precompiling common tensor operations... (estimated: 2 minutes and 34 seconds)
[ Info: Precompiling common tensor operations... (estimated: 2 minutes and 1 seconds)
[ Info: Precompiling common tensor operations... (estimated: 1 minutes and 26 seconds)
[ Info: Precompiling common tensor operations... (estimated: 0 minutes and 59 seconds)
[ Info: Precompiling common tensor operations... (estimated: 0 minutes and 38 seconds)
[ Info: Precompiling common tensor operations... (estimated: 0 minutes and 18 seconds)
[ Info: Done!
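The precompile workload ran twice because the first cache file came out unusable (see the compilecache warning above: DataStructures failed to load from its cache). The Info message's own suggestion for skipping this multi-minute workload, reproduced verbatim as runnable code:

```julia
# Verbatim from the log's Info message; disables Finch's PrecompileTools
# workload on the next precompile:
using Preferences
Preferences.set_preferences!("Finch", "precompile_workload" => false; force=true)
```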
ERROR: LoadError: Precompiled image Base.PkgId(Base.UUID("9177782c-1635-4eb9-9bfb-d9dfa25e6bce"), "Finch") not available with flags CacheFlags(; use_pkgimages=false, debug_level=1, check_bounds=0, inline=true, opt_level=0)
Stacktrace:
 [1] error(s::String) @ Base ./error.jl:44
 [2] __require_prelocked(pkg::Base.PkgId, env::String) @ Base ./loading.jl:2813
 [3] _require_prelocked(uuidkey::Base.PkgId, env::String) @ Base ./loading.jl:2665
 [4] macro expansion @ ./loading.jl:2593 [inlined]
 [5] macro expansion @ ./lock.jl:376 [inlined]
 [6] __require(into::Module, mod::Symbol) @ Base ./loading.jl:2557
 [7] require @ ./loading.jl:2533 [inlined]
 [8] eval_import_path @ ./module.jl:36 [inlined]
 [9] eval_import_path_all(at::Module, path::Expr, keyword::String) @ Base ./module.jl:60
 [10] _eval_using @ ./module.jl:137 [inlined]
 [11] _eval_using(to::Module, path::Expr) @ Base ./module.jl:137
 [12] top-level scope @ ~/.julia/packages/Finch/ucnKV/ext/SparseArraysExt.jl:3
 [13] include(mod::Module, _path::String) @ Base ./Base.jl:309
 [14] include_package_for_output(pkg::Base.PkgId, input::String, syntax_version::VersionNumber, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt128}}, source::Nothing) @ Base ./loading.jl:3250
 [15] top-level scope @ stdin:5
 [16] eval(m::Module, e::Any) @ Core ./boot.jl:489
 [17] include_string(mapexpr::typeof(identity), mod::Module, code::String, filename::String) @ Base ./loading.jl:3092
 [18] include_string @ ./loading.jl:3102 [inlined]
 [19] exec_options(opts::Base.JLOptions) @ Base ./client.jl:342
 [20] _start() @ Base ./client.jl:577
in expression starting at /home/pkgeval/.julia/packages/Finch/ucnKV/ext/SparseArraysExt.jl:1
in expression starting at stdin:5
2 dependencies had output during precompilation:
┌ Finch → SparseArraysExt
│  [Output was shown above]
└
┌ Finch
│  [ Info: Running enhanced Finch.jl precompilation... (to disable, run `using Preferences; Preferences.set_preferences!("Finch", "precompile_workload" => false; force=true)`). See https://julialang.github.io/PrecompileTools.jl/stable/#Package-developers:-reducing-the-cost-of-precompilation-during-development for more.
│  [ Info: Precompiling common tensor formats...
│  [ Info: Precompiling common tensor formats... (estimated: 0 minutes and 8 seconds)
│  [ Info: Precompiling common tensor operations...
│  [ Info: Precompiling common tensor operations... (estimated: 6 minutes and 57 seconds)
│  [ Info: Precompiling common tensor operations... (estimated: 4 minutes and 1 seconds)
│  [ Info: Precompiling common tensor operations... (estimated: 3 minutes and 28 seconds)
│  [ Info: Precompiling common tensor operations... (estimated: 2 minutes and 34 seconds)
│  [ Info: Precompiling common tensor operations... (estimated: 2 minutes and 1 seconds)
│  [ Info: Precompiling common tensor operations... (estimated: 1 minutes and 26 seconds)
│  [ Info: Precompiling common tensor operations... (estimated: 0 minutes and 59 seconds)
│  [ Info: Precompiling common tensor operations... (estimated: 0 minutes and 38 seconds)
│  [ Info: Precompiling common tensor operations... (estimated: 0 minutes and 18 seconds)
│  [ Info: Done!
└
┌ Error: Error during loading of extension SparseArraysExt of Finch, use `Base.retry_load_extensions()` to retry.
│   exception =
│    1-element ExceptionStack:
│    The following 1 package failed to precompile:
│
│    SparseArraysExt
│    Failed to precompile SparseArraysExt [9408d6b2-d5e1-5f3b-85be-6bbaf7a4cef0] to "/home/pkgeval/.julia/compiled/v1.14/SparseArraysExt/jl_GRq49x" (ProcessExited(1)).
│
└ @ Base loading.jl:1721
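The "not available with flags CacheFlags(...)" rejection is a flags mismatch: PkgEval runs the test process with non-default compiler options (note use_pkgimages=false and opt_level=0 in the message), so a Finch cache produced under different flags cannot be reused, and re-precompiling the SparseArraysExt extension then fails in turn. One way to inspect the flags a session would demand, with the caveat that `Base.CacheFlags` is internal and not a stable API:

```julia
# Internal API, shown for illustration only; it reflects the current process's
# --pkgimages, --check-bounds, -O, and related options.
@show Base.CacheFlags()
```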
[ Info: Running test item: algebra
WARNING: Method definition assemble!_generator(Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:40 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition assemble!_code(Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:44 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition assemble!(Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:48 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition getindex_helper_generator(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:40 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition getindex_helper_code(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:44 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition getindex_helper(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:48 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition setindex_helper_generator(Any, Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:40 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition setindex_helper_code(Any, Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:44 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition setindex_helper(Any, Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:48 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition helper_equal_generator(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:40 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition helper_equal_code(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:44 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition helper_equal(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:48 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition helper_isequal_generator(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:40 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition helper_isequal_code(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:44 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition helper_isequal(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:48 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition copyto_helper!_generator(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:40 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition copyto_helper!_code(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:44 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition copyto_helper!(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:48 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition dropfills_helper!_generator(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:40 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition dropfills_helper!_code(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:44 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition dropfills_helper!(Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:48 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition reshape_constructor_generator(Any, Any, Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:40 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition reshape_constructor_code(Any, Any, Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:44 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition reshape_constructor(Any, Any, Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:48 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition reshape_kernel_generator(Any, Any, Any, Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:40 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition reshape_kernel_code(Any, Any, Any, Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:44 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition reshape_kernel(Any, Any, Any, Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:48 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition execute_impl_generator(Any, Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:40 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition execute_impl_code(Any, Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:44 overwritten on the same line (check for duplicate calls to `include`).
WARNING: Method definition execute_impl(Any, Any, Any) in module Finch at /home/pkgeval/.julia/packages/Finch/ucnKV/src/util/staging.jl:48 overwritten on the same line (check for duplicate calls to `include`).
[ Info: Running test item: constructors
disagreement with reference output
reference
Tensor(SparseDict(Element(0))) constructors:
initialized tensor: Tensor(SparseDict{Int64}(Element{0.0, Float64, Int64}([2.0, 2.0, 3.0, 3.0]), 6, [1, 5], [2, 3, 5, 6], [1, 2, 3, 4], Dict((1, 2) => 1, (1, 3) => 2, (1, 6) => 4, (1, 5) => 3), Int64[]))
initialized tensor: Tensor(SparseDict{Int16}(Element{0.0, Float64, Int64}([2.0, 2.0, 3.0, 3.0]), 6, [1, 5], Int16[2, 3, 5, 6], [1, 2, 3, 4], Dict{Tuple{Int64, Int16}, Int64}((1, 2) => 1, (1, 3) => 2, (1, 6) => 4, (1, 5) => 3), Int64[]))
sized tensor: Tensor(SparseDict{Int64}(Element{0.0, Float64, Int64}(Float64[]), 7, [1], Int64[], Int64[], Dict{Tuple{Int64, Int64}, Int64}(), Int64[]))
sized tensor: Tensor(SparseDict{Int16}(Element{0.0, Float64, Int64}(Float64[]), 7, [1], Int16[], Int64[], Dict{Tuple{Int64, Int16}, Int64}(), Int64[]))
empty tensor: Tensor(SparseDict{Int64}(Element{0.0, Float64, Int64}(Float64[]), 0, [1], Int64[], Int64[], Dict{Tuple{Int64, Int64}, Int64}(), Int64[]))
empty tensor: Tensor(SparseDict{Int16}(Element{0.0, Float64, Int64}(Float64[]), 0, [1], Int16[], Int64[], Dict{Tuple{Int64, Int16}, Int64}(), Int64[]))
result
Tensor(SparseDict(Element(0))) constructors:
initialized tensor: Tensor(SparseDict{Int64}(Element{0.0, Float64, Int64}([2.0, 2.0, 3.0, 3.0]), 6, [1, 5], [2, 3, 5, 6], [1, 2, 3, 4], Dict((1, 2) => 1, (1, 3) => 2, (1, 5) => 3, (1, 6) => 4), Int64[]))
initialized tensor: Tensor(SparseDict{Int16}(Element{0.0, Float64, Int64}([2.0, 2.0, 3.0, 3.0]), 6, [1, 5], Int16[2, 3, 5, 6], [1, 2, 3, 4], Dict{Tuple{Int64, Int16}, Int64}((1, 2) => 1, (1, 3) => 2, (1, 5) => 3, (1, 6) => 4), Int64[]))
sized tensor: Tensor(SparseDict{Int64}(Element{0.0, Float64, Int64}(Float64[]), 7, [1], Int64[], Int64[], Dict{Tuple{Int64, Int64}, Int64}(), Int64[]))
sized tensor: Tensor(SparseDict{Int16}(Element{0.0, Float64, Int64}(Float64[]), 7, [1], Int16[], Int64[], Dict{Tuple{Int64, Int16}, Int64}(), Int64[]))
empty tensor: Tensor(SparseDict{Int64}(Element{0.0, Float64, Int64}(Float64[]), 0, [1], Int64[], Int64[], Dict{Tuple{Int64, Int64}, Int64}(), Int64[]))
empty tensor: Tensor(SparseDict{Int16}(Element{0.0, Float64, Int64}(Float64[]), 0, [1], Int16[], Int64[], Dict{Tuple{Int64, Int16}, Int64}(), Int64[]))
Construct SparseDict: Test Failed at /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/constructors_tests.jl:116
  Expression: check_output("constructors/format_$(key).txt", String(take!(io)))
   Evaluated: check_output("constructors/format_SparseDict.txt", "Tensor(SparseDict(Element(0))) constructors:\ninitialized tensor: Tensor(SparseDict{Int64}(Element{0.0, Float64, Int64}([2.0, 2.0, 3.0, 3.0]), 6, [1, 5], [2, 3, 5, 6], [1, 2, 3, 4], Dict((1, 2) => 1, (1, 3) => 2, (1, 5) => 3, (1, 6) => 4), Int64[]))\ninitialized tensor: Tensor(SparseDict{Int16}(Element{0.0, Float64, Int64}([2.0, 2.0, 3.0, 3.0]), 6, [1, 5], Int16[2, 3, 5, 6], [1, 2, 3, 4], Dict{Tuple{Int64, Int16}, Int64}((1, 2) => 1, (1, 3) => 2, (1, 5) => 3, (1, 6) => 4), Int64[]))\nsized tensor: Tensor(SparseDict{Int64}(Element{0.0, Float64, Int64}(Float64[]), 7, [1], Int64[], Int64[], Dict{Tuple{Int64, Int64}, Int64}(), Int64[]))\nsized tensor: Tensor(SparseDict{Int16}(Element{0.0, Float64, Int64}(Float64[]), 7, [1], Int16[], Int64[], Dict{Tuple{Int64, Int16}, Int64}(), Int64[]))\nempty tensor: Tensor(SparseDict{Int64}(Element{0.0, Float64, Int64}(Float64[]), 0, [1], Int64[], Int64[], Dict{Tuple{Int64, Int64}, Int64}(), Int64[]))\nempty tensor: Tensor(SparseDict{Int16}(Element{0.0, Float64, Int64}(Float64[]), 0, [1], Int16[], Int64[], Dict{Tuple{Int64, Int16}, Int64}(), Int64[]))\n")
Stacktrace:
 [1] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/suites/constructors_tests.jl:27
 [2] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
 [3] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/suites/constructors_tests.jl:116 [inlined]
 [4] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:753 [inlined]
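The reference and result dumps above differ only in the printed order of the `Dict` entries ((1, 5) and (1, 6) are swapped), which suggests the reference file bakes a particular `Dict` iteration order into a string comparison. That order is an implementation detail and may change between Julia versions; a minimal illustration (plain Julia, not Finch code):

```julia
# Dict iteration (and therefore printing) order is unspecified; a hashing
# change on a Julia nightly can legally reorder entries.
d = Dict((1, 2) => 1, (1, 3) => 2, (1, 5) => 3, (1, 6) => 4)
println(d)  # the entry order printed here may differ across Julia versions

# Dict equality compares contents, not order, so value-level comparisons
# are robust where string comparisons of `repr(d)` are not:
@assert d == Dict((1, 6) => 4, (1, 5) => 3, (1, 3) => 2, (1, 2) => 1)
```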
[ Info: Running test item: continuous_examples
[ Info: Running test item: docs
┌ Warning: Unable to determine HTML(edit_link = ...) from remote HEAD branch, defaulting to "master".
│  Calling `git remote` failed with an exception. Set JULIA_DEBUG=Documenter to see the error.
│  Unless this is due to a configuration error, the relevant variable should be set explicitly.
└ @ Documenter ~/.julia/packages/Documenter/xvqbW/src/utilities/utilities.jl:680
[ Info: SetupBuildDirectory: setting up build directory.
[ Info: Doctest: running doctests.
┌ Error: doctest failure in ../src/tensors/tensors.jl:59-65
│
│  ```jldoctest
│  julia> println(summary(Tensor(sparse([1 0; 0 1]))))
│  2×2 Tensor(Dense(SparseList(Element(0))))
│
│  julia> println(summary(Tensor(ones(3, 2, 4))))
│  3×2×4 Tensor(Dense(Dense(Dense(Element(0.0)))))
│  ```
│
│  Subexpression:
│
│  println(summary(Tensor(sparse([1 0; 0 1]))))
│
│  Evaluated output:
│
│  2×2 Tensor(Dense(Dense(Element(0))))
│
│  Expected output:
│
│  2×2 Tensor(Dense(SparseList(Element(0))))
│
│  diff =
│   Warning: Diff output requires color.
│   2×2 Tensor(Dense(SparseList(Element(0))))Tensor(Dense(Dense(Element(0))))
└ @ Documenter ~/.julia/packages/Finch/ucnKV/src/tensors/tensors.jl:59
┌ Error: Doctesting failed
│   exception =
│    `makedocs` encountered a doctest error. Terminating build
│   Stacktrace:
│    [1] error(s::String) @ Base ./error.jl:44
│    [2] runner(::Type{Documenter.Builder.Doctest}, doc::Documenter.Document) @ Documenter ~/.julia/packages/Documenter/xvqbW/src/builder_pipeline.jl:213
│    [3] dispatch(::Type{Documenter.Builder.DocumentPipeline}, x::Documenter.Document) @ Documenter.Selectors ~/.julia/packages/Documenter/xvqbW/src/utilities/Selectors.jl:170
│    [4] #89 @ ~/.julia/packages/Documenter/xvqbW/src/makedocs.jl:283 [inlined]
│    [5] withenv(::Documenter.var"#89#90"{Documenter.Document}, ::Pair{String, Nothing}, ::Vararg{Pair{String, Nothing}}) @ Base ./env.jl:265
│    [6] #87 @ ~/.julia/packages/Documenter/xvqbW/src/makedocs.jl:282 [inlined]
│    [7] cd(f::Documenter.var"#87#88"{Documenter.Document}, dir::String) @ Base.Filesystem ./file.jl:112
│    [8] makedocs(; debug::Bool, format::Documenter.HTMLWriter.HTML, kwargs::@Kwargs{root::String, source::String, sitename::String, doctest::Symbol, modules::Vector{Module}, doctestfilters::Vector{Regex}, remotes::Nothing, plugins::Vector{Documenter.Plugin}}) @ Documenter ~/.julia/packages/Documenter/xvqbW/src/makedocs.jl:281
│    [9] kwcall(::@NamedTuple{root::String, source::String, sitename::String, doctest::Symbol, modules::Vector{Module}, doctestfilters::Vector{Regex}, remotes::Nothing, plugins::Vector{Documenter.Plugin}}, ::typeof(makedocs)) @ Documenter ~/.julia/packages/Documenter/xvqbW/src/makedocs.jl:274
│    [10] (::Documenter.var"#all_doctests#141"{Bool, Vector{Regex}, Vector{Documenter.Plugin}, Vector{Module}})() @ Documenter ~/.julia/packages/Documenter/xvqbW/src/doctest.jl:85
│    [11] macro expansion @ ~/.julia/packages/Documenter/xvqbW/src/doctest.jl:110 [inlined]
│    [12] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:750 [inlined]
│    [13] macro expansion @ ~/.julia/packages/Documenter/xvqbW/src/doctest.jl:110 [inlined]
│    [14] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
│    [15] doctest(source::String, modules::Vector{Module}; fix::Bool, testset::String, doctestfilters::Vector{Regex}, plugins::Vector{Documenter.Plugin}) @ Documenter ~/.julia/packages/Documenter/xvqbW/src/doctest.jl:110
│    [16] doctest @ ~/.julia/packages/Documenter/xvqbW/src/doctest.jl:69 [inlined]
│    [17] doctest(package::Module; manual::Bool, testset::Nothing, kwargs::@Kwargs{}) @ Documenter ~/.julia/packages/Documenter/xvqbW/src/doctest.jl:37
│    [18] doctest(package::Module) @ Documenter ~/.julia/packages/Documenter/xvqbW/src/doctest.jl:20
│    [19] top-level scope @ ~/.julia/packages/Finch/ucnKV/docs/test.jl:16
│    [20] include(mapexpr::Function, mod::Module, _path::String) @ Base ./Base.jl:310
│    [21] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/suites/docs_tests.jl:10
│    [22] eval(m::Module, e::Any) @ Core ./boot.jl:489
│    [23] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:136
│    [24] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
│    [25] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:137 [inlined]
│    [26] include(mapexpr::Function, mod::Module, _path::String) @ Base ./Base.jl:310
│    [27] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:159
│    [28] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
│    [29] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:164 [inlined]
│    [30] include(mapexpr::Function, mod::Module, _path::String) @ Base ./Base.jl:310
│    [31] top-level scope @ none:6
│    [32] eval(m::Module, e::Any) @ Core ./boot.jl:489
│    [33] exec_options(opts::Base.JLOptions) @ Base ./client.jl:310
│    [34] _start() @ Base ./client.jl:577
└ @ Documenter ~/.julia/packages/Documenter/xvqbW/src/doctest.jl:99
Doctests: Finch: Test Failed at /home/pkgeval/.julia/packages/Documenter/xvqbW/src/doctest.jl:110
  Expression: all_doctests()
Stacktrace:
 [1] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:753 [inlined]
 [2] macro expansion @ ~/.julia/packages/Documenter/xvqbW/src/doctest.jl:110 [inlined]
 [3] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
 [4] doctest(source::String, modules::Vector{Module}; fix::Bool, testset::String, doctestfilters::Vector{Regex}, plugins::Vector{Documenter.Plugin}) @ Documenter ~/.julia/packages/Documenter/xvqbW/src/doctest.jl:110
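The evaluated doctest output `Tensor(Dense(Dense(Element(0))))` instead of the expected `Tensor(Dense(SparseList(Element(0))))` is consistent with the SparseArraysExt precompile failure earlier in this log: without the extension, `Tensor(sparse(...))` evidently falls back to a dense conversion. A sketch of the same check in a session where the extension does load, using the `Base.retry_load_extensions()` hint from the loader error above:

```julia
using SparseArrays, Finch

# If SparseArraysExt failed to load earlier in the session, the loader
# error suggests retrying once the underlying cause is fixed:
Base.retry_load_extensions()

# With SparseArraysExt active, the doctest's expectation should hold:
summary(Tensor(sparse([1 0; 0 1])))  # expect "2×2 Tensor(Dense(SparseList(Element(0))))"
```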
[ Info: Running test item: examples
pagerank: Test Failed at /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/examples_tests.jl:30
  Expression: maximum(abs.(output .- expected)) < tol
   Evaluated: 0.012212010923306649 < 1.0e-6
Stacktrace:
 [1] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:753 [inlined]
 [2] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/suites/examples_tests.jl:30 [inlined]
 [3] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
 [4] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/suites/examples_tests.jl:17
[ Info: Running test item: fileio
ERROR: LoadError: Precompiled image Base.PkgId(Base.UUID("9177782c-1635-4eb9-9bfb-d9dfa25e6bce"), "Finch") not available with flags CacheFlags(; use_pkgimages=false, debug_level=1, check_bounds=0, inline=true, opt_level=0)
Stacktrace:
 [1] error(s::String) @ Base ./error.jl:44
 [2] __require_prelocked(pkg::Base.PkgId, env::String) @ Base ./loading.jl:2813
 [3] _require_prelocked(uuidkey::Base.PkgId, env::String) @ Base ./loading.jl:2665
 [4] macro expansion @ ./loading.jl:2593 [inlined]
 [5] macro expansion @ ./lock.jl:376 [inlined]
 [6] __require(into::Module, mod::Symbol) @ Base ./loading.jl:2557
 [7] require @ ./loading.jl:2533 [inlined]
 [8] eval_import_path @ ./module.jl:36 [inlined]
 [9] eval_import_path_all(at::Module, path::Expr, keyword::String) @ Base ./module.jl:60
 [10] _eval_using @ ./module.jl:137 [inlined]
 [11] _eval_using(to::Module, path::Expr) @ Base ./module.jl:137
 [12] top-level scope @ ~/.julia/packages/Finch/ucnKV/ext/SparseArraysExt.jl:3
 [13] include(mod::Module, _path::String) @ Base ./Base.jl:309
 [14] include_package_for_output(pkg::Base.PkgId, input::String, syntax_version::VersionNumber, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt128}}, source::Nothing) @ Base ./loading.jl:3250
 [15] top-level scope @ stdin:5
 [16] eval(m::Module, e::Any) @ Core ./boot.jl:489
 [17] include_string(mapexpr::typeof(identity), mod::Module, code::String, filename::String) @ Base ./loading.jl:3092
 [18] include_string @ ./loading.jl:3102 [inlined]
 [19] exec_options(opts::Base.JLOptions) @ Base ./client.jl:342
 [20] _start() @ Base ./client.jl:577
in expression starting at /home/pkgeval/.julia/packages/Finch/ucnKV/ext/SparseArraysExt.jl:1
in expression starting at stdin:5
ERROR: LoadError: Precompiled image Base.PkgId(Base.UUID("9177782c-1635-4eb9-9bfb-d9dfa25e6bce"), "Finch") not available with flags CacheFlags(; use_pkgimages=false, debug_level=1, check_bounds=0, inline=true, opt_level=0)
Stacktrace:
 [1] error(s::String) @ Base ./error.jl:44
 [2] __require_prelocked(pkg::Base.PkgId, env::String) @ Base ./loading.jl:2813
 [3] _require_prelocked(uuidkey::Base.PkgId, env::String) @ Base ./loading.jl:2665
 [4] macro expansion @ ./loading.jl:2593 [inlined]
 [5] macro expansion @ ./lock.jl:376 [inlined]
 [6] __require(into::Module, mod::Symbol) @ Base ./loading.jl:2557
 [7] require @ ./loading.jl:2533 [inlined]
 [8] eval_import_path @ ./module.jl:36 [inlined]
 [9] eval_import_path_all(at::Module, path::Expr, keyword::String) @ Base ./module.jl:60
 [10] _eval_using @ ./module.jl:137 [inlined]
 [11] _eval_using(to::Module, path::Expr) @ Base ./module.jl:137
 [12] top-level scope @ ~/.julia/packages/Finch/ucnKV/ext/HDF5Ext.jl:3
 [13] include(mod::Module, _path::String) @ Base ./Base.jl:309
 [14] include_package_for_output(pkg::Base.PkgId, input::String, syntax_version::VersionNumber, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt128}}, source::Nothing) @ Base ./loading.jl:3250
 [15] top-level scope @ stdin:5
 [16] eval(m::Module, e::Any) @ Core ./boot.jl:489
 [17] include_string(mapexpr::typeof(identity), mod::Module, code::String, filename::String) @ Base ./loading.jl:3092
 [18] include_string @ ./loading.jl:3102 [inlined]
 [19] exec_options(opts::Base.JLOptions) @ Base ./client.jl:342
 [20] _start() @ Base ./client.jl:577
in expression starting at /home/pkgeval/.julia/packages/Finch/ucnKV/ext/HDF5Ext.jl:1
in expression starting at stdin:5
2 dependencies had output during precompilation:
┌ Finch → SparseArraysExt
"Finch") not available with flags CacheFlags(; use_pkgimages=false, debug_level=1, check_bounds=0, inline=true, opt_level=0) │ Stacktrace: │ [1] error(s::String) │ @ Base ./error.jl:44 │ [2] __require_prelocked(pkg::Base.PkgId, env::String) │ @ Base ./loading.jl:2813 │ [3] _require_prelocked(uuidkey::Base.PkgId, env::String) │ @ Base ./loading.jl:2665 │ [4] macro expansion │ @ ./loading.jl:2593 [inlined] │ [5] macro expansion │ @ ./lock.jl:376 [inlined] │ [6] __require(into::Module, mod::Symbol) │ @ Base ./loading.jl:2557 │ [7] require │ @ ./loading.jl:2533 [inlined] │ [8] eval_import_path │ @ ./module.jl:36 [inlined] │ [9] eval_import_path_all(at::Module, path::Expr, keyword::String) │ @ Base ./module.jl:60 │ [10] _eval_using │ @ ./module.jl:137 [inlined] │ [11] _eval_using(to::Module, path::Expr) │ @ Base ./module.jl:137 │ [12] top-level scope │ @ ~/.julia/packages/Finch/ucnKV/ext/SparseArraysExt.jl:3 │ [13] include(mod::Module, _path::String) │ @ Base ./Base.jl:309 │ [14] include_package_for_output(pkg::Base.PkgId, input::String, syntax_version::VersionNumber, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt128}}, source::Nothing) │ @ Base ./loading.jl:3250 │ [15] top-level scope │ @ stdin:5 │ [16] eval(m::Module, e::Any) │ @ Core ./boot.jl:489 │ [17] include_string(mapexpr::typeof(identity), mod::Module, code::String, filename::String) │ @ Base ./loading.jl:3092 │ [18] include_string │ @ ./loading.jl:3102 [inlined] │ [19] exec_options(opts::Base.JLOptions) │ @ Base ./client.jl:342 │ [20] _start() │ @ Base ./client.jl:577 │ in expression starting at /home/pkgeval/.julia/packages/Finch/ucnKV/ext/SparseArraysExt.jl:1 │ in expression starting at stdin:5 └ ┌ Finch → HDF5Ext │ [Output was shown above] └ ┌ Error: Error during loading of extension HDF5Ext of Finch, use `Base.retry_load_extensions()` to retry. │ exception = │ 1-element ExceptionStack: │ The following 2 packages failed to precompile: │ │ SparseArraysExt │ Failed to precompile SparseArraysExt [9408d6b2-d5e1-5f3b-85be-6bbaf7a4cef0] to "/home/pkgeval/.julia/compiled/v1.14/SparseArraysExt/jl_oGo7KX" (ProcessExited(1)). │ │ HDF5Ext │ Failed to precompile HDF5Ext [a758ecf6-f631-530b-85bc-2cfb0f269268] to "/home/pkgeval/.julia/compiled/v1.14/HDF5Ext/jl_ZP8CvF" (ProcessExited(1)). 
h5 binsparse: Error During Test at /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/fileio_tests.jl:6
  Got exception outside of a @test
  Finch.FinchExtensionError("HDF5.jl must be loaded to write .bsp.h5 files (hint: `using HDF5`)")
  Stacktrace:
   [1] bspwrite_h5(::String, ::Vararg{Any}) @ Finch ~/.julia/packages/Finch/ucnKV/src/interface/fileio/binsparse.jl:247
   [2] bspwrite(fname::String, arr::Tensor{SparseCOOLevel{2, Tuple{Int64, Int64}, Vector{Int64}, Tuple{Vector{Int64}, Vector{Int64}}, ElementLevel{0.0, Float64, Int64, Vector{Float64}}}}, attrs::OrderedCollections.OrderedDict{Any, Any}) @ Finch ~/.julia/packages/Finch/ucnKV/src/interface/fileio/binsparse.jl:263
   [3] bspwrite(fname::String, arr::Tensor{SparseCOOLevel{2, Tuple{Int64, Int64}, Vector{Int64}, Tuple{Vector{Int64}, Vector{Int64}}, ElementLevel{0.0, Float64, Int64, Vector{Float64}}}}) @ Finch ~/.julia/packages/Finch/ucnKV/src/interface/fileio/binsparse.jl:262
   [4] fwrite(filename::String, tns::Tensor{SparseCOOLevel{2, Tuple{Int64, Int64}, Vector{Int64}, Tuple{Vector{Int64}, Vector{Int64}}, ElementLevel{0.0, Float64, Int64, Vector{Float64}}}}) @ Finch ~/.julia/packages/Finch/ucnKV/src/interface/fileio/fileio.jl:20
   [5] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/suites/fileio_tests.jl:7
   [6] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
   [7] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/suites/fileio_tests.jl:14 [inlined]
   [8] eval(m::Module, e::Any) @ Core ./boot.jl:489
   [9] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:136
   [10] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
   [11] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:137 [inlined]
   [12] include(mapexpr::Function, mod::Module, _path::String) @ Base ./Base.jl:310
   [13] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:159
   [14] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
   [15] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:166 [inlined]
   [16] include(mapexpr::Function, mod::Module, _path::String) @ Base ./Base.jl:310
   [17] top-level scope @ none:6
   [18] eval(m::Module, e::Any) @ Core ./boot.jl:489
   [19] exec_options(opts::Base.JLOptions) @ Base ./client.jl:310
   [20] _start() @ Base ./client.jl:577
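This fileio failure, and the .bspnpy and .ttx failures below, are all the same cascade: the writer dispatches to a package extension, the extension could not precompile under this session's cache flags, and Finch then raises its "must be loaded" hint even though the test environment does include HDF5, NPZ, and TensorMarket. For reference, the intended usage per the hint, sketched with a small hypothetical tensor (this would still fail in this particular run until HDF5Ext precompiles):

```julia
using HDF5   # must be loaded first so Finch's HDF5Ext extension activates
using Finch

A = Tensor(Dense(Element(0.0)), ones(2, 2))  # hypothetical small tensor
Finch.bspwrite(tempname() * ".bsp.h5", A)    # the writer seen in the stacktrace above
```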
ERROR: LoadError: Precompiled image Base.PkgId(Base.UUID("9177782c-1635-4eb9-9bfb-d9dfa25e6bce"), "Finch") not available with flags CacheFlags(; use_pkgimages=false, debug_level=1, check_bounds=0, inline=true, opt_level=0)
Stacktrace:
 [1] error(s::String) @ Base ./error.jl:44
 [2] __require_prelocked(pkg::Base.PkgId, env::String) @ Base ./loading.jl:2813
 [3] _require_prelocked(uuidkey::Base.PkgId, env::String) @ Base ./loading.jl:2665
 [4] macro expansion @ ./loading.jl:2593 [inlined]
 [5] macro expansion @ ./lock.jl:376 [inlined]
 [6] __require(into::Module, mod::Symbol) @ Base ./loading.jl:2557
 [7] require @ ./loading.jl:2533 [inlined]
 [8] eval_import_path @ ./module.jl:36 [inlined]
 [9] eval_import_path_all(at::Module, path::Expr, keyword::String) @ Base ./module.jl:60
 [10] _eval_using @ ./module.jl:137 [inlined]
 [11] _eval_using(to::Module, path::Expr) @ Base ./module.jl:137
 [12] top-level scope @ ~/.julia/packages/Finch/ucnKV/ext/SparseArraysExt.jl:3
 [13] include(mod::Module, _path::String) @ Base ./Base.jl:309
 [14] include_package_for_output(pkg::Base.PkgId, input::String, syntax_version::VersionNumber, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt128}}, source::Nothing) @ Base ./loading.jl:3250
 [15] top-level scope @ stdin:5
 [16] eval(m::Module, e::Any) @ Core ./boot.jl:489
 [17] include_string(mapexpr::typeof(identity), mod::Module, code::String, filename::String) @ Base ./loading.jl:3092
 [18] include_string @ ./loading.jl:3102 [inlined]
 [19] exec_options(opts::Base.JLOptions) @ Base ./client.jl:342
 [20] _start() @ Base ./client.jl:577
in expression starting at /home/pkgeval/.julia/packages/Finch/ucnKV/ext/SparseArraysExt.jl:1
in expression starting at stdin:5
ERROR: LoadError: Precompiled image Base.PkgId(Base.UUID("9177782c-1635-4eb9-9bfb-d9dfa25e6bce"), "Finch") not available with flags CacheFlags(; use_pkgimages=false, debug_level=1, check_bounds=0, inline=true, opt_level=0)
Stacktrace:
 [1] error(s::String) @ Base ./error.jl:44
 [2] __require_prelocked(pkg::Base.PkgId, env::String) @ Base ./loading.jl:2813
 [3] _require_prelocked(uuidkey::Base.PkgId, env::String) @ Base ./loading.jl:2665
 [4] macro expansion @ ./loading.jl:2593 [inlined]
 [5] macro expansion @ ./lock.jl:376 [inlined]
 [6] __require(into::Module, mod::Symbol) @ Base ./loading.jl:2557
 [7] require @ ./loading.jl:2533 [inlined]
 [8] eval_import_path @ ./module.jl:36 [inlined]
 [9] eval_import_path_all(at::Module, path::Expr, keyword::String) @ Base ./module.jl:60
 [10] _eval_using @ ./module.jl:137 [inlined]
 [11] _eval_using(to::Module, path::Expr) @ Base ./module.jl:137
 [12] top-level scope @ ~/.julia/packages/Finch/ucnKV/ext/NPZExt.jl:3
 [13] include(mod::Module, _path::String) @ Base ./Base.jl:309
 [14] include_package_for_output(pkg::Base.PkgId, input::String, syntax_version::VersionNumber, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt128}}, source::Nothing) @ Base ./loading.jl:3250
 [15] top-level scope @ stdin:5
 [16] eval(m::Module, e::Any) @ Core ./boot.jl:489
 [17] include_string(mapexpr::typeof(identity), mod::Module, code::String, filename::String) @ Base ./loading.jl:3092
 [18] include_string @ ./loading.jl:3102 [inlined]
 [19] exec_options(opts::Base.JLOptions) @ Base ./client.jl:342
 [20] _start() @ Base ./client.jl:577
in expression starting at /home/pkgeval/.julia/packages/Finch/ucnKV/ext/NPZExt.jl:1
in expression starting at stdin:5
2 dependencies had output during precompilation:
┌ Finch → SparseArraysExt
│  ERROR: LoadError: Precompiled image Base.PkgId(Base.UUID("9177782c-1635-4eb9-9bfb-d9dfa25e6bce"), "Finch") not available with flags CacheFlags(; use_pkgimages=false, debug_level=1, check_bounds=0, inline=true, opt_level=0)
│  Stacktrace:
│   [1] error(s::String) @ Base ./error.jl:44
│   [2] __require_prelocked(pkg::Base.PkgId, env::String) @ Base ./loading.jl:2813
│   [3] _require_prelocked(uuidkey::Base.PkgId, env::String) @ Base ./loading.jl:2665
│   [4] macro expansion @ ./loading.jl:2593 [inlined]
│   [5] macro expansion @ ./lock.jl:376 [inlined]
│   [6] __require(into::Module, mod::Symbol) @ Base ./loading.jl:2557
│   [7] require @ ./loading.jl:2533 [inlined]
│   [8] eval_import_path @ ./module.jl:36 [inlined]
│   [9] eval_import_path_all(at::Module, path::Expr, keyword::String) @ Base ./module.jl:60
│   [10] _eval_using @ ./module.jl:137 [inlined]
│   [11] _eval_using(to::Module, path::Expr) @ Base ./module.jl:137
│   [12] top-level scope @ ~/.julia/packages/Finch/ucnKV/ext/SparseArraysExt.jl:3
│   [13] include(mod::Module, _path::String) @ Base ./Base.jl:309
│   [14] include_package_for_output(pkg::Base.PkgId, input::String, syntax_version::VersionNumber, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt128}}, source::Nothing) @ Base ./loading.jl:3250
│   [15] top-level scope @ stdin:5
│   [16] eval(m::Module, e::Any) @ Core ./boot.jl:489
│   [17] include_string(mapexpr::typeof(identity), mod::Module, code::String, filename::String) @ Base ./loading.jl:3092
│   [18] include_string @ ./loading.jl:3102 [inlined]
│   [19] exec_options(opts::Base.JLOptions) @ Base ./client.jl:342
│   [20] _start() @ Base ./client.jl:577
│  in expression starting at /home/pkgeval/.julia/packages/Finch/ucnKV/ext/SparseArraysExt.jl:1
│  in expression starting at stdin:5
└
┌ Finch → NPZExt
│  [Output was shown above]
└
┌ Error: Error during loading of extension NPZExt of Finch, use `Base.retry_load_extensions()` to retry.
│   exception =
│    1-element ExceptionStack:
│    The following 2 packages failed to precompile:
│
│    SparseArraysExt
│    Failed to precompile SparseArraysExt [9408d6b2-d5e1-5f3b-85be-6bbaf7a4cef0] to "/home/pkgeval/.julia/compiled/v1.14/SparseArraysExt/jl_QCwsCN" (ProcessExited(1)).
│
│    NPZExt
│    Failed to precompile NPZExt [b1063bb2-796d-599f-9fbf-55925323154b] to "/home/pkgeval/.julia/compiled/v1.14/NPZExt/jl_YMQpE4" (ProcessExited(1)).
│
└ @ Base loading.jl:1721
npy binsparse: Error During Test at /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/fileio_tests.jl:71
  Got exception outside of a @test
  Finch.FinchExtensionError("NPZ.jl must be loaded to write .bspnpy files (hint: `using NPZ`)")
  Stacktrace:
   [1] bspwrite_bspnpy(::String, ::Vararg{Any}) @ Finch ~/.julia/packages/Finch/ucnKV/src/interface/fileio/binsparse.jl:254
   [2] bspwrite(fname::String, arr::Tensor{SparseCOOLevel{2, Tuple{Int64, Int64}, Vector{Int64}, Tuple{Vector{Int64}, Vector{Int64}}, ElementLevel{0.0, Float64, Int64, Vector{Float64}}}}, attrs::OrderedCollections.OrderedDict{Any, Any}) @ Finch ~/.julia/packages/Finch/ucnKV/src/interface/fileio/binsparse.jl:265
   [3] bspwrite(fname::String, arr::Tensor{SparseCOOLevel{2, Tuple{Int64, Int64}, Vector{Int64}, Tuple{Vector{Int64}, Vector{Int64}}, ElementLevel{0.0, Float64, Int64, Vector{Float64}}}}) @ Finch ~/.julia/packages/Finch/ucnKV/src/interface/fileio/binsparse.jl:262
   [4] fwrite(filename::String, tns::Tensor{SparseCOOLevel{2, Tuple{Int64, Int64}, Vector{Int64}, Tuple{Vector{Int64}, Vector{Int64}}, ElementLevel{0.0, Float64, Int64, Vector{Float64}}}}) @ Finch ~/.julia/packages/Finch/ucnKV/src/interface/fileio/fileio.jl:20
   [5] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/suites/fileio_tests.jl:72
   [6] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
   [7] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/suites/fileio_tests.jl:79 [inlined]
   [8] eval(m::Module, e::Any) @ Core ./boot.jl:489
   [9] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:136
   [10] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
   [11] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:137 [inlined]
   [12] include(mapexpr::Function, mod::Module, _path::String) @ Base ./Base.jl:310
   [13] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:159
   [14] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
   [15] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:166 [inlined]
   [16] include(mapexpr::Function, mod::Module, _path::String) @ Base ./Base.jl:310
   [17] top-level scope @ none:6
   [18] eval(m::Module, e::Any) @ Core ./boot.jl:489
   [19] exec_options(opts::Base.JLOptions) @ Base ./client.jl:310
   [20] _start() @ Base ./client.jl:577
ERROR: LoadError: Precompiled image Base.PkgId(Base.UUID("9177782c-1635-4eb9-9bfb-d9dfa25e6bce"), "Finch") not available with flags CacheFlags(; use_pkgimages=false, debug_level=1, check_bounds=0, inline=true, opt_level=0)
Stacktrace:
 [1] error(s::String) @ Base ./error.jl:44
 [2] __require_prelocked(pkg::Base.PkgId, env::String) @ Base ./loading.jl:2813
 [3] _require_prelocked(uuidkey::Base.PkgId, env::String) @ Base ./loading.jl:2665
 [4] macro expansion @ ./loading.jl:2593 [inlined]
 [5] macro expansion @ ./lock.jl:376 [inlined]
 [6] __require(into::Module, mod::Symbol) @ Base ./loading.jl:2557
 [7] require @ ./loading.jl:2533 [inlined]
 [8] eval_import_path @ ./module.jl:36 [inlined]
 [9] eval_import_path_all(at::Module, path::Expr, keyword::String) @ Base ./module.jl:60
 [10] _eval_using @ ./module.jl:137 [inlined]
 [11] _eval_using(to::Module, path::Expr) @ Base ./module.jl:137
 [12] top-level scope @ ~/.julia/packages/Finch/ucnKV/ext/SparseArraysExt.jl:3
 [13] include(mod::Module, _path::String) @ Base ./Base.jl:309
 [14] include_package_for_output(pkg::Base.PkgId, input::String, syntax_version::VersionNumber, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt128}}, source::Nothing) @ Base ./loading.jl:3250
 [15] top-level scope @ stdin:5
 [16] eval(m::Module, e::Any) @ Core ./boot.jl:489
 [17] include_string(mapexpr::typeof(identity), mod::Module, code::String, filename::String) @ Base ./loading.jl:3092
 [18] include_string @ ./loading.jl:3102 [inlined]
 [19] exec_options(opts::Base.JLOptions) @ Base ./client.jl:342
 [20] _start() @ Base ./client.jl:577
in expression starting at /home/pkgeval/.julia/packages/Finch/ucnKV/ext/SparseArraysExt.jl:1
in expression starting at stdin:5
ERROR: LoadError: Precompiled image Base.PkgId(Base.UUID("9177782c-1635-4eb9-9bfb-d9dfa25e6bce"), "Finch") not available with flags CacheFlags(; use_pkgimages=false, debug_level=1, check_bounds=0, inline=true, opt_level=0)
Stacktrace:
 [1] error(s::String) @ Base ./error.jl:44
 [2] __require_prelocked(pkg::Base.PkgId, env::String) @ Base ./loading.jl:2813
 [3] _require_prelocked(uuidkey::Base.PkgId, env::String) @ Base ./loading.jl:2665
 [4] macro expansion @ ./loading.jl:2593 [inlined]
 [5] macro expansion @ ./lock.jl:376 [inlined]
 [6] __require(into::Module, mod::Symbol) @ Base ./loading.jl:2557
 [7] require @ ./loading.jl:2533 [inlined]
 [8] eval_import_path @ ./module.jl:36 [inlined]
 [9] eval_import_path_all(at::Module, path::Expr, keyword::String) @ Base ./module.jl:60
 [10] _eval_using @ ./module.jl:137 [inlined]
 [11] _eval_using(to::Module, path::Expr) @ Base ./module.jl:137
 [12] top-level scope @ ~/.julia/packages/Finch/ucnKV/ext/TensorMarketExt.jl:3
 [13] include(mod::Module, _path::String) @ Base ./Base.jl:309
 [14] include_package_for_output(pkg::Base.PkgId, input::String, syntax_version::VersionNumber, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt128}}, source::Nothing) @ Base ./loading.jl:3250
 [15] top-level scope @ stdin:5
 [16] eval(m::Module, e::Any) @ Core ./boot.jl:489
2 dependencies had output during precompilation:
┌ Finch → SparseArraysExt
│ [Output was shown above]
└
┌ Finch → TensorMarketExt
│ [Output was shown above]
└
┌ Error: Error during loading of extension TensorMarketExt of Finch, use `Base.retry_load_extensions()` to retry.
│ exception =
│ 1-element ExceptionStack:
│ The following 2 packages failed to precompile:
│
│ SparseArraysExt
│ Failed to precompile SparseArraysExt [9408d6b2-d5e1-5f3b-85be-6bbaf7a4cef0] to "/home/pkgeval/.julia/compiled/v1.14/SparseArraysExt/jl_APKELi" (ProcessExited(1)).
│
│ TensorMarketExt
│ Failed to precompile TensorMarketExt [54040247-0909-5821-a256-61e2e9969419] to "/home/pkgeval/.julia/compiled/v1.14/TensorMarketExt/jl_IxmEvo" (ProcessExited(1)).
│
└ @ Base loading.jl:1721
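The loader's hint above is the recovery path once the underlying precompile failure is fixed: a failed extension can be re-attempted in the same session instead of restarting Julia. A sketch of that flow, assuming the cache problem has been resolved; the .ttx writer exercised by the fileio suite below depends on this extension, and the tensor contents are illustrative:

    using Finch
    using TensorMarket             # trigger package for Finch's TensorMarketExt
    Base.retry_load_extensions()   # re-attempts any extensions that failed to load
    fttwrite("A.ttx", Tensor(Dense(SparseList(Element(0.0))), [0.0 1.0; 2.0 0.0]))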
fileio: Error During Test at /home/pkgeval/.julia/packages/Finch/ucnKV/test/runtests.jl:135
Got exception outside of a @test
Finch.FinchExtensionError("TensorMarket.jl must be loaded to use write .ttx files (hint: `using TensorMarket`)")
Stacktrace:
 [1] fttwrite(::String, ::Vararg{Any}) @ Finch ~/.julia/packages/Finch/ucnKV/src/interface/fileio/tensormarket.jl:10
 [2] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/suites/fileio_tests.jl:139
 [3] eval(m::Module, e::Any) @ Core ./boot.jl:489
 [4] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:136
 [5] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
 [6] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:137 [inlined]
 [7] include(mapexpr::Function, mod::Module, _path::String) @ Base ./Base.jl:310
 [8] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:159
 [9] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
 [10] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:166 [inlined]
 [11] include(mapexpr::Function, mod::Module, _path::String) @ Base ./Base.jl:310
 [12] top-level scope @ none:6
 [13] eval(m::Module, e::Any) @ Core ./boot.jl:489
 [14] exec_options(opts::Base.JLOptions) @ Base ./client.jl:310
 [15] _start() @ Base ./client.jl:577
[ Info: Running test item: galley
┌ Warning: Conversion to OrderedDict is deprecated for unordered associative containers (in this case, Dict{Symbol, Int64}). Use an ordered or sorted associative type, such as SortedDict and OrderedDict.
│ caller = TensorDef(index_set::StableSet{Symbol}, dim_sizes::Dict{Symbol, Int64}, fill_val::Float64, level_formats::Nothing, index_order::Nothing, index_protocols::Nothing) at tensor-stats.jl:6
└ @ Core ~/.julia/packages/Finch/ucnKV/src/Galley/TensorStats/tensor-stats.jl:6
┌ Warning: Conversion to OrderedDict is deprecated for unordered associative containers (in this case, Dict{Symbol, Int64}). Use an ordered or sorted associative type, such as SortedDict and OrderedDict.
│ caller = DCStats(def::TensorDef, idx_2_int::Dict{Symbol, Int64}, int_2_idx::Dict{Int64, Symbol}, dcs::StableSet{Finch.Galley.DegreeConstraint}) at tensor-stats.jl:267
└ @ Core ~/.julia/packages/Finch/ucnKV/src/Galley/TensorStats/tensor-stats.jl:267
┌ Warning: Conversion to OrderedDict is deprecated for unordered associative containers (in this case, Dict{Int64, Symbol}). Use an ordered or sorted associative type, such as SortedDict and OrderedDict.
│ caller = DCStats(def::TensorDef, idx_2_int::Dict{Symbol, Int64}, int_2_idx::Dict{Int64, Symbol}, dcs::StableSet{Finch.Galley.DegreeConstraint}) at tensor-stats.jl:267
└ @ Core ~/.julia/packages/Finch/ucnKV/src/Galley/TensorStats/tensor-stats.jl:267
┌ Warning: Conversion to OrderedDict is deprecated for unordered associative containers (in this case, Dict{Symbol, Int64}). Use an ordered or sorted associative type, such as SortedDict and OrderedDict.
│ caller = TensorDef(index_set::StableSet{Symbol}, dim_sizes::Dict{Symbol, Int64}, fill_val::Int64, level_formats::Nothing, index_order::Nothing, index_protocols::Nothing) at tensor-stats.jl:6 └ @ Core ~/.julia/packages/Finch/ucnKV/src/Galley/TensorStats/tensor-stats.jl:6 [ Info: Running test item: index disagreement with reference output reference begin C_lvl = ((ex.bodies[1]).bodies[1]).tns.bind.lvl C_lvl_ptr = C_lvl.ptr C_lvl_idx = C_lvl.idx C_lvl_stop = C_lvl.shape C_lvl_2 = C_lvl.lvl C_lvl_2_val = C_lvl_2.val A_lvl = (((ex.bodies[1]).bodies[2]).body.rhs.args[1]).tns.bind.lvl A_lvl_ptr = A_lvl.ptr A_lvl_idx = A_lvl.idx A_lvl_stop = A_lvl.shape A_lvl_2 = A_lvl.lvl A_lvl_2_val = A_lvl_2.val B_lvl = (((ex.bodies[1]).bodies[2]).body.rhs.args[2]).tns.bind.lvl B_lvl_ptr = B_lvl.ptr B_lvl_idx = B_lvl.idx B_lvl_stop = B_lvl.shape B_lvl_2 = B_lvl.lvl B_lvl_2_val = B_lvl_2.val C_lvl_qos_stop = 0 Finch.resize_if_smaller!(C_lvl_ptr, 1 + 1) Finch.fill_range!(C_lvl_ptr, 0, 1 + 1, 1 + 1) C_lvl_qos = 0 + 1 0 < 1 || throw((Finch.FinchProtocolError)("SparseListLevels cannot be updated multiple times")) phase_stop = min(C_lvl_stop, 0) if phase_stop >= 1 for i_6 = 1:phase_stop if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = missing C_lvl_idx[C_lvl_qos] = i_6 C_lvl_qos += 1 end end phase_stop_2 = min(C_lvl_stop, 10, A_lvl_stop) if phase_stop_2 >= 1 A_lvl_q = A_lvl_ptr[1] A_lvl_q_stop = A_lvl_ptr[1 + 1] if A_lvl_q < A_lvl_q_stop A_lvl_i1 = A_lvl_idx[A_lvl_q_stop - 1] else A_lvl_i1 = 0 end phase_stop_3 = min(A_lvl_i1, phase_stop_2) if phase_stop_3 >= 1 if A_lvl_idx[A_lvl_q] < 1 A_lvl_q = Finch.scansearch(A_lvl_idx, 1, A_lvl_q, A_lvl_q_stop - 1) end while true A_lvl_i = A_lvl_idx[A_lvl_q] if A_lvl_i < phase_stop_3 A_lvl_2_val_2 = A_lvl_2_val[A_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = A_lvl_2_val_2 C_lvl_idx[C_lvl_qos] = A_lvl_i C_lvl_qos += 1 A_lvl_q += 1 else phase_stop_5 = min(phase_stop_3, A_lvl_i) if A_lvl_i == phase_stop_5 A_lvl_2_val_2 = A_lvl_2_val[A_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = A_lvl_2_val_2 C_lvl_idx[C_lvl_qos] = phase_stop_5 C_lvl_qos += 1 A_lvl_q += 1 end break end end end end phase_start_6 = max(1, 1 + A_lvl_stop) phase_stop_7 = min(C_lvl_stop, 10) if phase_stop_7 >= phase_start_6 for i_14 = phase_start_6:phase_stop_7 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = missing C_lvl_idx[C_lvl_qos] = i_14 C_lvl_qos += 1 end end phase_stop_8 = min(C_lvl_stop, 0, 10 + B_lvl_stop) if phase_stop_8 >= 11 B_lvl_q = B_lvl_ptr[1] B_lvl_q_stop = B_lvl_ptr[1 + 1] if B_lvl_q < B_lvl_q_stop B_lvl_i1 = B_lvl_idx[B_lvl_q_stop - 1] else B_lvl_i1 = 0 end phase_stop_9 = min(phase_stop_8, 10 + 
B_lvl_i1) if phase_stop_9 >= 11 if B_lvl_idx[B_lvl_q] < -10 + 11 B_lvl_q = Finch.scansearch(B_lvl_idx, -10 + 11, B_lvl_q, B_lvl_q_stop - 1) end while true B_lvl_i = B_lvl_idx[B_lvl_q] phase_stop_10 = 10 + B_lvl_i if phase_stop_10 < phase_stop_9 B_lvl_2_val_2 = B_lvl_2_val[B_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = B_lvl_2_val_2 C_lvl_idx[C_lvl_qos] = phase_stop_10 C_lvl_qos += 1 B_lvl_q += 1 else phase_stop_11 = min(phase_stop_9, 10 + B_lvl_i) if B_lvl_i == -10 + phase_stop_11 B_lvl_2_val_2 = B_lvl_2_val[B_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = B_lvl_2_val_2 C_lvl_idx[C_lvl_qos] = phase_stop_11 C_lvl_qos += 1 B_lvl_q += 1 end break end end end end phase_stop_13 = min(C_lvl_stop, A_lvl_stop, 10 + B_lvl_stop) if phase_stop_13 >= 11 A_lvl_q = A_lvl_ptr[1] A_lvl_q_stop = A_lvl_ptr[1 + 1] if A_lvl_q < A_lvl_q_stop A_lvl_i1 = A_lvl_idx[A_lvl_q_stop - 1] else A_lvl_i1 = 0 end B_lvl_q = B_lvl_ptr[1] B_lvl_q_stop = B_lvl_ptr[1 + 1] if B_lvl_q < B_lvl_q_stop B_lvl_i1 = B_lvl_idx[B_lvl_q_stop - 1] else B_lvl_i1 = 0 end phase_stop_14 = min(A_lvl_i1, 10 + B_lvl_i1, phase_stop_13) if phase_stop_14 >= 11 i = 11 if A_lvl_idx[A_lvl_q] < 11 A_lvl_q = Finch.scansearch(A_lvl_idx, 11, A_lvl_q, A_lvl_q_stop - 1) end if B_lvl_idx[B_lvl_q] < -10 + 11 B_lvl_q = Finch.scansearch(B_lvl_idx, -10 + 11, B_lvl_q, B_lvl_q_stop - 1) end while i <= phase_stop_14 A_lvl_i = A_lvl_idx[A_lvl_q] B_lvl_i = B_lvl_idx[B_lvl_q] phase_stop_15 = min(A_lvl_i, 10 + B_lvl_i, phase_stop_14) if A_lvl_i == phase_stop_15 && B_lvl_i == -10 + phase_stop_15 A_lvl_2_val_3 = A_lvl_2_val[A_lvl_q] B_lvl_2_val_3 = B_lvl_2_val[B_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = coalesce(A_lvl_2_val_3, B_lvl_2_val_3) C_lvl_idx[C_lvl_qos] = phase_stop_15 C_lvl_qos += 1 A_lvl_q += 1 B_lvl_q += 1 elseif B_lvl_i == -10 + phase_stop_15 B_lvl_q += 1 elseif A_lvl_i == phase_stop_15 A_lvl_2_val_3 = A_lvl_2_val[A_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = coalesce(A_lvl_2_val_3, 0.0) C_lvl_idx[C_lvl_qos] = phase_stop_15 C_lvl_qos += 1 A_lvl_q += 1 end i = phase_stop_15 + 1 end end phase_start_16 = max(11 + B_lvl_i1, 11) phase_stop_17 = min(A_lvl_i1, phase_stop_13) if phase_stop_17 >= phase_start_16 if A_lvl_idx[A_lvl_q] < phase_start_16 A_lvl_q = Finch.scansearch(A_lvl_idx, phase_start_16, A_lvl_q, A_lvl_q_stop - 1) end while true A_lvl_i = A_lvl_idx[A_lvl_q] if A_lvl_i < phase_stop_17 A_lvl_2_val_4 = A_lvl_2_val[A_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 
0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = coalesce(A_lvl_2_val_4, 0.0) C_lvl_idx[C_lvl_qos] = A_lvl_i C_lvl_qos += 1 A_lvl_q += 1 else phase_stop_19 = min(A_lvl_i, phase_stop_17) if A_lvl_i == phase_stop_19 A_lvl_2_val_4 = A_lvl_2_val[A_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = coalesce(A_lvl_2_val_4, 0.0) C_lvl_idx[C_lvl_qos] = phase_stop_19 C_lvl_qos += 1 A_lvl_q += 1 end break end end end end phase_start_19 = max(11, 1 + A_lvl_stop) phase_stop_21 = min(C_lvl_stop, 10 + B_lvl_stop) if phase_stop_21 >= phase_start_19 B_lvl_q = B_lvl_ptr[1] B_lvl_q_stop = B_lvl_ptr[1 + 1] if B_lvl_q < B_lvl_q_stop B_lvl_i1 = B_lvl_idx[B_lvl_q_stop - 1] else B_lvl_i1 = 0 end phase_stop_22 = min(10 + B_lvl_i1, phase_stop_21) if phase_stop_22 >= phase_start_19 if B_lvl_idx[B_lvl_q] < -10 + phase_start_19 B_lvl_q = Finch.scansearch(B_lvl_idx, -10 + phase_start_19, B_lvl_q, B_lvl_q_stop - 1) end while true B_lvl_i = B_lvl_idx[B_lvl_q] phase_stop_23 = 10 + B_lvl_i if phase_stop_23 < phase_stop_22 B_lvl_2_val_5 = B_lvl_2_val[B_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = B_lvl_2_val_5 C_lvl_idx[C_lvl_qos] = phase_stop_23 C_lvl_qos += 1 B_lvl_q += 1 else phase_stop_24 = min(10 + B_lvl_i, phase_stop_22) if B_lvl_i == -10 + phase_stop_24 B_lvl_2_val_5 = B_lvl_2_val[B_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = B_lvl_2_val_5 C_lvl_idx[C_lvl_qos] = phase_stop_24 C_lvl_qos += 1 B_lvl_q += 1 end break end end end end phase_start_24 = max(1, 11 + B_lvl_stop) phase_stop_26 = min(C_lvl_stop, 0) if phase_stop_26 >= phase_start_24 for i_39 = phase_start_24:phase_stop_26 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = missing C_lvl_idx[C_lvl_qos] = i_39 C_lvl_qos += 1 end end phase_start_25 = max(1, 11 + B_lvl_stop) phase_stop_27 = min(C_lvl_stop, A_lvl_stop) if phase_stop_27 >= phase_start_25 A_lvl_q = A_lvl_ptr[1] A_lvl_q_stop = A_lvl_ptr[1 + 1] if A_lvl_q < A_lvl_q_stop A_lvl_i1 = A_lvl_idx[A_lvl_q_stop - 1] else A_lvl_i1 = 0 end phase_stop_28 = min(A_lvl_i1, phase_stop_27) if phase_stop_28 >= phase_start_25 if A_lvl_idx[A_lvl_q] < phase_start_25 A_lvl_q = Finch.scansearch(A_lvl_idx, phase_start_25, A_lvl_q, A_lvl_q_stop - 1) end while true A_lvl_i = A_lvl_idx[A_lvl_q] if A_lvl_i < phase_stop_28 A_lvl_2_val_5 = A_lvl_2_val[A_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = A_lvl_2_val_5 C_lvl_idx[C_lvl_qos] = A_lvl_i C_lvl_qos += 1 A_lvl_q += 1 else 
phase_stop_30 = min(A_lvl_i, phase_stop_28) if A_lvl_i == phase_stop_30 A_lvl_2_val_5 = A_lvl_2_val[A_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = A_lvl_2_val_5 C_lvl_idx[C_lvl_qos] = phase_stop_30 C_lvl_qos += 1 A_lvl_q += 1 end break end end end end phase_start_29 = max(1, 1 + A_lvl_stop, 11 + B_lvl_stop) if C_lvl_stop >= phase_start_29 for i_47 = phase_start_29:C_lvl_stop if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = missing C_lvl_idx[C_lvl_qos] = i_47 C_lvl_qos += 1 end end C_lvl_ptr[1 + 1] += (C_lvl_qos - 0) - 1 resize!(C_lvl_ptr, 1 + 1) for p = 1:1 C_lvl_ptr[p + 1] += C_lvl_ptr[p] end qos_stop = C_lvl_ptr[1 + 1] - 1 resize!(C_lvl_idx, qos_stop) resize!(C_lvl_2_val, qos_stop) (C = Tensor((SparseListLevel){Int64}(ElementLevel{0.0, Float64, Int64}(C_lvl_2_val), C_lvl_stop, C_lvl_ptr, C_lvl_idx)),) end result begin C_lvl = ((ex.bodies[1]).bodies[1]).tns.bind.lvl C_lvl_ptr = C_lvl.ptr C_lvl_idx = C_lvl.idx C_lvl_stop = C_lvl.shape C_lvl_2 = C_lvl.lvl C_lvl_2_val = C_lvl_2.val A_lvl = (((ex.bodies[1]).bodies[2]).body.rhs.args[1]).tns.bind.lvl A_lvl_stop = A_lvl.shape A_lvl_2 = A_lvl.lvl A_lvl_2_val = A_lvl_2.val B_lvl = (((ex.bodies[1]).bodies[2]).body.rhs.args[2]).tns.bind.lvl B_lvl_stop = B_lvl.shape B_lvl_2 = B_lvl.lvl B_lvl_2_val = B_lvl_2.val C_lvl_qos_stop = 0 Finch.resize_if_smaller!(C_lvl_ptr, 1 + 1) Finch.fill_range!(C_lvl_ptr, 0, 1 + 1, 1 + 1) C_lvl_qos = 0 + 1 0 < 1 || throw((Finch.FinchProtocolError)("SparseListLevels cannot be updated multiple times")) phase_stop = min(C_lvl_stop, 0) if phase_stop >= 1 for i_6 = 1:phase_stop if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = missing C_lvl_idx[C_lvl_qos] = i_6 C_lvl_qos += 1 end end phase_stop_2 = min(C_lvl_stop, 10, A_lvl_stop) if phase_stop_2 >= 1 for i_8 = 1:phase_stop_2 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end A_lvl_q = (1 - 1) * A_lvl_stop + i_8 A_lvl_2_val_2 = A_lvl_2_val[A_lvl_q] C_lvl_2_val[C_lvl_qos] = A_lvl_2_val_2 C_lvl_idx[C_lvl_qos] = i_8 C_lvl_qos += 1 end end phase_start_3 = max(1, 1 + A_lvl_stop) phase_stop_3 = min(C_lvl_stop, 10) if phase_stop_3 >= phase_start_3 for i_10 = phase_start_3:phase_stop_3 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = missing C_lvl_idx[C_lvl_qos] = i_10 C_lvl_qos += 1 end end phase_stop_4 = min(C_lvl_stop, 0, 10 + B_lvl_stop) if phase_stop_4 >= 11 for i_12 = 11:phase_stop_4 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, 
C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end B_lvl_q = (1 - 1) * B_lvl_stop + (-10 + i_12) B_lvl_2_val_2 = B_lvl_2_val[B_lvl_q] C_lvl_2_val[C_lvl_qos] = B_lvl_2_val_2 C_lvl_idx[C_lvl_qos] = i_12 C_lvl_qos += 1 end end phase_stop_5 = min(C_lvl_stop, A_lvl_stop, 10 + B_lvl_stop) if phase_stop_5 >= 11 for i_14 = 11:phase_stop_5 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end A_lvl_q = (1 - 1) * A_lvl_stop + i_14 B_lvl_q = (1 - 1) * B_lvl_stop + (-10 + i_14) A_lvl_2_val_3 = A_lvl_2_val[A_lvl_q] B_lvl_2_val_3 = B_lvl_2_val[B_lvl_q] C_lvl_2_val[C_lvl_qos] = coalesce(A_lvl_2_val_3, B_lvl_2_val_3) C_lvl_idx[C_lvl_qos] = i_14 C_lvl_qos += 1 end end phase_start_6 = max(1 + A_lvl_stop, 11) phase_stop_6 = min(C_lvl_stop, 10 + B_lvl_stop) if phase_stop_6 >= phase_start_6 for i_16 = phase_start_6:phase_stop_6 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end B_lvl_q = (1 - 1) * B_lvl_stop + (-10 + i_16) B_lvl_2_val_4 = B_lvl_2_val[B_lvl_q] C_lvl_2_val[C_lvl_qos] = B_lvl_2_val_4 C_lvl_idx[C_lvl_qos] = i_16 C_lvl_qos += 1 end end phase_start_7 = max(1, 11 + B_lvl_stop) phase_stop_7 = min(C_lvl_stop, 0) if phase_stop_7 >= phase_start_7 for i_18 = phase_start_7:phase_stop_7 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = missing C_lvl_idx[C_lvl_qos] = i_18 C_lvl_qos += 1 end end phase_start_8 = max(1, 11 + B_lvl_stop) phase_stop_8 = min(C_lvl_stop, A_lvl_stop) if phase_stop_8 >= phase_start_8 for i_20 = phase_start_8:phase_stop_8 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end A_lvl_q = (1 - 1) * A_lvl_stop + i_20 A_lvl_2_val_4 = A_lvl_2_val[A_lvl_q] C_lvl_2_val[C_lvl_qos] = A_lvl_2_val_4 C_lvl_idx[C_lvl_qos] = i_20 C_lvl_qos += 1 end end phase_start_9 = max(1, 1 + A_lvl_stop, 11 + B_lvl_stop) if C_lvl_stop >= phase_start_9 for i_22 = phase_start_9:C_lvl_stop if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = missing C_lvl_idx[C_lvl_qos] = i_22 C_lvl_qos += 1 end end C_lvl_ptr[1 + 1] += (C_lvl_qos - 0) - 1 resize!(C_lvl_ptr, 1 + 1) for p = 1:1 C_lvl_ptr[p + 1] += C_lvl_ptr[p] end qos_stop = C_lvl_ptr[1 + 1] - 1 resize!(C_lvl_idx, qos_stop) resize!(C_lvl_2_val, qos_stop) (C = Tensor((SparseListLevel){Int64}(ElementLevel{0.0, Float64, Int64}(C_lvl_2_val), C_lvl_stop, C_lvl_ptr, C_lvl_idx)),) end index: Test Failed at /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:134 Expression: check_output("index/concat_offset_permit.jl", #= 
/home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:136 =# @finch_code(begin C .= 0 #= /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:137 =# for i = _ #= /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:138 =# C[i] = coalesce(A[~i], B[~(i - 10)]) #= /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:139 =# end end)) Evaluated: check_output("index/concat_offset_permit.jl", begin C_lvl = ((ex.bodies[1]).bodies[1]).tns.bind.lvl C_lvl_ptr = C_lvl.ptr C_lvl_idx = C_lvl.idx C_lvl_stop = C_lvl.shape C_lvl_2 = C_lvl.lvl C_lvl_2_val = C_lvl_2.val A_lvl = (((ex.bodies[1]).bodies[2]).body.rhs.args[1]).tns.bind.lvl A_lvl_stop = A_lvl.shape A_lvl_2 = A_lvl.lvl A_lvl_2_val = A_lvl_2.val B_lvl = (((ex.bodies[1]).bodies[2]).body.rhs.args[2]).tns.bind.lvl B_lvl_stop = B_lvl.shape B_lvl_2 = B_lvl.lvl B_lvl_2_val = B_lvl_2.val C_lvl_qos_stop = 0 Finch.resize_if_smaller!(C_lvl_ptr, 1 + 1) Finch.fill_range!(C_lvl_ptr, 0, 1 + 1, 1 + 1) C_lvl_qos = 0 + 1 0 < 1 || throw((Finch.FinchProtocolError)("SparseListLevels cannot be updated multiple times")) phase_stop = min(C_lvl_stop, 0) if phase_stop >= 1 for i_6 = 1:phase_stop if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = missing C_lvl_idx[C_lvl_qos] = i_6 C_lvl_qos += 1 end end phase_stop_2 = min(C_lvl_stop, 10, A_lvl_stop) if phase_stop_2 >= 1 for i_8 = 1:phase_stop_2 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end A_lvl_q = (1 - 1) * A_lvl_stop + i_8 A_lvl_2_val_2 = A_lvl_2_val[A_lvl_q] C_lvl_2_val[C_lvl_qos] = A_lvl_2_val_2 C_lvl_idx[C_lvl_qos] = i_8 C_lvl_qos += 1 end end phase_start_3 = max(1, 1 + A_lvl_stop) phase_stop_3 = min(C_lvl_stop, 10) if phase_stop_3 >= phase_start_3 for i_10 = phase_start_3:phase_stop_3 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = missing C_lvl_idx[C_lvl_qos] = i_10 C_lvl_qos += 1 end end phase_stop_4 = min(C_lvl_stop, 0, 10 + B_lvl_stop) if phase_stop_4 >= 11 for i_12 = 11:phase_stop_4 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end B_lvl_q = (1 - 1) * B_lvl_stop + (-10 + i_12) B_lvl_2_val_2 = B_lvl_2_val[B_lvl_q] C_lvl_2_val[C_lvl_qos] = B_lvl_2_val_2 C_lvl_idx[C_lvl_qos] = i_12 C_lvl_qos += 1 end end phase_stop_5 = min(C_lvl_stop, A_lvl_stop, 10 + B_lvl_stop) if phase_stop_5 >= 11 for i_14 = 11:phase_stop_5 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end A_lvl_q = (1 - 1) * A_lvl_stop + i_14 B_lvl_q = (1 - 1) * B_lvl_stop + (-10 + i_14) A_lvl_2_val_3 = A_lvl_2_val[A_lvl_q] B_lvl_2_val_3 = B_lvl_2_val[B_lvl_q] 
C_lvl_2_val[C_lvl_qos] = coalesce(A_lvl_2_val_3, B_lvl_2_val_3) C_lvl_idx[C_lvl_qos] = i_14 C_lvl_qos += 1 end end phase_start_6 = max(1 + A_lvl_stop, 11) phase_stop_6 = min(C_lvl_stop, 10 + B_lvl_stop) if phase_stop_6 >= phase_start_6 for i_16 = phase_start_6:phase_stop_6 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end B_lvl_q = (1 - 1) * B_lvl_stop + (-10 + i_16) B_lvl_2_val_4 = B_lvl_2_val[B_lvl_q] C_lvl_2_val[C_lvl_qos] = B_lvl_2_val_4 C_lvl_idx[C_lvl_qos] = i_16 C_lvl_qos += 1 end end phase_start_7 = max(1, 11 + B_lvl_stop) phase_stop_7 = min(C_lvl_stop, 0) if phase_stop_7 >= phase_start_7 for i_18 = phase_start_7:phase_stop_7 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = missing C_lvl_idx[C_lvl_qos] = i_18 C_lvl_qos += 1 end end phase_start_8 = max(1, 11 + B_lvl_stop) phase_stop_8 = min(C_lvl_stop, A_lvl_stop) if phase_stop_8 >= phase_start_8 for i_20 = phase_start_8:phase_stop_8 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end A_lvl_q = (1 - 1) * A_lvl_stop + i_20 A_lvl_2_val_4 = A_lvl_2_val[A_lvl_q] C_lvl_2_val[C_lvl_qos] = A_lvl_2_val_4 C_lvl_idx[C_lvl_qos] = i_20 C_lvl_qos += 1 end end phase_start_9 = max(1, 1 + A_lvl_stop, 11 + B_lvl_stop) if C_lvl_stop >= phase_start_9 for i_22 = phase_start_9:C_lvl_stop if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = missing C_lvl_idx[C_lvl_qos] = i_22 C_lvl_qos += 1 end end C_lvl_ptr[1 + 1] += (C_lvl_qos - 0) - 1 resize!(C_lvl_ptr, 1 + 1) for p = 1:1 C_lvl_ptr[p + 1] += C_lvl_ptr[p] end qos_stop = C_lvl_ptr[1 + 1] - 1 resize!(C_lvl_idx, qos_stop) resize!(C_lvl_2_val, qos_stop) (C = Tensor((SparseListLevel){Int64}(ElementLevel{0.0, Float64, Int64}(C_lvl_2_val), C_lvl_stop, C_lvl_ptr, C_lvl_idx)),) end) Stacktrace: [1] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:753 [inlined] [2] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:134 [3] eval(m::Module, e::Any) @ Core ./boot.jl:489 [4] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:136 [5] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined] [6] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:137 [inlined] disagreement with reference output reference begin C_lvl = ((ex.bodies[1]).bodies[1]).tns.bind.lvl C_lvl_ptr = C_lvl.ptr C_lvl_idx = C_lvl.idx C_lvl_2 = C_lvl.lvl C_lvl_2_val = C_lvl_2.val A_lvl = ((((ex.bodies[1]).bodies[2]).body.body.rhs.args[1]).args[1]).tns.bind.lvl A_lvl_ptr = A_lvl.ptr A_lvl_idx = A_lvl.idx A_lvl_stop = A_lvl.shape A_lvl_2 = A_lvl.lvl A_lvl_2_val = A_lvl_2.val F_lvl = (((ex.bodies[1]).bodies[2]).body.body.rhs.args[3]).tns.bind.lvl F_lvl_stop = F_lvl.shape F_lvl_2 = F_lvl.lvl F_lvl_2_val = F_lvl_2.val C_lvl_qos_stop = 0 
Finch.resize_if_smaller!(C_lvl_ptr, 1 + 1) Finch.fill_range!(C_lvl_ptr, 0, 1 + 1, 1 + 1) C_lvl_qos = 0 + 1 0 < 1 || throw((Finch.FinchProtocolError)("SparseListLevels cannot be updated multiple times")) A_lvl_q = A_lvl_ptr[1] A_lvl_q_stop = A_lvl_ptr[1 + 1] if A_lvl_q < A_lvl_q_stop A_lvl_i1 = A_lvl_idx[A_lvl_q_stop - 1] else A_lvl_i1 = 0 end phase_stop = min(A_lvl_i1, A_lvl_stop) if phase_stop >= 1 if A_lvl_idx[A_lvl_q] < 1 A_lvl_q = Finch.scansearch(A_lvl_idx, 1, A_lvl_q, A_lvl_q_stop - 1) end while true A_lvl_i = A_lvl_idx[A_lvl_q] if A_lvl_i < phase_stop A_lvl_2_val_2 = A_lvl_2_val[A_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvldirty = false v_3 = -A_lvl_i phase_start_3 = max(1, -v_3 + -2) phase_stop_4 = min(F_lvl_stop, A_lvl_stop + -v_3 + -3) if phase_stop_4 >= phase_start_3 A_lvl_q_2 = A_lvl_ptr[1] A_lvl_q_stop_2 = A_lvl_ptr[1 + 1] if A_lvl_q_2 < A_lvl_q_stop_2 A_lvl_i1_2 = A_lvl_idx[A_lvl_q_stop_2 - 1] else A_lvl_i1_2 = 0 end phase_stop_5 = min(phase_stop_4, -v_3 + -3 + A_lvl_i1_2) if phase_stop_5 >= phase_start_3 if A_lvl_idx[A_lvl_q_2] < 3 + v_3 + phase_start_3 A_lvl_q_2 = Finch.scansearch(A_lvl_idx, 3 + v_3 + phase_start_3, A_lvl_q_2, A_lvl_q_stop_2 - 1) end while true A_lvl_i_2 = A_lvl_idx[A_lvl_q_2] phase_stop_6 = -v_3 + -3 + A_lvl_i_2 if phase_stop_6 < phase_stop_5 A_lvl_2_val_3 = A_lvl_2_val[A_lvl_q_2] F_lvl_q = (1 - 1) * F_lvl_stop + phase_stop_6 F_lvl_2_val_2 = F_lvl_2_val[F_lvl_q] C_lvldirty = true C_lvl_2_val[C_lvl_qos] = (A_lvl_2_val_2 != 0) * F_lvl_2_val_2 * coalesce(A_lvl_2_val_3, 0) + C_lvl_2_val[C_lvl_qos] A_lvl_q_2 += 1 else phase_stop_7 = min(phase_stop_5, -v_3 + -3 + A_lvl_i_2) if A_lvl_i_2 == 3 + v_3 + phase_stop_7 A_lvl_2_val_3 = A_lvl_2_val[A_lvl_q_2] F_lvl_q = (1 - 1) * F_lvl_stop + phase_stop_7 F_lvl_2_val_3 = F_lvl_2_val[F_lvl_q] C_lvldirty = true C_lvl_2_val[C_lvl_qos] += (A_lvl_2_val_2 != 0) * F_lvl_2_val_3 * coalesce(A_lvl_2_val_3, 0) A_lvl_q_2 += 1 end break end end end end if C_lvldirty C_lvl_idx[C_lvl_qos] = A_lvl_i C_lvl_qos += 1 end A_lvl_q += 1 else phase_stop_10 = min(phase_stop, A_lvl_i) if A_lvl_i == phase_stop_10 A_lvl_2_val_2 = A_lvl_2_val[A_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvldirty = false v_5 = -phase_stop_10 phase_start_11 = max(1, -2 + -v_5) phase_stop_12 = min(F_lvl_stop, A_lvl_stop + -3 + -v_5) if phase_stop_12 >= phase_start_11 A_lvl_q_3 = A_lvl_ptr[1] A_lvl_q_stop_3 = A_lvl_ptr[1 + 1] if A_lvl_q_3 < A_lvl_q_stop_3 A_lvl_i1_3 = A_lvl_idx[A_lvl_q_stop_3 - 1] else A_lvl_i1_3 = 0 end phase_stop_13 = min(phase_stop_12, -3 + -v_5 + A_lvl_i1_3) if phase_stop_13 >= phase_start_11 if A_lvl_idx[A_lvl_q_3] < 3 + v_5 + phase_start_11 A_lvl_q_3 = Finch.scansearch(A_lvl_idx, 3 + v_5 + phase_start_11, A_lvl_q_3, A_lvl_q_stop_3 - 1) end while true A_lvl_i_3 = A_lvl_idx[A_lvl_q_3] phase_stop_14 = -3 + -v_5 + A_lvl_i_3 if phase_stop_14 < phase_stop_13 A_lvl_2_val_4 = A_lvl_2_val[A_lvl_q_3] F_lvl_q_2 = (1 - 1) * F_lvl_stop + phase_stop_14 F_lvl_2_val_4 = F_lvl_2_val[F_lvl_q_2] C_lvldirty = true C_lvl_2_val[C_lvl_qos] += (A_lvl_2_val_2 != 0) * F_lvl_2_val_4 * coalesce(A_lvl_2_val_4, 0) A_lvl_q_3 += 1 else phase_stop_15 = 
min(phase_stop_13, -3 + -v_5 + A_lvl_i_3) if A_lvl_i_3 == 3 + v_5 + phase_stop_15 A_lvl_2_val_4 = A_lvl_2_val[A_lvl_q_3] F_lvl_q_2 = (1 - 1) * F_lvl_stop + phase_stop_15 F_lvl_2_val_5 = F_lvl_2_val[F_lvl_q_2] C_lvldirty = true C_lvl_2_val[C_lvl_qos] += (A_lvl_2_val_2 != 0) * F_lvl_2_val_5 * coalesce(A_lvl_2_val_4, 0) A_lvl_q_3 += 1 end break end end end end if C_lvldirty C_lvl_idx[C_lvl_qos] = phase_stop_10 C_lvl_qos += 1 end A_lvl_q += 1 end break end end end C_lvl_ptr[1 + 1] += (C_lvl_qos - 0) - 1 resize!(C_lvl_ptr, 1 + 1) for p = 1:1 C_lvl_ptr[p + 1] += C_lvl_ptr[p] end qos_stop = C_lvl_ptr[1 + 1] - 1 resize!(C_lvl_idx, qos_stop) resize!(C_lvl_2_val, qos_stop) (C = Tensor((SparseListLevel){Int64}(ElementLevel{0.0, Float64, Int64}(C_lvl_2_val), A_lvl_stop, C_lvl_ptr, C_lvl_idx)),) end result begin C_lvl = ((ex.bodies[1]).bodies[1]).tns.bind.lvl C_lvl_ptr = C_lvl.ptr C_lvl_idx = C_lvl.idx C_lvl_2 = C_lvl.lvl C_lvl_2_val = C_lvl_2.val A_lvl = ((((ex.bodies[1]).bodies[2]).body.body.rhs.args[1]).args[1]).tns.bind.lvl A_lvl_stop = A_lvl.shape A_lvl_2 = A_lvl.lvl A_lvl_2_val = A_lvl_2.val F_lvl = (((ex.bodies[1]).bodies[2]).body.body.rhs.args[3]).tns.bind.lvl F_lvl_stop = F_lvl.shape F_lvl_2 = F_lvl.lvl F_lvl_2_val = F_lvl_2.val C_lvl_qos_stop = 0 Finch.resize_if_smaller!(C_lvl_ptr, 1 + 1) Finch.fill_range!(C_lvl_ptr, 0, 1 + 1, 1 + 1) C_lvl_qos = 0 + 1 0 < 1 || throw((Finch.FinchProtocolError)("SparseListLevels cannot be updated multiple times")) for i_5 = 1:A_lvl_stop if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvldirty = false A_lvl_q = (1 - 1) * A_lvl_stop + i_5 A_lvl_2_val_2 = A_lvl_2_val[A_lvl_q] v_2 = -i_5 phase_start_2 = max(1, -v_2 + -2) phase_stop_2 = min(F_lvl_stop, -v_2 + -3 + A_lvl_stop) if phase_stop_2 >= phase_start_2 for j_6 = phase_start_2:phase_stop_2 A_lvl_q_2 = (1 - 1) * A_lvl_stop + (v_2 + j_6 + 3) F_lvl_q = (1 - 1) * F_lvl_stop + j_6 A_lvl_2_val_3 = A_lvl_2_val[A_lvl_q_2] F_lvl_2_val_2 = F_lvl_2_val[F_lvl_q] C_lvldirty = true C_lvl_2_val[C_lvl_qos] = (A_lvl_2_val_2 != 0) * coalesce(A_lvl_2_val_3, 0) * F_lvl_2_val_2 + C_lvl_2_val[C_lvl_qos] end end if C_lvldirty C_lvl_idx[C_lvl_qos] = i_5 C_lvl_qos += 1 end end C_lvl_ptr[1 + 1] += (C_lvl_qos - 0) - 1 resize!(C_lvl_ptr, 1 + 1) for p = 1:1 C_lvl_ptr[p + 1] += C_lvl_ptr[p] end qos_stop = C_lvl_ptr[1 + 1] - 1 resize!(C_lvl_idx, qos_stop) resize!(C_lvl_2_val, qos_stop) (C = Tensor((SparseListLevel){Int64}(ElementLevel{0.0, Float64, Int64}(C_lvl_2_val), A_lvl_stop, C_lvl_ptr, C_lvl_idx)),) end index: Test Failed at /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:149 Expression: check_output("index/sparse_conv.jl", #= /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:151 =# @finch_code(begin C .= 0 #= /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:152 =# for i = _, j = _ #= /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:153 =# C[i] += (A[i] != 0) * coalesce(A[(j - i) + 3], 0) * F[j] #= /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:154 =# end end)) Evaluated: check_output("index/sparse_conv.jl", begin C_lvl = ((ex.bodies[1]).bodies[1]).tns.bind.lvl C_lvl_ptr = C_lvl.ptr C_lvl_idx = C_lvl.idx C_lvl_2 = C_lvl.lvl C_lvl_2_val = C_lvl_2.val A_lvl = ((((ex.bodies[1]).bodies[2]).body.body.rhs.args[1]).args[1]).tns.bind.lvl 
A_lvl_stop = A_lvl.shape A_lvl_2 = A_lvl.lvl A_lvl_2_val = A_lvl_2.val F_lvl = (((ex.bodies[1]).bodies[2]).body.body.rhs.args[3]).tns.bind.lvl F_lvl_stop = F_lvl.shape F_lvl_2 = F_lvl.lvl F_lvl_2_val = F_lvl_2.val C_lvl_qos_stop = 0 Finch.resize_if_smaller!(C_lvl_ptr, 1 + 1) Finch.fill_range!(C_lvl_ptr, 0, 1 + 1, 1 + 1) C_lvl_qos = 0 + 1 0 < 1 || throw((Finch.FinchProtocolError)("SparseListLevels cannot be updated multiple times")) for i_5 = 1:A_lvl_stop if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvldirty = false A_lvl_q = (1 - 1) * A_lvl_stop + i_5 A_lvl_2_val_2 = A_lvl_2_val[A_lvl_q] v_2 = -i_5 phase_start_2 = max(1, -v_2 + -2) phase_stop_2 = min(F_lvl_stop, -v_2 + -3 + A_lvl_stop) if phase_stop_2 >= phase_start_2 for j_6 = phase_start_2:phase_stop_2 A_lvl_q_2 = (1 - 1) * A_lvl_stop + (v_2 + j_6 + 3) F_lvl_q = (1 - 1) * F_lvl_stop + j_6 A_lvl_2_val_3 = A_lvl_2_val[A_lvl_q_2] F_lvl_2_val_2 = F_lvl_2_val[F_lvl_q] C_lvldirty = true C_lvl_2_val[C_lvl_qos] = (A_lvl_2_val_2 != 0) * coalesce(A_lvl_2_val_3, 0) * F_lvl_2_val_2 + C_lvl_2_val[C_lvl_qos] end end if C_lvldirty C_lvl_idx[C_lvl_qos] = i_5 C_lvl_qos += 1 end end C_lvl_ptr[1 + 1] += (C_lvl_qos - 0) - 1 resize!(C_lvl_ptr, 1 + 1) for p = 1:1 C_lvl_ptr[p + 1] += C_lvl_ptr[p] end qos_stop = C_lvl_ptr[1 + 1] - 1 resize!(C_lvl_idx, qos_stop) resize!(C_lvl_2_val, qos_stop) (C = Tensor((SparseListLevel){Int64}(ElementLevel{0.0, Float64, Int64}(C_lvl_2_val), A_lvl_stop, C_lvl_ptr, C_lvl_idx)),) end) Stacktrace: [1] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:753 [inlined] [2] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:149 [3] eval(m::Module, e::Any) @ Core ./boot.jl:489 [4] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:136 [5] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined] [6] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:137 [inlined] disagreement with reference output reference begin C_lvl = ((ex.bodies[1]).bodies[1]).tns.bind.lvl C_lvl_ptr = C_lvl.ptr C_lvl_idx = C_lvl.idx C_lvl_2 = C_lvl.lvl C_lvl_2_val = C_lvl_2.val A_lvl = ((ex.bodies[1]).bodies[2]).body.rhs.tns.bind.lvl A_lvl_ptr = A_lvl.ptr A_lvl_idx = A_lvl.idx A_lvl_2 = A_lvl.lvl A_lvl_2_val = A_lvl_2.val C_lvl_qos_stop = 0 Finch.resize_if_smaller!(C_lvl_ptr, 1 + 1) Finch.fill_range!(C_lvl_ptr, 0, 1 + 1, 1 + 1) C_lvl_qos = 0 + 1 0 < 1 || throw((Finch.FinchProtocolError)("SparseListLevels cannot be updated multiple times")) A_lvl_q = A_lvl_ptr[1] A_lvl_q_stop = A_lvl_ptr[1 + 1] if A_lvl_q < A_lvl_q_stop A_lvl_i1 = A_lvl_idx[A_lvl_q_stop - 1] else A_lvl_i1 = 0 end phase_stop = min(3, A_lvl_i1 + -1) if phase_stop >= 1 if A_lvl_idx[A_lvl_q] < 1 + 1 A_lvl_q = Finch.scansearch(A_lvl_idx, 1 + 1, A_lvl_q, A_lvl_q_stop - 1) end while true A_lvl_i = A_lvl_idx[A_lvl_q] phase_stop_2 = -1 + A_lvl_i if phase_stop_2 < phase_stop A_lvl_2_val_2 = A_lvl_2_val[A_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = A_lvl_2_val_2 C_lvl_idx[C_lvl_qos] = phase_stop_2 C_lvl_qos += 1 A_lvl_q += 1 else phase_stop_3 = min(phase_stop, -1 + A_lvl_i) if 
A_lvl_i == 1 + phase_stop_3 A_lvl_2_val_2 = A_lvl_2_val[A_lvl_q] if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end C_lvl_2_val[C_lvl_qos] = A_lvl_2_val_2 C_lvl_idx[C_lvl_qos] = phase_stop_3 C_lvl_qos += 1 A_lvl_q += 1 end break end end end C_lvl_ptr[1 + 1] += (C_lvl_qos - 0) - 1 resize!(C_lvl_ptr, 1 + 1) for p = 1:1 C_lvl_ptr[p + 1] += C_lvl_ptr[p] end qos_stop = C_lvl_ptr[1 + 1] - 1 resize!(C_lvl_idx, qos_stop) resize!(C_lvl_2_val, qos_stop) (C = Tensor((SparseListLevel){Int64}(ElementLevel{0.0, Float64, Int64}(C_lvl_2_val), 3, C_lvl_ptr, C_lvl_idx)),) end result begin C_lvl = ((ex.bodies[1]).bodies[1]).tns.bind.lvl C_lvl_ptr = C_lvl.ptr C_lvl_idx = C_lvl.idx C_lvl_2 = C_lvl.lvl C_lvl_2_val = C_lvl_2.val A_lvl = ((ex.bodies[1]).bodies[2]).body.rhs.tns.bind.lvl A_lvl_stop = A_lvl.shape A_lvl_2 = A_lvl.lvl A_lvl_2_val = A_lvl_2.val C_lvl_qos_stop = 0 Finch.resize_if_smaller!(C_lvl_ptr, 1 + 1) Finch.fill_range!(C_lvl_ptr, 0, 1 + 1, 1 + 1) C_lvl_qos = 0 + 1 0 < 1 || throw((Finch.FinchProtocolError)("SparseListLevels cannot be updated multiple times")) for i_4 = 1:3 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end A_lvl_q = (1 - 1) * A_lvl_stop + (1 + i_4) A_lvl_2_val_2 = A_lvl_2_val[A_lvl_q] C_lvl_2_val[C_lvl_qos] = A_lvl_2_val_2 C_lvl_idx[C_lvl_qos] = i_4 C_lvl_qos += 1 end C_lvl_ptr[1 + 1] += (C_lvl_qos - 0) - 1 resize!(C_lvl_ptr, 1 + 1) for p = 1:1 C_lvl_ptr[p + 1] += C_lvl_ptr[p] end qos_stop = C_lvl_ptr[1 + 1] - 1 resize!(C_lvl_idx, qos_stop) resize!(C_lvl_2_val, qos_stop) (C = Tensor((SparseListLevel){Int64}(ElementLevel{0.0, Float64, Int64}(C_lvl_2_val), 3, C_lvl_ptr, C_lvl_idx)),) end index: Test Failed at /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:173 Expression: check_output("index/sparse_window.jl", #= /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:174 =# @finch_code(begin C .= 0 #= /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:175 =# for i = _ #= /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:176 =# C[i] = A[(2:4)(i)] #= /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:177 =# end end)) Evaluated: check_output("index/sparse_window.jl", begin C_lvl = ((ex.bodies[1]).bodies[1]).tns.bind.lvl C_lvl_ptr = C_lvl.ptr C_lvl_idx = C_lvl.idx C_lvl_2 = C_lvl.lvl C_lvl_2_val = C_lvl_2.val A_lvl = ((ex.bodies[1]).bodies[2]).body.rhs.tns.bind.lvl A_lvl_stop = A_lvl.shape A_lvl_2 = A_lvl.lvl A_lvl_2_val = A_lvl_2.val C_lvl_qos_stop = 0 Finch.resize_if_smaller!(C_lvl_ptr, 1 + 1) Finch.fill_range!(C_lvl_ptr, 0, 1 + 1, 1 + 1) C_lvl_qos = 0 + 1 0 < 1 || throw((Finch.FinchProtocolError)("SparseListLevels cannot be updated multiple times")) for i_4 = 1:3 if C_lvl_qos > C_lvl_qos_stop C_lvl_qos_stop = max(C_lvl_qos_stop << 1, 1) Finch.resize_if_smaller!(C_lvl_idx, C_lvl_qos_stop) Finch.resize_if_smaller!(C_lvl_2_val, C_lvl_qos_stop) Finch.fill_range!(C_lvl_2_val, 0.0, C_lvl_qos, C_lvl_qos_stop) end A_lvl_q = (1 - 1) * A_lvl_stop + (1 + i_4) A_lvl_2_val_2 = A_lvl_2_val[A_lvl_q] C_lvl_2_val[C_lvl_qos] = A_lvl_2_val_2 C_lvl_idx[C_lvl_qos] = i_4 C_lvl_qos += 1 end C_lvl_ptr[1 + 1] += (C_lvl_qos - 0) - 1 
resize!(C_lvl_ptr, 1 + 1) for p = 1:1 C_lvl_ptr[p + 1] += C_lvl_ptr[p] end qos_stop = C_lvl_ptr[1 + 1] - 1 resize!(C_lvl_idx, qos_stop) resize!(C_lvl_2_val, qos_stop) (C = Tensor((SparseListLevel){Int64}(ElementLevel{0.0, Float64, Int64}(C_lvl_2_val), 3, C_lvl_ptr, C_lvl_idx)),) end)
Stacktrace:
 [1] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:753 [inlined]
 [2] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/suites/index_tests.jl:173
 [3] eval(m::Module, e::Any) @ Core ./boot.jl:489
 [4] top-level scope @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:136
 [5] macro expansion @ /opt/julia/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
 [6] macro expansion @ ~/.julia/packages/Finch/ucnKV/test/runtests.jl:137 [inlined]
[ Info: Running test item: interface_einsum
[ Info: Running test item: interface_asmd
┌ Warning: Performance Warning: non-concordant traversal of A[i_5, i_3, i_4] (hint: most arrays prefer column major or first index fast, run in fast mode to ignore this warning)
└ @ Finch ?:0
┌ Warning: Performance Warning: non-concordant traversal of A[i_5, i_3, i_4] (hint: most arrays prefer column major or first index fast, run in fast mode to ignore this warning)
└ @ Finch ?:0
======================================================================================
Information request received. A stacktrace will print followed by a 1.0 second profile.
--trace-compile is enabled during profile collection.
======================================================================================
cmd: /opt/julia/bin/julia 45 running 1 of 1
signal (10): User defined signal 1
_ZN4llvm22containsIrreducibleCFGIPKNS_10BasicBlockEKNS_25ReversePostOrderTraversalIPKNS_8FunctionENS_11GraphTraitsIS7_EEEEKNS_8LoopInfoENS8_IS3_EEEEbRT0_RKT1_ at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN4llvm28mayContainIrreducibleControlERKNS_8FunctionEPKNS_8LoopInfoE at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN12_GLOBAL__N_18DSEStateC2ERN4llvm8FunctionERNS1_9AAResultsERNS1_9MemorySSAERNS1_13DominatorTreeERNS1_17PostDominatorTreeERKNS1_17TargetLibraryInfoERKNS1_8LoopInfoE at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN12_GLOBAL__N_1L19eliminateDeadStoresERN4llvm8FunctionERNS0_9AAResultsERNS0_9MemorySSAERNS0_13DominatorTreeERNS0_17PostDominatorTreeERKNS0_17TargetLibraryInfoERKNS0_8LoopInfoE at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
_ZN4llvm7DSEPass3runERNS_8FunctionERNS_15AnalysisManagerIS1_JEEE at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
run at /source/usr/include/llvm/IR/PassManagerInternal.h:91
_ZN4llvm11PassManagerINS_8FunctionENS_15AnalysisManagerIS1_JEEEJEE3runERS1_RS3_ at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
run at /source/usr/include/llvm/IR/PassManagerInternal.h:91
_ZN4llvm27ModuleToFunctionPassAdaptor3runERNS_6ModuleERNS_15AnalysisManagerIS1_JEEE at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
run at /source/usr/include/llvm/IR/PassManagerInternal.h:91
_ZN4llvm11PassManagerINS_6ModuleENS_15AnalysisManagerIS1_JEEEJEE3runERS1_RS3_ at /opt/julia/bin/../lib/julia/libLLVM.so.20.1jl (unknown line)
run at /source/src/pipeline.cpp:787
operator() at /source/src/jitlayers.cpp:1516
withModuleDo<(anonymous namespace)::sizedOptimizerT::operator()(llvm::orc::ThreadSafeModule) [with long unsigned int N = 4]:: > at /source/usr/include/llvm/ExecutionEngine/Orc/ThreadSafeModule.h:136 [inlined]
operator() at /source/src/jitlayers.cpp:1477 [inlined]
operator() at /source/src/jitlayers.cpp:1652 [inlined]
addModule at /source/src/jitlayers.cpp:2110
jl_compile_codeinst_now at /source/src/jitlayers.cpp:685
jl_compile_codeinst_impl at /source/src/jitlayers.cpp:879
jl_compile_method_internal at /source/src/gf.c:3648
_jl_invoke at /source/src/gf.c:4108 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
collect at ./array.jl:833
unknown function (ip: 0x7bf7091f035e) at (unknown file)
_jl_invoke at /source/src/gf.c:4116 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
translate_rhs at /home/pkgeval/.julia/packages/Finch/ucnKV/src/Galley/ExecutionEngine/execution-engine.jl:159
#translate_rhs##8 at ./none (unknown line) [inlined]
iterate at ./generator.jl:48 [inlined]
collect at ./array.jl:828
unknown function (ip: 0x7bf7091f035e) at (unknown file)
_jl_invoke at /source/src/gf.c:4116 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
translate_rhs at /home/pkgeval/.julia/packages/Finch/ucnKV/src/Galley/ExecutionEngine/execution-engine.jl:159
unknown function (ip: 0x7bf7091ed0f8) at (unknown file)
_jl_invoke at /source/src/gf.c:4116 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
execute_query at /home/pkgeval/.julia/packages/Finch/ucnKV/src/Galley/ExecutionEngine/execution-engine.jl:196
get_execute_code at /home/pkgeval/.julia/packages/Finch/ucnKV/src/Galley/ExecutionEngine/execution-engine.jl:269
#galley#311 at /home/pkgeval/.julia/packages/Finch/ucnKV/src/Galley/Galley.jl:164
galley at /home/pkgeval/.julia/packages/Finch/ucnKV/src/Galley/Galley.jl:69
unknown function (ip: 0x7bf7091dbda1) at (unknown file)
_jl_invoke at /source/src/gf.c:4116 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
GalleyOptimizer at /home/pkgeval/.julia/packages/Finch/ucnKV/src/Galley/FinchCompat/executor.jl:21
unknown function (ip: 0x7bf7091b8f52) at (unknown file)
_jl_invoke at /source/src/gf.c:4116 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
#logic_executor_code##0 at /home/pkgeval/.julia/packages/Finch/ucnKV/src/scheduler/LogicExecutor.jl:54
#contain#17 at /home/pkgeval/.julia/packages/Finch/ucnKV/src/environment.jl:116
contain at /home/pkgeval/.julia/packages/Finch/ucnKV/src/environment.jl:111 [inlined]
logic_executor_code at /home/pkgeval/.julia/packages/Finch/ucnKV/src/scheduler/LogicExecutor.jl:51
AdaptiveExecutor at /home/pkgeval/.julia/packages/Finch/ucnKV/src/Galley/FinchCompat/executor.jl:147
compute_parse at /home/pkgeval/.julia/packages/Finch/ucnKV/src/interface/lazy.jl:721
#compute#1199 at /home/pkgeval/.julia/packages/Finch/ucnKV/src/interface/lazy.jl:710
unknown function (ip: 0x7bf6ff1aa7af) at (unknown file)
_jl_invoke at /source/src/gf.c:4116 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
compute at /home/pkgeval/.julia/packages/Finch/ucnKV/src/interface/lazy.jl:709
unknown function (ip: 0x7bf6ff317d42) at (unknown file)
_jl_invoke at /source/src/gf.c:4116 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
#std#1229 at /home/pkgeval/.julia/packages/Finch/ucnKV/src/interface/eager.jl:310
std at /home/pkgeval/.julia/packages/Finch/ucnKV/src/interface/eager.jl:307
unknown function (ip: 0x7bf6ff2a45a3) at (unknown file)
_jl_invoke at /source/src/gf.c:4116 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
macro expansion at /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/interface_tests.jl:608 [inlined]
macro expansion at /source/usr/share/julia/stdlib/v1.14/Test/src/Test.jl:1961 [inlined]
#42 at /home/pkgeval/.julia/packages/Finch/ucnKV/test/suites/interface_tests.jl:379
with_scheduler at /home/pkgeval/.julia/packages/Finch/ucnKV/src/interface/lazy.jl:692
unknown function (ip: 0x7bf6ff1a96cc) at (unknown file)
_jl_invoke at /source/src/gf.c:4116 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
jl_apply at /source/src/julia.h:2285 [inlined]
do_call at /source/src/interpreter.c:123
eval_value at /source/src/interpreter.c:243
eval_stmt_value at /source/src/interpreter.c:194 [inlined]
eval_body at /source/src/interpreter.c:679
jl_interpret_toplevel_thunk at /source/src/interpreter.c:884
jl_toplevel_eval_flex at /source/src/toplevel.c:757
ijl_toplevel_eval at /source/src/toplevel.c:769
ijl_toplevel_eval_in at /source/src/toplevel.c:814
eval at ./boot.jl:489
EvalInto at ./boot.jl:494
jfptr_EvalInto_43243.1 at /opt/julia/lib/julia/sys.so (unknown line)
_jl_invoke at /source/src/gf.c:4116 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
jl_apply at /source/src/julia.h:2285 [inlined]
do_call at /source/src/interpreter.c:123
eval_value at /source/src/interpreter.c:243
eval_stmt_value at /source/src/interpreter.c:194 [inlined]
eval_body at /source/src/interpreter.c:679
eval_body at /source/src/interpreter.c:550
eval_body at /source/src/interpreter.c:558
eval_body at /source/src/interpreter.c:558
eval_body at /source/src/interpreter.c:558
jl_interpret_toplevel_thunk at /source/src/interpreter.c:884
jl_toplevel_eval_flex at /source/src/toplevel.c:757
jl_eval_toplevel_stmts at /source/src/toplevel.c:600
jl_toplevel_eval_flex at /source/src/toplevel.c:698
ijl_toplevel_eval at /source/src/toplevel.c:769
ijl_toplevel_eval_in at /source/src/toplevel.c:814
eval at ./boot.jl:489
include_string at ./loading.jl:3092
_jl_invoke at /source/src/gf.c:4116 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
_include at ./loading.jl:3152
include at ./Base.jl:310
IncludeInto at ./Base.jl:311
jfptr_IncludeInto_41151.1 at /opt/julia/lib/julia/sys.so (unknown line)
_jl_invoke at /source/src/gf.c:4116 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
jl_apply at /source/src/julia.h:2285 [inlined]
do_call at /source/src/interpreter.c:123
eval_value at /source/src/interpreter.c:243
eval_stmt_value at /source/src/interpreter.c:194 [inlined]
eval_body at /source/src/interpreter.c:679
eval_body at /source/src/interpreter.c:550
eval_body at /source/src/interpreter.c:558
eval_body at /source/src/interpreter.c:558
eval_body at /source/src/interpreter.c:558
jl_interpret_toplevel_thunk at /source/src/interpreter.c:884
jl_toplevel_eval_flex at /source/src/toplevel.c:757
jl_eval_toplevel_stmts at /source/src/toplevel.c:600
jl_toplevel_eval_flex at /source/src/toplevel.c:698
ijl_toplevel_eval at /source/src/toplevel.c:769
ijl_toplevel_eval_in at /source/src/toplevel.c:814
eval at ./boot.jl:489
exec_options at ./client.jl:310
_start at ./client.jl:577
jfptr__start_62684.1 at /opt/julia/lib/julia/sys.so (unknown line)
_jl_invoke at /source/src/gf.c:4116 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
jl_apply at /source/src/julia.h:2285 [inlined]
true_main at /source/src/jlapi.c:971
jl_repl_entrypoint at /source/src/jlapi.c:1138
main at /source/cli/loader_exe.c:58
unknown function (ip: 0x7bf730047249) at /lib/x86_64-linux-gnu/libc.so.6
__libc_start_main at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
unknown function (ip: 0x4010b8) at /workspace/srcdir/glibc-2.17/csu/../sysdeps/x86_64/start.S
unknown function (ip: (nil)) at (unknown file)
==============================================================
Profile collected. A report will print at the next yield point. Disabling --trace-compile
==============================================================
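The banner and native stack dump above are not themselves a test failure: PkgEval sends SIGUSR1 to a worker that looks stuck, and on Linux Julia responds by printing a stacktrace and collecting a short profile (here the worker was inside LLVM's dead-store-elimination pass while compiling a Galley kernel). The same dump can be requested by hand; a sketch, reusing the PID 45 from the cmd: line above:

    # Run from a second Julia session (or `kill -USR1 45` from a shell);
    # SIGUSR1 makes the target process print a stacktrace, then profile briefly.
    run(`kill -USR1 45`)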
[ Info: Running test item: interface_issues
======================================================================================
Information request received. A stacktrace will print followed by a 1.0 second profile.
--trace-compile is enabled during profile collection.
======================================================================================
cmd: /opt/julia/bin/julia 1 running 0 of 1
signal (10): User defined signal 1
epoll_pwait at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
uv__io_poll at /workspace/srcdir/libuv/src/unix/linux.c:1404
uv_run at /workspace/srcdir/libuv/src/unix/core.c:430
ijl_task_get_next at /source/src/scheduler.c:457
wait at ./task.jl:1246
wait_forever at ./task.jl:1168
jfptr_wait_forever_55793.1 at /opt/julia/lib/julia/sys.so (unknown line)
_jl_invoke at /source/src/gf.c:4116 [inlined]
ijl_apply_generic at /source/src/gf.c:4313
jl_apply at /source/src/julia.h:2285 [inlined]
start_task at /source/src/task.c:1272
unknown function (ip: (nil)) at (unknown file)
==============================================================
Profile collected. A report will print at the next yield point. Disabling --trace-compile
==============================================================
┌ Warning: There were no samples collected in one or more groups.
│ This may be due to idle threads, or you may need to run your
│ program longer (perhaps by running it multiple times),
│ or adjust the delay between samples with `Profile.init()`.
└ @ Profile /opt/julia/share/julia/stdlib/v1.14/Profile/src/Profile.jl:1361
Overhead ╎ [+additional indent] Count File:Line Function
=========================================================
Thread 1 (default) Task 0x000073bf6675df00 Total snapshots: 425. Utilization: 0%
   ╎425 @Base/task.jl:1168 wait_forever()
424╎ 425 @Base/task.jl:1246 wait()
PkgEval terminated after 2751.57s: test duration exceeded the time limit
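The empty report above (all 425 snapshots parked in the scheduler's wait loop) matches the warning's diagnosis: the sampled thread was idle when the signal arrived. Outside PkgEval, the warning's own suggestion would look like the following sketch, where my_workload is a hypothetical stand-in for the code being measured:

    using Profile
    Profile.init(n = 10^7, delay = 0.001)   # bigger sample buffer, denser sampling
    @profile my_workload()                  # my_workload() is illustrative only
    Profile.print()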