Copyright 2025, Altinity Inc. All Rights Reserved. All information contained herein is, and remains, the property of Altinity Inc. Any dissemination of this information or reproduction of this material is strictly forbidden unless prior written permission is obtained from Altinity Inc.
Date | Jun 05, 2025 17:40 |
Duration | 59m 20s |
Framework | TestFlows 2.0.250110.1002922 |
Test artifacts can be found at https://s3.amazonaws.com/altinity-build-artifacts/index.html#0/8c10ebeecd0aff961aaa123f740d37dfeed60d21/regression/aarch64/with_analyzer/zookeeper/without_thread_fuzzer/parquet/
project | Altinity/ClickHouse |
project.id | 159717931 |
package | https://s3.amazonaws.com/altinity-build-artifacts/PRs/825/8c10ebeecd0aff961aaa123f740d37dfeed60d21/package_aarch64/clickhouse-common-static_24.3.18.10425.altinitystable_arm64.deb |
version | 24.3.18.10425.altinitystable |
user.name | zvonand |
repository | https://github.com/Altinity/clickhouse-regression |
commit.hash | 0fdb555b36d0ea6a6affc5cf87e593b5d8944c0a |
job.name | Parquet |
job.retry | 1 |
job.url | https://github.com/Altinity/ClickHouse/actions/runs/15470038451 |
arch | aarch64 |
local | True |
clickhouse_version | None |
clickhouse_path | https://s3.amazonaws.com/altinity-build-artifacts/PRs/825/8c10ebeecd0aff961aaa123f740d37dfeed60d21/package_aarch64/clickhouse-common-static_24.3.18.10425.altinitystable_arm64.deb |
as_binary | False |
base_os | None |
keeper_path | None |
zookeeper_version | None |
use_keeper | False |
stress | False |
collect_service_logs | True |
thread_fuzzer | False |
with_analyzer | True |
reuse_env | False |
storages | None |
minio_uri | Secret(name='minio_uri') |
minio_root_user | Secret(name='minio_root_user') |
minio_root_password | Secret(name='minio_root_password') |
aws_s3_bucket | None |
aws_s3_region | Secret(name='aws_s3_region') |
aws_s3_key_id | Secret(name='aws_s3_key_id') |
aws_s3_access_key | Secret(name='aws_s3_access_key') |
gcs_uri | None |
gcs_key_id | None |
gcs_key_secret | None |
azure_account_name | None |
azure_storage_key | None |
azure_container | None |
native_parquet_reader | False |
stress_bloom | False |
Units | Skip | OK | Fail | Error | XFail
---|---|---|---|---|---
Modules | | | | |
Suites | | | | |
Features | | | | |
Scenarios | | | | |
Checks | | | | |
Examples | | | | |
Steps | | | | |
Test Name | Result | Message |
---|---|---|
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine | XFail 41s 812ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 826, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 896, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__GZIP__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 896 in 'execute_query' 888\| with values() as that: 889\| snapshot_result = snapshot( 890\| "\n" + r.output.strip() + "\n", 891\| id=snapshot_id, 892\| name=snapshot_name, 893\| encoder=str, 894\| mode=snapshot.CHECK, 895\| ) 896\|> assert that(snapshot_result), error() |
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine | XFail 44s 185ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 826, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 896, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__NONE__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 896 in 'execute_query' 888\| with values() as that: 889\| snapshot_result = snapshot( 890\| "\n" + r.output.strip() + "\n", 891\| id=snapshot_id, 892\| name=snapshot_name, 893\| encoder=str, 894\| mode=snapshot.CHECK, 895\| ) 896\|> assert that(snapshot_result), error() |
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine | XFail 44s 425ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 826, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 896, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__LZ4__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 896 in 'execute_query' 888\| with values() as that: 889\| snapshot_result = snapshot( 890\| "\n" + r.output.strip() + "\n", 891\| id=snapshot_id, 892\| name=snapshot_name, 893\| encoder=str, 894\| mode=snapshot.CHECK, 895\| ) 896\|> assert that(snapshot_result), error() |
/parquet/chunked array | XFail 16s 122ms Not supported | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/chunked_array.py", line 30, in feature node.query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1188, in query assert False, error(r.output) ^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert False, error(r.output) Description Error on processing query: Code: 33. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/chunked_array_test_file.parquet): While executing ParquetBlockInputFormat: While executing File: data for INSERT was parsed from file. (CANNOT_READ_ALL_DATA) (version 24.3.18.10425.altinitystable (altinity build)) (query: INSERT INTO table_9ff2b09f_4238_11f0_877a_9600045a8824 FROM INFILE '/var/lib/clickhouse/user_files/chunked_array_test_file.parquet' FORMAT Parquet) Assertion values assert False, error(r.output) ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1188 in 'query' 1180\| assert message in r.output, error(r.output) 1181\| 1182\| if not ignore_exception: 1183\| if message is None or "Exception:" not in message: 1184\| with Then("check if output has exception") if steps else NullStep(): 1185\| if "Exception:" in r.output: 1186\| if raise_on_exception: 1187\| raise QueryRuntimeException(r.output) 1188\|> assert False, error(r.output) 1189\| 1190\| return r 1191\| |
/parquet/datatypes/large string map | XFail 7s 609ms Will fail until the, https://github.com/apache/arrow/pull/35825, gets merged. | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature scenario() File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 801, in large_string_map import_export(snapshot_name="large_string_map_structure", import_file=import_file) File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/outline.py", line 36, in import_export node.query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1188, in query assert False, error(r.output) ^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert False, error(r.output) Description Received exception from server (version 24.3.18): Code: 33. DB::Exception: Received from localhost:9000. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/arrow/large_string_map.brotli.parquet): While executing ParquetBlockInputFormat: While executing File. (CANNOT_READ_ALL_DATA) (query: CREATE TABLE table_b531aed4_4239_11f0_86e5_9600045a8824 ENGINE = MergeTree ORDER BY tuple() AS SELECT * FROM file('arrow/large_string_map.brotli.parquet', Parquet) LIMIT 100 FORMAT TabSeparated ) Assertion values assert False, error(r.output) ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1188 in 'query' 1180\| assert message in r.output, error(r.output) 1181\| 1182\| if not ignore_exception: 1183\| if message is None or "Exception:" not in message: 1184\| with Then("check if output has exception") if steps else NullStep(): 1185\| if "Exception:" in r.output: 1186\| if raise_on_exception: 1187\| raise QueryRuntimeException(r.output) 1188\|> assert False, error(r.output) 1189\| 1190\| return r 1191\| |
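The three PostgreSQL XFail entries above share the same snapshot mismatch: the stored snapshot expects the datetime value `0` to render as `2106-02-07 06:28:16`, while the current runs return `1970-01-01 01:00:00` (epoch 0 in the server's UTC+1 display timezone). The stale value is the rendering of epoch second 2^32 in UTC, one second past the ceiling of ClickHouse's 32-bit `DateTime`, which is consistent with the old capture having recorded a wrapped value rather than epoch 0. The sketch below only verifies that boundary arithmetic with the Python standard library; it is illustrative, not part of the test suite, and assumes nothing about the test harness.

```python
# Illustrative only: boundary arithmetic for ClickHouse's 32-bit DateTime type,
# checked with the Python standard library (not part of the regression suite).
from datetime import datetime, timezone

UINT32_MAX = 2**32 - 1  # DateTime stores seconds since the epoch as UInt32

# Upper bound of the DateTime range (matches the first snapshot row, in UTC).
print(datetime.fromtimestamp(UINT32_MAX, tz=timezone.utc))      # 2106-02-07 06:28:15+00:00

# One second past the bound, i.e. epoch 2**32 -- where value 0 would land after
# an unsigned 32-bit wraparound. This is the stale snapshot value.
print(datetime.fromtimestamp(UINT32_MAX + 1, tz=timezone.utc))  # 2106-02-07 06:28:16+00:00

# Epoch 0 itself; rendered in a UTC+1 server timezone it reads 1970-01-01 01:00:00,
# the value the current runs produce.
print(datetime.fromtimestamp(0, tz=timezone.utc))               # 1970-01-01 00:00:00+00:00
```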
Test Name | Result | Duration |
---|---|---|
/parquet | OK | 59m 20s |
/parquet/file | OK | 36m 37s |
/parquet/file/engine | OK | 36m 37s |
/parquet/file/engine/insert into engine | OK | 23m 4s |
/parquet/file/function | OK | 18m 24s |
/parquet/file/engine/select from engine | OK | 11m 2s |
/parquet/file/engine/engine to file to engine | OK | 31m 45s |
/parquet/file/engine/insert into engine from file | OK | 22m 17s |
/parquet/file/function/insert into function manual cast types | OK | 18m 2s |
/parquet/file/engine/engine select output to file | OK | 36m 37s |
/parquet/file/function/insert into function auto cast types | OK | 18m 24s |
/parquet/query | OK | 48m 20s |
/parquet/file/function/select from function manual cast types | OK | 16m 56s |
/parquet/file/function/select from function auto cast types | OK | 11m 2s |
/parquet/query/compression type | OK | 48m 20s |
/parquet/list in multiple chunks | OK | 1m 10s |
/parquet/url | OK | 37m 38s |
/parquet/query/compression type/=NONE | OK | 48m 20s |
/parquet/query/compression type/=GZIP | OK | 48m 20s |
/parquet/query/compression type/=LZ4 | OK | 48m 20s |
/parquet/query/compression type/=NONE /insert into memory table from file | OK | 10m 38s |
/parquet/query/compression type/=GZIP /insert into memory table from file | OK | 10m 37s |
/parquet/query/compression type/=LZ4 /insert into memory table from file | OK | 10m 39s |
/parquet/url/engine | OK | 36m 34s |
/parquet/url/function | OK | 33m 39s |
/parquet/url/engine/insert into engine | OK | 23m 16s |
/parquet/url/engine/select from engine | OK | 10m 52s |
/parquet/url/function/insert into function | OK | 17m 43s |
/parquet/url/engine/engine to file to engine | OK | 31m 37s |
/parquet/url/function/select from function manual cast types | OK | 33m 39s |
/parquet/url/engine/insert into engine from file | OK | 30m 24s |
/parquet/url/function/select from function auto cast types | OK | 17m 25s |
/parquet/url/engine/engine select output to file | OK | 36m 34s |
/parquet/mysql | OK | 2m 2s |
/parquet/mysql/compression type | OK | 2m 2s |
/parquet/mysql/compression type/=NONE | OK | 2m 0s |
/parquet/mysql/compression type/=NONE /mysql engine to parquet file to mysql engine | OK | 1m 28s |
/parquet/mysql/compression type/=GZIP | OK | 1m 59s |
/parquet/mysql/compression type/=GZIP /mysql engine to parquet file to mysql engine | OK | 1m 22s |
/parquet/mysql/compression type/=LZ4 | OK | 2m 2s |
/parquet/mysql/compression type/=LZ4 /mysql engine to parquet file to mysql engine | OK | 1m 29s |
/parquet/mysql/compression type/=GZIP /mysql function to parquet file to mysql function | OK | 36s 983ms |
/parquet/mysql/compression type/=NONE /mysql function to parquet file to mysql function | OK | 31s 946ms |
/parquet/mysql/compression type/=LZ4 /mysql function to parquet file to mysql function | OK | 32s 273ms |
/parquet/postgresql | OK | 1m 21s |
/parquet/postgresql/compression type | OK | 1m 21s |
/parquet/postgresql/compression type/=NONE | OK | 1m 21s |
/parquet/postgresql/compression type/=GZIP | OK | 1m 17s |
/parquet/postgresql/compression type/=LZ4 | OK | 1m 20s |
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine | XFail | 41s 812ms |
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine | XFail | 44s 185ms |
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine | XFail | 44s 425ms |
/parquet/postgresql/compression type/=GZIP /postgresql function to parquet file to postgresql function | OK | 35s 450ms |
/parquet/postgresql/compression type/=NONE /postgresql function to parquet file to postgresql function | OK | 36s 949ms |
/parquet/postgresql/compression type/=LZ4 /postgresql function to parquet file to postgresql function | OK | 36s 266ms |
/parquet/remote | OK | 22m 47s |
/parquet/remote/compression type | OK | 22m 47s |
/parquet/remote/compression type/=NONE | OK | 22m 46s |
/parquet/remote/compression type/=GZIP | OK | 22m 47s |
/parquet/remote/compression type/=LZ4 | OK | 22m 45s |
/parquet/remote/compression type/=LZ4 /outline | OK | 22m 45s |
/parquet/remote/compression type/=NONE /outline | OK | 22m 46s |
/parquet/remote/compression type/=GZIP /outline | OK | 22m 46s |
/parquet/remote/compression type/=LZ4 /outline/insert into function | OK | 8m 4s |
/parquet/remote/compression type/=NONE /outline/insert into function | OK | 8m 4s |
/parquet/remote/compression type/=GZIP /outline/insert into function | OK | 8m 4s |
/parquet/query/compression type/=GZIP /insert into mergetree table from file | OK | 6m 29s |
/parquet/query/compression type/=NONE /insert into mergetree table from file | OK | 6m 28s |
/parquet/query/compression type/=LZ4 /insert into mergetree table from file | OK | 6m 27s |
/parquet/remote/compression type/=LZ4 /outline/select from function | OK | 14m 41s |
/parquet/remote/compression type/=GZIP /outline/select from function | OK | 14m 42s |
/parquet/remote/compression type/=NONE /outline/select from function | OK | 14m 42s |
/parquet/query/compression type/=LZ4 /insert into replicated mergetree table from file | OK | 4m 46s |
/parquet/query/compression type/=NONE /insert into replicated mergetree table from file | OK | 4m 45s |
/parquet/query/compression type/=GZIP /insert into replicated mergetree table from file | OK | 4m 45s |
/parquet/query/compression type/=NONE /insert into distributed table from file | OK | 4m 5s |
/parquet/query/compression type/=GZIP /insert into distributed table from file | OK | 4m 5s |
/parquet/query/compression type/=LZ4 /insert into distributed table from file | OK | 4m 4s |
/parquet/query/compression type/=LZ4 /select from memory table into file | OK | 6m 37s |
/parquet/query/compression type/=GZIP /select from memory table into file | OK | 6m 38s |
/parquet/query/compression type/=NONE /select from memory table into file | OK | 6m 38s |
/parquet/chunked array | XFail | 16s 122ms |
/parquet/broken | OK | 593ms |
/parquet/broken/file | Skip | 20ms |
/parquet/broken/read broken bigint | Skip | 29ms |
/parquet/broken/read broken date | Skip | 39ms |
/parquet/broken/read broken int | Skip | 18ms |
/parquet/broken/read broken smallint | Skip | 42ms |
/parquet/broken/read broken timestamp ms | Skip | 20ms |
/parquet/broken/read broken timestamp us | Skip | 25ms |
/parquet/broken/read broken tinyint | Skip | 26ms |
/parquet/broken/read broken ubigint | Skip | 22ms |
/parquet/broken/read broken uint | Skip | 9ms |
/parquet/broken/read broken usmallint | Skip | 17ms |
/parquet/broken/read broken utinyint | Skip | 47ms |
/parquet/broken/string | Skip | 21ms |
/parquet/encoding | OK | 1m 2s |
/parquet/encoding/deltabytearray1 | OK | 9s 154ms |
/parquet/encoding/deltabytearray2 | OK | 8s 163ms |
/parquet/encoding/deltalengthbytearray | OK | 8s 210ms |
/parquet/encoding/dictionary | OK | 9s 37ms |
/parquet/encoding/plain | OK | 8s 338ms |
/parquet/encoding/plainrlesnappy | OK | 11s 586ms |
/parquet/encoding/rleboolean | OK | 7s 845ms |
/parquet/compression | OK | 2m 45s |
/parquet/compression/arrow snappy | OK | 7s 771ms |
/parquet/compression/brotli | OK | 7s 932ms |
/parquet/compression/gzippages | OK | 15s 792ms |
/parquet/compression/largegzip | OK | 8s 507ms |
/parquet/compression/lz4 hadoop | OK | 7s 965ms |
/parquet/compression/lz4 hadoop large | OK | 7s 496ms |
/parquet/compression/lz4 non hadoop | OK | 7s 721ms |
/parquet/compression/lz4 raw | OK | 7s 972ms |
/parquet/compression/lz4 raw large | OK | 7s 935ms |
/parquet/compression/lz4pages | OK | 15s 630ms |
/parquet/compression/nonepages | OK | 16s 26ms |
/parquet/compression/snappypages | OK | 15s 912ms |
/parquet/compression/snappyplain | OK | 7s 801ms |
/parquet/compression/snappyrle | OK | 7s 753ms |
/parquet/compression/zstd | OK | 7s 642ms |
/parquet/compression/zstdpages | OK | 15s 773ms |
/parquet/datatypes | OK | 7m 28s |
/parquet/datatypes/arrowtimestamp | OK | 7s 578ms |
/parquet/datatypes/arrowtimestampms | OK | 7s 609ms |
/parquet/datatypes/binary | OK | 8s 613ms |
/parquet/datatypes/binary string | OK | 7s 742ms |
/parquet/datatypes/blob | OK | 7s 835ms |
/parquet/datatypes/boolean | OK | 10s 481ms |
/parquet/datatypes/byte array | OK | 10s 525ms |
/parquet/datatypes/columnname | OK | 7s 768ms |
/parquet/datatypes/columnwithnull | OK | 7s 101ms |
/parquet/query/compression type/=LZ4 /select from mergetree table into file | OK | 4m 12s |
/parquet/query/compression type/=GZIP /select from mergetree table into file | OK | 4m 12s |
/parquet/query/compression type/=NONE /select from mergetree table into file | OK | 4m 12s |
/parquet/datatypes/columnwithnull2 | OK | 10s 131ms |
/parquet/datatypes/date | OK | 13s 989ms |
/parquet/datatypes/decimal with filter | OK | 8s 103ms |
/parquet/datatypes/decimalvariousfilters | OK | 7s 525ms |
/parquet/datatypes/decimalwithfilter2 | OK | 7s 411ms |
/parquet/datatypes/enum | OK | 7s 704ms |
/parquet/datatypes/enum2 | OK | 7s 614ms |
/parquet/datatypes/fixed length decimal | OK | 7s 592ms |
/parquet/datatypes/fixed length decimal legacy | OK | 7s 352ms |
/parquet/datatypes/fixedstring | OK | 7s 120ms |
/parquet/datatypes/float16 | Skip | 2ms |
/parquet/datatypes/h2oai | OK | 7s 431ms |
/parquet/datatypes/hive | OK | 14s 748ms |
/parquet/datatypes/int32 | OK | 7s 230ms |
/parquet/datatypes/int32 decimal | OK | 7s 123ms |
/parquet/datatypes/int64 | OK | 7s 609ms |
/parquet/datatypes/int64 decimal | OK | 8s 212ms |
/parquet/datatypes/json | OK | 7s 616ms |
/parquet/datatypes/large string map | XFail | 7s 609ms |
/parquet/datatypes/largedouble | OK | 7s 541ms |
/parquet/datatypes/manydatatypes | OK | 7s 363ms |
/parquet/datatypes/manydatatypes2 | OK | 7s 254ms |
/parquet/datatypes/maps | OK | 7s 477ms |
/parquet/datatypes/nameswithemoji | OK | 7s 461ms |
/parquet/datatypes/nandouble | OK | 7s 428ms |
/parquet/datatypes/negativeint64 | OK | 8s 279ms |
/parquet/datatypes/nullbyte | OK | 7s 242ms |
/parquet/datatypes/nullbytemultiple | OK | 7s 296ms |
/parquet/datatypes/nullsinid | OK | 7s 288ms |
/parquet/datatypes/pandasdecimal | OK | 7s 368ms |
/parquet/datatypes/pandasdecimaldate | OK | 8s 939ms |
/parquet/complex | OK | 2m 35s |
/parquet/complex/arraystring | OK | 9s 563ms |
/parquet/datatypes/parquetgo | OK | 7s 681ms |
/parquet/complex/big tuple with nulls | OK | 7s 451ms |
/parquet/query/compression type/=LZ4 /select from replicated mergetree table into file | OK | 3m 18s |
/parquet/query/compression type/=NONE /select from replicated mergetree table into file | OK | 3m 17s |
/parquet/query/compression type/=GZIP /select from replicated mergetree table into file | OK | 3m 17s |
/parquet/datatypes/selectdatewithfilter | OK | 17s 718ms |
/parquet/complex/bytearraydictionary | OK | 15s 723ms |
/parquet/datatypes/singlenull | OK | 10s 57ms |
/parquet/complex/complex null | OK | 9s 989ms |
/parquet/complex/lagemap | OK | 6s 841ms |
/parquet/datatypes/sparkv21 | OK | 6s 959ms |
/parquet/complex/largenestedarray | OK | 6s 798ms |
/parquet/datatypes/sparkv22 | OK | 6s 762ms |
/parquet/complex/largestruct | OK | 7s 542ms |
/parquet/datatypes/statdecimal | OK | 7s 505ms |
/parquet/cache | OK | 15s 884ms |
/parquet/cache/cache1 | OK | 8s 872ms |
/parquet/complex/largestruct2 | OK | 7s 182ms |
/parquet/datatypes/string | OK | 7s 61ms |
/parquet/cache/cache2 | OK | 7s 2ms |
/parquet/datatypes/string int list inconsistent offset multiple batches | OK | 10s 697ms |
/parquet/complex/largestruct3 | OK | 7s 108ms |
/parquet/glob | OK | 39s 649ms |
/parquet/glob/fastparquet globs | OK | 1s 526ms |
/parquet/glob/glob1 | OK | 1s 806ms |
/parquet/complex/list | OK | 7s 58ms |
/parquet/glob/glob2 | OK | 2s 9ms |
/parquet/datatypes/stringtypes | OK | 6s 914ms |
/parquet/glob/glob with multiple elements | OK | 375ms |
/parquet/glob/million extensions | OK | 33s 895ms |
/parquet/complex/nested array | OK | 6s 894ms |
/parquet/datatypes/struct | OK | 6s 915ms |
/parquet/complex/nested map | OK | 7s 8ms |
/parquet/datatypes/supporteduuid | OK | 6s 894ms |
/parquet/complex/nestedallcomplex | OK | 7s 77ms |
/parquet/datatypes/timestamp1 | OK | 6s 684ms |
/parquet/complex/nestedarray2 | OK | 7s 15ms |
/parquet/datatypes/timestamp2 | OK | 6s 856ms |
/parquet/complex/nestedstruct | OK | 6s 894ms |
/parquet/datatypes/timezone | OK | 7s 32ms |
/parquet/rowgroups | OK | 14s 27ms |
/parquet/rowgroups/manyrowgroups | OK | 6s 873ms |
/parquet/complex/nestedstruct2 | OK | 6s 942ms |
/parquet/rowgroups/manyrowgroups2 | OK | 7s 152ms |
/parquet/datatypes/unsigned | OK | 14s 147ms |
/parquet/complex/nestedstruct3 | OK | 7s 62ms |
/parquet/encrypted | Skip | 2ms |
/parquet/fastparquet | OK | 20ms |
/parquet/fastparquet/airlines | Skip | 6ms |
/parquet/fastparquet/baz | Skip | 1ms |
/parquet/fastparquet/empty date | Skip | 6ms |
/parquet/fastparquet/evo | Skip | 1ms |
/parquet/fastparquet/fastparquet | Skip | 1ms |
/parquet/read and write | OK | 15m 4s |
/parquet/read and write/read and write parquet file | OK | 15m 4s |
/parquet/complex/nestedstruct4 | OK | 7s 262ms |
/parquet/datatypes/unsupportednull | OK | 282ms |
/parquet/column related errors | OK | 2s 169ms |
/parquet/column related errors/check error with 500 columns | OK | 2s 168ms |
/parquet/multi chunk upload | Skip | 1ms |
/parquet/complex/tupleofnulls | OK | 7s 177ms |
/parquet/complex/tuplewithdatetime | OK | 6s 911ms |
/parquet/query/compression type/=NONE /select from distributed table into file | OK | 3m 39s |
/parquet/query/compression type/=GZIP /select from distributed table into file | OK | 3m 39s |
/parquet/query/compression type/=LZ4 /select from distributed table into file | OK | 3m 37s |
/parquet/query/compression type/=LZ4 /select from mat view into file | OK | 3m 2s |
/parquet/query/compression type/=NONE /select from mat view into file | OK | 3m 2s |
/parquet/query/compression type/=GZIP /select from mat view into file | OK | 3m 2s |
/parquet/query/compression type/=LZ4 /insert into table with projection from file | OK | 1m 32s |
/parquet/query/compression type/=NONE /insert into table with projection from file | OK | 1m 32s |
/parquet/query/compression type/=GZIP /insert into table with projection from file | OK | 1m 32s |
Generated by TestFlows Open-Source Test Framework v2.0.250110.1002922