Troubleshooting & Best Practices

Common errors

AST is too big (code 168)

Error message:

code: 168, message: AST is too big. Maximum: 50000

or

code: 62, message: Max query size exceeded

Cause: Large UPDATE or DELETE batches generate SQL statements with very complex abstract syntax trees. This is common with wide tables or when history mode is enabled.

Solution:

Lower mutation_batch_size and hard_delete_batch_size in the advanced configuration file. Both default to 1500 and accept values between 200 and 1500.
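As a sketch, the two settings might be lowered together so that mutations and hard deletes shrink proportionally. The file format and exact key names below are assumptions; verify them against the advanced-configuration documentation for your connector version:

```json
{
  "mutation_batch_size": 500,
  "hard_delete_batch_size": 500
}
```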


Memory limit exceeded / OOM (code 241)

Error message:

code: 241, message: (total) memory limit exceeded: would use 14.01 GiB

Cause: The INSERT operation requires more memory than is available. This usually happens during large initial syncs, with wide tables, or with concurrent batch operations.

Solutions:

  1. Reduce write_batch_size: Try lowering it to 50,000 for large tables.
  2. Reduce database load: Check whether the ClickHouse Cloud service is overloaded by concurrent queries or background merges, and reschedule heavy workloads if so.
  3. Scale up the ClickHouse Cloud service to provide more memory.
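To confirm which statements are hitting the limit, you can query system.query_log for recent out-of-memory failures. This is a diagnostic sketch to run on the ClickHouse service; adjust the time window and filters to your setup:

```sql
-- Recent queries that failed with code 241 (memory limit exceeded),
-- with their peak memory usage
SELECT
    event_time,
    formatReadableSize(memory_usage) AS peak_memory,
    exception_code,
    substring(query, 1, 120) AS query_preview
FROM system.query_log
WHERE type = 'ExceptionWhileProcessing'
  AND exception_code = 241
  AND event_time > now() - INTERVAL 1 DAY
ORDER BY event_time DESC
LIMIT 20;
```

If the failing queries are all large INSERTs from the connector, lowering write_batch_size is usually the first lever to pull.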

Unexpected EOF / Connection error

Error message:

ClickHouse connection error: unexpected EOF

Or a FAILURE_WITH_TASK entry with no stack trace in the Fivetran logs.

Cause:

  • IP access list not configured to allow Fivetran traffic.
  • Transient network issues between Fivetran and ClickHouse Cloud.
  • Corrupted or invalid source data causing the destination connector to crash.

Solutions:

  1. Check IP access list: In ClickHouse Cloud, go to Settings > Security and add the Fivetran IP addresses or allow access from anywhere.
  2. Retry: Recent connector versions automatically retry EOF errors. Sporadic errors (1–2 per day) are likely transient.
  3. If the issue persists: Open a support ticket with ClickHouse providing the error time window. Also ask Fivetran support to investigate source data quality.

Can't map type UInt64

Error message:

cause: can't map type UInt64 to Fivetran types

Cause: The connector maps LONG to Int64, never UInt64. This error occurs when a column type is manually altered in a Fivetran-managed table.

Solutions:

  1. Do not manually modify column types in Fivetran-managed tables.
  2. To recover: Alter the column back to the expected type (e.g., Int64) or delete and re-sync the table.
  3. For custom types: Create a materialized view on top of the Fivetran-managed table.
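To recover from an accidental type change, the column can be altered back to the type the connector expects. The table and column names below are hypothetical placeholders:

```sql
-- Restore a manually altered column to the type Fivetran created (e.g. Int64).
-- Note: on large tables this rewrites data and may take a while.
ALTER TABLE my_schema.my_table MODIFY COLUMN my_column Int64;
```

If the table's data is no longer consistent with the source, deleting the table and letting Fivetran re-sync it is the safer option.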

No primary keys for table

Error message:

Failed to alter table ... cause: no primary keys for table

Cause: Every ClickHouse table requires an ORDER BY. When the source has no primary key, Fivetran adds _fivetran_id automatically. This error occurs in edge cases where the source defines a PK but the data does not contain it.

Solutions:

  1. Contact Fivetran support to investigate the source pipeline.
  2. Check the source schema: Ensure primary key columns are present in the data.

Role-based grants failing

Error message:

user is missing the required grants on *.*: ALTER, CREATE DATABASE, CREATE TABLE, INSERT, SELECT

Cause: The connector checks grants with:

SELECT access_type, database, table, column FROM system.grants WHERE user_name = 'my_user'

This only returns direct grants. Privileges assigned via a ClickHouse role have user_name = NULL and role_name = 'my_role', so they are invisible to this check.

Solution:

Grant privileges directly to the Fivetran user:

GRANT CURRENT GRANTS ON *.* TO fivetran_user;
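To verify why the check fails, you can compare direct grants against role-based grants in system.grants. Substitute your own user and role names:

```sql
-- Direct grants: these are what the connector's check sees
SELECT access_type, database, table
FROM system.grants
WHERE user_name = 'fivetran_user';

-- Role-based grants: user_name is NULL here, so the check misses them
SELECT access_type, database, table, role_name
FROM system.grants
WHERE role_name IS NOT NULL;
```

If the required privileges only appear in the second result set, grant them directly to the user as shown above.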

Best practices

Dedicated ClickHouse service for Fivetran

Under high ingestion load, consider using ClickHouse Cloud's compute-compute separation to create a dedicated service for Fivetran write workloads. This isolates ingestion from analytical queries and prevents resource contention.

For example, the following architecture can be used:

  • Service A (writer): Fivetran destination + other ingestion tools (ClickPipes, Kafka connectors)
  • Service B (reader): BI tools, dashboards, ad-hoc queries

Duplicate records with ReplacingMergeTree

ClickHouse uses SharedReplacingMergeTree for Fivetran destination tables. Duplicate rows with the same primary key are normal — deduplication happens asynchronously during background merges.

Always use the FINAL modifier to get deduplicated results:

SELECT * FROM schema.table FINAL WHERE ...

See the table-structure reference for more details.
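If you run many ad-hoc queries, recent ClickHouse versions also provide a query-level `final` setting that applies FINAL to every eligible table in the query, which can be less error-prone than remembering the modifier. Check that your server version supports it before relying on it:

```sql
-- Equivalent to writing FINAL after each ReplacingMergeTree table
SELECT count() FROM schema.table SETTINGS final = 1;
```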

Primary key and ORDER BY optimization

Fivetran replicates the source table's primary key as the ClickHouse ORDER BY clause. When the source has no PK, _fivetran_id (a UUID) becomes the sorting key, which can lead to poor query performance because ClickHouse builds its sparse primary index from the ORDER BY columns.

Recommendations:

  1. Treat Fivetran tables as raw staging tables. Do not query them directly for analytics.
  2. Create materialized views with an ORDER BY optimized for your query patterns:
    CREATE MATERIALIZED VIEW schema.table_optimized
    ENGINE = ReplacingMergeTree()
    ORDER BY (user_id, event_date)
    AS SELECT * FROM schema.table_raw;
    

Don't manually modify Fivetran-managed tables

Avoid manual DDL changes (e.g., ALTER TABLE ... MODIFY COLUMN) to tables managed by Fivetran. The connector expects the schema it created. Manual changes can cause type mapping errors and schema mismatch failures.

Use materialized views for custom transformations.

Ensure cluster health during syncs

The Fivetran destination checks that all replicas are active before performing operations. If any replica is offline, operations fail after retrying for up to 600 seconds.

  • Keep all replicas running during sync windows.
  • Schedule syncs when all nodes are available if you scale down during off-hours.
  • Monitor the ClickHouse Cloud console for service issues.
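On self-managed replicated deployments, replica health can be checked directly from SQL. Note that ClickHouse Cloud services using SharedMergeTree may not populate system.replicas in the same way; there, rely on the Cloud console instead:

```sql
-- Replicas that are read-only or lagging behind
SELECT database, table, is_readonly, absolute_delay, active_replicas, total_replicas
FROM system.replicas
WHERE is_readonly OR absolute_delay > 0;
```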

Debugging operations

When diagnosing failures:

  • Check the ClickHouse system.query_log for server-side issues.
  • Ask Fivetran support for help with client-side issues.

For connector bugs, create a GitHub issue or contact ClickHouse Support.

Debugging Fivetran syncs

Use the following queries to diagnose sync failures on the ClickHouse side.

Check recent ClickHouse errors related to Fivetran

SELECT event_time, query, exception_code, exception
FROM system.query_log
WHERE client_name LIKE 'fivetran-destination%'
  AND exception_code > 0
ORDER BY event_time DESC
LIMIT 50;

Check recent Fivetran user activity

SELECT event_time, query_kind, query, exception_code, exception
FROM system.query_log
WHERE user = '{fivetran_user}'
ORDER BY event_time DESC
LIMIT 100;