Custom Exporters

You can define custom SQL-based exporters to export any data from PostgreSQL to Parquet/DuckLake without writing Java code. Custom exporters are configured in a YAML file and activated via a Spring profile.

Activation

Docker: Add the `custom-exporters` profile to `config/env`:

```
SPRING_PROFILES_ACTIVE=analytics,custom-exporters
```

Zip: Pass both profiles to the start script:

```shell
./bin/start.sh analytics,custom-exporters
```

Configuration

Edit `config/application-custom-exporters.yml` to define your custom exporters.

The `application-custom-exporters.yml` file shipped in the distribution is a sample configuration with examples. Modify it to define your own exporters.
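As a minimal sketch of the file's shape, exporters are listed under the `yaci.store.analytics.custom-exporters` key (the exporter name and query below are hypothetical placeholders, not part of the shipped sample):

```yaml
yaci:
  store:
    analytics:
      custom-exporters:
        # Each list entry defines one exporter; name and query are required.
        - name: my_table            # hypothetical target table name
          query: >-
            SELECT b.slot, b.epoch
            FROM {source}.block b
            WHERE b.slot >= {start_slot} AND b.slot < {end_slot}
```

The full examples below show the optional properties as well.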

Exporter Properties

| Property | Required | Description |
|---|---|---|
| `name` | Yes | Target table name in the analytics catalog |
| `query` | Yes | SQL template with placeholders (see below) |
| `partition-strategy` | No | `DAILY` (default) or `EPOCH` |
| `depends-on-adapot-job` | No | If `true`, waits for the AdaPot reward calculation to complete before exporting. Only meaningful with the `EPOCH` strategy. Default: `false` |

Query Placeholders

| Placeholder | Description |
|---|---|
| `{source}` | Resolves to the schema name (e.g., `mainnet`) |
| `{start_slot}` | Slot range start (inclusive) |
| `{end_slot}` | Slot range end (exclusive) |
| `{epoch}` | Epoch number (only available with `EPOCH` strategy) |
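To illustrate the substitution, here is a small template and the query it would resolve to for a hypothetical run with `{source}` = `mainnet` and the slot range `[1000, 2000)` (these values are made up for the example):

```sql
-- Template (placeholders are substituted by the exporter before execution):
SELECT b.slot, b.no_of_txs
FROM {source}.block b
WHERE b.slot >= {start_slot} AND b.slot < {end_slot};

-- Resolved query for source = mainnet, start_slot = 1000, end_slot = 2000:
SELECT b.slot, b.no_of_txs
FROM mainnet.block b
WHERE b.slot >= 1000 AND b.slot < 2000;
```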

Examples

Daily Exporter

```yaml
yaci:
  store:
    analytics:
      custom-exporters:
        - name: daily_tx_count
          query: >-
            SELECT b.number,
                   to_timestamp(COALESCE(b.block_time, 0)) as block_time,
                   b.slot, b.epoch, b.no_of_txs
            FROM {source}.block b
            WHERE b.slot >= {start_slot} AND b.slot < {end_slot}
            ORDER BY b.slot
```

Epoch Exporter

```yaml
yaci:
  store:
    analytics:
      custom-exporters:
        - name: epoch_pool_rewards
          partition-strategy: EPOCH
          depends-on-adapot-job: true
          query: >-
            SELECT r.address, r.earned_epoch AS epoch, r.spendable_epoch,
                   r.type, r.pool_id, r.amount, r.slot
            FROM {source}.reward r
            WHERE r.earned_epoch = {epoch}
            ORDER BY r.earned_epoch, r.address
```

When using the `EPOCH` partition strategy, the query must output a column named `epoch`. If the source column has a different name, alias it (e.g., `r.earned_epoch AS epoch`). Otherwise the DuckLake table will not be partitioned, and all Parquet files will be written flat without `epoch=N/` subdirectories.
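With the alias in place, the `epoch_pool_rewards` export would be laid out in one subdirectory per epoch, along these lines (epoch numbers and file names here are illustrative, not guaranteed):

```
epoch_pool_rewards/
├── epoch=500/
│   └── data_0.parquet
├── epoch=501/
│   └── data_0.parquet
└── epoch=502/
    └── data_0.parquet
```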
