Using CrossjoinSync

Ideal for BI developers, data engineers, and analytics teams who need a fast, repeatable way to sync source data into cloud destinations.

If you can answer “yes” to one or more of these, CrossjoinSync is a good fit:

How CrossjoinSync Works

The model is simple: you describe each extract once in metadata, and the CLI reads that metadata and executes the extracts in order.

Execution model (metadata-driven)

CrossjoinSync reads its “what to run” configuration from a SQL Server metadata database: which extracts exist and whether each is enabled, which job an extract belongs to, its run order, the source query to execute, the destination table to load, and the refresh mode (TRUNCATE or DELETE).

At runtime, the CLI:

  1. Loads enabled extracts (optionally filtered by job name or extract name)
  2. For each extract (in order), connects to the configured source and destination systems
  3. Refreshes the destination table (TRUNCATE or DELETE)
  4. Executes the source query and bulk loads the result set into the destination
  5. Logs progress and errors to console and log files
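The runtime steps above can be sketched as a small driver loop. This is an illustrative sketch only, not CrossjoinSync's actual code: the metadata field names and the source/destination interfaces are assumptions.

```python
# Illustrative sketch of the metadata-driven execution loop.
# Field names (enabled, run_order, refresh_mode, dest_table, source_query)
# and the source/dest interfaces are assumptions, not CrossjoinSync's real API.

def run_extracts(extracts, source, dest, log):
    """Run each enabled extract in order: refresh the destination, then bulk load."""
    for ext in sorted(extracts, key=lambda e: e["run_order"]):
        if not ext["enabled"]:
            continue  # step 1: only enabled extracts are run
        log(f"starting extract {ext['name']}")
        # Step 3: refresh the destination table (TRUNCATE or DELETE).
        if ext["refresh_mode"] == "TRUNCATE":
            dest.execute(f"TRUNCATE TABLE {ext['dest_table']}")
        else:
            dest.execute(f"DELETE FROM {ext['dest_table']}")
        # Step 4: execute the source query and bulk load the result set.
        rows = source.query(ext["source_query"])
        dest.bulk_load(ext["dest_table"], rows)
        # Step 5: log progress.
        log(f"finished extract {ext['name']}: {len(rows)} rows")
```

The loop mirrors steps 1 through 5: filter to enabled extracts, process them in order, refresh, load, and log.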

Help switches

CrossjoinSync follows common CLI conventions, so you can discover commands and options with the standard --help switch on the top-level command or on any subcommand.
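As a hypothetical illustration (the binary name crossjoinsync is an assumption; substitute the executable name from your install):

```shell
# Hypothetical invocations; the binary name is an assumption.
crossjoinsync --help          # list top-level commands and global options
crossjoinsync <command> --help  # show options for a specific command
```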

For the full command-line syntax, see the command-line page.

Example: run a job

A simple “execute job” command looks like this:
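The exact syntax is documented on the command-line page; the sketch below is a hypothetical illustration only, in which the binary name, the run command, and the --job and --extract flags are all assumptions:

```shell
# Hypothetical syntax; binary, command, and flag names are assumptions.
crossjoinsync run --job nightly_sales      # run all enabled extracts in one job
crossjoinsync run                          # run every enabled extract (no filter)
crossjoinsync run --extract dim_customer   # targeted refresh of a single extract
```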

You can also run all enabled extracts (no filter), or run a single extract by name when you need a targeted refresh.

CrossjoinSync Architecture

A metadata-driven engine that orchestrates repeatable EL (extract-load) jobs across Oracle, SQL Server, ODBC, and Snowflake sources and SQL Server and Snowflake destinations.

Components

Data Type Translation

When CrossjoinSync creates destination tables, it translates source column types into destination-native types so schemas are usable and consistent across platforms.

At a high level, type translation works like this: for each column in the source result set, the engine reads the source-native type, looks up the corresponding destination type in its mapping configuration, and applies that type when creating the destination table.

This makes cross-provider movement predictable while still allowing customization for your own standards.
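The lookup itself can be thought of as a dictionary keyed by source type, with a fallback default for unmapped types. This is an illustrative sketch under that assumption; the function name and the fallback behavior are not CrossjoinSync's documented API.

```python
# Illustrative sketch of a type-translation lookup.
# The function name, mapping shape, and VARCHAR fallback are assumptions.

def translate_type(source_type: str, mappings: dict, default: str = "VARCHAR") -> str:
    """Return the destination-native type for a source column type,
    falling back to a default when no mapping is defined."""
    return mappings.get(source_type.upper(), default)
```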

Translation behavior is driven by type-mappings.json, so you can tune defaults and target types without changing core workflow commands.
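The exact schema of type-mappings.json is not shown here; a hypothetical fragment mapping source-native types to destination types might look like this (the nesting and every type name below are illustrative assumptions):

```json
{
  "ORACLE": {
    "NUMBER":   "DECIMAL(38,10)",
    "VARCHAR2": "NVARCHAR",
    "DATE":     "DATETIME2"
  }
}
```

Editing entries like these lets you tune defaults and target types to your own standards without changing core workflow commands.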

More information is available on the type mapping page.