Using CrossjoinSync
Ideal for BI developers, data engineers, and analytics teams who need a fast, repeatable way to sync source data into cloud destinations.
If you can answer “yes” to one or more of these, CrossjoinSync is a good fit:
- You can run a command-line tool (locally, on a server, or in a scheduled task)
- You already know the SQL query you want to land in your warehouse
- You want a predictable, metadata-driven execution model rather than custom scripts per extract
- You want a simple path to uplift on-premises data into cloud-ready destinations (for example Azure SQL)
How CrossjoinSync Works
It is simple:
- Define your source query
- Create the destination table using the CrossjoinSync `create-destination` command
- Transfer the data from your query using the CrossjoinSync `extract` command
Execution model (metadata-driven)
CrossjoinSync reads its “what to run” configuration from a SQL Server metadata database:
- `dbo.Connection` defines connection codes, providers, and connection strings
- `dbo.Extract` defines jobs and extracts: source connection, destination connection, destination table, extract SQL, ordering, and whether to truncate or delete before loading
- `dbo.ParameterValue`
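As an illustration, rows in these tables might look like the following Python sketch; the column names here are assumptions derived from the descriptions above, not the actual schema:

```python
# Illustrative metadata rows (a sketch; real column names in
# dbo.Connection and dbo.Extract may differ from these assumptions).
connection_row = {
    "ConnectionCode": "SRC_ORACLE",          # code referenced by extracts
    "Provider": "oracle",                    # provider/driver family
    "ConnectionString": "Data Source=...",   # provider-specific string
}

extract_row = {
    "JobName": "DailySync",                  # groups related extracts
    "ExtractName": "Customers",              # identifies one extract
    "SourceConnection": "SRC_ORACLE",        # where data is read from
    "DestinationConnection": "DST_SQLSERVER",
    "DestinationTable": "dbo.Customers",     # table the data lands in
    "ExtractSql": "SELECT * FROM customers", # the source query
    "ExecutionOrder": 1,                     # extracts run in this order
    "RefreshMode": "TRUNCATE",               # or "DELETE" before loading
    "Enabled": True,                         # disabled extracts are skipped
}
```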
At runtime, the CLI:
- Loads enabled extracts (optionally filtered by job name or extract name)
- For each extract (in order), connects to the configured source and destination systems
- Refreshes the destination table (`TRUNCATE` or `DELETE`)
- Executes the source query and bulk loads the result set into the destination
- Logs progress and errors to console and log files
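The loop above can be sketched roughly as follows; the connection objects are in-memory stand-ins, and the real CLI's provider drivers, bulk loader, and file logging are simplified away:

```python
# Minimal sketch of the metadata-driven run loop. FakeConnection is an
# in-memory stand-in for a real source or destination system.
class FakeConnection:
    def __init__(self, data=None):
        self.data = data or {}              # maps a query/table to rows
    def refresh(self, table, mode):         # "TRUNCATE" or "DELETE"
        self.data[table] = []
    def query(self, sql):                   # keyed by query text, for the sketch
        return self.data.get(sql, [])
    def bulk_load(self, table, rows):
        self.data.setdefault(table, []).extend(rows)

def run_extracts(extracts, connections, job_filter=None):
    """Run enabled extracts in order, optionally filtered by job name."""
    selected = [e for e in extracts
                if e["Enabled"] and job_filter in (None, e["JobName"])]
    loaded = {}
    for ex in sorted(selected, key=lambda e: e["ExecutionOrder"]):
        src = connections[ex["SourceConnection"]]
        dst = connections[ex["DestinationConnection"]]
        dst.refresh(ex["DestinationTable"], ex["RefreshMode"])
        rows = src.query(ex["ExtractSql"])           # run the source query
        dst.bulk_load(ex["DestinationTable"], rows)  # land the result set
        loaded[ex["ExtractName"]] = len(rows)        # progress, log-style
    return loaded

src = FakeConnection({"SELECT * FROM customers": [{"id": 1}, {"id": 2}]})
dst = FakeConnection()
extracts = [{"JobName": "DailySync", "ExtractName": "Customers",
             "SourceConnection": "SRC", "DestinationConnection": "DST",
             "DestinationTable": "dbo.Customers",
             "ExtractSql": "SELECT * FROM customers",
             "ExecutionOrder": 1, "RefreshMode": "TRUNCATE", "Enabled": True}]
result = run_extracts(extracts, {"SRC": src, "DST": dst}, job_filter="DailySync")
print(result)
```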
Help switches
CrossjoinSync follows common CLI conventions. You can discover commands and options using:
CrossjoinSync --help
CrossjoinSync -?
You can find full details of the command-line syntax on the command-line page.
Example: run a job
A simple “execute job” command looks like this:
CrossjoinSync extract --job DailySync
You can also run all enabled extracts (no filter), or run a single extract by name when you need a targeted refresh.
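In a scheduled task you might build the invocation programmatically. This Python sketch assembles the command line using only the flags shown above; running it for real requires CrossjoinSync on the PATH:

```python
# Build a CrossjoinSync invocation for a scheduled task (a sketch using
# only the documented `extract` command and `--job` flag).
def build_sync_command(job=None):
    """Return the CLI arguments for one job, or all enabled extracts."""
    cmd = ["CrossjoinSync", "extract"]
    if job:
        cmd += ["--job", job]
    return cmd

print(build_sync_command("DailySync"))
# In a real scheduler you would hand this to the OS, for example:
# subprocess.run(build_sync_command("DailySync"), check=True)
```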
CrossjoinSync Architecture
A metadata-driven engine that orchestrates repeatable EL jobs across Oracle, SQL Server, ODBC, and Snowflake sources and SQL Server and Snowflake destinations.
Components
- CrossjoinSync CLI: the command-line executable you schedule or run manually
- Metadata database (SQL Server): stores connections and extract definitions (`dbo.Connection`, `dbo.Extract`)
- Source systems: where data is extracted from (today: SQL Server and Oracle)
- Destinations: where data is loaded to (today: SQL Server)
- Logging: rolling file logs plus console output for diagnostics
Data Type Translation
When CrossjoinSync creates destination tables, it translates source column types into destination-native types so schemas are usable and consistent across platforms.
At a high level, type translation works like this:
- The source provider reports column metadata (type name, size, precision, scale)
- CrossjoinSync chooses a translation rule based on the provider pair (for example `oracle:sqlserver` or `sqlserver:snowflake`)
- It applies defaults when metadata is missing (for example default precision/scale for numeric fields)
- It generates destination DDL using translated types before data is loaded
This makes cross-provider movement predictable while still allowing customization for your own standards.
Translation behavior is driven by `type-mappings.json`, so you can tune defaults and target types without changing core workflow commands.
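To make the rule lookup concrete, here is a small Python sketch; the mapping entries and default values are illustrative assumptions, not the contents of the shipped type-mappings.json:

```python
# Sketch of provider-pair type translation. RULES stands in for
# entries that type-mappings.json might contain (assumed, not actual).
RULES = {
    "oracle:sqlserver": {
        "NUMBER": "DECIMAL({precision},{scale})",
        "VARCHAR2": "NVARCHAR({size})",
        "DATE": "DATETIME2",
    },
}
# Defaults applied when the source provider omits metadata (assumed values).
DEFAULTS = {"precision": 38, "scale": 0, "size": 255}

def translate(provider_pair, type_name, **meta):
    """Map a source column type to a destination-native type string."""
    template = RULES[provider_pair][type_name]
    # Fill gaps in the reported metadata with defaults before formatting.
    merged = {**DEFAULTS, **{k: v for k, v in meta.items() if v is not None}}
    return template.format(**merged)

print(translate("oracle:sqlserver", "NUMBER", precision=10, scale=2))
print(translate("oracle:sqlserver", "VARCHAR2", size=None))  # falls back to default size
```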