- Is my freshness / volume issue related to a job that didn’t complete? Which job?
- Which tables were built as part of the job that loaded data with issues?
- Which job should I rerun to resolve?
- Orchestrator name: `orchestrator`
- Job name: `job_name`
- Job ID: `job_id`
- Job results URL: `job_url`
- The ID of a specific run execution: `job_run_id`
- Job run results URL: `job_run_url`
## How Elementary collects job metadata

### Environment variables
Elementary reads metadata at run time from environment variables. Which fields are filled in automatically depends on the orchestrator; see the table in Which orchestrators are supported?. For anything not supplied by the orchestrator, set the matching env vars in your orchestration tool, or pass dbt vars. These env vars are read when present: `ORCHESTRATOR`, `JOB_NAME` or `DBT_JOB_NAME` (both map to `job_name`), `JOB_ID`, `JOB_URL`, `JOB_RUN_ID`, `JOB_RUN_URL`.
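As a minimal sketch (all values here are hypothetical placeholders), the step that invokes dbt could export the metadata before the run:

```shell
# Hypothetical values: supply Elementary's job metadata via environment
# variables in the step that runs dbt.
export ORCHESTRATOR="airflow"
export JOB_NAME="daily_models"      # maps to job_name
export JOB_ID="42"
export JOB_RUN_ID="run_2024_01_01"  # ID of this specific execution
# dbt run  # Elementary reads the variables above during this invocation
```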
To configure env vars for your orchestrator, refer to your orchestrator's documentation. For dbt Cloud and the `job_name` column specifically, follow Setting job name on dbt Cloud.
### dbt vars
Elementary also supports passing job metadata as dbt vars. If both an env var and a var exist for the same field, the var wins.
To pass job data to Elementary using vars, use the `--vars` flag in your invocations:
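For example (the job values below are hypothetical placeholders):

```shell
dbt run --vars '{"orchestrator": "airflow", "job_name": "daily_models", "job_id": "42"}'
```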
If you already pass `--vars`, merge every key into one JSON object. Wrong: two `--vars` flags. Right: `dbt run --vars '{"job_name": "my_job", "other_key": "value"}'`.

### Supported variable formats
| var / env_var | Format |
|---|---|
| orchestrator | One of: airflow, dbt_cloud, github_actions, prefect, dagster |
| job_name, job_id, job_run_id | String |
| job_url, job_run_url | Valid HTTP URL |
## Which orchestrators are supported?
You can pass job info to Elementary from any orchestration tool, as long as you configure the env vars / vars.
The following default environment variables are supported out of the box:
| Orchestrator | Env vars |
|---|---|
| dbt Cloud | `orchestrator`<br/>`job_id`: `DBT_CLOUD_JOB_ID`<br/>`job_run_id`: `DBT_CLOUD_RUN_ID`<br/>`job_url`: generated from `DBT_ACCOUNT_ID`, `DBT_CLOUD_PROJECT_ID`, `DBT_CLOUD_JOB_ID`<br/>`job_run_url`: generated from `ACCOUNT_ID`, `DBT_CLOUD_PROJECT_ID`, `DBT_CLOUD_RUN_ID` |
| GitHub Actions | `orchestrator`<br/>`job_run_id`: `GITHUB_RUN_ID`<br/>`job_url`: generated from `GITHUB_SERVER_URL`, `GITHUB_REPOSITORY`, `GITHUB_RUN_ID` |
| Airflow | `orchestrator` |
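For illustration, the generated `job_url` on GitHub Actions presumably follows the standard Actions run-URL shape; a minimal sketch with placeholder values:

```shell
# Placeholder values standing in for the real CI environment.
GITHUB_SERVER_URL="https://github.com"
GITHUB_REPOSITORY="acme/analytics"
GITHUB_RUN_ID="123456789"
# Standard GitHub Actions run URL shape.
job_url="${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}"
echo "$job_url"
```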
## Setting job name on dbt Cloud
Use one of the following. After you save, only new runs write `job_name`; past invocations are not updated.
### Open where env vars are configured
In dbt Cloud, open the deployment or job that executes your dbt commands—the screen where you add environment variables for that run. Exact labels vary (for example Environment variables, Deployment environment, or Job → Settings).
### Set the job name
Add either:
- `JOB_NAME`: set the value to the label you want in `dbt_invocations.job_name` (often the same as the job name in dbt Cloud), or
- `DBT_JOB_NAME`: same behavior as `JOB_NAME`; use whichever naming convention you prefer.
### Alternative: `job_name` via dbt `--vars`
On the job's dbt command, include `job_name` in `--vars`:
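A minimal sketch (the job name is a placeholder):

```shell
dbt run --vars '{"job_name": "daily_models"}'
```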
The `job_name` var overrides `JOB_NAME` / `DBT_JOB_NAME` if both are set.
## What if I use dbt Cloud + orchestrator?
By default, Elementary collects the dbt Cloud job info. If you wish to override that, change your dbt Cloud invocations to pass the orchestrator job info using `--vars`:
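For instance, to report an Airflow DAG instead of the dbt Cloud job (all values are hypothetical placeholders):

```shell
dbt run --vars '{"orchestrator": "airflow", "job_name": "my_dag", "job_run_id": "manual__2024-01-01"}'
```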
## Where can I see my job info?
- In your Elementary schema, the raw fields are stored in the table `dbt_invocations`. You can also use the view `job_run_results`, which groups invocations by job.
- In the Elementary UI, if the info was collected successfully, you can filter the lineage by job and see the details in the node info.

