DBT
DBT Cloud
Overview
The DBT Cloud integration allows Connetra to ingest transformation metadata directly from DBT Cloud using the DBT Cloud REST API. This enriches your warehouse assets with models, tests, documentation, and lineage produced by DBT Cloud projects.
DBT Cloud is a secondary integration in Connetra. You must first connect a supported data warehouse or relational database (such as Snowflake, BigQuery, Postgres, or Redshift) before enabling DBT Cloud.
What metadata Connetra extracts from DBT Cloud
Connetra uses the DBT Cloud API to extract:

- DBT models and sources
- Model-to-table mappings
- Tests and test results
- Documentation and descriptions
- Project structure and dependencies
- End-to-end lineage across transformations

Connetra does not execute DBT jobs; it only reads metadata generated by your existing DBT Cloud workflows.
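As a sketch of the kind of read-only call involved, the snippet below lists run metadata from the dbt Cloud Administrative API v2. The function names and the plain-urllib wiring are illustrative assumptions, not Connetra's actual implementation.

```python
import json
import urllib.request

def runs_url(access_url: str, account_id: int) -> str:
    # Build the dbt Cloud Administrative API v2 endpoint for listing runs.
    return f"{access_url.rstrip('/')}/api/v2/accounts/{account_id}/runs/"

def fetch_runs(access_url: str, account_id: int, token: str) -> dict:
    # Read-only request: returns run metadata, never triggers a job.
    req = urllib.request.Request(
        runs_url(access_url, account_id),
        headers={"Authorization": f"Token {token}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

`fetch_runs("https://cloud.getdbt.com", 12345, token)` would return the same JSON payload Connetra consumes when enriching your warehouse assets.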
Getting started with DBT Cloud
There are three steps to connect DBT Cloud to Connetra:

1. Retrieve your DBT Cloud Account ID and Access URL
2. Create a DBT Cloud Service Token
3. Connect DBT Cloud to Connetra
Step 1 – Retrieve your DBT Cloud Account ID and Access URL
You can find both values in the DBT Cloud console:

- Go to Account Settings in DBT Cloud
- Copy Account information → Account ID and Access URLs → Access URL
Step 2 – Create a DBT Cloud Service Token
Connetra connects to DBT Cloud using the DBT Cloud REST API, which is available on paid DBT Cloud plans.
To generate a Service Token:

- Go to Account Settings → Service Tokens
- Click New Token
- Grant the token Analyst access to the relevant projects (the minimum required permission)
- Save and copy the generated token

Connetra uses this token to securely retrieve metadata and stores it encrypted.
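If you want to sanity-check a freshly created token before entering it in Connetra, you can call the read-only accounts endpoint yourself. This helper is an illustrative sketch (the names `auth_header` and `token_is_valid` are assumptions), not part of Connetra.

```python
import urllib.error
import urllib.request

def auth_header(token: str) -> dict:
    # dbt Cloud service tokens are sent as "Token <value>".
    return {"Authorization": f"Token {token}"}

def token_is_valid(access_url: str, token: str) -> bool:
    # A 200 from the read-only accounts listing means the token authenticates.
    req = urllib.request.Request(
        f"{access_url.rstrip('/')}/api/v2/accounts/",
        headers=auth_header(token),
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```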
Step 3 – Connect DBT Cloud to Connetra
In Connetra:

- Navigate to Settings → Data Connectors → Select DBT Data Tool
- Select DBT Cloud
- Enter:
  - DBT Cloud Account ID
  - DBT Cloud Access URL
  - Service Token
- Click Test Connection and save the connector configuration
- Run the initial sync
dbt Core
Overview
dbt Core is a secondary integration in Connetra. It enriches your existing data warehouse or relational database assets with transformation metadata, tests, documentation, and lineage.
Before connecting dbt Core, you must first connect a primary data source, such as Snowflake, BigQuery, Postgres, Redshift, or another supported warehouse.

Connetra supports two ways to integrate with dbt Core:

- (Recommended) Connect a cloud storage bucket that contains dbt artifacts
- Upload dbt artifacts manually via the Connetra UI
Option 1 – Connect a storage bucket (recommended)
Connecting a storage bucket allows Connetra to continuously sync the latest dbt artifacts, ensuring your metadata and lineage stay up to date.
Connetra reads the following dbt Core artifacts from the bucket:

- manifest.json
- catalog.json
- run_results.json

Connetra does not trigger dbt runs; it only consumes metadata produced by your existing dbt workflows.
1a. Connect an AWS S3 bucket
You can connect Connetra to an AWS S3 bucket using an IAM user.
Steps:

- Create a new IAM user with programmatic access
- Save the generated Access Key ID and Secret Access Key
- Attach a policy with read access to your dbt artifacts (update <your-bucket-name> accordingly):

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:ListBucket",
          "s3:GetObject",
          "s3:GetObjectAcl"
        ],
        "Resource": [
          "arn:aws:s3:::<your-bucket-name>",
          "arn:aws:s3:::<your-bucket-name>/*"
        ]
      }
    ]
  }

- In Connetra, navigate to Settings → Data Connectors → dbt and select dbt Core
- Select the Access Key option and enter:
  - Region
  - Bucket name
  - Access Key ID
  - Secret Access Key
  - Storage URL
- Test the connection and run the initial sync
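If you prefer to script the policy rather than paste it, the helper below renders the same read-only document for a given bucket name. `artifact_policy` is a hypothetical name for illustration; the policy content mirrors the JSON shown in the steps.

```python
import json

def artifact_policy(bucket: str) -> str:
    # Render the read-only IAM policy from the steps above for a bucket name.
    doc = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket", "s3:GetObject", "s3:GetObjectAcl"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }
    return json.dumps(doc, indent=2)
```

You could write the output to a file and attach it with the AWS console or CLI.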
1b. Connect a GCS S3-compatible bucket
If your dbt artifacts are stored in Google Cloud Storage, you can connect using S3 interoperability.

Steps:

- Create a GCP service account
- Grant it Storage Object Viewer access on the bucket
- Enable interoperability and generate HMAC keys
- Configure CORS on the bucket (via CLI)
- In Connetra, add:
  - Access Key ID
  - Secret Key
  - Bucket region
  - Bucket Name
  - S3 endpoint: https://storage.googleapis.com
- Test the connection and run the initial sync
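The CORS step can also be scripted. The sketch below renders a GCS cors.json that you could apply with `gsutil cors set cors.json gs://<your-bucket-name>`; the function name and the chosen origin/methods are assumptions, so adjust them to your environment.

```python
import json

def cors_config(origin: str) -> str:
    # GCS bucket CORS document: one rule allowing read-only requests
    # from the given origin (e.g. your Connetra URL).
    rules = [
        {
            "origin": [origin],
            "method": ["GET", "HEAD"],
            "responseHeader": ["Content-Type"],
            "maxAgeSeconds": 3600,
        }
    ]
    return json.dumps(rules, indent=2)
```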
1c. Connect an Azure Blob Storage container
Steps:

- Go to portal.azure.com → Storage accounts
- Select your storage account and copy the account name
- Under Security + networking → Access keys, copy the connection string
- In Connetra, navigate to Settings → Data Connectors → dbt and select dbt Core
- Paste the connection string and test the connection

Notes:

- If a path is specified, Connetra searches that path for dbt artifacts
- If no path is specified, Connetra scans the container root
- Connetra looks for files named manifest*.json or manifest*.json.gz
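The manifest*.json / manifest*.json.gz matching described in the notes can be mirrored locally, for example to verify that your container actually holds files Connetra will pick up. `is_manifest` is an illustrative helper, not Connetra's scanner.

```python
from fnmatch import fnmatch

def is_manifest(blob_name: str) -> bool:
    # Mirror the patterns from the notes: manifest*.json or manifest*.json.gz,
    # matched against the file name regardless of its folder path.
    base = blob_name.rsplit("/", 1)[-1]
    return fnmatch(base, "manifest*.json") or fnmatch(base, "manifest*.json.gz")
```

For example, `is_manifest("prod/run/manifest.json")` is true while `is_manifest("catalog.json")` is false.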
Option 2 – Upload dbt artifacts manually
You can also upload dbt artifacts directly via the Connetra UI.
The dbt manifest.json contains detailed metadata about:

- Models and sources
- Model-to-table mappings
- Tests and documentation
- End-to-end lineage

This option performs a one-time sync and does not update automatically.

Steps:

- Navigate to Settings → Data Connectors → dbt and select dbt Core
- Select File Upload
- Upload:
  - manifest.json
  - catalog.json
  - run_results.json
- Test the connection and run the initial sync

Tip: When specifying a search path, do not include leading slashes.
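To preview the lineage a manifest.json carries before uploading it, you can inspect its parent_map, which maps each node's unique_id to its upstream nodes. `model_parents` and `load_manifest` are hypothetical helpers for local inspection only.

```python
import json

def load_manifest(path: str) -> dict:
    # Local read of an artifact produced by your dbt runs.
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def model_parents(manifest: dict) -> dict:
    # manifest.json's "parent_map" maps each unique_id to its upstream node ids;
    # keep only dbt models (ids like "model.<project>.<name>").
    return {
        node_id: parents
        for node_id, parents in manifest.get("parent_map", {}).items()
        if node_id.startswith("model.")
    }
```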