# Concepts
This page explains the core terms used throughout the Runtime docs.
## Workspaces
A workspace is the main unit of organization in Runtime. It maps to a local project directory and contains your jobs, deployments, configurations, and telemetry. When you run `dlt runtime login`, you select (or create) a workspace to work in.
Each workspace has its own:
- Set of jobs and their run history
- Deployment and configuration versions
- Pipeline and dataset telemetry
- Member list and access settings
## Organizations
An organization groups workspaces and team members. It owns billing and controls who can access which workspaces. Members have organization-level roles (Owner or Member) and can be granted workspace-level roles (Owner or Viewer) independently. See Team Access for details.
## Pipelines and datasets
A pipeline is a dlt pipeline — a unit of data movement from a source to a destination. When a pipeline runs, it loads data into a dataset (a schema in your destination database, e.g., a MotherDuck database or BigQuery dataset).
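For example, a minimal dlt script along these lines (the pipeline name, destination, and inline data are illustrative, not Runtime defaults):

```python
import dlt

# The pipeline: a unit of data movement from a source to a destination.
pipeline = dlt.pipeline(
    pipeline_name="github_events",  # illustrative name
    destination="duckdb",           # illustrative destination
    dataset_name="github_data",     # the dataset: a schema in the destination
)

# Running the pipeline loads these rows into the "events" table of the dataset.
load_info = pipeline.run(
    [{"id": 1, "type": "push"}, {"id": 2, "type": "fork"}],
    table_name="events",
)
print(load_info)
```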
Runtime discovers pipelines and datasets through telemetry collected at run time — not by analyzing your source code. When a job runs and calls `pipeline.run()`, dlt emits trace and state data that Runtime ingests. This means:
- A pipeline or dataset only appears in Runtime after it has been executed at least once.
- The information shown (pipeline names, dataset names, destination types, tables, row counts) reflects what actually ran, which may differ from what the current code defines — for example, if you renamed a pipeline or switched destinations but haven't run the updated code yet, Runtime still shows the previous names and destination.
- If a pipeline has not run recently, its metrics may be stale. Runtime does not poll your destination or inspect your code to refresh this data.
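You can see the kind of trace and state data this refers to by inspecting a pipeline locally after a run. A small sketch, reusing the `pipeline` object from the example above:

```python
# dlt keeps a trace of the last run on the pipeline object; Runtime ingests
# equivalent trace and state data when the run happens inside a job.
print(pipeline.last_trace)    # extract / normalize / load steps and timings
print(pipeline.dataset_name)  # the dataset the pipeline actually wrote to
```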
The Pipelines and Datasets pages show metrics like success rate, rows loaded, and duration trends, aggregated across all jobs that use a given pipeline or write to a given dataset.
A pipeline is a dlt concept — it's the `dlt.pipeline(...)` object in your code. A job is a Runtime concept — it's the script file that Runtime executes. A single job can run multiple pipelines, and the same pipeline can appear in different jobs.
## Jobs
A job represents a script registered with Runtime (e.g., `my_pipeline.py`). Jobs are created implicitly the first time you run `dlt runtime launch`, or explicitly with `dlt runtime job create`.
A job defines what to run. It can be triggered manually, on a schedule, or from CI/CD.
## Runs: job runs vs pipeline runs
A job run is a single execution of a job — Runtime starts a container, runs your script, and records the result. Job runs have a lifecycle:
| Status | Meaning |
|---|---|
| Pending | Queued, waiting to start |
| Starting | Container is being initialized |
| Running | Actively executing |
| Completed | Finished without errors |
| Failed | Encountered an error |
| Cancelled | Manually stopped |
A pipeline run is a dlt-level event that happens inside a job run. Each time your script calls `pipeline.run()`, dlt records a pipeline run with its own metrics (tables loaded, row counts, bytes, duration). A single job run can produce multiple pipeline runs — for example, a script that loads data from two different sources into two different pipelines.
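A sketch of such a script, with invented names and inline data standing in for two real sources:

```python
import dlt

# One script file = one job; executing it once = one job run.
users = dlt.pipeline(
    pipeline_name="load_users", destination="duckdb", dataset_name="crm"
)
orders = dlt.pipeline(
    pipeline_name="load_orders", destination="duckdb", dataset_name="shop"
)

# Each run() call below is recorded as a separate pipeline run,
# with its own tables, row counts, bytes, and duration.
users_info = users.run([{"id": 1, "name": "ada"}], table_name="users")
orders_info = orders.run([{"id": 7, "total": 19.99}], table_name="orders")

print(users_info)
print(orders_info)
```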
In the Web UI:
- The Jobs page shows job runs — did the script succeed or fail?
- The run detail page shows pipeline runs within that job run — what data was actually loaded?
- The Pipelines page aggregates pipeline runs across all jobs — how is this pipeline performing over time?
## Deployments
A deployment is a versioned snapshot of your code, synced from your local project directory to Runtime. Each time you run `dlt runtime deploy` or `dlt runtime launch`, a new deployment version is created if your code has changed.
Deployments include all Python files and supporting modules in your workspace directory. They are versioned independently from configurations.
## Configurations
A configuration is a versioned snapshot of your `.dlt/` directory — secrets, config files, and profile-specific settings. Like deployments, a new version is created each time you sync.
Configurations are versioned separately from deployments so you can update credentials without redeploying code (and vice versa). See Manage Secrets for details.
## Profiles
Profiles control which credentials a run uses. Runtime supports two profiles:
| Profile | Used by | Typical access level |
|---|---|---|
| `prod` | Batch jobs (`dlt runtime launch`) | Read-write |
| `access` | Interactive apps (`dlt runtime serve`) | Read-only |
You configure profile-specific secrets in separate files (e.g., `secrets.prod.toml`, `secrets.access.toml`). This ensures batch pipelines have full write access while shared notebooks and other interactive apps use read-only credentials. See Manage Secrets for the full setup.
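A plausible layout of the `.dlt/` directory under this scheme (only the profile-specific file names above come from this page; the other two files are standard dlt convention):

```text
.dlt/
├── config.toml           # non-secret settings
├── secrets.toml          # shared default secrets
├── secrets.prod.toml     # read-write credentials for batch jobs
└── secrets.access.toml   # read-only credentials for interactive apps
```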
## Interactive apps
Interactive apps are long-running services served through Runtime — Marimo notebooks, Streamlit dashboards, and MCP servers. They run under the `access` profile with read-only credentials by default.
Deploy them with `dlt runtime serve` and optionally share them publicly with `dlt runtime publish`. See Serve a Notebook and Publish and Share.