data load tool (dlt): the open-source Python library that automates all your tedious data loading tasks
Be it a Google Colab notebook, an AWS Lambda function, an Airflow DAG, your local laptop,
or a GPT-4-assisted development playground, dlt can be dropped in anywhere.
dlt supports Python 3.9 through Python 3.14. Note that some optional extras are not yet available for Python 3.14, so support for this version is considered experimental.
```sh
pip install dlt
```

Load chess game data from the chess.com API and save it in DuckDB:
```python
import dlt
from dlt.sources.helpers import requests

# Create a dlt pipeline that will load
# chess player data to the DuckDB destination
pipeline = dlt.pipeline(
    pipeline_name='chess_pipeline',
    destination='duckdb',
    dataset_name='player_data'
)

# Grab some player data from Chess.com API
data = []
for player in ['magnuscarlsen', 'rpragchess']:
    response = requests.get(f'https://api.chess.com/pub/player/{player}')
    response.raise_for_status()
    data.append(response.json())

# Extract, normalize, and load the data
pipeline.run(data, table_name='player')
```

Try it out in our Colab Demo or directly on our wasm-based playground in our docs.
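Once the run finishes, the loaded data is queryable right away. A minimal sketch of reading it back, assuming the default DuckDB destination settings (the database file is named after the pipeline and created in the working directory):

```python
import duckdb

# Open the database file the pipeline wrote. By default its name follows
# the pipeline name; adjust the path if you configured the destination.
conn = duckdb.connect("chess_pipeline.duckdb")

# dataset_name becomes the schema and table_name the table;
# username and followers are fields returned by the chess.com endpoint.
print(conn.sql("SELECT username, followers FROM player_data.player").df())
```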
dlt is an open-source Python library that loads data from various, often messy data sources into well-structured datasets. It provides lightweight Python interfaces to extract, load, inspect, and transform data. dlt and the dlt docs are built from the ground up to be used with LLMs: the LLM-native workflow takes you from pipeline code to data in a notebook for over 5,000 sources.
dlt is designed to be easy to use, flexible, and scalable:
- dlt extracts data from REST APIs, SQL databases, cloud storage, Python data structures, and many more.
- dlt infers schemas and data types, normalizes the data, and handles nested data structures.
- dlt supports a variety of popular destinations and has an interface to add custom destinations to create reverse ETL pipelines.
- dlt automates pipeline maintenance with incremental loading, schema evolution, and schema and data contracts (see the sketch after this list).
- dlt supports Python and SQL data access, transformations, pipeline inspection, and visualizing data in Marimo Notebooks.
- dlt can be deployed anywhere Python runs, be it on Airflow, serverless functions, or any other cloud deployment of your choice.
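To make the incremental loading mentioned above concrete, here is a minimal sketch of a custom resource; the endpoint URL and the `id`/`updated_at` field names are hypothetical stand-ins for a real API:

```python
import dlt
from dlt.sources.helpers import requests

# Merge on the primary key so re-loaded records update in place, and only
# fetch records newer than the last seen "updated_at" cursor value.
# The API endpoint and field names below are hypothetical.
@dlt.resource(primary_key="id", write_disposition="merge")
def issues(
    updated_at=dlt.sources.incremental("updated_at", initial_value="1970-01-01T00:00:00Z")
):
    response = requests.get(
        "https://example.com/api/issues",
        params={"updated_since": updated_at.last_value},
    )
    response.raise_for_status()
    yield response.json()

pipeline = dlt.pipeline(
    pipeline_name='issues_pipeline',
    destination='duckdb',
    dataset_name='issues_data'
)
# The first run loads everything; later runs fetch only new or changed rows.
pipeline.run(issues)
```

dlt keeps the last seen cursor value in the pipeline state, so each subsequent run requests only records that changed since the previous run.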
For detailed usage and configuration, please refer to the official documentation.
You can find examples for various use cases in the examples folder, or in the code examples section of our docs page.
dlt follows semantic versioning with the MAJOR.MINOR.PATCH pattern:
- major means breaking changes and removed deprecations
- minor means new features, sometimes automatic migrations
- patch means bug fixes
We suggest that you allow only patch level updates automatically:
- Using the Compatible Release Specifier. For example dlt~=1.0.0 allows only versions >=1.0.0 and <1.1
- Poetry tilde requirements. For example ~1.0 allows only versions >=1.0 and <1.1
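For instance, pinning dlt in a requirements.txt (the version shown is illustrative):

```
# allow only patch-level updates: >=1.0.0 and <1.1
dlt~=1.0.0
```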
Please also see our release notes for notable changes between versions.
The dlt project is quickly growing, and we're excited to have you join our community! Here's how you can get involved:
- Connect with the Community: Join other dlt users and contributors on our Slack
- Report issues and suggest features: Please use the GitHub Issues to report bugs or suggest new features. Before creating a new issue, make sure to search the tracker for possible duplicates and add a comment if you find one.
- Track progress of our work and our plans: Please check out our public GitHub project
- Improve documentation: Help us enhance the dlt documentation.
Please read CONTRIBUTING before you make a PR.
- New destinations are unlikely to be merged due to high maintenance cost (but we are happy to improve the SQLAlchemy destination to handle more dialects)
- Significant changes require tests and docs; in many cases, writing the tests will be more laborious than writing the code
- Bugfixes and improvements are welcome! You'll get help with writing tests and docs, plus a decent review.
dlt is released under the Apache 2.0 License.