A docker-compose environment starts a Spark Thrift server and a Postgres database as a Hive Metastore backend. Note that this is Spark 2, not Spark 3, so some functionality may not be available.

The following command starts the two Docker containers:

```
docker-compose up -d
```

It will take a bit of time for the instance to start; you can check the logs of the two containers. If the instance doesn't start correctly, try the complete reset commands listed below and then start again.
The endpoint for SQL-based testing is `http://localhost:10000`. It can be reached with the Hive or Spark JDBC drivers using the connection string `jdbc:hive2://localhost:10000` and the default credentials `dbt:dbt`.
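If you want to hit the Thrift endpoint programmatically (for example with PyHive's `hive.connect`, which takes `host`, `port`, and `username` arguments), you need the host and port rather than the JDBC string. A minimal sketch of pulling them out — the helper name here is ours, not part of dbt-spark:

```python
from urllib.parse import urlsplit

# The local Thrift endpoint, as exposed by the docker-compose setup.
JDBC_URL = "jdbc:hive2://localhost:10000"
USER, PASSWORD = "dbt", "dbt"

def thrift_host_port(jdbc_url: str) -> tuple:
    """Extract (host, port) from a jdbc:hive2:// connection string."""
    # Strip the "jdbc:" prefix so urlsplit sees an ordinary hive2:// URL.
    parts = urlsplit(jdbc_url.removeprefix("jdbc:"))
    return parts.hostname, parts.port

host, port = thrift_host_port(JDBC_URL)  # ("localhost", 10000)
```

From there, something like `hive.connect(host=host, port=port, username=USER)` would open a session against the local server once the containers are up.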
Note that the Hive metastore data is persisted under `./.hive-metastore/`, and the Spark-produced data under `./.spark-warehouse/`. To completely reset your environment, run the following:

```
docker-compose down
rm -rf ./.hive-metastore/
rm -rf ./.spark-warehouse/
```
Reporting bugs and contributing code
Want to report a bug or request a feature? Let us know on Slack, or open an issue.
Code of Conduct
Everyone interacting in the dbt project's codebases, issue trackers, chat rooms, and mailing lists is expected to follow the PyPA Code of Conduct.
dbt-spark

dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.

dbt is the T in ELT. Organize, cleanse, denormalize, filter, rename, and pre-aggregate the raw data in your warehouse so that it's ready for analysis.

The dbt-spark package contains all of the code enabling dbt to work with Apache Spark and Databricks. For more information, consult the docs.
To connect to the local Spark instance, create a profile like this one:
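A sketch of such a `profiles.yml` entry, assuming the local Thrift server described above (the profile name and `schema` are placeholders; check the dbt-spark docs for the full set of fields):

```yaml
your_profile_name:
  target: local
  outputs:
    local:
      type: spark
      method: thrift          # connect via the Spark Thrift server
      host: 127.0.0.1
      port: 10000             # matches the docker-compose endpoint
      user: dbt
      schema: analytics       # placeholder target schema
      connect_retries: 5
      connect_timeout: 60
```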
Join the dbt Community