Activity
jeremyyeo push jeremyyeo/dbt-sandcastles
commit sha: bd5219a224974cea2c8cffa1f623c454a94db0df
push time: 9 hours ago

jeremyyeo in jeremyyeo/dbt-sandcastles create branch ci-checker
jeremyyeo push jeremyyeo/dbt-sandcastles
commit sha: 11450afc6ac73c8f71034cb9479e7f4c548a9112
push time: 9 hours ago

jeremyyeo push jeremyyeo/dbt-sandcastles
commit sha: d14b68a491eaa1a7d4712ecb191f81a88863f9c1
push time: 3 days ago

jeremyyeo in jeremyyeo/dbt-sandcastles create branch my-ci-again
jeremyyeo push jeremyyeo/dbt-sandcastles
commit sha: 43772c9f75d8623f2eafb1926b2e0a213aee1198
push time: 3 days ago

jeremyyeo push jeremyyeo/dbt-sandcastles
commit sha: 20a2d882a386582e6461681afda66a03bc6a359b
push time: 3 days ago

jeremyyeo in jeremyyeo/dbt-sandcastles create branch new-ci-test
jeremyyeo push jeremyyeo/dbt-sandcastles
commit sha: 1f3870b8d574ebcbc8fad993ddd70a4ff7bd686c
push time: 3 days ago

jeremyyeo push jeremyyeo/dbt-sandcastles
commit sha: 63e27d12f62d7c86363724b2338f2cc1f1bb78d4
push time: 3 days ago

jeremyyeo push dbt-labs/dbt-utils
commit sha: a5fcc7cbd79933021610fe56bf3cce0caf986fec
push time: 1 week ago

jeremyyeo issue comment dbt-labs/dbt-utils
Fix: Make `union_relations` `include/exclude` case insensitive
This is a:
- documentation update
- bug fix with no breaking changes
- new functionality
- a breaking change
All pull requests from community contributors should target the main
branch (default).
Description & motivation
Fixes #578.
Checklist
- I have verified that these changes work locally on the following warehouses (Note: it's okay if you do not have access to all warehouses, this helps us understand what has been covered)
- BigQuery
- Postgres
- Redshift
- Snowflake
- I followed guidelines to ensure that my changes will work on "non-core" adapters by:
  - dispatching any new macro(s) so non-core adapters can also use them (e.g. the `star()` source)
  - using the `limit_zero()` macro in place of the literal string `limit 0`
  - using `dbt_utils.type_*` macros instead of explicit datatypes (e.g. `dbt_utils.type_timestamp()` instead of `TIMESTAMP`)
- I have updated the README.md (if applicable)
- I have added tests & descriptions to my models (and macros if applicable)
- I have added an entry to CHANGELOG.md
@dbeatty10, added your suggested test but I tweaked it a little to actually include upper/lowercase in the `exclude` param... wasn't sure if the test previously would have had that or not 😁
jeremyyeo push dbt-labs/dbt-utils
commit sha: 58bf699f3d192c0287495a1ee2e31f2430a4438a
push time: 1 week ago

jeremyyeo push dbt-labs/dbt-utils
commit sha: efee51fa2af8cb0df562cfbc1d68b03546aa7542
push time: 1 week ago

jeremyyeo issue comment dbt-labs/dbt-core
[CT-648] [Feature] Distinguish what dbt command has been executed in Jinja context
Is this your first time opening an issue?
- I have read the expectations for open source contributors
Describe the Feature
Hi colleagues. In multiple advanced cases, like dbt hooks, operations, and unit testing, we would like to execute queries against the database and also run some state-changing operations (e.g. printing an agate table to a file). Currently dbt doesn't give us a way to avoid such misbehavior.
The suggested idea is to add `executed_command` to the Jinja context, storing `compile` | `run` | `test` | `docs` etc. from the dbt command palette.
So all macros for hooks and operations can use it and flexibly configure when to run and what to do. The same applies to unit-testing use cases:
- I can skip unit tests on `dbt compile` and `dbt docs`
- I can mock the `current_timestamp` function in `dbt_utils` to write advanced tests without changing the logic
Describe alternatives you've considered
I tried workarounds using tags and relying on the presence of attributes like `run_result`, `schemas`, and graph nodes in the Jinja context. Some of them work, but none of them smells good.
Who will this benefit?
Looks like multiple developers are facing the same issue. Users of hooks, operations, and unit testing will benefit.
Are you interested in contributing this feature?
Yes, PR will come soon
Anything else?
Easily found several related cases: https://github.com/dbt-labs/dbt-core/issues/4785, https://github.com/dbt-labs/dbt-core/issues/4445
Hey @SOVALINUX, have you taken a look at `flags.WHICH` (https://docs.getdbt.com/reference/dbt-jinja-functions/flags) by any chance?
Kind of wondering if `flags.WHICH` + `execute` simply gives you what you need to do what you had wanted with this proposed `executed_command` Jinja context variable?
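The guard suggested in the reply can be illustrated outside of Jinja. Below is a hypothetical Python analogue of a macro condition like `{% if execute and flags.WHICH in ('run', 'build') %}`; the function name and the set of allowed commands are illustrative, not part of any dbt API:

```python
def should_run_side_effect(which: str, execute: bool,
                           allowed: frozenset = frozenset({"run", "build"})) -> bool:
    """Analogue of `{% if execute and flags.WHICH in ('run', 'build') %}`.

    `which` stands in for dbt's `flags.WHICH` (the invoked subcommand name),
    and `execute` for the context flag that is False during parsing.
    """
    return execute and which in allowed


if __name__ == "__main__":
    # The side effect fires only for an allowed command at execution time...
    print(should_run_side_effect("run", execute=True))
    # ...and is skipped during `dbt compile` or while dbt is only parsing.
    print(should_run_side_effect("compile", execute=True))
    print(should_run_side_effect("run", execute=False))
```

In a real project the same branching would live inside the hook or operation macro itself, so the state-changing part only runs for the commands you choose.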
jeremyyeo push dbt-labs/dbt-core
commit sha: 3594ced4da797b2eabf54da8884c22385880cec7
push time: 1 week ago

jeremyyeo issue comment dbt-labs/dbt-core
[CT-639] [Bug] DBT Cloud is using a very old version of the python connector to send queries to Snowflake periodically
Is there an existing issue for this?
- I have searched the existing issues
Current Behavior
This has nothing to do with dbt-core itself, but I'm not sure where else to report this.
It seems that for any connection configured in dbt Cloud, queries will be sent periodically. I don't know why, but I assume there must be some reason. Maybe to check if the credentials are still valid? Anyway, that's not the end of the world, as those queries only consume cloud services (no active warehouse required).
However, the version of the Python connector used to send those queries is really old (`2.2.1`). We're doing our quarterly review of Snowflake client drivers, and this one came up as seriously out of date and no longer supported.
I assume this is not unique to Snowflake, but we're not using any other data warehouse to check whether it affects them as well.
Expected Behavior
Python connector is kept up-to-date with pip (or dbt-core, or something more recent).
Steps To Reproduce
- Create a connection to Snowflake in dbt Cloud
- Create a corresponding environment, with a configured user & credentials
- dbt Cloud will start sending queries periodically (using this old version of the python connector)
Relevant log output
N/A
Environment
dbt Cloud
dbt-core version does not matter.
What database are you using dbt with?
snowflake
Additional Context
No response
Thanks for raising @mroy-seedbox. We have raised this internally with the dbt Cloud team.
@jtcohen6 I tagged you in the internal thread.
jeremyyeo push jeremyyeo/my-dbt-project
commit sha: 7c307aad2a1bb446c288fa6f88807764dca367ce
push time: 1 week ago

jeremyyeo pull request dbt-labs/dbt-utils
[Fix] Make `union_relations` `include/exclude` case insensitive
This is a:
- documentation update
- bug fix with no breaking changes
- new functionality
- a breaking change
All pull requests from community contributors should target the main
branch (default).
Description & motivation
Fixes #578.
Checklist
- I have verified that these changes work locally on the following warehouses (Note: it's okay if you do not have access to all warehouses, this helps us understand what has been covered)
- BigQuery
- Postgres
- Redshift
- Snowflake
- I followed guidelines to ensure that my changes will work on "non-core" adapters by:
  - dispatching any new macro(s) so non-core adapters can also use them (e.g. the `star()` source)
  - using the `limit_zero()` macro in place of the literal string `limit 0`
  - using `dbt_utils.type_*` macros instead of explicit datatypes (e.g. `dbt_utils.type_timestamp()` instead of `TIMESTAMP`)
- I have updated the README.md (if applicable)
- I have added tests & descriptions to my models (and macros if applicable)
- I have added an entry to CHANGELOG.md
jeremyyeo in dbt-labs/dbt-utils create branch fix/578-make-union-relations-case-insensitive
jeremyyeo issue comment dbt-labs/dbt-utils
`union_relations` `exclude` param not actually excluding
Describe the bug
The `exclude` param in the `union_relations` macro doesn't actually exclude the column from the final result.
Steps to reproduce
- Create 2 sources:

```sql
create or replace table development.dbt_jyeo.my_source_a as (
    select 1 as user_id, 'alice' as user_name, 'active' as status
);
create or replace table development.dbt_jyeo.my_source_b as (
    select 2 as user_id, 'bob' as user_name
);
```
- Add those sources to your project:

```yaml
version: 2
sources:
  - name: dbt_jyeo
    tables:
      - name: my_source_a
      - name: my_source_b
```
- Add `dbt_utils` to `packages.yml` and then do `dbt deps`.
- Use `union_relations` in a model and specify `exclude`:

```sql
-- models/my_model.sql
{{
    dbt_utils.union_relations(
        relations=[
            source('dbt_jyeo', 'my_source_a'),
            source('dbt_jyeo', 'my_source_b')
        ],
        exclude=['status']
    )
}}
```
- Run or compile the model above.
- Check the logs or query the table to see that the `status` column is not excluded as expected.
Expected results
Expected that the `status` column is not added to the `my_model` table.
Actual results
The `status` column shows up in the `my_model` table.
Screenshots and log output
debug logs:

```
2022-05-09T22:29:37.055722Z: 22:29:37 Using snowflake connection "model.my_dbt_project.my_model"
2022-05-09T22:29:37.055847Z: 22:29:37 On model.my_dbt_project.my_model: /* {"app": "dbt", "dbt_version": "1.0.6", "profile_name": "user", "target_name": "default", "node_id": "model.my_dbt_project.my_model"} */
create or replace transient table development.dbt_jyeo.my_model as
(
    (
        select
            cast('development.dbt_jyeo.my_source_a' as varchar) as _dbt_source_relation,
            cast("USER_ID" as NUMBER(1,0)) as "USER_ID",
            cast("USER_NAME" as character varying(5)) as "USER_NAME",
            cast("STATUS" as character varying(6)) as "STATUS"
        from development.dbt_jyeo.my_source_a
    )
    union all
    (
        select
            cast('development.dbt_jyeo.my_source_b' as varchar) as _dbt_source_relation,
            cast("USER_ID" as NUMBER(1,0)) as "USER_ID",
            cast("USER_NAME" as character varying(5)) as "USER_NAME",
            cast(null as character varying(6)) as "STATUS"
        from development.dbt_jyeo.my_source_b
    )
);
2022-05-09T22:29:37.765446Z: 22:29:37 SQL status: SUCCESS 1 in 0.71 seconds
```
System information
The contents of your `packages.yml` file:

```yaml
packages:
  - package: dbt-labs/dbt_utils
    version: 0.8.4
```
Which database are you using dbt with?
- postgres
- redshift
- bigquery
- snowflake
- other (specify: ____________)
The output of `dbt --version`:
1.0.6 in Cloud
Additional context
Haven't tried to find out why this is happening yet - just reproducing.
Are you interested in contributing the fix?
Yes
Looks like this is a case sensitivity thing with Snowflake...
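The mechanics behind that diagnosis can be sketched in Python (a hypothetical mock-up, not the actual dbt-utils macro). Snowflake stores unquoted identifiers uppercased, so the adapter reports the column as `STATUS` while the user passed `status`, and a case-sensitive membership test silently matches nothing. Lowercasing both sides before comparing is the idea behind the fix:

```python
def filter_columns(columns, exclude=()):
    """Case-insensitive `exclude`, mirroring the idea behind the dbt-utils fix.

    A naive case-sensitive check ('STATUS' in ['status']) never matches,
    so the column sails through into the unioned result.
    """
    excluded = {name.lower() for name in exclude}
    return [col for col in columns if col.lower() not in excluded]


# Columns as Snowflake reports them for the repro's sources.
snowflake_columns = ["USER_ID", "USER_NAME", "STATUS"]

# Naive, case-sensitive check: nothing is excluded (the reported bug).
naive = [c for c in snowflake_columns if c not in ["status"]]

# Case-insensitive check: STATUS is dropped as the user intended.
fixed = filter_columns(snowflake_columns, exclude=["status"])
```

Presumably the macro-level fix does the same thing in Jinja, normalizing the `include`/`exclude` lists and the adapter-reported column names to one case before the membership test.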