Adam (10/31/2022, 1:37 PM):
…git clone of the dbt repo? Currently just using dbt Cloud to run on a schedule, but thinking of moving away from that towards the above solution.

Andreas Nigg (10/31/2022, 5:03 PM):
{% if is_incremental() %}
where loaded_at >= coalesce(_dbt_max_partition, '2022-01-01')
{% else %}
...
{% endif %}
But the problem is that in my incremental model I do not partition by "loaded_at" but by a different column (due to use-case demands). So _dbt_max_partition would not help here, as it simply returns the maximum partition value of the model (which I can't use as a filter on the source table).
In "native" BigQuery I would simply use a scripting variable, as follows:
declare max_source_partition timestamp;
set max_source_partition = (select max(loaded_at) as ts from `my_model_table`);
select * from `my_source_table` where loaded_at > max_source_partition
How can one implement such a scenario with dbt? Is there a way to create scripting variables as part of my models? Or do I need to add it as an on-run-start hook? Or are there better strategies to exclude partitions in my source without having the same column as the partition field in my model?
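A minimal sketch of one way to do this inside the model itself, using dbt's run_query to compute the high-water mark at run time instead of a BigQuery scripting variable. The loaded_at column is from the question; the source() names are placeholders:

```sql
{% if is_incremental() and execute %}
    {# Query the already-built model for its own high-water mark. #}
    {% set result = run_query("select max(loaded_at) from " ~ this) %}
    {% set max_loaded_at = result.columns[0].values()[0] %}
{% endif %}

select *
from {{ source('my_source', 'my_source_table') }}
{% if is_incremental() %}
  -- Rendered as a constant in the final SQL, so BigQuery can prune source
  -- partitions without loaded_at being the model's own partition column.
  where loaded_at > timestamp '{{ max_loaded_at }}'
{% endif %}
```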
Marc Lipoff (11/02/2022, 6:22 PM):
…dbt ... commands from Prefect. It seems there is a lot of overhead to getting this working. My steps (in my head at least) are:
• Grab the appropriate dbt docker image
• Pull my dbt repo
• do a docker run, something like this: docker run -v /repo/loc/:/src/ -w /src/ image_name run --select ...
I see there are both prefect-docker and prefect-dbt collections. Curious what others have done...
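For what it's worth, the prefect-dbt collection can replace the hand-rolled docker run step. A minimal sketch, assuming dbt and prefect-dbt are installed in the image that runs the flow and the repo has already been pulled to project_dir (the path and selector below are placeholders):

```python
from prefect import flow
from prefect_dbt.cli.commands import trigger_dbt_cli_command

@flow
def run_dbt():
    # Runs the dbt CLI in the flow's own environment; no separate
    # `docker run` is needed if the flow already executes inside the image.
    trigger_dbt_cli_command(
        command="dbt run --select my_model",  # placeholder selector
        project_dir="/src",                   # placeholder repo location
    )
```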
Aaron Gonzalez (01/11/2023, 3:19 PM):
…BigQueryTargetConfigs prefect-dbt block…

Farid (02/06/2023, 10:31 PM):
ERROR: Runtime Error
  Credentials in profile "snowflake_dbt_transformations", target "dev" invalid: 'database' is a required property
Defined profiles:
- snowflake_dbt_transformations
For more information on configuring profiles, please consult the dbt docs:
https://docs.getdbt.com/docs/configure-your-profile
Upon investigation, it seems the DbtCliProfile passed to the dbt trigger task gets saved to ~/.dbt/profiles.yml first and is then used inside the dbt CLI shell task. The problem is that the method that saves the DbtCliProfile to a YAML file does not save the TargetConfigs, leaving the profile file incomplete.
Has anyone had experience with this before and know of any workarounds?
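One possible workaround, untested against this exact bug and so only a sketch: build the profile with the generic TargetConfigs and push every required key, including database, through its extras field so it lands in profiles.yml. All credential values below are placeholders:

```python
from prefect_dbt.cli import DbtCliProfile
from prefect_dbt.cli.configs import TargetConfigs

target_configs = TargetConfigs(
    type="snowflake",
    schema="analytics",              # placeholder
    threads=4,
    extras={
        "account": "my_account",     # placeholders; extras are passed
        "user": "my_user",           # straight through to profiles.yml
        "password": "***",
        "role": "transformer",
        "warehouse": "transforming",
        "database": "analytics_db",  # the 'database' key the error demands
    },
)

dbt_cli_profile = DbtCliProfile(
    name="snowflake_dbt_transformations",
    target="dev",
    target_configs=target_configs,
)
```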
Donny Flynn (02/09/2023, 5:45 PM):
…dbt docs generate, as the flow run is outputting that
Catalog written to /opt/prefect/target/catalog.json
I checked S3 and it's definitely not there; I'm guessing /opt is a directory tied to the ECS task's container? Is there a way the dbt CLI command can output the docs files (specifically index.html) into the S3 bucket so we can host our dbt docs as a static site?
Flow code is in the 🧵. I really appreciate any help or pointers 🙂
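One way to get the docs out before the ECS task's filesystem disappears is to upload the generated files to S3 from within the same flow. A minimal sketch using boto3; the bucket name is a placeholder:

```python
from pathlib import Path

import boto3
from prefect import flow, task
from prefect_dbt.cli.commands import trigger_dbt_cli_command

@task
def upload_docs(target_dir: str = "/opt/prefect/target",
                bucket: str = "my-dbt-docs-bucket"):
    # Copy the static docs artifacts out of the container's filesystem.
    s3 = boto3.client("s3")
    for name in ("index.html", "manifest.json", "catalog.json"):
        s3.upload_file(str(Path(target_dir) / name), bucket, name)

@flow
def dbt_docs_flow():
    trigger_dbt_cli_command("dbt docs generate")
    upload_docs()
```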
Aaron Gonzalez (02/24/2023, 12:55 PM):
trigger_dbt_cli_command is a task. It's complaining because you have nested that task inside other tasks like debug_project, load_seeds, etc.
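In other words, the dbt calls should be siblings invoked from the flow itself. A minimal sketch of that structure; the dbt commands are illustrative:

```python
from prefect import flow
from prefect_dbt.cli.commands import trigger_dbt_cli_command

@flow
def dbt_flow():
    # Each call below is its own task run inside the flow;
    # no task is invoked from inside another task.
    trigger_dbt_cli_command("dbt debug")
    trigger_dbt_cli_command("dbt seed")
    trigger_dbt_cli_command("dbt run")
```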