Adam
10/31/2022, 1:37 PM
git clone of the dbt repo? Currently just using dbt Cloud to run on a schedule, but thinking of moving away from that towards the above solution.

Andreas Nigg
10/31/2022, 5:03 PM
{% if is_incremental() %}
where loaded_at >= coalesce(_dbt_max_partition, '2022-01-01')
{% else %}
But the problem is that in my incremental model I do not partition by “loaded_at” but by a different column (due to use-case demands). So _dbt_max_partition would not help here, as it would simply return the maximum partition value of the model (which I can’t use as a filter on the source table).
In “native” BigQuery I would simply use a scripting variable, as follows:
declare max_source_partition timestamp;
set max_source_partition = (select max(loaded_at) as ts from `my_model_table`);
select * from `my_source_table` where loaded_at > max_source_partition;
How can one implement such a scenario with dbt? Is there a way to create scripting variables as part of my models? Or do I need to add it as an on-run-start hook? Or are there better strategies to exclude partitions in my source without having the same column as the partition field in my model?

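One pattern that might fit here (a sketch only, reusing Andreas's example table and column names, not something confirmed in the thread): because is_incremental() is only true at execution time, dbt's run_query macro can safely fetch the high-water mark and inline it into the compiled SQL as a literal, which keeps partition pruning on the source table:

{% if is_incremental() %}
  {# look up the max loaded_at in the already-built model at run time #}
  {% set max_ts = run_query("select max(loaded_at) from " ~ this).columns[0].values()[0] %}
{% endif %}

select *
from `my_source_table`
{% if is_incremental() and max_ts is not none %}
  {# rendered as a constant, so BigQuery can prune partitions on the source #}
  where loaded_at > timestamp('{{ max_ts }}')
{% endif %}
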
Marc Lipoff
11/02/2022, 6:22 PM
dbt ... commands from Prefect. It seems there is a lot of overhead to getting this working. My steps (in my head at least) are:
• Grab the appropriate dbt Docker image
• Pull my dbt repo
• Do a docker run, something like this: docker run -v /repo/loc/:/src/ -w /src/ image_name run --select ...
I see there are both prefect-docker and prefect-dbt collections. Curious what others have done...

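For comparison, a minimal sketch using the prefect-dbt collection alone (an illustration, not what Marc settled on; it assumes the repo is already pulled to /src, e.g. by the deployment, and that prefect-dbt plus the dbt adapter are installed in the image):

from prefect import flow
from prefect_dbt.cli.commands import DbtCoreOperation

@flow
def run_dbt(select: str = "my_model"):
    # plays the role of the hand-rolled `docker run ... image_name run --select ...`;
    # the container itself is whatever infrastructure the deployment runs on
    with DbtCoreOperation(
        commands=[f"dbt run --select {select}"],
        project_dir="/src",
        profiles_dir="/src",
        stream_output=True,
    ) as op:
        proc = op.trigger()
        proc.wait_for_completion()
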
Aaron Gonzalez
01/11/2023, 3:19 PM
BigQueryTargetConfigs prefect-dbt block.

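(For reference, creating a BigQueryTargetConfigs block follows the pattern from the prefect-dbt docs; a sketch, with block and dataset names as placeholders:)

from prefect_gcp.credentials import GcpCredentials
from prefect_dbt.cli.configs import BigQueryTargetConfigs

# wrap previously saved GCP credentials as dbt target configs and save the block
credentials = GcpCredentials.load("my-gcp-creds")
target_configs = BigQueryTargetConfigs(
    schema="my_dataset",  # the BigQuery dataset dbt builds into
    credentials=credentials,
)
target_configs.save("my-bq-target-configs")
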
Farid
02/06/2023, 10:31 PM
ERROR: Runtime Error
Credentials in profile "snowflake_dbt_transformations", target "dev" invalid: 'database' is a required property
Defined profiles:
- snowflake_dbt_transformations
For more information on configuring profiles, please consult the dbt docs:
<https://docs.getdbt.com/docs/configure-your-profile>
Upon investigation, it seems the dbtCliProfile passed to the dbtTrigger task gets saved to ~/.dbt/profiles.yml first and is then used inside the dbtCli shell task. The problem is that the method that saves the dbtCliProfile into a YAML file does not save the TargetConfigs, leaving the profile file incomplete.
Has anyone had experience with this before and found any workarounds?

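One blunt workaround (a sketch, not a confirmed fix; every value below is a placeholder for your own settings): sidestep the incomplete serialization by writing a complete ~/.dbt/profiles.yml yourself before the dbt shell task runs.

import os
import yaml

def write_dbt_profile():
    # full profile, including the target config keys (database, warehouse, ...)
    # that the built-in save step was dropping
    profile = {
        "snowflake_dbt_transformations": {
            "target": "dev",
            "outputs": {
                "dev": {
                    "type": "snowflake",
                    "account": os.environ["SNOWFLAKE_ACCOUNT"],
                    "user": os.environ["SNOWFLAKE_USER"],
                    "password": os.environ["SNOWFLAKE_PASSWORD"],
                    "role": "TRANSFORMER",
                    "database": "ANALYTICS",  # the property dbt reported missing
                    "warehouse": "TRANSFORMING",
                    "schema": "DBT",
                    "threads": 4,
                },
            },
        }
    }
    profiles_dir = os.path.expanduser("~/.dbt")
    os.makedirs(profiles_dir, exist_ok=True)
    with open(os.path.join(profiles_dir, "profiles.yml"), "w") as f:
        yaml.safe_dump(profile, f)
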
Donny Flynn
02/09/2023, 5:45 PM
dbt docs generate, as the Flow Run is outputting
Catalog written to /opt/prefect/target/catalog.json
I checked S3 and it's definitely not there, and I'm guessing that /opt is a directory tied to the ECS task's container? Is there a way the dbt CLI command can output the docs files (specifically index.html) into the S3 bucket, so we can host our dbt docs as a static site?
Flow code is in the 🧵. I really appreciate any help or pointers 🙂

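dbt itself only writes docs to the local target/ directory, so the upload has to be a separate step in the flow. A sketch of that shape (bucket name and paths are placeholders, not Donny's actual setup):

import boto3
from prefect import flow
from prefect_dbt.cli.commands import DbtCoreOperation

@flow
def publish_dbt_docs(bucket: str = "my-dbt-docs-bucket"):
    DbtCoreOperation(
        commands=["dbt docs generate"],
        project_dir="/opt/prefect",
    ).run()
    s3 = boto3.client("s3")
    # index.html fetches catalog.json and manifest.json at view time,
    # so all three files need to be in the bucket
    for name in ["index.html", "catalog.json", "manifest.json"]:
        s3.upload_file(f"/opt/prefect/target/{name}", bucket, name)
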
Marius Vollmer
03/08/2023, 11:36 AM
RuntimeError: Command failed with exit code 133:
/tmp/prefect-4zqou1u7: line 1: 299284 Trace/breakpoint trap (core dumped) dbt build --select dl_nicc_ecom__order_header --profiles-dir /tmp/dbt --project-dir /tmp/dbt
12:14:43 PM
Finished in state Failed('Flow run encountered an exception. RuntimeError: Command failed with exit code 133:\n/tmp/prefect-4zqou1u7: line 1: 299284 Trace/breakpoint trap (core dumped) dbt build --select dl_nicc_ecom__order_header --profiles-dir /tmp/dbt --project-dir /tmp/dbt\n\n')
Many thanks in advance!

Stephen Lloyd
04/10/2023, 9:22 AM
Using DbtCoreOperation, we are receiving the following error. It suggests that our dbt CLI profile block, which we have defined in Prefect Cloud, does not have a name, target, or target_configs, but it does:
pydantic.error_wrappers.ValidationError: 3 validation errors for DbtCoreOperation
dbt_cli_profile -> name
  field required (type=value_error.missing)
dbt_cli_profile -> target
  field required (type=value_error.missing)
dbt_cli_profile -> target_configs
  field required (type=value_error.missing)
I was able to use the profiles_dir parameter with my local credentials, but that only verifies there aren't obvious problems with the flow code.
prefect-dbt==0.3.1
prefect-snowflake==0.26.0
Any ideas?

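That trio of missing fields can show up when dbt_cli_profile receives something pydantic cannot coerce into a DbtCliProfile object, for example the block's name as a string instead of the loaded block. A sketch of the loaded-block pattern, with a hypothetical block name:

from prefect_dbt.cli import DbtCliProfile
from prefect_dbt.cli.commands import DbtCoreOperation

# load the block object first, then hand the object (not its name) to the operation
dbt_cli_profile = DbtCliProfile.load("my-dbt-cli-profile")

op = DbtCoreOperation(
    commands=["dbt build"],
    dbt_cli_profile=dbt_cli_profile,
    overwrite_profiles=True,  # regenerate profiles.yml from the block at runtime
)
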
Stephen Lloyd
04/12/2023, 10:06 AM
dbtCoreOperation.
I am running...
with DbtCoreOperation(
    commands=["dbt deps", "dbt build " + dbt_build_arg + " --target " + DBT_ENV],
    project_dir="edw-dbt",
    # working_directory="edw-dbt",
    # overwrite_profiles=True,
    profiles_dir="~/.dbt",
    # dbt_cli_profile=dbt_cli_profile,
    stream_output=True,
) as dbt_operation:
    dbt_process = dbt_operation.trigger()
    dbt_process.wait_for_completion()
    result = dbt_process.fetch_result()
Currently failing with:
15:35:09.705 | INFO | Flow run 'furry-carp' - PID 98611 stream output:
09:50:09 Error importing adapter: No module named 'dbt.adapters.snowflake'
15:35:09.706 | INFO | Flow run 'furry-carp' - PID 98611 stream output:
09:50:09 Encountered an error while reading profiles:
ERROR: Runtime Error
Credentials in profile "edw", target "dev" invalid: Runtime Error
Could not find adapter type snowflake!
Defined profiles:
- edw
For more information on configuring profiles, please consult the dbt docs:
<https://docs.getdbt.com/docs/configure-your-profile>
15:35:09.720 | INFO | Flow run 'furry-carp' - PID 98611 stream output:
09:50:09 Encountered an error:
Runtime Error
Could not run dbt

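The "No module named 'dbt.adapters.snowflake'" line is the usual tell that the dbt Snowflake adapter is missing from the environment executing the flow; dbt adapters ship as separate packages, so (assuming that is the cause here) the fix is typically along the lines of:

pip install dbt-snowflake prefect-dbt==0.3.1 prefect-snowflake==0.26.0
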
fozan talat
04/13/2023, 5:22 PM
Finished in state Failed('Flow run encountered an exception. RuntimeError: PID 17072 failed with return code 1.\n')