Mohamed Rafiyudeen
09/20/2023, 4:49 AM
Hans Lellelid
09/20/2023, 9:02 AM
Justin Trautmann
09/20/2023, 9:42 AM
Michał Augoff
09/20/2023, 10:15 AM
Jonathan Aschan
09/20/2023, 12:38 PM
import asyncio
from typing import Any

from prefect import flow


@flow
async def subflow(should_fail: bool = False):
    if should_fail:
        raise ValueError("I failed!")
    return 42


@flow
async def create_sub_flows():
    parallel_subflows: list[Any] = []
    for i in range(10):
        parallel_subflows.append(subflow(i % 2 == 0))
    await asyncio.gather(*parallel_subflows)


if __name__ == '__main__':
    asyncio.run(create_sub_flows())
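If the idea is that some of those subflows are expected to fail (the should_fail ones), one variation worth sketching is passing return_exceptions=True to asyncio.gather, so a failing subflow doesn't raise out of create_sub_flows before the other results are collected. Rough, untested version of the same flow:

import asyncio
from typing import Any

from prefect import flow


@flow
async def subflow(should_fail: bool = False):
    if should_fail:
        raise ValueError("I failed!")
    return 42


@flow
async def create_sub_flows():
    parallel_subflows: list[Any] = [subflow(i % 2 == 0) for i in range(10)]
    # return_exceptions=True makes gather hand back exceptions as values
    # instead of raising on the first failed subflow.
    results = await asyncio.gather(*parallel_subflows, return_exceptions=True)
    failures = [r for r in results if isinstance(r, BaseException)]
    print(f"{len(failures)} of {len(results)} subflows failed")


if __name__ == '__main__':
    asyncio.run(create_sub_flows())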
Idan
09/20/2023, 12:58 PM
Andreas Nord
09/20/2023, 1:30 PM
from prefect import task, flow, get_run_logger
from dbt.cli.main import dbtRunner, dbtRunnerResult


@task
def dbt_run(project_dir: str, target: str):
    runner = dbtRunner()
    result: dbtRunnerResult = runner.invoke(
        ["run",
         "--project-dir", project_dir,
         "--profiles-dir", project_dir,
         "--target", target])
    return result
Deceivious
09/20/2023, 2:56 PM
Choenden Kyirong
09/20/2023, 3:20 PM
Tony Yun
09/20/2023, 9:52 PM
Camila Caleones
09/20/2023, 10:22 PM
Derek Chase
09/21/2023, 8:13 AM
ravi
09/21/2023, 9:54 AM
Brian Newman
09/21/2023, 2:57 PM
Rebecca Allen
09/21/2023, 3:08 PM
prefect.engine, Flow run or Task run are showing up (on my machine or in the Prefect Cloud UI) and it's driving me mad 🙈
Any ideas to help debug what's going on would be much appreciated. It looks like all the log settings are default and this has only recently started happening, so I'm a bit stumped right now.
Joe D
09/21/2023, 3:55 PM
Sangbin
09/21/2023, 6:57 PM
Aaron Goebel
09/21/2023, 8:23 PM
mp.Pool(n_processes if n_processes <= os.cpu_count() else os.cpu_count())
and run my code. This breaks when inside a Prefect flow. If I set n_processes to 1 then the flow succeeds. I think I'm encountering some deadlock situation?
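A workaround worth sketching here, assuming the hang comes from fork-based pool workers being started inside the (multi-threaded) Prefect engine, is to build the pool from a spawn context with a module-level worker function, roughly:

import multiprocessing as mp
import os

from prefect import flow


def work(x: int) -> int:
    # Worker must be importable at module level for the spawn start method.
    return x * x


@flow
def parallel_flow(n_processes: int = 4):
    # "spawn" starts fresh interpreter processes instead of forking the
    # multi-threaded Prefect runtime, a common source of deadlocks.
    ctx = mp.get_context("spawn")
    with ctx.Pool(min(n_processes, os.cpu_count() or 1)) as pool:
        return pool.map(work, range(10))


if __name__ == "__main__":
    parallel_flow()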
Hwi Moon
09/21/2023, 9:15 PM
Derek Heyman
09/22/2023, 3:57 AM
Shaariq
09/22/2023, 5:11 AM
helm chart. I have mostly been successful; however, I've been running into issues, particularly with the default Job configuration.
Even though I am setting the usual namespace to prefect, the default Job config always seems to be default. Ideally I don't want to have to do an infra_override for each flow that I create, so is it possible to have each generated job created with a configurable namespace? Or to use the namespace that has been set for the worker/server?
To go further on the steps I take:
• install helm chart as specified above^
• deploy my flow with the correct work pool name
• run my flow
The expectation here is that the flow run should trigger a Kubernetes Job creation in the same namespace, prefect. However, it creates the job in default because that is what the work pool appears to be set to.
If I am missing anything, please do let me know as well. I am having a lot of fun with Prefect otherwise, cheers!
note: I understand that I can do it from the UI, but this is after the work pool has been deployed, which is not ideal; I'd rather declare it earlier on.
Zachary Loertscher
09/22/2023, 1:21 PM
Valantis Hatzimagkas
09/22/2023, 2:18 PM
Jeremy Hetzel
09/22/2023, 3:09 PM
retry button (screenshot attached).
Is there a way to do this in Python with a Prefect Client?
For example, I have a FlowRun that is in a crashed state:
>>> flow_run.state_name
'Crashed'
Can I manually rerun it from the Python API? I'm aware of the flow run retry decorators. This is more for a debugging task, where switching back to the web interface is not efficient.
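One possible sketch with the orchestration client, assuming the UI retry button is roughly equivalent to pushing the run back into a Scheduled state (worth verifying against your setup), would be:

import asyncio

from prefect.client.orchestration import get_client
from prefect.states import Scheduled


async def rerun_flow_run(flow_run_id):
    async with get_client() as client:
        # Force the crashed run back into a Scheduled state so it gets
        # picked up again; this is an assumption about what the UI button does.
        await client.set_flow_run_state(
            flow_run_id=flow_run_id,
            state=Scheduled(),
            force=True,
        )


# asyncio.run(rerun_flow_run(flow_run.id))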
09/22/2023, 3:22 PMbuild
action when I do some sort of deploy like
prefect deploy --name deployment-1
Nicola Pancotti
09/22/2023, 9:07 PM
Kyle Niosco
09/23/2023, 7:01 AM
Sarika
09/23/2023, 4:20 PM