
Suman Kamalapuram

over 1 year ago
@Marvin how can I debug Prefect code on my local machine?
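A minimal local-debugging sketch, assuming Prefect 2, where flows and tasks are ordinary Python callables (the flow below is illustrative, not from the thread):

```python
# Because a Prefect 2 flow is just a decorated Python function, you can run
# it directly -- no agent, worker, or deployment -- and step through it with
# breakpoint(), pdb, or your IDE's debugger.
from prefect import flow, task

@task
def double(x: int) -> int:
    return x * 2

@flow
def my_flow(n: int) -> int:
    breakpoint()  # drops into pdb when the flow runs locally
    return double(n)

if __name__ == "__main__":
    my_flow(21)
```

Running `python my_file.py` executes the flow in-process, so stack traces and breakpoints behave like any other Python program.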

Geese Howard

almost 2 years ago
Once I deploy a prefect-worker via the Helm chart using a values.yaml file, I have only one work pool mapped to the newly generated worker. Is there any way to assign multiple work pools to a single worker? Here is the values.yaml file I have been using, but it only activates the gke-cpu-5000m-4Gi work pool.
namespaceOverride: prefect
worker:
  cloudApiConfig:
    accountId: HIDE
    workspaceId: HIDE
  config:
    workPool: gke
    workPool: gke-cpu-5000m-4Gi
serviceAccount:
  create: false
  name: "HIDE"
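For context on the snippet above: YAML collapses duplicate keys, so the second `workPool` silently overrides the first, and a Prefect worker polls exactly one work pool in any case. A hedged sketch of one workaround, assuming the standard prefect-worker chart (release and file names are illustrative): install one release per pool.

```yaml
# values-gke.yaml -- first release, polls the "gke" pool
# helm install worker-gke prefect/prefect-worker -f values-gke.yaml
worker:
  config:
    workPool: gke
---
# values-gke-cpu.yaml -- second release, polls the "gke-cpu-5000m-4Gi" pool
# helm install worker-gke-cpu prefect/prefect-worker -f values-gke-cpu.yaml
worker:
  config:
    workPool: gke-cpu-5000m-4Gi
```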

Riccardo Tesselli

almost 3 years ago
Hi all, I've upgraded to Prefect 2.0.0; how can I now define a block for Azure blob storage? I only see S3 and GCP blocks…
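A hedged sketch of one answer: the Azure blocks live in the separate prefect-azure collection rather than in Prefect core (class and field names are from that collection's docs; verify against your installed version):

```python
# pip install prefect-azure
# Registers an Azure Blob Storage credentials block you can reference from
# flows and deployments, analogous to the built-in S3/GCS blocks.
from prefect_azure import AzureBlobStorageCredentials

AzureBlobStorageCredentials(
    connection_string="<your-storage-account-connection-string>",
).save("azure-blob-creds", overwrite=True)
```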

yu zeng

about 3 years ago
Hello, is there any way to set task or flow priority when using the Dask executor on Prefect 1.0 (e.g., with dask.annotate(resources={"GPU": 1}))? @Anna Geller I know Prefect 2.0 supports this, but for some reasons I can't use 2.0.
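For the resource half of this question: in Prefect 1.x, resource hints can reach Dask through task tags of the form `dask-resource:<name>=<value>`, which the DaskExecutor forwards as annotations. A sketch (as far as I know there is no equivalent tag for priority in 1.x, so that part may have no direct answer):

```python
# Prefect 1.x sketch: the "dask-resource:GPU=1" tag asks the DaskExecutor to
# schedule this task only on Dask workers started with --resources "GPU=1".
from prefect import Flow, task
from prefect.executors import DaskExecutor

@task(tags=["dask-resource:GPU=1"])
def train_model():
    ...

with Flow("gpu-flow") as flow:
    train_model()

# Scheduler address is illustrative.
flow.executor = DaskExecutor(address="tcp://dask-scheduler:8786")
```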

Maity

almost 2 years ago
I'm getting a "QueuePool limit of size 5 overflow 10 reached" error when calling .map with around 700 tasks. How do I handle this and prevent the overflow/timeout from occurring?
# create an API request for each self link
api_results = api_get.map([x["self"] for x in employee_self_links])

@task(
    retries=3,
    retry_delay_seconds=exponential_backoff(backoff_factor=10),
)
def write(employee: Response):
    employee = employee.json()

    dest = f"raw/employees/details/{employee['id']}.json"
    write_to_file_system.fn(dest, employee)

# save the results to storage
storage_results = write.map(api_results)
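That error usually means the orchestration database's SQLAlchemy connection pool is exhausted by hundreds of concurrent task runs. One hedged workaround (plain Python, not a Prefect API; the batch size is illustrative) is to submit the mapped calls in bounded waves:

```python
def chunked(items, size):
    """Yield successive lists of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Illustrative use inside the flow: ~50 in-flight task runs per wave
# instead of ~700 at once.
# for batch in chunked(employee_self_links, 50):
#     api_results = api_get.map([x["self"] for x in batch])
#     write.map(api_results)
```

Concurrency limits via task tags or the task runner are the Prefect-native alternative; check the docs for your version.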

Tomer Friedman

about 2 years ago
Hey guys, I would like to instrument my self-hosted Prefect deployment with OpenTelemetry (it's the last part of our microservice architecture that's missing it). I was looking online and only managed to find a single old thread and one attempt that doesn't look working/maintained. Has anyone tried this? Do you know if it's even doable?
• It would require passing the OpenTelemetry trace ID from a parent flow to each of its subflows/tasks. The Prefect run context, which might be an option for this, does not appear to allow additional fields.
• The tasks/flows themselves would also need to be instrumented, of course.
Would appreciate any help 🙏 Thanks all
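A hedged sketch of the trace-propagation idea in the first bullet, using the public opentelemetry-api propagators (the OpenTelemetry calls are real; wiring the carrier through a flow parameter is an assumption, not an official Prefect integration):

```python
from opentelemetry import trace
from opentelemetry.propagate import extract, inject
from prefect import flow

tracer = trace.get_tracer("prefect-flows")

@flow
def child(carrier: dict):
    # Rebuild the parent's trace context from the serialized headers.
    with tracer.start_as_current_span("child-flow", context=extract(carrier)):
        ...  # instrumented work

@flow
def parent():
    with tracer.start_as_current_span("parent-flow"):
        carrier: dict = {}
        inject(carrier)  # writes traceparent headers into the dict
        child(carrier)   # subflow call; carrier rides along as a parameter
```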

Andreas Nigg

over 2 years ago
Hey folks, I just had a flow run fail with "prefect.exceptions.MissingResult: State data is missing. Typically, this occurs when result persistence is disabled and the state has been retrieved from the API." This flow runs several times per day and always succeeded before, and also after, this incident. The good thing for me: the flow entered a Failed state and the logs show an exception, but the flow itself continued after the exception and all the "results" of the pipeline were achieved. Looking at the exception (in the thread), is there something I'm missing, or is this maybe something for a GitHub issue? (Prefect 2.6.6, Prefect Cloud)
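One hedged mitigation for the error above: turn result persistence on explicitly so retrieved states carry their data (the `persist_result` option exists as of Prefect 2.6; verify the exact behavior for your version):

```python
from prefect import flow, task

@task(persist_result=True)
def extract() -> list:
    return [1, 2, 3]

@flow(persist_result=True)
def pipeline():
    # With persistence on, downstream state retrievals from the API can
    # resolve this task's result instead of raising MissingResult.
    return extract()
```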

Ahmed Ezzat

about 3 years ago
I'm having problems with tasks stuck in the "Running" state; when I inspected the running workers, everything seems to be running fine and results are submitted to my database, although nothing is updating the Prefect state. Currently using Prefect Cloud with the prefect 1.2.0-python3.9 Docker image. Same as https://github.com/PrefectHQ/prefect/issues/3952. For the dev team: https://cloud.prefect.io/bitthebyte/flow-run/b30223e1-5308-48fe-aa0b-9326c6e48860 (this is the stuck workflow). I already tried restarting

Jason Prado

about 4 years ago
I’d like to trigger some flows whenever a new file is added to a Google Drive folder. My current plan is to run a small Python server that receives webhook calls from the Drive API, then uses the Prefect API to trigger the flows. Questions:
1. Is the right Prefect API to use the GraphQL createFlowRun mutation?
2. Is there anything like this around already? Would love to not reinvent the wheel and deploy a new service.
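A sketch of question 1, assuming Prefect 1's GraphQL schema, where the user-facing mutation is `create_flow_run` (the camelCase `createFlowRun` name in the question does not match the schema as I recall it; verify against your Server/Cloud GraphQL playground). The endpoint, token handling, and parameter name are illustrative:

```python
import json
import urllib.request

PREFECT_API = "https://api.prefect.io"  # or your Server's /graphql endpoint

def build_create_flow_run_payload(flow_id: str, file_id: str) -> dict:
    """Build the GraphQL payload that starts one flow run."""
    mutation = """
    mutation($input: create_flow_run_input!) {
      create_flow_run(input: $input) { id }
    }
    """
    return {
        "query": mutation,
        "variables": {
            "input": {"flow_id": flow_id,
                      "parameters": {"drive_file_id": file_id}},
        },
    }

def trigger_flow_run(flow_id: str, file_id: str, token: str) -> None:
    """Call the API; invoked by the webhook handler for each Drive event."""
    req = urllib.request.Request(
        PREFECT_API,
        data=json.dumps(build_create_flow_run_payload(flow_id, file_id)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    urllib.request.urlopen(req)
```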

Andrei Tulbure

over 2 years ago
Has anybody migrated from 1.x to Prefect 2? And how was the journey?

Prefect Community

Bring your towel and join one of the fastest growing data communities. Welcome to our second-generation open source orchestration platform, a completely rethought approach to dataflow automation.
