Hi there, is there a TTL for inactive agents, or is there any API documentation available for cleaning up dead agents? Thanks in advance.
1 year ago
Can anyone share their strategy for setting up a staging-like environment with Prefect Cloud? Since we only have a single tenant, whatever solution we come up with will never be truly separate from production flows, so I'm curious what people's approach is here. I realize you can do local testing, but of course that doesn't mirror every aspect of the environment when deploying things.
1 year ago
Hi, probably a dumb question, but just to be sure: are tasks that return generators as part of their output always guaranteed to work?
Something like this (note: I will never use a Dask runner; this is a special case for working on huge files):
Hello everyone! I'm trying to find the right way to send a single Slack notification for a group of mapped shell tasks. In my case, the list of mapped tasks is quite big; as a result, the Slack messages are numerous and the Slack channel has become really spammy. To be more specific, let's suppose that a group of similar shell commands is mapped:
from prefect.tasks.shell import ShellTask
from prefect import Flow

a = ShellTask()
commands = [
    # illustrative placeholders; the original list was truncated
    "echo one",
    "echo two",
    "exit 1",  # this last command will fail
]

with Flow('test') as flow:
    b = a.map(command=commands)

# send a single slack notification
# that summarizes the states of mapped tasks
The purpose of this Slack notification is to summarize the states of the mapped tasks by displaying the percentage that mapped successfully. In the case above, this percentage would be 66.6% (the last command will fail). I tried to approach the problem using triggers and state handlers, but I couldn't find a clean way to achieve the goal. Have you been in this situation before? Any hints? Thanks in advance!
1 year ago
Hi 👋 Does anyone know how (if at all possible) to iterate over a
Unexpected error: OSError("Timed out trying to connect to '<tcp://10.100.0.100:44991>' after 10 s: Timed out trying to connect to '<tcp://10.100.0.100:44991>' after 10 s: connect() didn't finish in time")
Is this a known issue? The environment is a dask-kubernetes cluster running on AWS EKS...
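One common workaround for this kind of TCP connect timeout is to raise Dask's connect timeout via its configuration system; this is a hedged sketch, and the 30s value is illustrative, not a recommendation from the thread:

```python
import dask

# Raise the scheduler/worker connect timeout from the 10s default.
dask.config.set({"distributed.comm.timeouts.connect": "30s"})

# Equivalently, set this environment variable before starting the cluster:
#   DASK_DISTRIBUTED__COMM__TIMEOUTS__CONNECT=30s
print(dask.config.get("distributed.comm.timeouts.connect"))
```

Note that on Kubernetes the config (or environment variable) must be set on the scheduler and worker pods as well, not only on the client.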