• Yuliy Kuroki

    1 year ago
    Hello everyone! Is there a way to have a Prefect local agent wait for all the tasks it’s running to finish before starting work on a new one? It’s running a very slow and intensive process, and I want it to finish before a new one begins.
    4 replies (Kevin Kho)
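    Prefect 1.x doesn’t expose a “wait until idle” switch on the local agent itself, but running tasks strictly one at a time amounts to a single worker draining a queue, only picking up the next item after the current one has fully finished. A minimal stdlib sketch of that idea (the names `serial_worker` etc. are illustrative, not Prefect API):

    ```python
    import queue
    import threading

    def serial_worker(tasks: queue.Queue, results: list) -> None:
        # Pick up the next task only after the current one has finished.
        while True:
            fn = tasks.get()
            if fn is None:          # sentinel: shut down
                break
            results.append(fn())    # blocks until this task completes

    tasks = queue.Queue()
    results = []
    worker = threading.Thread(target=serial_worker, args=(tasks, results))
    worker.start()

    for i in range(3):
        tasks.put(lambda i=i: i * i)
    tasks.put(None)
    worker.join()

    print(results)  # [0, 1, 4]
    ```

    In Prefect terms, the equivalent lever is the flow’s executor: a single-worker executor gives exactly this one-task-at-a-time behavior.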
• Jeffery Newburn

    1 year ago
    Tacking onto the last post: are there any creative ways to limit the number of tasks any given agent takes? Our agent is beefy, but it will keep taking flows until it runs out of memory. We have limited flow concurrency, but task concurrency doesn’t seem to fit well when we run multiple agents. Personally, I would love to be able to configure the agent itself not to bite off more than it can chew.
    4 replies (Kevin Kho)
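    Prefect 1.x puts concurrency limits on the server side (flow and task concurrency limits), not on the agent; the agent-side cap Jeffery is asking for amounts to a bounded semaphore around work submission. A hypothetical stdlib sketch of that pattern (none of these names are Prefect API):

    ```python
    import threading
    import time
    from concurrent.futures import ThreadPoolExecutor

    MAX_IN_FLIGHT = 2
    slots = threading.BoundedSemaphore(MAX_IN_FLIGHT)
    lock = threading.Lock()
    in_flight = 0
    peak = 0

    def limited(cmd):
        # Acquire a slot before doing any work; when all slots are taken,
        # the call blocks instead of piling more work onto the box.
        global in_flight, peak
        with slots:
            with lock:
                in_flight += 1
                peak = max(peak, in_flight)
            time.sleep(0.02)  # stand-in for a heavy flow run
            with lock:
                in_flight -= 1
        return f"ran {cmd}"

    # Eight submissions, but never more than MAX_IN_FLIGHT running at once.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(limited, range(8)))

    print(peak <= MAX_IN_FLIGHT)  # True
    ```

    The same idea could be applied to memory: acquire a slot sized to the flow’s expected footprint before accepting it.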
• Varun Joshi

    1 year ago
    The new automation feature is really helpful! Kudos to the Prefect team on that! 🙌
    1 reply
• Justin Chavez

    1 year ago
    Hi! Are Dask executors the only way to achieve parallelization for mapped tasks? For example, I have a custom task called `run_command`, and inside it launches the command via a `RunNamespacedJob` to use Kubernetes. I have multiple commands that take a while to complete, so I would like multiple namespaced jobs to run at the same time. I tried using a mapping like:
    with Flow("my-flow") as flow:
        run_command.map([cmd1, cmd2, ...])
    But Prefect runs each namespaced job serially. Would switching to a Dask executor be the key? Or could I adjust the map call to achieve parallelization?
    5 replies (Kevin Kho, +1)
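    For context: in Prefect 1.x the default `LocalExecutor` runs mapped children in sequence, and switching the flow to `LocalDaskExecutor` or `DaskExecutor` is what makes `.map` fan out in parallel; the `.map` call itself doesn’t need to change. The effect, sketched with the stdlib rather than Dask (`run_command` here is a stand-in, not the real task):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def run_command(cmd: str) -> str:
        # Stand-in for the real task that launches a RunNamespacedJob.
        return f"launched {cmd}"

    commands = ["cmd1", "cmd2", "cmd3"]

    # What a Dask executor does for a mapped task: fan the children
    # out over workers instead of looping over them one by one.
    with ThreadPoolExecutor(max_workers=len(commands)) as pool:
        results = list(pool.map(run_command, commands))

    print(results)  # ['launched cmd1', 'launched cmd2', 'launched cmd3']
    ```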
• Riley Hun

    1 year ago
    Hello! For the Prefect Server Helm chart, I'm trying to expose the UI through an nginx ingress controller, but it returns a 404 or 503 error. I can confirm that the UI deployed successfully, and it works when using a standard external load balancer. I also don't think the issue is the nginx controller itself, because I have exposed other applications through it. Note that I do have the UI working with the nginx controller in a different environment (I think with a different version of the Helm chart).
    13 replies (Michael Adkins, +1)
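    A 404/503 from nginx in front of the prefect-server chart is often a routing problem rather than a UI problem: the Ingress has to point at the UI service and port the release actually created, and since the UI calls the Apollo (GraphQL) endpoint from the browser, that endpoint must be reachable too. A minimal sketch, where the service name `prefect-server-ui` and port `8080` are assumptions that depend on your release name and chart version (verify with `kubectl get svc`):

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: prefect-ui
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
        - host: prefect.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: prefect-server-ui   # assumption: depends on release name
                    port:
                      number: 8080            # assumption: check kubectl get svc
    ```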
• Ranu Goldan

    1 year ago
    Hi everyone, my team is using Prefect Cloud, and we've found that secrets are attached to the team, not the project. So to separate secrets between environments (dev/stg/prod), we want to create a new team for each env. But I can't find the button in the team settings. How do I create a new team?
    7 replies (Chris White, +1)
• Jeremy Tee

    1 year ago
    Hi everybody, I am trying to retrieve results from my child flow in my parent flow. I am currently storing all my child flow results in S3. I am finding it hard to retrieve the location based on the states returned by `client.get_flow_run_info("xxxxx")`. Is there another way for me to get the location where the task results are stored?
    2 replies (Michael Adkins)
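    One way to sidestep digging locations out of flow-run state is to make them deterministic: template the child flow's result location from values the parent already knows (flow-run id, task name), so the parent can reconstruct the key instead of querying for it. Prefect 1.x results accept a templated `location`; the scheme itself is plain string formatting. A sketch where the bucket, prefix, and `result_location` helper are all hypothetical:

    ```python
    def result_location(flow_run_id: str, task_name: str, map_index: int = -1) -> str:
        """Deterministic S3 key for a task's result; the parent flow can
        rebuild this from the child flow-run id instead of querying states."""
        suffix = f"-{map_index}" if map_index >= 0 else ""
        return f"s3://my-bucket/results/{flow_run_id}/{task_name}{suffix}.pickle"

    # Parent and child both agree on the scheme:
    loc = result_location("abc123", "transform")
    print(loc)  # s3://my-bucket/results/abc123/transform.pickle
    ```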
• Lukas N.

    1 year ago
    Hello 👋 We're using Prefect Server and running our flows with the KubernetesAgent. Sometimes a flow run runs twice in parallel. After a bit of investigation I found the cause: the first flow run misses its heartbeat, so the ZombieKiller retries the flow run, starting the parallel execution. But the first run is still alive; it only skipped the heartbeat because of a long blocking operation. Any ideas on how to prevent this? I don't even know how the heartbeat system works. The log shows:
    No heartbeat detected from the remote task; retrying the run.
    8 replies (Kevin Kho, +1)
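    For background: in Prefect 1.x the flow-run process emits periodic heartbeats, and when they stop, the server assumes the run died and reschedules it, which is what produces the duplicate. My understanding is that the heartbeat runs in a subprocess by default and can be starved by long blocking operations; the usual mitigation is changing the heartbeat mode (the `cloud.heartbeat_mode` setting, settable via the `PREFECT__CLOUD__HEARTBEAT_MODE` environment variable, e.g. to `thread`) or disabling heartbeats for the flow. The mechanism itself is just a background beat that must keep firing while work blocks, sketched with the stdlib:

    ```python
    import threading
    import time

    beats = []

    def heartbeat(stop: threading.Event, interval: float = 0.05) -> None:
        # Fire a beat every `interval` seconds until told to stop.
        while not stop.wait(interval):
            beats.append(time.monotonic())

    stop = threading.Event()
    t = threading.Thread(target=heartbeat, args=(stop,), daemon=True)
    t.start()

    time.sleep(0.3)  # stand-in for the long blocking operation
    stop.set()
    t.join()

    print(len(beats) > 0)  # the beat kept firing during the "blocking" work
    ```

    If the blocking call starves whatever emits the beat (rather than a thread like this one), the server sees silence and declares a zombie even though the run is healthy.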
• Hawkar Mahmod

    1 year ago
    Hey everyone, I am getting the following error on a task that returns a generator: `TypeError: can't pickle generator objects`. This task is not persisted using a `Result`; there is no Result or checkpointing enabled on the task. When I run locally, the flow works just fine. However, when I trigger it via the Prefect UI with S3 Storage, I think it tries to persist all tasks. This is what this line in the documentation refers to, I believe (see image). How can I stop this task from being persisted by default, if that is what is causing this?
    2 replies (Noah Holm)
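    The underlying limitation: pickle, which Prefect's default result serialization relies on, cannot serialize generator objects, so any checkpointing attempt on that task fails. The usual fixes are to materialize the generator (return a list) or to turn checkpointing off for that task, which in Prefect 1.x is `@task(checkpoint=False)`. The pickle behavior itself, demonstrated with the stdlib:

    ```python
    import pickle

    def numbers():
        yield from range(3)

    # Generators cannot be pickled...
    try:
        pickle.dumps(numbers())
        picklable = True
    except TypeError:
        picklable = False
    print(picklable)  # False

    # ...but a materialized result round-trips fine.
    data = pickle.loads(pickle.dumps(list(numbers())))
    print(data)  # [0, 1, 2]
    ```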
• Mickael Riani

    1 year ago
    Hello everyone, I'm trying to find a way to distribute task execution across different servers (batch and front). I would like my tasks to run with priority on my front server, and if it is not available, to run them on the batch server. Do you know how I could do that?
    1 reply
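    Prefect 1.x routes flow runs to agents via labels, but to my knowledge it has no built-in "try front, fall back to batch" failover between agents, so that preference has to live in whatever decides where to submit. As a purely hypothetical sketch (none of these names are Prefect API), a dispatcher that walks servers in priority order:

    ```python
    def dispatch(task, servers):
        """Run `task` on the first available server, in priority order."""
        for server in servers:
            if server["available"]:
                return server["name"], task()
        raise RuntimeError("no server available")

    servers = [
        {"name": "front", "available": False},  # preferred, currently busy
        {"name": "batch", "available": True},   # fallback
    ]

    where, result = dispatch(lambda: "done", servers)
    print(where, result)  # batch done
    ```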