• Michael Aldridge


    6 months ago
    I'm in the process of trying to deploy Prefect in a test environment, and I'm at the point where the instructions in /getting-started tell me to run prefect server create-tenant --name default. I get that when deploying as a standalone service you need to create the tenant; unfortunately, this command appears to expect Prefect to be visible on localhost, which it is not. Is there some variable I was supposed to export to get the local CLI to see the remote Prefect server?
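    For reference, Prefect 1.x reads its server endpoint from ~/.prefect/config.toml (or from matching PREFECT__... environment variables). A sketch of pointing a local CLI at a remote server, assuming the standard [server] host/port keys are what the CLI consults (verify against the server docs); the hostname is a placeholder:

```toml
# ~/.prefect/config.toml — hostname below is a placeholder, not a real server
[server]
host = "http://my-prefect-server.internal"
port = "4200"
```

    Running prefect backend server beforehand makes CLI commands target the server backend rather than Prefect Cloud.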
    4 replies
  • Chris Reuter


    6 months ago
    Hey all 👋 we're bringing the 🍕 Pizza Patrol to Austin, TX! If you're going to Data Council or just live in the Lone Star State, we'd love to see you there. All are welcome for free pizza and drinks 🍺. More info on Meetup! https://prefect-community.slack.com/archives/C036FRC4KMW/p1647898467352239
  • Darshan


    6 months ago
    Hello - in Prefect 2.0, is there a way to provide the task name dynamically? For example, if I have a function defined as a task that is called multiple times from a flow, I want to append a dynamic suffix to the task name.
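    Prefect aside, the naming pattern being asked about can be sketched in plain Python. The helper below is hypothetical (not a Prefect API); it just shows generating a per-call name from an explicit suffix or a running counter:

```python
import itertools

# Hypothetical helper: build a unique name per call by appending
# either an explicit suffix or a running counter to a base name.
_counter = itertools.count(1)

def dynamic_task_name(base, suffix=None):
    """Append an explicit suffix, or a running counter, to a base task name."""
    if suffix is not None:
        return f"{base}-{suffix}"
    return f"{base}-{next(_counter)}"

names = [dynamic_task_name("load", s) for s in ("users", "orders")]
print(names)  # ['load-users', 'load-orders']
```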
    7 replies
  • davzucky


    6 months ago
    With Orion, do you think it would be feasible to have a task that returns a pre-configured fsspec filesystem set up from Orion Storage? We use different storage types depending on the environment. I would like to be able to remove a lot of the conditional code we have today with Prefect 1.0, and it looks like Orion with Storage may be really helpful for that.
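    As an illustration of the conditional code the question wants to eliminate, a plain-Python dispatch keyed on environment can replace scattered if/else blocks. All names here are hypothetical; in real code the stub classes would be replaced by something like fsspec.filesystem(protocol, **options):

```python
# Hypothetical sketch: one lookup table instead of scattered conditionals.
# Stub classes stand in for fsspec filesystem instances.

class LocalFS:
    protocol = "file"

class S3FS:
    protocol = "s3"

# Single place that maps an environment to its storage backend.
STORAGE_BY_ENV = {
    "dev": LocalFS,
    "prod": S3FS,
}

def get_filesystem(env):
    """Return a pre-configured filesystem for the given environment."""
    return STORAGE_BY_ENV[env]()

print(get_filesystem("dev").protocol)  # file
```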
    2 replies
  • Vadym Dytyniak


    6 months ago
    Hello. What is the correct way to fail a task that has retry logic when I am sure that even 100 retries will not help?
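    In engine-agnostic terms, the usual answer is to raise an exception type that the retry logic treats as non-retryable. A minimal sketch of that pattern (names are hypothetical, not Prefect APIs):

```python
class FatalTaskError(Exception):
    """Raised when retrying cannot help; the runner should fail immediately."""

def run_with_retries(task, max_retries=3):
    """Retry `task` on ordinary errors, but fail fast on FatalTaskError."""
    for attempt in range(1, max_retries + 1):
        try:
            return task()
        except FatalTaskError:
            raise  # retrying will not help, fail the task now
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted

calls = []
def flaky():
    calls.append(1)
    raise FatalTaskError("bad credentials")

try:
    run_with_retries(flaky)
except FatalTaskError:
    pass
print(len(calls))  # 1 — the fatal error skipped the remaining retries
```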
    9 replies
  • Shrikkanth


    6 months ago
    Hey all, is it possible to trigger a Prefect flow by its flow name, running on Prefect Cloud, from AWS Lambda? Any suggestions?
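    For context, Prefect 1 Cloud exposes a GraphQL API, so a Lambda handler would typically POST a create_flow_run mutation. The sketch below only builds the request payload; the exact mutation shape, input fields, and how to look up the flow by name are assumptions to verify against the Prefect Cloud API docs:

```python
import json

def build_create_flow_run_payload(version_group_id):
    """Build a GraphQL payload asking Prefect Cloud to start a flow run.

    The mutation and input field names are an assumption; check the
    Prefect Cloud interactive API docs before using them.
    """
    mutation = """
    mutation($input: create_flow_run_input!) {
      create_flow_run(input: $input) { id }
    }
    """
    return json.dumps({
        "query": mutation,
        "variables": {"input": {"version_group_id": version_group_id}},
    })

# Placeholder id; a real handler would first resolve the flow name to an id.
payload = build_create_flow_run_payload("my-flow-group-id")
```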
    2 replies
  • andrr


    6 months ago
    Hey all, 👋 We face several problems with flows that run in the Kubernetes cluster.
    • Pods often get stuck in the Running state with the last message in the logs being: DEBUG - prefect.CloudFlowRunner | Checking flow run state...
    • The flow in Prefect Cloud gets stuck in the Cancelling state while the pod stays stuck in the Running state in the Kubernetes cluster.
    Context:
    • prefect version 0.15.13
    • Private Azure AKS cluster
    • We've tried setting PREFECT__CLOUD__HEARTBEAT_MODE to "thread", but it only made things worse (more stuck pods in the Running state). Now we have PREFECT__CLOUD__HEARTBEAT_MODE set to "process" and tini -- prefect execute flow-run as PID 1 to handle zombie processes.
    It seems like the problem is the heartbeat process not detecting the change to the Cancelling or Cancelled state of the flow. I appreciate any help, thanks 🙂
    16 replies
  • Florian Guily


    6 months ago
    Hey, I'm currently trying to build a simple ETL flow that connects to an API and fetches some records (I'm quite new to Prefect). I have to provide an API key as a query parameter to connect to this API. Is there a good practice for handling this key? Can I write it to another file that the flow will read? I'm testing this locally, but it will go to a production environment in the future (I hope).
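    One common practice, independent of Prefect's own Secret machinery, is to keep the key out of the code entirely and read it from the environment at run time. A stdlib sketch (the variable name is a placeholder):

```python
import os

def get_api_key():
    """Read the API key from the environment; fail loudly if it's missing."""
    key = os.environ.get("MY_SERVICE_API_KEY")  # placeholder variable name
    if not key:
        raise RuntimeError("MY_SERVICE_API_KEY is not set")
    return key

# Demo only: a real deployment would set this in the runtime environment,
# never in source code.
os.environ["MY_SERVICE_API_KEY"] = "dummy-key-for-demo"
print(get_api_key())  # dummy-key-for-demo
```

    In Prefect 1, the built-in Secret tasks serve the same purpose with the value stored in Prefect Cloud rather than in files checked into the repo.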
    3 replies
  • Jason Motley


    6 months ago
    I know that you can set retries on individual task failures. Can you set a retry on a "flow of flows" if one flow fails?
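    Since the step that triggers a child flow in a flow-of-flows is itself a task, task-level retry settings can apply to it. Generically, the pattern is a retry wrapper around the "run child flow" call; this sketch uses hypothetical names, not Prefect's API:

```python
import time

def run_child_flow_with_retries(start_flow, max_retries=2, delay=0.0):
    """Re-invoke a child-flow trigger until it succeeds or retries run out."""
    for attempt in range(max_retries + 1):
        try:
            return start_flow()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted, propagate the failure
            time.sleep(delay)

attempts = []
def start_flow():
    """Stand-in for a task that kicks off a child flow; fails once."""
    attempts.append(1)
    if len(attempts) < 2:
        raise RuntimeError("child flow failed")
    return "Success"

print(run_child_flow_with_retries(start_flow))  # Success
```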
    14 replies
  • Pedro Machado


    6 months ago
    Hi everyone. I'd like to understand how memory is managed in a flow. I have a long-running flow that calls an API to get data. The flow works roughly like this:
    • get the list of URLs to retrieve (about 180k URLs)
    • break the URLs into groups of 150 (a list of lists)
    • a mapped task receives a list of 150 URLs and calls the API
    • another mapped task receives the API output for 150 URLs and saves the output to S3
    I am using S3 result caching for the data-intensive tasks (tasks 2 and 3 above) and Prefect Results for the rest of the tasks. I am seeing memory utilization keep increasing until the container runs out of RAM (this is running on ECS Fargate). It seems to keep the data retrieved from the API in memory even after it's saved to S3. I can increase the container RAM, but I'm trying to understand how I could write the flow so that it does not run out of RAM. This is what the memory utilization chart looks like. Eventually, the container dies and Prefect Cloud restarts it. Any suggestions?
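    Generically, keeping memory flat for this shape of pipeline means streaming one batch at a time and dropping references after each upload, instead of accumulating every batch's results. A stdlib sketch of the 150-URL batching described above (function names are hypothetical; fetch and upload stand in for the API call and the S3 write):

```python
def batched(items, size):
    """Yield successive chunks of `size` items without materializing all chunks."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def process(urls, fetch, upload, batch_size=150):
    """Fetch and upload one batch at a time so results can be garbage-collected."""
    uploaded = 0
    for batch in batched(urls, batch_size):
        data = [fetch(u) for u in batch]  # stand-in for the API call
        upload(data)                      # stand-in for the S3 write
        uploaded += len(data)
        # `data` is rebound on the next iteration, so nothing accumulates
    return uploaded

count = process(list(range(400)), fetch=lambda u: u, upload=lambda d: None)
print(count)  # 400
```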
    5 replies