• Jeff Kehler

    5 months ago
    I've created a flow that triggers other flows using the create_flow_run function. I would like to use a Parameter to configure the top-level flow, but it seems parameters are only really meant to be passed into tasks. Is it possible to use the value from a Parameter within a Flow?
    17 replies
  • Konstantin

    5 months ago
    Hi Prefect team, at the moment I can't do anything because Prefect is constantly hanging, and I need help. First, I need to stop all running tasks in the scheduler; I attach a screenshot below. Second, I need to diagnose the problem, fix it, and prevent this situation from recurring. Prefect is hosted on local servers in Docker, not in a cloud service. Tasks run in separate projects, and four agents have been launched that interact with GitLab. The first problem is that an agent freezes while performing a task and, according to monitoring, runs out of RAM. In the task manager ("top" inside the Docker container) there are many python *.py processes whose parent PID is -1.
    8 replies
  • Jan Nitschke

    5 months ago
    Hi Prefect, I want to run a flow on ECS and use GitHub as storage. My Python code imports local modules. The flow definition looks something like:
    from tasks import my_task
    from prefect.storage import GitHub
    from prefect import Flow
    from prefect.run_configs import ECSRun
    
    
    storage = GitHub(
        repo="repo",  # name of repo
        path="path/to/myflow.py",  # location of flow file in repo
        access_token_secret="GITHUB_ACCESS_KEY",  # name of personal access token secret
    )
    
    with Flow(name="foobar",
              run_config=ECSRun(),
              storage=storage) as flow:
        my_task()
    The problem seems to be that the GitHub storage only clones the single flow file and not the entire project, which causes my import to fail with ModuleNotFoundError("No module named 'tasks'"). I've seen some discussion around this issue, but it hasn't really helped me solve it. Is my only option to clone the repo into the custom image that I use for my ECS task? That would mean I would have to rebuild the image every time I change something in my underlying modules, right?
    2 replies
  • Andres

    5 months ago
    Hi everyone, I'm developing a state handler that sends Slack notifications to our team on flow failures with detailed information. I've managed it so far and it runs flawlessly locally. The issue comes when running it on our server: the handler sends a default notification that just says "Some reference tasks failed." I investigated a bit and it seems the state contains the results of all tasks when running locally (state.result), while on the server this is empty (I printed it using the logger). Any idea how to address this?
    11 replies
  • Atul Anand

    5 months ago
    I used the map functionality, but at any given point in time I only see a spike in one CPU. I used a DaskExecutor with 8 workers and 4 threads. Can anyone tell me what the issue is?
    1 reply
  • Tom Klein

    5 months ago
    Hello, I have a question about ad-hoc env vars when running a flow: is it possible to "save" them somehow so they can be re-used, or do I have to refill them manually each time I want to create an ad-hoc run?
    23 replies
  • Shuchita Tripathi

    5 months ago
    Hi. poetry add prefect[azure] is giving an error, and I'm guessing this is why my flows are not working. Without azure they were working fine. What can be done here?
    41 replies
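One common cause worth ruling out: some shells (zsh in particular) try to glob the square brackets, so the extras spec never reaches poetry intact; quoting it as poetry add "prefect[azure]" avoids that. Alternatively, the extra can be declared directly in pyproject.toml; a sketch (the version constraints are assumptions):

```toml
[tool.poetry.dependencies]
python = "^3.8"
prefect = { version = "^1.2", extras = ["azure"] }
```

After editing pyproject.toml, running poetry lock and poetry install applies the change.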
  • Rajan Subramanian

    5 months ago
    Hello all, I had my Prefect Orion cloud running for the last 4 days with no hiccups. Today I logged in and saw that all the deployments had disappeared from the cloud. Is there a time limit for these processes? @Anna Geller
    19 replies
  • Joshua Weber

    5 months ago
    Hey again everyone, is anyone running Prefect with local agents initialized inside Kubernetes pods? When our Kubernetes pods die off and restart (for whatever reason), the old agents continue to live in a "dead" state in Prefect. Eventually the agents accumulate, slow the whole UI down, and deteriorate performance. Does anyone else have this problem or know a way to manage the agents in code?
    9 replies
  • Rajan Subramanian

    5 months ago
    Hello, a couple of questions: 1. After deployment, if I make further changes to my file, how does the new change get incorporated in the cloud? Do I need to do a prefect deployment create deployment_name again for the new changes to take effect? 2. If the above is true, do I need to rerun the tasks again in the UI? 3. Sometimes I inadvertently press run twice and end up with two running processes. Is there any way to stop a process after it has been started? 4. When I delete the workspace to start over, I notice that when I type ps aux | grep python | wc -l, the python processes are still running and I have to do a pkill python to kill them all. Is there any way that once a workspace is deleted, all of its python processes are killed along with it?
    5 replies