    Alberto de Santos

    1 year ago
    Hi folks, does anyone have any idea why, given a standard local agent, after some scheduled tasks (say, every 15 minutes), all the processes start to be queued and are never processed? Thank you!
    Mariusz Olszewski

    1 year ago
    Hi, is it possible to install Prefect on Windows Server 2016?
    Mariusz Olszewski

    1 year ago
    I had these kinds of errors
    Mariusz Olszewski

    1 year ago
    2 replies


    1 year ago
    Hi Prefect Team, it looks like I found a bug, but I need your view: I hit a ModuleNotFoundError while running a flow as a script, and realised I need to start the agent with the "-p" option so that the agent can find the new path, though that did not help. Looking further, I found this code inside the LocalAgent:
    if self.import_paths:
        python_path += self.import_paths
    current_env["PYTHONPATH"] = ":".join(python_path)
    As I am working on Windows, I realised that the colon used to separate the PYTHONPATH entries might not work; it should be a semicolon on Windows.
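    For context, the platform-safe way to join PYTHONPATH entries in Python is `os.pathsep`, which is ";" on Windows and ":" on POSIX. A minimal sketch of the suggested fix (the function name is illustrative, not Prefect's actual code):

    ```python
    import os

    def build_pythonpath(existing_entries, import_paths):
        """Join PYTHONPATH entries with the platform's separator.

        os.pathsep is ';' on Windows and ':' on POSIX, so hard-coding
        ':' breaks path lookups on Windows.
        """
        return os.pathsep.join(list(existing_entries) + list(import_paths))

    # On POSIX this prints "/opt/flows:/usr/lib/mymod";
    # on Windows it would use ";" instead.
    print(build_pythonpath(["/opt/flows"], ["/usr/lib/mymod"]))
    ```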
    5 replies
    Georg Zangl

    1 year ago
    Dear Prefect Team, first of all, congratulations and many thanks for all the hard work you've put into Prefect. I have managed to set up some flows and am able to run them as single flows with single agents. However, I ran into the following issue when trying to run three different flows in three different directories using supervisord: if I use one agent as a program, like:
    command=sudo prefect agent start local -p /usr/dcc/fhn/data_import -p /usr/dcc/fhn/calc_td -p /usr/dcc/fhn/vol_back_all -f
    I can run only two flows; the third one fails with the message "Failed to load and execute Flow's environment: ModuleNotFoundError". It doesn't matter which flow is third; it is always the last one that fails. So I have created two separate programs:
    command=sudo prefect agent start local -p /usr/dcc/fhn/data_import -p /usr/dcc/fhn/calc_td -f -l Import
    command=prefect agent start local -p /usr/dcc/fhn/vol_back_all  -f -l Vol
    to handle all three flows. But now the success of the flows is unstable: sometimes they fail with the "Failed to load.." message, sometimes they succeed. Even if I trigger them manually, they sometimes fail and sometimes succeed. More than 50% of flow runs fail. Here is a log from supervisord:
    2020-10-18 08:55:05,888 DEBG 'fhn1' stdout output:
    [2020-10-18 08:55:05] INFO - prefect.CloudFlowRunner | Beginning Flow run for 'Calculate Equations FHN'
    2020-10-18 08:55:05,903 DEBG 'fhn1' stdout output:
    [2020-10-18 08:55:05] ERROR - prefect.Local | Failed to load Flow from /usr/dcc/fhn/flows/back-allocation-fhn.prefect
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/dist-packages/prefect/environments/storage/local.py", line 103, in get_flow
        return prefect.core.flow.Flow.load(flow_location)
      File "/usr/local/lib/python3.7/dist-packages/prefect/core/flow.py", line 1495, in load
        return cloudpickle.load(f)
      File "/usr/local/lib/python3.7/dist-packages/cloudpickle/cloudpickle.py", line 562, in subimport
    ModuleNotFoundError: No module named 'read_wt'
    2020-10-18 08:55:05,954 DEBG 'fhn1' stdout output:
    No module named 'read_wt'
    The two agents are running fine. All code and agents are local; everything runs on one machine. I am running looped tasks in each of the flows. Here is the code for one of the flows:
    flow = Flow("Data Import FHN")
    flow.set_dependencies(loop_conns, keyword_tasks={"iloop": looplist}, mapped=True)
    with Flow("Data Import FHN") as flow:
        connect = db_conn()
        mapped_result = loop_conns.map(iloop=looplist)
    flow.storage = Local(directory="/usr/dcc/fhn/flows")
    Any help would be appreciated in understanding the unstable performance better.
    Georg Zangl
    13 replies
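    As an aside, the ModuleNotFoundError in the log above is characteristic of unpickling: cloudpickle stores module-level functions by reference, so the process that loads the flow must be able to import the same modules (here 'read_wt'). A minimal stdlib sketch of that failure mode, using plain pickle and a throwaway module name rather than Prefect's actual loading code:

    ```python
    import importlib
    import os
    import pickle
    import sys
    import tempfile

    # Create a throwaway module, standing in for the flow's 'read_wt' dependency.
    workdir = tempfile.mkdtemp()
    with open(os.path.join(workdir, "read_wt_demo.py"), "w") as f:
        f.write("def helper():\n    return 'ok'\n")

    # Pickling a module-level function records only 'read_wt_demo.helper'.
    sys.path.insert(0, workdir)
    helper = importlib.import_module("read_wt_demo").helper
    payload = pickle.dumps(helper)

    # Simulate an agent process that lacks the import path.
    sys.path.remove(workdir)
    del sys.modules["read_wt_demo"]

    error_name = ""
    try:
        pickle.loads(payload)
    except ModuleNotFoundError as exc:
        error_name = type(exc).__name__  # same error class the agent logs

    print(error_name)
    ```

    If this is what is happening, each agent would need "-p" entries covering every module its flows import, which may explain runs that succeed or fail depending on which agent picks them up.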
    Bruce Haggerty

    1 year ago
    Any idea what could be causing a flow to fail with the error "Parameter object has no attribute 'task_run_name'" when using Docker storage? The Docker image built fine and passed its health checks, and the flow runs fine when using Local storage (and a local agent) instead. Note this happens only if I use a custom Dockerfile.
    4 replies


    1 year ago
    Hi, is there a way to provide a default run name for a flow? I know it can be specified manually using the CLI, but I'm hoping to do it programmatically, so that it appears in the UI and can be manually edited if needed.
    2 replies
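    For what it's worth, in Prefect Core a run name can be set programmatically when creating the run through the GraphQL Client. A hedged sketch (the flow ID is a placeholder, and this assumes a configured Cloud/Server backend):

    ```python
    from prefect import Client

    client = Client()  # assumes backend auth is already configured

    # create_flow_run accepts a run_name, which shows up in the UI
    # and can still be edited there afterwards.
    flow_run_id = client.create_flow_run(
        flow_id="<your-flow-id>",      # placeholder
        run_name="nightly-import",     # illustrative name
    )
    print(flow_run_id)
    ```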