• tsar

    1 year ago
    Hi, I'm trying to run prefect server start and I get the error messages below:
    ERROR: Invalid interpolation format for "apollo" option in service "services": "${GRAPHQL_HOST_PORT:-4201}"
    Exception caught; killing services (press ctrl-C to force)
    ERROR: Invalid interpolation format for "apollo" option in service "services": "${GRAPHQL_HOST_PORT:-4201}"
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/site-packages/prefect/cli/server.py", line 332, in start
        ["docker-compose", "pull"], cwd=compose_dir_path, env=env
      File "/usr/lib64/python3.6/subprocess.py", line 311, in check_call
        raise CalledProcessError(retcode, cmd)
    subprocess.CalledProcessError: Command '['docker-compose', 'pull']' returned non-zero exit status 1.
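    This interpolation error usually means the docker-compose binary is too old to understand the ${VAR:-default} default-value syntax that Prefect's compose file uses, so upgrading docker-compose is the likely fix (an assumption about the cause, not confirmed in the thread). A stopgap is to set the variable yourself so the default is never needed; the expansion the compose file expects is plain POSIX:

```shell
# Demonstrate the expansion docker-compose is being asked to perform:
unset GRAPHQL_HOST_PORT            # start clean for the demo
port="${GRAPHQL_HOST_PORT:-4201}"  # use the variable if set, else 4201
echo "$port"                       # prints 4201

# Stopgap if upgrading docker-compose isn't an option: export the value
# explicitly before running `prefect server start`.
export GRAPHQL_HOST_PORT=4201
```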
    2 replies
  • banditelol

    1 year ago
    Hello, I'm quite new to using Prefect, but I've got some experience using Airflow. I've got some questions about "best practice" for using Prefect Server. Currently I have a server instance running Prefect Server, and I put all my flows in a git repository (GitHub). My usual workflow is:
    1. Work on the flow on my local machine, test it using flow.run() to run it locally, and change it to flow.register() when I'm done.
    2. Push the changes to the git repo.
    3. Pull it on the server.
    4. Run the modified/created flow so that it's registered on Prefect Server (btw, I have one agent running in the background on the server).
    5. Activate the flow from the UI.
    I feel that there's clearly a better way to do this, but I haven't found anything yet from googling. I'd really appreciate it if anyone could give me a clue about this. Thanks 🙂
    2 replies
  • Ralph Willgoss

    1 year ago
    Hi, thanks for all the help over the past few weeks/months. I've been evaluating Prefect for a use case, and right now it looks like it doesn't add much for what we do. I wanted to check with you guys to see if I'm missing something or if I'm on the right track. I have a Python model that is very disk intensive. We generate and move around lots of intermediate data, about 180GB, which is going to increase too. I currently run the model using LocalDaskExecutor on a single AWS EC2 instance with about 92 cores and 196GB RAM. While Prefect gives me the ability to scale horizontally, if I were to spread the work across instances, moving the intermediate data between them would be slower than reading it off local disk. So in summary: while I can scale vertically, our model seems limited by disk access, so using something like Prefect to go horizontal appears to add overhead. Thoughts, questions, opinions all welcome.
    12 replies
  • Marwan Sarieddine

    1 year ago
    Hi folks, I am wondering if anyone has encountered this error before:
    Unexpected error: AttributeError("'NoneType' object has no attribute 'is_finished'")
    1 reply
  • Marwan Sarieddine

    1 year ago
    Hi again, what would be the easiest way to programmatically get the tasks that failed, and those that trigger-failed, given a flow run ID?
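    Assuming the Prefect Server GraphQL API is reachable (e.g. via prefect.client.Client().graphql(...)), one approach is to query the task runs for the flow run and filter by state. The query below is a sketch in the Hasura-style 0.x schema, so verify the exact field names against your server's interactive playground; the filtering helper is plain Python:

```python
# Hypothetical GraphQL query (0.x / Hasura-style schema -- field names are
# assumptions; check the GraphQL playground for the real ones):
QUERY = """
query($flow_run_id: uuid) {
  task_run(where: {flow_run_id: {_eq: $flow_run_id}}) {
    id
    state
    task { name }
  }
}
"""

def split_failures(task_runs):
    """Separate failed task runs from trigger-failed ones."""
    failed = [t for t in task_runs if t["state"] == "Failed"]
    trigger_failed = [t for t in task_runs if t["state"] == "TriggerFailed"]
    return failed, trigger_failed

# Example payload shaped like the query result above (made up for the demo):
runs = [
    {"id": "1", "state": "Success", "task": {"name": "extract"}},
    {"id": "2", "state": "Failed", "task": {"name": "transform"}},
    {"id": "3", "state": "TriggerFailed", "task": {"name": "load"}},
]
failed, trigger_failed = split_failures(runs)
print([t["task"]["name"] for t in failed])          # -> ['transform']
print([t["task"]["name"] for t in trigger_failed])  # -> ['load']
```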
    10 replies
  • tsar

    1 year ago
    Is there currently no documentation on how to scale Prefect across multiple hosts? Is this on purpose, so people only use Cloud?
    13 replies
  • Marley

    1 year ago
    I’m trying to add two custom tasks to the end of my flow: one that notifies that the flow finished successfully (trigger=all_successful), and one that only notifies if something failed (trigger=any_failed). I’ve added a custom state handler to the trigger-failed task (see code block in thread) that should SKIP if the state is TriggerFailed. Raising that SKIP is causing the task to fail in Prefect Cloud. I previously tried to leverage the Flow's on_failure, but for some reason it wasn’t sending my notifications. Am I missing something re: raising a SKIP signal on the last task? Is there something special happening in a Flow’s on_failure that prevents it from sending a Slack notification?
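    One pattern that avoids raising SKIP inside the task body is a state handler that returns a Skipped state whenever the new state is TriggerFailed. The sketch below uses stand-in state classes so it runs on its own; real code would use prefect.engine.state.TriggerFailed and Skipped and attach the handler via the task's state_handlers list:

```python
# Stand-ins for prefect.engine.state classes, to keep the sketch runnable:
class State:
    def __init__(self, message=""):
        self.message = message

class TriggerFailed(State):
    pass

class Skipped(State):
    pass

def skip_on_trigger_failed(task, old_state, new_state):
    """State handler: swap TriggerFailed for Skipped instead of raising SKIP."""
    if isinstance(new_state, TriggerFailed):
        return Skipped("Nothing failed upstream; skipping the failure alert.")
    return new_state

result = skip_on_trigger_failed(None, None, TriggerFailed())
print(type(result).__name__)  # -> Skipped
```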
    12 replies
  • Nuno

    1 year ago
    Hello there. What’s the best way to re-run a task automatically? My use case: I’m fetching lots of data, but I can’t fit it all in memory, so I need to re-run the task until I’ve accessed all the data. Thank you
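    A stdlib-only sketch of the usual pattern: loop over page offsets so only one page is ever held in memory, stopping when a page comes back short. fetch_page here is a hypothetical stand-in for the real data source; in Prefect 0.x the equivalent in-task mechanism is raising prefect.engine.signals.LOOP to re-run the task with carried-over state.

```python
def fetch_page(offset, page_size):
    """Hypothetical data source: returns at most page_size items from offset."""
    data = list(range(10))  # pretend this is the remote dataset
    return data[offset:offset + page_size]

def fetch_all(page_size=4):
    """Keep re-running the fetch until a page comes back short or empty."""
    offset, results = 0, []
    while True:
        page = fetch_page(offset, page_size)
        results.extend(page)
        if len(page) < page_size:  # last page reached
            break
        offset += page_size
    return results

print(fetch_all())  # -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```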
    11 replies
  • Isaac Brodsky

    1 year ago
    Anyone familiar with flows failing because of Unexpected error: KeyError('lz4')? It seems the flow itself, rather than a task, is failing. This is using Docker storage and LocalEnvironment/DaskExecutor, with Dask running on Kubernetes. It seems like lz4 is somehow not present where the job is started? I do install pyarrow using python_dependencies in the Docker storage, so I’d expect lz4 to be there. I’m not sure where else lz4 could be missing.
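    Dask negotiates compression between client, scheduler, and workers, and only offers lz4 when the library is importable, so a KeyError('lz4') often means one side has it while another doesn't (an assumption about the cause based on that behavior, not confirmed in the thread). A quick stdlib check you can run in each environment — locally, inside the Docker-storage image, and on the Dask workers:

```python
import importlib.util

def has_lz4():
    """True when the lz4 package is importable in this environment."""
    return importlib.util.find_spec("lz4") is not None

print(has_lz4())
```

    With dask.distributed you could run the same check on every worker via client.run(has_lz4) (hypothetical usage); if any environment is missing it, adding "lz4" to python_dependencies alongside pyarrow should line the environments up.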
    5 replies
  • james.lamb

    1 year ago
    👋 hello from Chicago! I have a question that I haven't been able to answer from the docs or by looking through the prefect source code. help(LocalResult) shows the following:
        LocalResult(dir: str = None, validate_dir: bool = True, **kwargs: Any) -> None
        Result that is written to and retrieved from the local file system.
    Is it fair to say that "local" in this case means "local to where the task is physically run" and not "local to wherever flow.run() is called from"?
    10 replies