• Bruno Murino

    1 year ago
    Hi everyone — I'm using Docker storage, but I want to register the same flow against 2 Prefect Servers (one is PROD, the other is UAT, and we have some others as well). All the Docker images are stored in the same place, so we don't need to build and push the Docker image every time. Is there a way to re-use the Docker storage to deploy to many servers? I tried just "not building", since the image was already built, but I got the error:
    Failed to load and execute Flow's environment: ValueError('Flow is not contained in this Storage')
    Bruno Murino
    Kevin Kho
    3 replies
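For anyone hitting the same wall, a minimal sketch of one way to reuse an already-built Docker storage image across several backends, assuming the Prefect 1.x Python API (the build flag on flow.register and the PREFECT__CLOUD__API environment variable are the assumptions here; the server URLs are made up):

```python
import os

# Hypothetical server endpoints; replace with your own UAT/PROD API URLs.
BACKENDS = {
    "uat": "http://uat-prefect.internal:4200",
    "prod": "http://prod-prefect.internal:4200",
}

def register_everywhere(flow, project_name):
    # Assumes flow.storage was already built and pushed once
    # (e.g. via flow.storage.build()), so each registration passes
    # build=False to reuse the existing image instead of rebuilding.
    for name, api_url in BACKENDS.items():
        os.environ["PREFECT__CLOUD__API"] = api_url  # point at this server
        flow.register(project_name=project_name, build=False)
```

The "Flow is not contained in this Storage" error usually suggests the storage object that was registered never had the flow added to it before building (an assumption worth verifying), so the same built storage instance should be reused for every registration.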
  • Leon Kozlowski

    1 year ago
    Is there any facility to pass a --build-arg for a Dockerfile to the prefect register CLI command?
    Leon Kozlowski
    Kevin Kho
    9 replies
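For reference, a hedged sketch: as far as I know the register CLI itself has no --build-arg flag, but Prefect 1.x Docker storage accepts a build_kwargs dict that is forwarded to docker-py's build call, where buildargs is the equivalent of --build-arg (both names are assumptions to verify against your Prefect and docker-py versions):

```python
def docker_storage_kwargs(build_args):
    """Keyword arguments for prefect.storage.Docker (hypothetical usage)."""
    return {
        "dockerfile": "Dockerfile",
        # buildargs is docker-py's name for `docker build --build-arg` values.
        "build_kwargs": {"buildargs": build_args},
    }

kwargs = docker_storage_kwargs({"PIP_INDEX_URL": "https://pypi.internal/simple"})
```

You would then construct the storage as Docker(**kwargs) in the flow file; the register CLI picks up the same storage object when it loads the flow.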
  • Nishtha Varshney

    1 year ago
    Hello, I was wondering if there's any way we can look at the output of each step using Prefect, or something similar to fn-graph (a Python modelling pipeline library)?
    Nishtha Varshney
    Kevin Kho
    17 replies
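A minimal sketch of one way to inspect per-task outputs after a local run, assuming the Prefect 1.x API, where flow.run() returns a State whose .result maps each task to its own State, and the task State's .result holds the return value (worth verifying on your version; flow.visualize() separately renders the task graph):

```python
def task_outputs(flow_state, tasks):
    # Collect each task's return value from the flow-run State,
    # keyed by task name for easy inspection.
    return {t.name: flow_state.result[t].result for t in tasks}
```

Usage would look roughly like state = flow.run() followed by task_outputs(state, flow.tasks).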
  • Nelson Griffiths

    1 year ago
    Hi, I am using DockerRun for my Flow's environment and I am running into some issues. The Docker image pulls and runs fine, but it uses PyTorch and I am starting to run into shared-memory issues with my dataloader. Normally, with plain Docker, people solve this by passing arguments to the docker run command to increase the shared memory size. Is there any way to change the shared memory of the DockerRun on Prefect?
    Nelson Griffiths
    Kevin Kho
    22 replies
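For context, the usual docker run fix is --shm-size. A hedged sketch of the Prefect side, assuming a Prefect 1.x release where DockerRun accepts a host_config dict that the Docker agent forwards to docker-py's create_host_config (the host_config parameter and the shm_size key are assumptions to verify against your versions):

```python
def docker_run_config_kwargs(image, shm_size="2g"):
    """Keyword arguments for prefect.run_configs.DockerRun (hypothetical)."""
    return {
        "image": image,
        # create_host_config's shm_size controls the container's /dev/shm.
        "host_config": {"shm_size": shm_size},
    }

run_kwargs = docker_run_config_kwargs("my-org/torch-flow:latest")  # hypothetical image
```

You would then set flow.run_config = DockerRun(**run_kwargs); if your Prefect version lacks host_config, upgrading or subclassing the agent would be the fallback.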
  • Nathan Atkins

    1 year ago
    Git/GitHub would be nice to clone the whole repo @Kevin Kho @Michael Duncan @Benjamin R. Everett. We have a bunch of core logic in its own package that we are building into a core base image, to run our specific flows in using the ECS agent. Our specific flows have a fair amount of logic in a couple of modules. We see in the code that both the Git and GitHub storage classes only wind up with the flow.py file getting loaded into the image at runtime, though they get there by two different routes: Git clones the repo, extracts the flow.py, and then deletes the repo; GitHub downloads just the flow.py file. It seems like it wouldn't be all that difficult to modify the Git storage class to not delete the repo after it extracts the flow.py file, and to have the root of the modules on PYTHONPATH in the base image. Then when the flow is executed it would be able to find the modules and everything would be Prefect. While we are all noodling on this, is there anything obvious that keeps us from:
    1. Creating a new GitRepo storage class that keeps the repo around
    2. Adding it to the core package and building it into the core image
    3. Adding where that repo's root will be to PYTHONPATH
    4. Running with the specific modules loaded from Git?
    Nathan Atkins
    nicholas
    4 replies
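The PYTHONPATH half of the idea above can be sketched in plain Python: if the whole repo is present in the image (cloned at build time, or kept around by a custom GitRepo storage as proposed), putting its root on sys.path at the top of the flow file lets the flow import the repo's other modules when Prefect loads it at runtime. /opt/repo is a hypothetical location:

```python
import sys
from pathlib import Path

REPO_ROOT = Path("/opt/repo")  # assumed clone location inside the base image

def ensure_repo_on_path(root=REPO_ROOT):
    # Prepend the repo root so imports of the repo's own modules resolve
    # when Prefect loads flow.py from storage at runtime. Idempotent.
    root = str(root)
    if root not in sys.path:
        sys.path.insert(0, root)
```

Calling ensure_repo_on_path() at the top of flow.py is equivalent to baking the path into PYTHONPATH in the base image, just more explicit.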
  • Jonathan Wright

    1 year ago
    Hello. I would like to understand my options for task output caching. Are my only options for caching (which can be invalidated when inputs/parameters change):
    • Memory, when running Prefect Core locally
    • PrefectResult, when running in Server or Cloud
    Or is it possible to use a different backend such as S3? Ideally I would like to use a combination of target and cache_for. So far I've not got this to work*, is this supported? *the target file is written and used by subsequent flow runs, but it is never invalidated
    import datetime

    from prefect import task
    from prefect.engine.cache_validators import all_parameters
    from prefect.engine.results import S3Result

    result_location_and_target = "cache/{project_name}/{flow_name}/{task_name}.prefect_result"

    s3result = S3Result(bucket="bucket-name", location=result_location_and_target)

    @task(
        cache_for=datetime.timedelta(minutes=10),
        cache_validator=all_parameters,
        checkpoint=True,
        target=result_location_and_target,
        result=s3result,
    )
    def cached_task():
        ...
    Jonathan Wright
    Kevin Kho
    11 replies
  • Blake List

    1 year ago
    Hi there! Can Prefect's map functionality work with dicts or named tuples? Thank you!
    Blake List
    Kevin Kho
    2 replies
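A plain-Python illustration of the iteration question underneath this (the Prefect-specific behaviour is an assumption to verify): Prefect 1.x task.map iterates the sequence it is given, so a bare dict would yield only its keys, while list(d.items()) yields key/value pairs, and a list of namedtuples yields whole tuples:

```python
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])

d = {"a": 1, "b": 2}
keys_only = list(d)        # iterating a dict yields its keys: ["a", "b"]
pairs = list(d.items())    # yields ("a", 1), ("b", 2) pairs instead
points = [Point(1, 2), Point(3, 4)]
xs = [p.x for p in points]  # each mapped element is a whole namedtuple
```

So mapping over list(d.items()) or a list of namedtuples gives each mapped task run one complete element to unpack.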
  • Talha

    1 year ago
    Hi, I am having a problem with the Prefect UI. Sometimes I change a flow and register it, or change a flow label from the UI, but the metadata is not modified. I have to delete the flow and then register it with a different name to make it work. I want to mention that this is happening with only some flows; it is not a regular occurrence. Also, if I delete the flow, it is kept in the archive, and if I register a flow with the same name the re-register does not work. I think metadata from archived flows is messing up the new flow's metadata. Is there a way to delete a flow from the archive (permanently delete it from the database)? @Kevin Kho posted in a new thread. Please refer it to your team.
    Talha
    nicholas
    4 replies
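On the "permanently delete" part, a hedged sketch: Prefect Server exposes a GraphQL API, and a delete_flow mutation (the mutation name and input shape are assumptions; check your server's schema) could be posted through the Python client to remove a flow version outright:

```python
DELETE_FLOW_MUTATION = """
mutation($flowId: UUID!) {
  delete_flow(input: {flow_id: $flowId}) {
    success
  }
}
"""

def delete_flow_permanently(client, flow_id):
    # client is assumed to be a prefect.client.Client; graphql() posts
    # the mutation to whichever backend the client is configured for.
    return client.graphql(DELETE_FLOW_MUTATION, variables={"flowId": flow_id})
```

Whether this purges the archived versions as well is something to confirm against the server's database behaviour.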
  • Kathryn Klarich

    1 year ago
    Hello, I am trying to set up an ECS agent running on an ECS cluster. However, when I run a test flow, the ECS task definition is created but it appears that it is set to [INACTIVE] immediately, so the task doesn't actually get started (it shows as running in the AWS console, but with the [INACTIVE] flag) and my flow doesn't get executed. I don't see any logs other than "Submitted for execution", and eventually after three retries it marks the flow as failed: "A Lazarus process attempted to reschedule this run 3 times without success. Marking as failed." On the agent side, the only log I see is "Deploying flow run". It appears that the agent is creating the task definition, running the task, and then immediately de-registering it (as seen here), which could be the problem, as I don't see how the task can run if the definition is immediately de-registered before the task run is complete. I have looked through this issue, but I don't think this is the same problem because I am using Prefect Cloud. Any help is much appreciated.
    Kathryn Klarich
    nicholas
    +3
    56 replies
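One detail that may help others debugging this: in ECS, deregistering a task definition revision marks it INACTIVE, which prevents *new* tasks from being launched from that revision but does not stop tasks already started from it, so the INACTIVE flag alone should not kill a running flow. A small sketch for checking a revision's status (the boto3 ECS client is passed in; the call follows the describe_task_definition API):

```python
def task_definition_status(ecs_client, family_revision):
    # ecs_client is a boto3 ECS client, e.g. boto3.client("ecs");
    # family_revision looks like "prefect-my-flow:7" (hypothetical name).
    resp = ecs_client.describe_task_definition(taskDefinition=family_revision)
    return resp["taskDefinition"]["status"]  # "ACTIVE" or "INACTIVE"
```

If the task still dies, the cause is more likely in the container itself (networking, image pull, or credentials) than in the deregistration.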