# ask-community
α
I am using a deployment to dynamically create more deployments. However, when the deployment is created, the cron schedule does not go through. When I isolate the flow code in a separate script, the deployment works perfectly, including the cron schedule. It looks like a bug to me. Any ideas?
# master_deployment.py

from prefect import flow, task
from prefect.runner.storage import GitRepository
from prefect_github import GitHubCredentials
from prefect.docker import DockerImage

@flow
def master_deployment_flow(client_id: str, client_preferences: dict):
    # This flow triggers the creation of a client-specific deployment
    source = GitRepository(
        url="actual_url",
        credentials=GitHubCredentials.load("actual_credentials")
    )

    # Define and deploy the client-specific deployment
    client_deployment_id = flow.from_source(
        source=source,
        entrypoint="scripts/flow.py:flow_function",
    ).deploy(
        name=f"{client_id}",
        work_pool_name="my-ecs-pool",
        image=DockerImage(
            name="actual_docker_image",
            platform="linux/amd64",
            dockerfile="Dockerfile.prefect"
        ),
        job_variables={
            "parameters": {
                "client_id": client_id,
                "client_preferences": client_preferences
            }
        },
        cron="*/10 * * * *",
        push=False,
        build=False
    )

    print(f"Deployment ID for client {client_id}: {client_deployment_id}")

if __name__ == "__main__":
    # Deploy the master flow itself (this only needs to be done once)
    source = GitRepository(
        url="actual_url",
        credentials=GitHubCredentials.load("github-creda-1")
    )

    master_deployment_id = flow.from_source(
        source=source,
        entrypoint="prefect_deployments/master_deployment.py:master_deployment_flow"
    ).deploy(
        name="master-deployment",
        work_pool_name="my-ecs-pool",
        image=DockerImage(
            name="...actual docker image...",
            platform="linux/amd64",
            dockerfile="Dockerfile.prefect"
        ),
        push=False,
        build=False
    )
Also, even when the deployment is created successfully, the scheduled flow runs do not receive the parameters. We are at a standstill and really struggling with how to continue.
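One way to confirm what actually landed on the server is to read the deployment back with the Prefect client. A minimal sketch, assuming Prefect 2.14 or later (where deployments carry a `schedules` list) and the `client_deployment_id` UUID returned by `.deploy()` above:

import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client

async def show_deployment(deployment_id: UUID):
    # Read the deployment record back from the Prefect API.
    async with get_client() as client:
        deployment = await client.read_deployment(deployment_id)
    # If the cron made it through, it appears here as a schedule entry.
    print("schedules: ", deployment.schedules)
    # Parameters set via `parameters=` (not `job_variables`) show up here.
    print("parameters:", deployment.parameters)

# e.g. asyncio.run(show_deployment(client_deployment_id))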
@Bianca Hoch Thank you so much for the feedback. As you can see, we have already moved to deploying programmatically, and we are already using the PAT inside the block. The schedule is still acting up, and I wonder whether we are missing something.
b
Hi Αλκιβιάδης, I can take a look and see if there are any suggestions I can make here. Can you walk me through the pattern you're looking to achieve? Are there multiple flows you're looking to schedule to run every 10 minutes? Are they all stored in the same GitHub repo?
α
Essentially, we have deployed a flow that, every time it's called, creates another deployment with a schedule. But unfortunately, this doesn't seem to work. We're also getting this error:
Failed due to a(n) crashed flow run - Flow run process exited with non-zero status code 1. Further investigation needed to determine the cause of the crash.
which explains, well, nothing at all. Where else can I get more logs? I have an ECS push work pool.
Is there a better practice for dynamically creating deployments with parameters and a schedule?
b
I took some time to build out a small MRE of the pattern you're proposing (using a local process work pool). I got it to work, and the parameters and schedule are passing correctly into the new deployments created by the `master_deployment_flow`. One thing I noticed that may be causing problems for you:
• Parameters for the client-specific deployments are being set within the `job_variables` of your `.deploy()` call. Try setting them in the `parameters` argument instead (see below).
# master_deployment.py

from prefect import flow
import os

@flow(log_prints=True)
def master_deployment_flow(client_id: str, client_preferences: str):

    # Define and deploy the client-specific deployment
    client_deployment_id = flow.from_source(
        source="<https://github.com/PrefectHQ/prefect.git>",
        entrypoint="flows/hello_world.py:hello"
        ).deploy(
            name=f"{client_id}",
            work_pool_name="my_process_pool",
            cron="*/10 * * * *",
            parameters={"name": client_preferences},
            push=False,
            build=False,
        )

    print(f"Deployment ID for client {client_id}: {client_deployment_id}")

if __name__ == "__main__":
    master_deployment_flow.from_source(
        source=os.path.dirname(__file__),
        entrypoint=f"{os.path.basename(__file__)}:master_deployment_flow"
        ).deploy(
            name="master-deployment",
            work_pool_name="my_process_pool",
            parameters={"client_id": "client-a", "client_preferences": "Marvin"},
        )
If you look at my `if __name__ == "__main__"` block, I also set the default parameter values for the `master_deployment_flow` there as well (`client_id` and `client_preferences`).
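To exercise the pattern end to end, the master deployment can then be triggered with different parameters per client. A minimal sketch using Prefect's `run_deployment` helper, assuming the default flow-name slug `master-deployment-flow` and hypothetical client values:

from prefect.deployments import run_deployment

# Kick off the master flow for a new client; the master flow in turn
# creates (or updates) that client's scheduled deployment.
flow_run = run_deployment(
    name="master-deployment-flow/master-deployment",  # "<flow-name>/<deployment-name>"
    parameters={"client_id": "client-b", "client_preferences": "Ford"},
)
print(flow_run.id)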
> Where else can I get more logs? I have an ECS push work pool.
Enabling debug-level logging may help you out here. You can set it as an env var in your work pool's base job template: `PREFECT_LOGGING_LEVEL=DEBUG`
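If editing the base job template is inconvenient, the same variable can also be passed per deployment through the `env` key of `job_variables`. A minimal sketch against the process pool from the MRE above; the same `env` job variable should work for ECS-type pools too, assuming the pool's base job template exposes an `env` field (the built-in templates do):

from prefect import flow

# Deploy the example flow with debug logging enabled for its runs only,
# rather than turning it on for every run in the work pool.
flow.from_source(
    source="https://github.com/PrefectHQ/prefect.git",
    entrypoint="flows/hello_world.py:hello",
).deploy(
    name="hello-debug",
    work_pool_name="my_process_pool",
    job_variables={"env": {"PREFECT_LOGGING_LEVEL": "DEBUG"}},
    push=False,
    build=False,
)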
α
That looks lovely. I really appreciate you taking the time to replicate this. We will try to implement it first thing tomorrow.
b
ofc! Let me know what you find. FWIW, you should be able to run my MRE in your own environment. The flow that's being deployed by the `master_deployment_flow` lives here: https://github.com/PrefectHQ/prefect/blob/main/flows/hello_world.py