# ask-marvin
d
@Marvin I want to reuse a Docker image in different deployments. However, when using `prefect.deploy` I am bound to only one work pool. Is there a way to deploy multiple flows with different work pools but the same image (and the same Python req.txt)?
m
In Prefect 2.x, you can reuse a Docker image across deployments. By default, container-based work pools use the `2-latest` Prefect image, but you can specify another image when creating the work pool, and the image can be overridden per deployment. To deploy multiple flows to different work pools with the same Docker image, Prefect supports multiple deployment declarations within the `prefect.yaml` file. This lets you keep all deployments version controlled and deploy them with a single command, specifying a different work pool name for each deployment while reusing the same Docker image. An example of how you would declare deployments in the `prefect.yaml` file with the same image but different work pools might look like this:
```yaml
deployments:
  - name: deployment-1
    work_pool:
      name: my-work-pool-1
      job_variables:
        image: my-registry/my-image:dev
    # entrypoint, flow settings, tags, etc.
  - name: deployment-2
    work_pool:
      name: my-work-pool-2
      job_variables:
        image: my-registry/my-image:dev
    # entrypoint, flow settings, tags, etc.
```
In this example, `deployment-1` and `deployment-2` use the same Docker image, `my-registry/my-image:dev`, but they are directed to different work pools, `my-work-pool-1` and `my-work-pool-2`, respectively. Each deployment declaration must have a unique `name`, and you can pass the necessary settings for each flow, such as the flow's entrypoint, parameters, schedules, and so on. When you run the `prefect deploy` command, Prefect reads the `prefect.yaml` file and creates deployments according to the specified configurations, so you can manage different deployment scenarios, including different work pools, while keeping the same Docker image across all of your flows.
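As a rough sketch of the corresponding CLI usage, assuming the standard Prefect 2.x `prefect deploy` flags (`--all`, `--name`) and the hypothetical deployment names from the example above:
```bash
# Register every deployment declared in prefect.yaml
prefect deploy --all

# Or target specific declarations by name
prefect deploy --name deployment-1 --name deployment-2
```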