# marvin-in-the-wild
g
@Marvin Hello! I was wondering if it's possible to set up a Prefect Cloud work pool with workers from separate Kubernetes instances, or otherwise to split work between two Kubernetes locations. For my use case, I want redundancy in case a physical Kubernetes cluster fails. The way I'd like to accomplish that is to have workers in two locations/clusters that can both pick up a deployment's work, so if one location went down, the other would seamlessly keep working. Is this something that's possible? My thought would be to configure a work pool so that it has workers from multiple locations; I'm just not sure whether that's actually supported (and if so, how to do it).
m
It appears that Prefect Cloud work pools allow infrastructure configuration to be customized directly from the Prefect UI, including the Kubernetes job manifest (base job template) used to run flows. Advanced users can inject runtime information and define custom template variables, which could potentially accommodate multiple Kubernetes clusters. The information available doesn't detail how to attach workers from different clusters to a single work pool, and no open GitHub issues discuss a work pool with workers from separate Kubernetes instances. For a definitive answer and step-by-step guidance, I'd recommend reaching out to Prefect support or consulting the Prefect Cloud documentation on work pools. The customization work pools offer may be the key to achieving redundancy and distributing work between Kubernetes clusters as desired.
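To make the customization point concrete, here is a minimal sketch of overriding a Kubernetes work pool's template variables per deployment. It assumes Prefect 3.x, a Kubernetes-type pool named `k8s-pool`, a pre-built image, and that the pool's base job template exposes the default `namespace` and `image_pull_policy` variables; all names are placeholders, so check your pool's actual template in the UI.

```python
from prefect import flow


@flow(log_prints=True)
def etl():
    print("hello from the cluster")


if __name__ == "__main__":
    # Deploy to a Kubernetes-type work pool; job_variables override fields
    # exposed by the pool's base job template (the variable names below are
    # defaults for the Kubernetes worker, but verify them in your pool).
    etl.deploy(
        name="etl-k8s",
        work_pool_name="k8s-pool",                # placeholder pool name
        image="registry.example.com/etl:latest",  # placeholder, pre-built image
        build=False,
        push=False,
        job_variables={
            "namespace": "data-pipelines",
            "image_pull_policy": "Always",
        },
    )
```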
n
yeah you could have 2 k8s type work pools
b
^^
g
awesome, thanks!
b
Just my 2 cents, but if you want some sort of failover handling for scheduled work, you could create two sets of deployments for your flows (one that submits work to k8s pool #1, and another for k8s pool #2). From the get-go, you could enable the schedule for deployment A and disable it for deployment B. Then you could create an automation which triggers if flow runs from deployment A enter a `crashed` state. The actions of that automation could then be to pause the schedule for deployment A and enable the schedule for deployment B (which goes to your backup cluster).
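A rough sketch of the two-deployment side of this pattern, assuming Prefect 3.x, two Kubernetes-type work pools (one served by workers in each cluster), and placeholder pool, deployment, and image names; if your version's `deploy()` doesn't accept `paused`, leave the backup schedule off and toggle it in the UI instead. The crash-triggered automation itself would still be configured in Prefect Cloud (a flow-run `Crashed` trigger whose actions pause deployment A's schedule and resume deployment B's).

```python
from prefect import flow


@flow(log_prints=True)
def nightly_job():
    print("running the nightly job")


if __name__ == "__main__":
    image = "registry.example.com/nightly:latest"  # placeholder, pre-built image

    # Deployment A: primary cluster, schedule active from the start.
    nightly_job.deploy(
        name="nightly-primary",
        work_pool_name="k8s-cluster-a",  # pool served by workers in cluster A
        image=image,
        build=False,
        push=False,
        cron="0 2 * * *",
    )

    # Deployment B: backup cluster, same schedule but created paused;
    # the crash automation resumes it if deployment A starts crashing.
    nightly_job.deploy(
        name="nightly-backup",
        work_pool_name="k8s-cluster-b",  # pool served by workers in cluster B
        image=image,
        build=False,
        push=False,
        cron="0 2 * * *",
        paused=True,  # assumes deploy() supports paused in your Prefect version
    )
```

Each cluster then runs its own worker against its own pool (for example, `prefect worker start --pool k8s-cluster-a` inside that cluster), so losing one cluster only stops the pool it serves.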
catjam 1
g
This is a great alternative, I've added it as an option for my team to investigate. Thank you!
🎊 1