
Bogdan Serban

07/08/2022, 2:04 PM
Hello everyone! I am planning to build an image processing pipeline that consists of both ML and non-ML processing. I am planning to build each processing step as an independent function (Prefect Task) and apply these functions successively to each image. I will be pulling the images from a cloud storage container. The ML processing will require some GPU acceleration. My question here is twofold:
1. How do I load and share the ML models used to run the inference? I have some PyTorch models right now.
2. Is it possible to specify what type of node (GPU/non-GPU) each task will run on? I want the ML inference tasks to run on GPU nodes and the non-ML ones on CPU nodes.
I would really appreciate your answers! Thanks!
👀 1
✅ 1

Anna Geller

07/08/2022, 2:16 PM
How would you do it in Python? If you can write the pipeline for it in Python, you can orchestrate and operationalize it with Prefect. For GPU, an easy way would be to spin up an agent with EKS-managed nodegroups that have embedded GPUs and a preinstalled NVIDIA Kubernetes device plugin. This can be created with a single eksctl command:
eksctl create cluster --name=prefect-eks-gpu --node-type=p2.xlarge --nodes=2
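On the model-loading part of the question, a common pattern is to load the model once per worker process and memoize it, so every inference call reuses the same weights instead of reloading them from storage for each image. Here is a minimal stdlib-only sketch of that pattern; the function names, the dict standing in for a `torch.nn.Module`, and the model path are all illustrative, not Prefect or PyTorch API (in a real pipeline `load_model` and `run_inference` would be Prefect tasks and the model would come from `torch.load`):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # memoized: the model is loaded once per worker process
def load_model(path: str) -> dict:
    # Illustrative stand-in for torch.load(path); the path is hypothetical.
    print(f"loading model from {path}")
    return {"path": path, "weights": [0.1, 0.2, 0.3]}

def run_inference(image: bytes, model_path: str) -> int:
    # Every call after the first reuses the cached model object.
    model = load_model(model_path)
    # Dummy "inference" so the sketch is self-contained and checkable.
    return len(image) * len(model["weights"])

# Simulate applying the pipeline to several images pulled from storage.
results = [run_inference(img, "s3://my-bucket/model.pt")
           for img in [b"img1", b"img22", b"img333"]]
print(results)
```

With the cache in place, the model load happens once and the remaining calls are cache hits; the same idea works whether the callable is a plain function or a Prefect task body.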

Bogdan Serban

07/08/2022, 2:20 PM
Thank you for your answer and for the linked resources! Will definitely look through them.
🙌 1