# ask-marvin
j
@Marvin I want to use the "work queue" feature without `.deploy`
m
It seems that I haven't found specific information in the Prefect documentation for your query about using the "work queue" feature without `.deploy`. However, the documentation does discuss workflows and task queues in the context of task execution and dependency management. If you are looking to use work queues in Prefect without the typical `.deploy` call, you may need to look into agentless techniques or other ways of executing work that do not involve a standard deployment to an agent. For more specific information about Prefect's capabilities without `.deploy`, it would be useful to check the Prefect community discussions, GitHub issues, or the more in-depth documentation sections on task execution and agentless setups. For further assistance, please consider the following resources:

- Work Queues and Workflows: Work Pool Concepts
- Blog post discussing deployment and workflows: Prefect Blog Post

If you have a more specific use case or need further clarification, please let me know, and I can perform a more targeted search or provide some code examples if necessary.
n
all work pools have configurable work queues that you're free to use regardless of how you're creating your deployments. are you trying to do something specific?
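For example, work queues can be created against any existing work pool from the CLI. A hedged sketch, assuming a pool named `ml-pool` already exists (pool and queue names are illustrative; check flag spellings against `prefect work-queue create --help` for your Prefect version):

```shell
# Create a queue reserved for latency runs, capped at one concurrent run
# (names "ml-pool", "latency", "accuracy" are made up for this example):
prefect work-queue create latency --pool ml-pool --limit 1

# A second queue for accuracy runs with a looser concurrency cap:
prefect work-queue create accuracy --pool ml-pool --limit 2
```

A deployment can then target one of these queues by name, independently of how the deployment itself is created.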
j
@Nate Thank you for your response, Nate. From what I've gathered, it seems that Prefect doesn't support a work queue when using the `.submit()` method with a flow, correct? Additionally, I'm unable to use GitRepository since I'm working within a private repository, and it appears that containers created by the Docker worker are unable to detect GPUs.
n
there is not a submit method for flows at this time, only for tasks. work queues are ways to stagger work delivery in a work pool, and the location of your source code should in principle have nothing to do with your runtime environment's ability to access GPUs. feel free to ask a specific question on one of the above problems you're having
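The "stagger work delivery" idea above can be pictured with a small, self-contained simulation. This is not Prefect code; it is a toy model, with made-up queue names, of the semantics described here: queues in a pool are polled in priority order, and each queue hands out runs only up to its own concurrency limit.

```python
from collections import deque


class ToyWorkPool:
    """Toy model of a work pool with prioritized, rate-limited work queues."""

    def __init__(self):
        # name -> (priority, concurrency limit, pending runs)
        self.queues = {}
        # name -> number of currently in-flight runs
        self.running = {}

    def add_queue(self, name, priority, limit):
        self.queues[name] = (priority, limit, deque())
        self.running[name] = 0

    def submit(self, queue_name, run):
        self.queues[queue_name][2].append(run)

    def get_next_runs(self):
        """Start every run that may start now: queues are visited in
        priority order (lower number first), and no queue exceeds its limit."""
        started = []
        for name, (_, limit, pending) in sorted(
            self.queues.items(), key=lambda kv: kv[1][0]
        ):
            while pending and self.running[name] < limit:
                started.append(pending.popleft())
                self.running[name] += 1
        return started

    def finish(self, queue_name):
        """Mark one run from the given queue as complete, freeing a slot."""
        self.running[queue_name] -= 1


pool = ToyWorkPool()
pool.add_queue("latency", priority=1, limit=1)
pool.add_queue("accuracy", priority=2, limit=2)
for run in ["lat-1", "lat-2"]:
    pool.submit("latency", run)
for run in ["acc-1", "acc-2", "acc-3"]:
    pool.submit("accuracy", run)

print(pool.get_next_runs())  # ['lat-1', 'acc-1', 'acc-2']
pool.finish("latency")
print(pool.get_next_runs())  # ['lat-2']
```

The latency queue's limit of 1 keeps a second latency run from starting until the first finishes, while accuracy runs queue up behind their own independent limit.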
j
@Nate Ah, my mistake; I wasn't referring to the `.submit` method, but rather the `.serve` method. In my situation, there are three main challenges:

1. Initially, it seemed that deploying the flow using the `.serve` method would be the most straightforward approach. However, I found that `.serve` doesn't support work queue functionality (especially concurrency; a priority queue with a concurrency limit per queue feels more intuitive to me than global or tag-based concurrency controls), which led me to consider alternative solutions.
2. Consequently, I explored using a local subprocess-type work pool, but this approach hit a snag: deployment in a local subprocess-type work pool demands a GitRepository, which doesn't fit my current use case.
3. Therefore, I shifted to deploying in a Docker-type work pool, which allows transferring source code from the host environment into the container environment without requiring a GitRepository. This seemed promising, but now I'm facing issues with GPU detection.

My specific engineering requirement is to process machine learning model latency benchmarking requests on a single GPU, without interference from accuracy benchmarking requests. While I anticipate scaling this feature horizontally in the future, it's not a concern at the moment. Thank you for your assistance!
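A minimal sketch of point (3), assuming Prefect 2.x with `flow.deploy` available and a Docker-type work pool already created. All names here (the pool, the queue, the image tag) are assumptions for illustration, not tested configuration:

```python
from prefect import flow


@flow(log_prints=True)
def latency_benchmark(model_name: str) -> None:
    # Placeholder body; the real flow would run the latency benchmark.
    print(f"benchmarking {model_name}")


if __name__ == "__main__":
    latency_benchmark.deploy(
        name="latency-benchmark",
        work_pool_name="docker-pool",          # assumed Docker-type work pool
        work_queue_name="latency",             # route runs onto a dedicated queue
        image="my-registry/benchmark:latest",  # assumed image name
        push=False,
    )
```

As noted above, GPU visibility inside the container is governed by the Docker worker's job configuration rather than by where the source code lives; exposing GPUs typically means customizing the work pool's base job template so the created containers request GPU devices.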
n
I see! thanks for clarifying. let me follow up later on these points (i happen to be quite busy today šŸ™‚):
• Deployment in a local subprocess-type work pool demands a GitRepository
• GPU access with a Docker work pool
šŸ™ 1