Hey folks, I'm looking into migrating a Django + Celery app to use Prefect for async task processing. We run X Celery workers, which all listen to a message broker (Redis/RabbitMQ/etc.), and our API adds jobs to the queue as users trigger events. Pretty standard stuff.
I'm trying to recreate a basic version of this in Prefect. I've got a Prefect server running, and several workers running in Docker containers, each joining a work pool. The workers have the application code baked into the image. In Celery-land, I'd just trigger jobs by calling `my_decorated_func.apply_async([args])`, e.g. `say_hello_world.apply_async(["Marvin"])`, and the workers would pick up the jobs, set up app internals (environment config et al.), and run the decorated function automatically.
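For concreteness, here's a stripped-down sketch of the Celery side (app name and broker URL are just examples):

```python
# tasks.py -- minimal sketch of the current Celery setup
from celery import Celery

app = Celery("myapp", broker="redis://localhost:6379/0")

@app.task
def say_hello_world(name):
    # in the real app this runs after Django/env config is set up
    print(f"Hello, {name}!")
```

and the API side just does `say_hello_world.apply_async(["Marvin"])` and moves on.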
I'm not seeing an obvious way to do this with Prefect. I can call my `say_hello_world` flow directly, and it'll run locally, but I need it to run in the work pool. Calling `.deploy()` tries to register it with the default work pool, which is great, but it complains about needing an entrypoint or image. I saw some comments online about using 'local storage' to point at the specific file the flow is in, i.e. `/path/to/file/flow.py:say_hello_world`, but... there's no way that's the "right" way to queue a job, right?
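For reference, this is roughly what I understand those comments to be suggesting (deployment and pool names are made up, and I'm not sure a local path is even the intended use of `from_source`):

```python
# deploy.py -- roughly what the 'local storage' suggestion looks like, I think
from prefect import flow

@flow
def say_hello_world(name: str):
    print(f"Hello, {name}!")

if __name__ == "__main__":
    flow.from_source(
        source="/path/to/file",                # directory containing flow.py
        entrypoint="flow.py:say_hello_world",  # file:flow-function
    ).deploy(
        name="say-hello-world",    # made-up deployment name
        work_pool_name="my-pool",  # made-up work pool name
    )
```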
I get that the Prefect control plane allows for total independence between the place that's queueing jobs and the place that's executing them, but in my case, they're both the same Docker image, just with different entrypoints (starting the API vs. starting the Prefect workers). What's a clean way to just say "look for this exact same decorated function in the worker", essentially as if it were running locally but in a different container?
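To put it in code, what I'm hoping for on the API side is something like this, where `run_deployment` is the closest thing I've found so far (and assuming a deployment named `say-hello-world/default` already exists, which is the part I can't figure out how to set up cleanly):

```python
# api_views.py -- the fire-and-forget trigger I'm after, if this is the way
from prefect.deployments import run_deployment

def handle_user_event():
    # like apply_async: enqueue the run and return without waiting on it
    run_deployment(
        name="say-hello-world/default",  # assumed flow-name/deployment-name
        parameters={"name": "Marvin"},
        timeout=0,  # return immediately instead of blocking until completion
    )
```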
CC @Marvin