Claudiu
03/25/2025, 11:02 AM
It seems we need a work pool in order to schedule flows, but since our deployment will be tied to specific hardware (a Jetson), we have to support an emergency use case without any cloud or work pool access. Our orchestration and infrastructure layers are on the same hardware, so we don't need to separate them. What's the solution for us to be able to schedule flows without work pools?
• Is there a way to pause/resume flows without work pools? Right now it seems we NEED to create a work pool to have this functionality.
• prefect.deploy or prefect.serve seem like good tools for remote deployment, but that just isn't our use case.
• Do work pools make sense for our specific use case, or is there an entity that we can use instead?
Currently we have a YAML file that provides the scheduling details for a flow, but it's a very convoluted process. Having the ability to directly schedule a task when needed would simplify our process (more details in the thread).
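For context, what we'd ideally like is to trigger flows in-process from our own code, with no work pool involved. A rough sketch of the idea (the flow body and the 3-second interval are placeholders, not our real code):

import time
from prefect import flow

@flow
def health_check_flow():
    # placeholder body; the real flow runs our hardware health checks
    print("health check tick")

def run_forever(frequency_s: float = 3.0):
    # our own scheduler loop on the Jetson: no cloud, no work pool
    while True:
        health_check_flow()  # a plain call still goes through Prefect's engine as a flow run
        time.sleep(frequency_s)

if __name__ == "__main__":
    run_forever()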
Issue no. 2: serialization issues
• We have some entities that can't be easily serialized, and custom serialization logic would require additional parts of the system that aren't implemented in the scope of the POC. We know you have some serializers, but they don't work for our entities.
• We also have some singleton classes that act as a "syncing" element in our system. Is there a better alternative for managing state in a single-machine, all-in-one deployment?
• We're currently using the default task runner. Is there any benefit to using another one (like DaskTaskRunner), given that we don't need distributed cognition for the POC?
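To make that last point concrete, my understanding is that switching task runners would only change the flow decorator, roughly like the sketch below (placeholder task, and it assumes the prefect-dask collection, which we haven't actually tried):

from prefect import flow, task
from prefect_dask import DaskTaskRunner  # assumption: prefect-dask installed

@task
def check_component(name: str) -> bool:
    # placeholder health check, not our real logic
    return True

# default task runner: plain @flow, tasks run concurrently in one process
@flow
def health_check_default():
    for name in ["camera", "storage", "network"]:
        check_component.submit(name)

# Dask variant: same flow body, only the task_runner argument changes
@flow(task_runner=DaskTaskRunner())
def health_check_dask():
    for name in ["camera", "storage", "network"]:
        check_component.submit(name)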
Claudiu
03/25/2025, 11:03 AM
HealthCheckerPipeline:
  total_executions: -1  # runs indefinitely
  delay: 0              # sec
  frequency: 3          # sec
  expired: False        # always false
-> Based on the config we create a thread per pipeline (flow), and we schedule it like this:
data = strategy.get_data()
try:
    # only submit when the strategy says it's time for this pipeline to run
    if strategy.should_execute(start, next_trigger):
        future = self.executor.submit(
            self._execute, data, start, pipeline_name
        )
except Exception:
    # exception handling continues below (cut off in this snippet)
    raise
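If it helps, here is a condensed, self-contained version of what that per-pipeline loop amounts to; heavily simplified, and the names and structure are illustrative rather than our exact code:

import time
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(strategy, executor: ThreadPoolExecutor, pipeline_name: str,
                 total_executions: int = -1, delay: float = 0, frequency: float = 3,
                 expired: bool = False):
    # illustrative only: map the YAML fields onto a simple scheduling loop
    time.sleep(delay)                          # 'delay' before the first trigger
    executions = 0
    while not expired and (total_executions < 0 or executions < total_executions):
        start = time.monotonic()
        next_trigger = start + frequency
        data = strategy.get_data()
        if strategy.should_execute(start, next_trigger):
            executor.submit(strategy.execute, data, start, pipeline_name)
            executions += 1
        time.sleep(frequency)                  # 'frequency' seconds between triggers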
---
Claudiu
03/25/2025, 11:37 AM