# marvin-ai
j
@Marvin i have `PREFECT_REDIS_MESSAGING_HOST` and other environment variables set. My tasks run fine, everything connects fine. The keys added to Redis look like this:
```
03:05:04 SET -> message:messaging-cache:01981657-042e-7d0c-ae52-56a84a06ac91
03:05:04 SET -> message:messaging-cache:01981657-10e1-76dd-bf19-c793fa912490
03:05:04 SET -> message:messaging-cache:01981657-10fe-7b2e-b388-00b15e3d616d
03:05:04 SET -> message:messaging-cache:01981657-9cd5-71fa-9c2c-d30f379e1848
03:05:04 SET -> message:messaging-cache:6a539075-d51a-4479-867f-0e3059f9a103
```
i'd like to note that everything is working as intended except that the tasks are not showing up in the UI and i get a "This task did not generate any task or subflow runs" message. when i stop using redis as a message broker, the UI works as intended. i want to emphasize that everything else is working as intended; there are no connectivity or accessibility issues.
m
thought for 45.5 seconds
Based on your description and the research, I have a few thoughts about this situation:

1. First, I want to acknowledge that you've emphasized everything is working correctly from an execution standpoint - the tasks are running, Redis is connecting, and messages are being properly set in Redis. This is important because it rules out basic connectivity issues.
2. The behavior you're seeing - tasks not showing up in the UI but working otherwise - suggests this might be related to how events are being propagated to the UI rather than an issue with the actual task execution.
3. There are a few potential solutions to investigate:
   a. Event Retention Settings: Check if you have any custom settings for `PREFECT_EVENTS_MAXIMUM_RETENTION_DAYS` or `PREFECT_EVENTS_MINIMUM_RETENTION_DAYS`. These could affect how events are stored and displayed.
   b. Redis Configuration: While your Redis connection is working (as evidenced by the keys being set), you might want to verify these environment variables are set correctly: `PREFECT_API_DATABASE_CONNECTION_URL` and `PREFECT_API_URL`.
   c. Worker Settings: If you're using a worker, ensure it's configured to properly report task states back to the API.
4. To troubleshoot this:
   - Try running with debug logging enabled (`PREFECT_LOGGING_LEVEL=DEBUG`) and check if there are any relevant messages about event propagation or UI updates.
   - Verify that your Prefect server is running and accessible from where your flows are executing.
   - Check if the behavior is consistent across different flows/tasks or if it's specific to certain patterns.

Given that everything works when not using Redis as a message broker, this strongly suggests the issue lies in how events are propagated through the Redis messaging system to the UI, rather than in the task execution itself.

Would you be able to share:
1. The version of Prefect you're using?
2. Whether you're using a worker or running flows directly?
3. Any other relevant environment variables you have set besides `PREFECT_REDIS_MESSAGING_HOST`?

This would help me provide more specific guidance for your situation.
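As a first sanity check before changing anything, it can also help to confirm what the running process actually sees - a minimal sketch using only the standard library:

```python
# Minimal sketch: dump every PREFECT_* variable visible to this process, to
# confirm the Redis messaging settings actually reach the worker/flow-run
# environment (and not just your local shell).
import os

for key in sorted(os.environ):
    if key.startswith("PREFECT_"):
        print(f"{key}={os.environ[key]}")
```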
j
@Marvin the connection URL and API URLs are set correctly - otherwise the tasks wouldn't run.
1. version 3.4.5
2. i'm using a worker - i've tried using a cloud run worker and a process worker, but both result in the same thing. how do i run the flow directly?
3. my only redis-related environment variables are:
```
PREFECT_MESSAGING_BROKER=prefect_redis.messaging
PREFECT_MESSAGING_CACHE=prefect_redis.messaging
PREFECT_REDIS_MESSAGING_DB=0
PREFECT_REDIS_MESSAGING_HOST=10.142.0.16
PREFECT_REDIS_MESSAGING_PORT=6379
```
m
thought for 67.7 seconds
Based on your setup and the research, here are a few specific things to try:

1. Event Granularity Settings: Add this environment variable to potentially improve event propagation:
```
PREFECT_EVENTS_PROACTIVE_GRANULARITY=1
```
2. Redis Lock Management: Since you're using Redis for messaging, you might want to explicitly set the Redis lock manager:
```
PREFECT_REDIS_LOCK_MANAGER=prefect_redis.lock
```
3. Debug Mode: To get more insight into what's happening, you can enable debug logging specifically for the messaging components:
```
PREFECT_LOGGING_LEVEL=DEBUG
PREFECT_LOGGING_EXTRA_LOGGERS=["prefect.engine","prefect.events"]
```
4. Event Retention: You might want to explicitly set the event retention period:
```
PREFECT_EVENTS_MAXIMUM_RETENTION_DAYS=7
```

Also, could you check a couple of things:
1. In your Redis logs or using the Redis CLI, can you see any keys with patterns like `prefect:*` or `event:*`? The keys you showed are message cache keys, but there should be others related to event tracking.
2. When you run your flow, does the flow run itself show up in the UI (even if the tasks don't)?
3. Are you using any custom task runners or flow runners in your flow definition?

The fact that this works without Redis as a message broker suggests there might be an issue with how events are being propagated through Redis. The message cache keys you showed indicate that Redis is working for message passing, but there might be a disconnect in how task state events are being handled.
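If you'd rather check from Python than the Redis CLI, a minimal sketch with the redis-py client (connection details taken from the `PREFECT_REDIS_MESSAGING_*` values earlier in this thread) would be:

```python
# Sketch: count keys matching Prefect-related patterns in the messaging Redis.
# Host/port/db mirror the PREFECT_REDIS_MESSAGING_* variables above.
import redis

r = redis.Redis(host="10.142.0.16", port=6379, db=0)
for pattern in ("message:*", "prefect:*", "event:*"):
    count = sum(1 for _ in r.scan_iter(match=pattern, count=100))
    print(f"{pattern}: {count} keys")
```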
j
@Marvin
1. there are not any keys with patterns like `prefect` or `event`! - is this the core of the problem?
2. yes, the flow shows up in the UI, but the subflows and tasks do not
3. i am not running any custom task runners or flow runners
m
thought for 33.1 seconds
Based on this, I believe we need to properly configure Redis for event handling. Here's what I recommend:

1. First, make sure you have the Redis integration properly installed:
```bash
pip install prefect-redis
```
2. Add these additional environment variables to configure Redis for event handling:
```bash
PREFECT_EVENTS_BROKER=prefect_redis.messaging
PREFECT_EVENTS_CACHE=prefect_redis.messaging
PREFECT_REDIS_EVENTS_HOST=${PREFECT_REDIS_MESSAGING_HOST}
PREFECT_REDIS_EVENTS_PORT=${PREFECT_REDIS_MESSAGING_PORT}
PREFECT_REDIS_EVENTS_DB=0
```
   The key difference here is that we're explicitly configuring Redis for events (`PREFECT_EVENTS_*`) in addition to messaging (`PREFECT_MESSAGING_*`). Right now, your configuration only has Redis set up for messaging, which explains why:
   - Message cache keys are being created (the ones you see)
   - The flow itself shows up in the UI (basic communication works)
   - But task/subflow states aren't being tracked (no event keys in Redis)
3. After adding these environment variables, restart your worker and try running your flow again. You should start seeing `prefect:*` and `event:*` keys in Redis.
4. If you want to verify the events configuration is working, you can enable debug logging specifically for events:
```bash
PREFECT_LOGGING_LEVEL=DEBUG
PREFECT_LOGGING_EXTRA_LOGGERS=["prefect.events"]
```

Let me know if adding these event-specific Redis configurations resolves the UI visibility issue. If not, we can dig deeper into the event propagation settings.
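One more end-to-end check you could run, as a sketch using Prefect's public `emit_event` helper: emit a throwaway custom event from the same environment and see whether it shows up in the UI's event feed. If custom events never arrive either, the problem is in event delivery rather than in task execution.

```python
# Sketch: emit a disposable custom event, then look for it in the UI's event
# feed. The event and resource names here are arbitrary, for this check only.
from prefect.events import emit_event

emit_event(
    event="debug.redis-messaging.test",
    resource={"prefect.resource.id": "debug.redis-messaging.check"},
)
```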
j
@Marvin those settings didn't change anything, but here are some event logs:
```
03:31:38.473 | DEBUG   | prefect.workers.cloud-run-v2.cloudrunworkerv2 9e6327f8-8c34-4ad9-9108-96d8c2190b8c - Discovered 1 scheduled_flow_runs
03:31:38.474 | INFO    | prefect.flow_runs.worker - Worker 'CloudRunWorkerV2 9e6327f8-8c34-4ad9-9108-96d8c2190b8c' submitting flow run '4c2633ee-ddf7-4a63-a733-170cc9cec227'
03:31:38.538 | DEBUG   | prefect.client - Connecting to API at http://10.142.0.18:8080/prefect/api/
03:31:38.580 | DEBUG   | prefect.workers.cloud-run-v2.cloudrunworkerv2 9e6327f8-8c34-4ad9-9108-96d8c2190b8c - Ready to submit 4c2633ee-ddf7-4a63-a733-170cc9cec227: True
03:31:38.758 | DEBUG   | prefect.events.clients - EventsClient(id=140136228732704): Emitting event id=01981670-0c83-7e07-b3a8-cce4d887cfab.
03:31:38.759 | DEBUG   | prefect.events.clients - Added event id=01981670-0c83-7e07-b3a8-cce4d887cfab to unconfirmed events list. There are now 2 unconfirmed events.
03:31:38.760 | DEBUG   | prefect.events.clients - EventsClient(id=140136228732704): Emit reconnection attempt 0.
03:31:38.762 | DEBUG   | prefect.events.clients - EventsClient(id=140136228732704): Sending event id=01981670-0c83-7e07-b3a8-cce4d887cfab.
03:31:38.764 | DEBUG   | prefect.events.clients - EventsClient(id=140136228732704): Checkpointing event id=01981670-0c83-7e07-b3a8-cce4d887cfab.
03:31:38.850 | INFO    | prefect.flow_runs.worker - Creating Cloud Run JobV2 upsilon5-elion-83a664f121414f468cb47acc896435e3
03:31:39.361 | INFO    | prefect.flow_runs.worker - Current Job Condition: waiting for condition update
03:31:44.418 | INFO    | prefect.flow_runs.worker - Current Job Condition: {'type': 'Ready', 'state': 'CONDITION_RECONCILING', 'lastTransitio
```
m
thought for 42.0 seconds
Based on your event logs and the research, I notice a few important things:

1. The events are being emitted and checkpointed (`EventsClient: Checkpointing event id=01981670-0c83-7e07-b3a8-cce4d887cfab`), but they're not making it to the UI.
2. You're using a Cloud Run worker, which means the tasks are running in an isolated environment.

Let's try a different approach. Instead of focusing on Redis event configuration, let's ensure the events are being properly propagated from your Cloud Run environment. Add these environment variables:
```bash
# Increase event visibility and tracking
PREFECT_EVENTS_MAXIMUM_SIZE_BYTES=1500000
PREFECT_EVENTS_MAXIMUM_LABELS_PER_RESOURCE=500
PREFECT_EVENTS_WEBSOCKET_BACKFILL_PAGE_SIZE=250

# Ensure events are persisted
PREFECT_SERVER_SERVICES_EVENT_PERSISTER_ENABLED=true
PREFECT_SERVER_SERVICES_EVENT_PERSISTER_BATCH_SIZE=20
PREFECT_SERVER_SERVICES_EVENT_PERSISTER_FLUSH_INTERVAL=5

# Enable more aggressive event streaming
PREFECT_SERVER_EVENTS_STREAM_OUT_ENABLED=true
PREFECT_SERVER_EVENTS_WEBSOCKET_BACKFILL="PT15M"
```

Also, could you check:
1. Are you running your own Prefect server or using Prefect Cloud?
2. Can you share a minimal flow example that reproduces this issue? This would help us understand if it's related to specific flow patterns.
3. In your Cloud Run configuration, are you setting any memory or CPU constraints that might affect event processing?

The fact that the flow shows up but tasks don't suggests that the initial flow state events are making it through, but subsequent task state events aren't being properly propagated. This could be due to how Cloud Run is handling the event emission process or how the events are being persisted and retrieved.
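For point 2, a minimal reproduction could be as simple as the following sketch (hypothetical names): a parent flow that calls one task and one subflow. Running it "directly" just means executing the script so the flow function is called in-process, with no worker in between - which also answers the earlier question about running flows directly.

```python
# Minimal repro sketch (hypothetical names): one task run and one subflow run
# under a parent flow. With the Redis broker enabled, the symptom in this
# thread is the parent flow run showing in the UI while these children don't.
from prefect import flow, task

@task
def add_one(x: int) -> int:
    return x + 1

@flow
def child_flow(x: int) -> int:
    return add_one(x)

@flow
def parent_flow() -> int:
    y = add_one(1)        # creates a task run
    return child_flow(y)  # creates a subflow run

if __name__ == "__main__":
    parent_flow()  # "running directly": no worker involved
```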