# ask-community
@Marvin, Hi, I'm using Prefect with a PostgreSQL Aurora Serverless v2 database. I have hosted the Prefect server as an ECS task with 1 vCPU and 2 GB memory. When I trigger 700 flows at a time, I observe slowness in the UI: responses are late, and flows are submitted with a delay of around 2 minutes. How do I overcome this? I want to submit any number of flows at once without slowness. In production I'll be running thousands of flows at a time.
Based on your scenario, here are several recommendations to improve performance when running large numbers of concurrent flows:

1. Database Optimization
- Increase the PostgreSQL Aurora connection pool size
- Raise the `max_connections` parameter in PostgreSQL
- Consider table partitioning for the flow run history tables
- Tune the `work_mem` and `maintenance_work_mem` settings in PostgreSQL

2. Server Resources
- Your current ECS task (1 vCPU, 2 GB memory) is likely insufficient for this scale
- Increase to at least 2-4 vCPUs and 4-8 GB memory
- Consider horizontal scaling by running multiple server instances behind a load balancer

3. Architectural Recommendations
- Use work pools instead of agents for better scaling
- Batch flow submissions rather than firing all 700+ at once
- Use flow concurrency limits to prevent system overload
- Use work queues to distribute work more evenly

4. Performance Tuning
```python
import os

# Prefect settings are applied through environment variables (or
# `prefect config set`) before the server/worker process starts --
# constructing a Settings object in code does not apply them globally.
os.environ["PREFECT_API_REQUEST_TIMEOUT"] = "300"     # seconds; raise the API request timeout
os.environ["PREFECT_CLIENT_MAX_RETRIES"] = "5"        # retry transient API errors
os.environ["PREFECT_WORKER_PREFETCH_SECONDS"] = "60"  # widen the worker prefetch window
```
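The batching strategy from point 3 can be sketched client-side with a semaphore that caps in-flight submissions. This is a minimal sketch: `submit_flow_run` is a hypothetical stand-in for your real submission call (e.g. `prefect.deployments.run_deployment`), and the concurrency cap of 50 is an assumption to tune against your server's capacity.

```python
import asyncio

# Hypothetical stand-in for the real submission call; swap in your
# own Prefect client code here.
async def submit_flow_run(name: str) -> str:
    await asyncio.sleep(0)  # placeholder for the network round-trip
    return f"submitted:{name}"

async def submit_in_batches(names, max_concurrent=50):
    # The semaphore caps in-flight submissions so the API and database
    # are never hit by 700+ simultaneous requests.
    sem = asyncio.Semaphore(max_concurrent)

    async def submit_one(name):
        async with sem:
            return await submit_flow_run(name)

    # gather preserves input order in its results
    return await asyncio.gather(*(submit_one(n) for n in names))

results = asyncio.run(submit_in_batches([f"flow-{i}" for i in range(700)]))
```

All 700 runs are still submitted; the semaphore only smooths the burst into a steady stream the server can absorb.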
5. Database Connection Management
```
# Aurora cluster settings (postgresql.conf-style) -- apply these via an
# RDS DB cluster parameter group, not in application code
max_connections = 500
shared_buffers = 1GB
work_mem = 50MB
maintenance_work_mem = 256MB
effective_cache_size = 3GB
```
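On the Prefect server side, the SQLAlchemy connection pool can also be sized explicitly so you know how many database connections each server instance will hold. A minimal sketch, assuming Prefect 2.x setting names (verify against your installed version with `prefect config view`):

```python
import os

# Assumed Prefect 2.x setting names -- verify with `prefect config view`.
# These size the server's SQLAlchemy pool and must be set before the
# server process starts.
os.environ["PREFECT_SQLALCHEMY_POOL_SIZE"] = "20"     # persistent connections per server instance
os.environ["PREFECT_SQLALCHEMY_MAX_OVERFLOW"] = "40"  # extra burst connections allowed
```

With a pool of 20 plus 40 overflow per server instance, even several horizontally scaled instances stay well under a `max_connections` of 500 on the database side.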
6. Monitoring and Logging
- Implement proper monitoring for both the Prefect server and PostgreSQL
- Watch for connection pooling issues
- Monitor database IOPS and CPU utilization
- Keep an eye on Aurora Serverless scaling events

For your specific use case of running thousands of flows, I would strongly recommend:
1. Upgrade your ECS task to at least 4 vCPUs and 8 GB memory
2. Implement table partitioning in PostgreSQL
3. Use work pools with multiple workers to distribute the load
4. Queue flow submissions instead of submitting them all at once, to avoid overwhelming the system

If these optimizations don't provide sufficient improvement, you might want to consider:
- Using Aurora Provisioned instead of Serverless v2 for more predictable performance
- Implementing a caching layer for frequently accessed data
- Setting up read replicas for your PostgreSQL database

Remember that UI slowness is often a symptom of database bottlenecks, so focusing on database optimization should be your first priority. Would you like more specific details about implementing any of these recommendations? I can provide more detailed configuration examples for the areas you want to focus on.