Daniel Gonzalez
06/30/2025, 7:38 PM

Marvin
06/30/2025, 7:38 PM

Marvin
06/30/2025, 7:41 PM
1. Database Connection Pool Settings
```yaml
server:
  database:
    sqlalchemy:
      pool_size: 60  # Adjust based on your needs
      max_overflow: 10
```
2. Time Range Impact
- The UI defaults to showing 7 days of flow runs, which can be excessive if you have many runs
- Reduce the time range when viewing flow runs to improve performance
- Use filters to limit the number of runs being displayed (a rough programmatic equivalent is sketched below)
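As a rough sketch (assuming the Prefect 2/3 Python client API with `get_client`, `FlowRunFilter`, and `FlowRunFilterStartTime`; check your installed version), you can fetch only a narrow time window of runs programmatically instead of loading the default 7-day view:
```python
# Sketch: fetch only the last 24 hours of flow runs via the Python client.
# Assumes a Prefect 2/3-style client API -- verify against your installed version.
import asyncio
from datetime import datetime, timedelta, timezone

from prefect import get_client
from prefect.client.schemas.filters import FlowRunFilter, FlowRunFilterStartTime


async def recent_flow_runs(hours: int = 24):
    cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
    async with get_client() as client:
        return await client.read_flow_runs(
            flow_run_filter=FlowRunFilter(
                start_time=FlowRunFilterStartTime(after_=cutoff)
            ),
            limit=100,  # keep the response small
        )


if __name__ == "__main__":
    runs = asyncio.run(recent_flow_runs())
    print(f"{len(runs)} flow runs started in the last 24 hours")
```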
3. Server Configuration
- Ensure your server has adequate resources (CPU/memory)
- Consider adjusting these server settings:
```yaml
server:
  api:
    default_limit: 500  # Adjust this lower if needed
  flow_run_graph:
    max_nodes: 10000
    max_artifacts: 10000
```
4. Optimization Recommendations
- Use tags to organize and filter your flows more efficiently
- Implement pagination when viewing large sets of flow runs (see the sketch after this list)
- Consider cleaning up old flow runs that are no longer needed
- If possible, host the Prefect server closer to where you're accessing the UI to reduce network latency
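If you need to work through large run histories outside the UI, here is a minimal pagination sketch; it assumes the client's `read_flow_runs` accepts `limit`, `offset`, and `sort`, as it does in recent Prefect releases:
```python
# Sketch: page through flow runs in fixed-size batches instead of loading everything.
# Assumes the Prefect 2/3 client's read_flow_runs supports limit/offset/sort.
import asyncio

from prefect import get_client
from prefect.client.schemas.sorting import FlowRunSort


async def print_flow_runs(page_size: int = 50):
    async with get_client() as client:
        offset = 0
        while True:
            page = await client.read_flow_runs(
                sort=FlowRunSort.START_TIME_DESC,
                limit=page_size,
                offset=offset,
            )
            if not page:
                break
            for run in page:
                print(run.name, run.state_name)
            offset += page_size


if __name__ == "__main__":
    asyncio.run(print_flow_runs())
```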
5. Database Maintenance
- Regularly clean up old flow runs and task runs (a cleanup sketch follows this list)
- Consider implementing a retention policy for your flow run history
- Ensure your database is properly indexed
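A simple retention job might look like the sketch below. It assumes the Prefect 2/3 client's `read_flow_runs` and `delete_flow_run` methods and a `FlowRunFilterStartTime(before_=...)` filter, so treat it as a starting point and test it against a non-production server first:
```python
# Sketch: delete flow runs that started more than N days ago, in batches.
# Assumes Prefect 2/3 client methods read_flow_runs and delete_flow_run.
import asyncio
from datetime import datetime, timedelta, timezone

from prefect import get_client
from prefect.client.schemas.filters import FlowRunFilter, FlowRunFilterStartTime


async def delete_runs_older_than(days: int = 30, batch_size: int = 100):
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    old_runs = FlowRunFilter(start_time=FlowRunFilterStartTime(before_=cutoff))
    async with get_client() as client:
        while True:
            batch = await client.read_flow_runs(
                flow_run_filter=old_runs, limit=batch_size
            )
            if not batch:
                break
            for run in batch:
                await client.delete_flow_run(run.id)
            print(f"Deleted {len(batch)} flow runs older than {days} days")


if __name__ == "__main__":
    asyncio.run(delete_runs_older_than())
```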
If these solutions don't help, you might want to check:
- The server logs for any specific errors or warnings
- The database performance metrics (if you're using a managed database)
- Network latency between your browser and the Prefect server (a quick check is sketched below)
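For a rough latency measurement, you can time requests to the server's health endpoint from the machine where your browser runs (this assumes the server exposes `/api/health`, as recent Prefect versions do, and that `httpx` is installed):
```python
# Sketch: time a few requests to the Prefect API health endpoint.
import time

import httpx

API_URL = "http://localhost:4200/api"  # adjust to your server's address

for attempt in range(5):
    start = time.perf_counter()
    response = httpx.get(f"{API_URL}/health", timeout=10)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"attempt {attempt + 1}: status={response.status_code}, {elapsed_ms:.1f} ms")
```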
Would you like me to provide more specific details about any of these areas?

Marvin
06/30/2025, 7:57 PM

Marvin
06/30/2025, 7:58 PM
1. SSH Tunneling
```bash
ssh -L 4200:localhost:4200 your-remote-machine
```
Then access the UI at <http://localhost:4200>
2. Browser Resource Usage
- The UI defaults to showing 7 days of flow runs, which can be resource-intensive
- Reduce the time range when viewing flow runs (e.g., last 24 hours)
- Use filters to limit the number of runs being displayed
- Try clearing your browser cache or using an incognito window
3. Server Resources
- Check if the remote machine has sufficient resources:
```bash
top     # Check CPU usage
free -h # Check memory usage
df -h   # Check disk space
```
- Consider increasing the resources allocated to the Prefect server
4. Database Configuration
If you're using SQLite (the default):
- Consider switching to PostgreSQL for better performance
- Regularly clean up old flow runs to prevent database bloat
- Implement a retention policy for your flow run history
5. Server Configuration Optimizations
Add these settings to your Prefect configuration:
```yaml
server:
  api:
    default_limit: 200  # Reduce from default 500
  database:
    sqlalchemy:
      pool_size: 20
      max_overflow: 5
  flow_run_graph:
    max_nodes: 5000  # Reduce if you don't need large graphs
```
Would you like me to provide more specific guidance about any of these areas? For example, I can provide detailed steps for setting up SSH tunneling or implementing a specific optimization.

Daniel Gonzalez
06/30/2025, 8:12 PM

Marvin
06/30/2025, 8:14 PM

Marvin
06/30/2025, 8:15 PM
1. Timeout Configuration
```yaml
api:
  request_timeout: 300  # Increase from default 60 seconds
client:
  max_retries: 5
```
2. Network Connectivity
- Since you're running on a remote machine, network latency or instability could be causing the timeout
- Ensure stable network connection between your flow runner and the Prefect server
- Consider using SSH tunneling if network conditions are poor:
```bash
ssh -L 4200:localhost:4200 your-remote-machine
```
3. Server Resource Constraints
- The server might be overwhelmed processing requests
- Check server resources:
```bash
top     # Check CPU usage
free -h # Check memory usage
```
- Consider adjusting database connection settings:
```yaml
server:
  database:
    sqlalchemy:
      pool_size: 20
      max_overflow: 5
```
4. Flow Configuration
- Large task results or logs can cause timeouts
- Consider using result persistence for large outputs
- Implement proper error handling and retries in your flows (see the sketch below)
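Here is a minimal sketch of retries and result persistence on a task; `retries`, `retry_delay_seconds`, and `persist_result` are standard decorator arguments in Prefect 2/3, though where persisted results land depends on your result storage configuration:
```python
# Sketch: retries plus result persistence for a task that produces a large output.
from prefect import flow, task


@task(retries=3, retry_delay_seconds=10, persist_result=True)
def produce_large_output() -> list[int]:
    # Persisted results are written to result storage rather than held only in memory.
    return list(range(1_000_000))


@flow(retries=1)
def my_flow():
    data = produce_large_output()
    print(f"Produced {len(data)} items")


if __name__ == "__main__":
    my_flow()
```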
Would you like me to provide more specific guidance about any of these areas? For example, I can provide detailed steps for configuring timeouts or setting up result persistence.

Daniel Gonzalez
06/30/2025, 8:48 PM

Marvin
06/30/2025, 8:48 PM

Marvin
06/30/2025, 8:52 PM