# ask-marvin
a
@Marvin I have both Prefect and Postgres 17 running locally on my Windows server. I can interact with the database and UI just fine in all areas except when viewing a flow-run in the UI. This error shows up in my Prefect server logs:
| ERROR   | prefect.server.services.failexpiredpauses - Unexpected error in: TimeoutError()
Traceback (most recent call last):
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connection.py", line 2421, in connect
    return await connect_utils._connect(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<20 lines>...
    )
    ^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 1049, in _connect
    conn = await _connect_addr(
           ^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
    )
    ^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 886, in _connect_addr
    return await __connect_addr(params, True, *args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 931, in __connect_addr
    tr, pr = await connector
             ^^^^^^^^^^^^^^^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 810, in _create_ssl_connection
    do_ssl_upgrade = await pr.on_data
                     ^^^^^^^^^^^^^^^^
asyncio.exceptions.CancelledError
m
thought for 94.0 seconds
Based on the error and research, this appears to be an SSL-related connection issue between Prefect and PostgreSQL. Here are several steps you can try to resolve this:
1. Check SSL Settings. First, try disabling SSL for the database connection by setting this environment variable:
bash
   PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://username:password@localhost:5432/prefect?ssl=false"
2. Increase Timeout Settings. If the issue persists, try increasing the database timeout:
bash
   PREFECT_API_DATABASE_TIMEOUT=60
3. Configure SSL Properly. If you need SSL (recommended for production), ensure proper SSL configuration:
bash
   PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_TLS_ENABLED=true
   PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_TLS_VALIDATE=false  # Only during testing
4. Check PostgreSQL Configuration.
- Verify your `postgresql.conf` has SSL enabled:
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'
- Check the `pg_hba.conf` file to ensure it's configured to accept SSL connections from your client IP address.
5. Resource Allocation.
- Ensure your PostgreSQL server has adequate resources allocated.
- Check if there are any resource constraints on your Windows server.
The error suggests that the SSL handshake is being cancelled, which could be due to timeouts or SSL configuration mismatches. Start by disabling SSL temporarily to verify whether that's indeed the root cause. If the issue persists, you might want to check the Prefect server logs for additional error messages and ensure your PostgreSQL server is accepting connections properly. For more detailed information about Prefect server configuration, you can refer to the official documentation. Would you like me to provide more specific guidance about any of these solutions?
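As a quick way to separate Prefect from PostgreSQL, here is a minimal, hypothetical connectivity check using asyncpg (the driver Prefect's server uses) with SSL disabled; the host, credentials, and database name are placeholders to replace with your own:
python
# Minimal connectivity check with asyncpg, bypassing Prefect entirely.
# Host, credentials, and database name are placeholders -- adjust as needed.
import asyncio
import asyncpg

async def main() -> None:
    conn = await asyncpg.connect(
        host="127.0.0.1",
        port=5432,
        user="username",        # placeholder
        password="password",    # placeholder
        database="prefect",     # placeholder database name
        ssl=False,              # skip the SSL negotiation that appears in the traceback
        timeout=10,             # fail fast instead of hanging
    )
    try:
        version = await conn.fetchval("SELECT version()")
        print("Connected OK:", version)
    finally:
        await conn.close()

asyncio.run(main())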
a
@Marvin I don't need SSL. I added "ssl=false" to my database connection url and I also have "PREFECT_API_TLS_INSECURE_SKIP_VERIFY" set to true. My "PREFECT_API_DATABASE_TIMEOUT" is set to 600. It only happens when viewing a flow-run - is there a reason why that would happen?
m
thought for 22.0 seconds
Based on the research, when viewing a flow run in the UI, Prefect needs to retrieve a significant amount of data from the database, including logs, task states, and other metadata. The timeout error you're seeing suggests that the database connection is being interrupted during these large data retrievals. Here are a few potential solutions:
1. Adjust PostgreSQL Settings. Add these settings to your `postgresql.conf`:
idle_in_transaction_session_timeout = 600000  # 10 minutes in milliseconds
statement_timeout = 600000                    # 10 minutes in milliseconds
2. Configure Connection Pooling. Add these environment variables to your Prefect server configuration:
bash
   PREFECT_API_DATABASE_CONNECTION_POOL_SIZE=20
   PREFECT_API_DATABASE_MAX_OVERFLOW=30
3. Reduce Background Query Load. Increasing the scheduler's loop interval reduces how often Prefect's background services query the database:
bash
   PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS=60
4. Client-Side Settings. Add these environment variables to help manage large data retrievals:
bash
   PREFECT_API_REQUEST_TIMEOUT=60
   PREFECT_API_RESPONSE_TIMEOUT=60
If these solutions don't resolve the issue, you might want to try:
1. Clearing your browser cache and cookies before viewing flow runs
2. Using a different browser to rule out client-side issues
3. Monitoring your PostgreSQL server's performance metrics during flow run views
4. Ensuring your PostgreSQL server has adequate resources allocated
The issue appears to be particularly common when there are many tasks or logs to display. If you continue to experience problems, you might want to consider implementing log rotation or archiving older flow run data to reduce the database load. Would you like me to provide more specific details about any of these solutions?
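To check whether the slowness lives in the Prefect API rather than the browser, a rough sketch like this times the same kind of read the flow-run page relies on, using Prefect's Python client; the flow run ID below is a placeholder to replace with a real one from the UI or CLI:
python
# Time a flow-run read through the Prefect API to see whether the delay is server-side.
# FLOW_RUN_ID is a placeholder -- copy a real ID from the UI or CLI.
import asyncio
import time
from uuid import UUID

from prefect.client.orchestration import get_client

FLOW_RUN_ID = UUID("00000000-0000-0000-0000-000000000000")  # placeholder

async def main() -> None:
    async with get_client() as client:
        start = time.perf_counter()
        flow_run = await client.read_flow_run(FLOW_RUN_ID)
        elapsed = time.perf_counter() - start
        print(f"read_flow_run({flow_run.name!r}) took {elapsed:.2f}s")

asyncio.run(main())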
a
@Marvin It is incredibly slow when viewing flow-run of a flow with no tasks even. You say make sure the postgreSQL server has adequate resources allocated; how much should be allocated usually? I have 32GB of RAM available, and currently my postgres shared_buffers is 2GB, work_mem is 32MB, and maintenance_work_mem is 1GB.
m
thought for 166.2 seconds
Based on your system having 32GB of RAM and the slow performance even with simple flows, here are recommended PostgreSQL settings that should improve performance:
1. Memory-related Settings
postgresql
# With 32GB RAM, adjust these settings:
shared_buffers = 8GB          # Increase from 2GB to 25% of RAM
work_mem = 64MB              # Double your current setting
maintenance_work_mem = 2GB    # Reasonable for your RAM size
effective_cache_size = 24GB   # About 75% of RAM
2. Connection and Query Settings
postgresql
max_connections = 100
random_page_cost = 1.1        # If using SSD
effective_io_concurrency = 200 # If using SSD
max_parallel_workers_per_gather = 4
max_parallel_workers = 8
max_worker_processes = 8
3. Vacuum and Autovacuum Settings
postgresql
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05
autovacuum_vacuum_cost_limit = 1000
4. Query Planning
postgresql
default_statistics_target = 100
random_page_cost = 1.1  # If using SSD storage
Additionally, add these Prefect-specific environment variables to optimize database interactions:
bash
PREFECT_API_DATABASE_ECHO=false
PREFECT_API_DATABASE_CONNECTION_POOL_SIZE=20
PREFECT_API_DATABASE_MAX_OVERFLOW=30
PREFECT_API_DATABASE_POOL_PRE_PING=true
PREFECT_API_DATABASE_POOL_RECYCLE=300
Also, ensure these indexes exist on your Prefect database:
sql
CREATE INDEX IF NOT EXISTS ix_flow_run_state_type ON flow_run_state(type);
CREATE INDEX IF NOT EXISTS ix_flow_run_state_timestamp ON flow_run_state(timestamp);
CREATE INDEX IF NOT EXISTS ix_log_flow_run_id ON log(flow_run_id);
If performance is still slow after applying these settings:
1. Monitor query performance:
sql
   SELECT * FROM pg_stat_activity WHERE state = 'active';
2. Check for slow queries:
sql
   SELECT pid, now() - pg_stat_activity.query_start AS duration, query 
   FROM pg_stat_activity 
   WHERE state != 'idle' 
   ORDER BY duration DESC;
3. Consider enabling query logging temporarily to identify problematic queries:
postgresql
   log_min_duration_statement = 1000  # Log queries taking more than 1 second
Remember to:
1. Restart PostgreSQL after changing these settings
2. Monitor system performance after making these changes
3. Adjust values based on actual usage patterns
Would you like me to provide more specific guidance about implementing any of these recommendations?
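One way to confirm a restart actually picked up the new values is to read them back from pg_settings. A minimal sketch, again with placeholder credentials:
python
# Read back the tuned parameters from pg_settings to confirm the restart picked them up.
# Credentials and database name are placeholders.
import asyncio
import asyncpg

PARAMS = [
    "shared_buffers",
    "work_mem",
    "maintenance_work_mem",
    "effective_cache_size",
    "max_parallel_workers",
]

async def main() -> None:
    conn = await asyncpg.connect(
        host="127.0.0.1", user="username", password="password",
        database="prefect", ssl=False,
    )
    try:
        rows = await conn.fetch(
            "SELECT name, setting, unit FROM pg_settings WHERE name = ANY($1) ORDER BY name",
            PARAMS,
        )
        for row in rows:
            print(f"{row['name']} = {row['setting']} {row['unit'] or ''}")
    finally:
        await conn.close()

asyncio.run(main())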
a
@Marvin I made those postgres.conf changes but now the service won't start. Keep in mind I'm on Windows. Were any of the configurations specific to a linux environment?
m
thought for 11.1 seconds
Ah yes, let me adjust these recommendations specifically for Windows PostgreSQL configuration. Windows has different memory management characteristics and some settings need to be adjusted accordingly. Here's a revised Windows-compatible configuration:
postgresql
# Memory settings for Windows (32GB total RAM)
shared_buffers = 4GB          # Windows typically needs lower values than Linux
work_mem = 64MB              
maintenance_work_mem = 1GB    # Keep at your current setting
effective_cache_size = 16GB   # Lower on Windows due to different memory management

# Connection settings
max_connections = 100

# Windows-specific settings
max_worker_processes = 4      # Lower on Windows
max_parallel_workers = 4      # Lower on Windows
max_parallel_workers_per_gather = 2

# Autovacuum settings
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05
autovacuum_vacuum_cost_limit = 800    # Slightly lower for Windows

# If using SSD
random_page_cost = 1.1
effective_io_concurrency = 8  # Lower on Windows; builds without posix_fadvise (including Windows) may require 0 or leaving this commented out
Key differences for Windows:
1. `shared_buffers` should be lower (typically 12.5% of RAM instead of 25%)
2. `effective_io_concurrency` should be much lower
3. Worker processes should be reduced
4. Memory values must use the exact format (e.g., `4GB`, not `4gb` or `4G`)
Try these settings first. If the service still won't start:
1. Check the PostgreSQL error logs (typically in the `log` directory of your PostgreSQL installation)
2. Start with minimal changes and gradually increase:
postgresql
   shared_buffers = 2GB    # Start with your current working value
   work_mem = 32MB        # Start with your current working value
   maintenance_work_mem = 1GB
Would you like me to help troubleshoot based on any specific error messages you're seeing?
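If the service refuses to start, the exact reason is usually in the last few lines of the newest PostgreSQL log file. A small sketch that prints that tail; the log directory below is an assumption for a default Windows install of PostgreSQL 17, so adjust it to your installation:
python
# Print the tail of the most recent PostgreSQL log file to find the startup error.
# LOG_DIR is an assumed default Windows location -- adjust to your installation.
from pathlib import Path

LOG_DIR = Path(r"C:\Program Files\PostgreSQL\17\data\log")  # assumed location

def main() -> None:
    logs = sorted(LOG_DIR.glob("*.log"), key=lambda p: p.stat().st_mtime)
    if not logs:
        print(f"No log files found in {LOG_DIR}")
        return
    newest = logs[-1]
    lines = newest.read_text(encoding="utf-8", errors="replace").splitlines()
    print(f"--- {newest.name} (last 40 lines) ---")
    for line in lines[-40:]:
        print(line)

if __name__ == "__main__":
    main()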
a
@Marvin with those effective_io_concurrency settings my server still would not start, but once I commented it out it started successfully. I am still getting these errors:
| ERROR   | prefect.server - Encountered exception in request:
Traceback (most recent call last):
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connection.py", line 2421, in connect
    return await connect_utils._connect(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<20 lines>...
    )
    ^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 1049, in _connect
    conn = await _connect_addr(
           ^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
    )
    ^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 886, in _connect_addr
    return await __connect_addr(params, True, *args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 931, in __connect_addr
    tr, pr = await connector
             ^^^^^^^^^^^^^^^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 810, in _create_ssl_connection
    do_ssl_upgrade = await pr.on_data
                     ^^^^^^^^^^^^^^^^
asyncio.exceptions.CancelledError
m
thought for 92.6 seconds
Based on the research and the error you're seeing, let's try a systematic approach to resolve this SSL-related connection issue:
1. First, modify your database connection URL to explicitly disable SSL if you're running locally:
bash
   PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://username:password@localhost:5432/prefect?ssl=false"
2. If you're already using `ssl=false` and still seeing the error, try adding these additional connection parameters:
bash
   PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://username:password@localhost:5432/prefect?ssl=false&connect_timeout=10"
3. Add these environment variables to modify connection behavior:
bash
   PREFECT_API_DATABASE_TIMEOUT=60
   PREFECT_API_DATABASE_CONNECTION_POOL_SIZE=5
   PREFECT_API_DATABASE_MAX_OVERFLOW=10
   PREFECT_API_DATABASE_POOL_RECYCLE=3600
4. Check your PostgreSQL `pg_hba.conf` file and ensure the local connection method is set to trust for both IPv4 and IPv6:
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# IPv6 local connections:
host    all             all             ::1/128                 trust
5. If you're using environment variables in Windows, make sure they're set correctly:
powershell
   # PowerShell
   $env:PREFECT_API_DATABASE_TIMEOUT=60
   $env:PREFECT_API_DATABASE_CONNECTION_POOL_SIZE=5
or in Command Prompt:
cmd
   set PREFECT_API_DATABASE_TIMEOUT=60
   set PREFECT_API_DATABASE_CONNECTION_POOL_SIZE=5
6. If you're still seeing issues, try adding this to your PostgreSQL configuration:
postgresql
   ssl = off                  # Disable SSL
   tcp_keepalives_idle = 300
   tcp_keepalives_interval = 10
   tcp_keepalives_count = 6
Could you confirm:
1. Are you using any environment variables for the database connection?
2. How are you starting the Prefect server (command line, service, etc.)?
3. Can you check if the error occurs immediately or after some time of running?
This will help me provide more targeted solutions if the above suggestions don't resolve the issue.
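To double-check the SSL side locally, the following hypothetical sketch opens a plain (non-SSL) connection, which will fail outright if pg_hba.conf only allows SSL, and then asks the server whether SSL is active for that backend via pg_stat_ssl; credentials are placeholders:
python
# Open a non-SSL connection and ask the server whether SSL is active for this backend.
# If pg_hba.conf only allows SSL ("hostssl" lines), this connection attempt will fail.
# Credentials and database name are placeholders.
import asyncio
import asyncpg

async def main() -> None:
    conn = await asyncpg.connect(
        host="127.0.0.1", user="username", password="password",
        database="prefect", ssl=False, timeout=10,
    )
    try:
        ssl_in_use = await conn.fetchval(
            "SELECT ssl FROM pg_stat_ssl WHERE pid = pg_backend_pid()"
        )
        print("SSL in use for this connection:", ssl_in_use)  # expected: False
    finally:
        await conn.close()

asyncio.run(main())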
a
@Marvin I made those changes and it didn't help. To answer your questions: 1. Are you using any environment variables for the database connection? No I am not. 2. How are you starting the Prefect server (command line, service, etc.)? I start it through command line, "prefect server start" 3. Can you check if the error occurs immediately or after some time of running? The error occurs whenever I view a flow-run through the UI. Otherwise there are no errors. I can also view all tables in the postgresql database tables without any delay, so I don't think it's a postgresql issue.
m
thought for 27.9 seconds
Based on the GitHub issues and your specific scenario, it appears this might be related to connection pool management in the Prefect server. Let's try a more focused approach:
1. Create a file named `.prefect/config.toml` in your user directory (on Windows, typically `C:\Users\YourUsername\.prefect\config.toml`) with these settings:
toml
[server]
host = "127.0.0.1"
port = "4200"

[server.database]
connection_url = "<postgresql+asyncpg://username:password@localhost:5432/prefect?ssl=false>"
timeout = 60
pool_size = 20
max_overflow = 40
pool_pre_ping = true
pool_recycle = 3600
echo = false

[server.services]
scheduler.loop_seconds = 60
pause_grace_period_seconds = 60
2. Before starting the server, set these environment variables in PowerShell:
powershell
$env:PREFECT_SERVER_DATABASE_POOL_SIZE="20"
$env:PREFECT_SERVER_DATABASE_MAX_OVERFLOW="40"
$env:PREFECT_SERVER_DATABASE_POOL_TIMEOUT="60"
$env:PREFECT_API_REQUEST_TIMEOUT="60"
$env:PREFECT_API_RESPONSE_TIMEOUT="60"
$env:PREFECT_API_DEFAULT_LIMIT="50"
3. Create these indexes in your PostgreSQL database if they don't exist:
sql
CREATE INDEX IF NOT EXISTS ix_flow_run_state_type ON flow_run_state(type);
CREATE INDEX IF NOT EXISTS ix_flow_run_state_timestamp ON flow_run_state(timestamp);
CREATE INDEX IF NOT EXISTS ix_log_flow_run_id ON log(flow_run_id);
CREATE INDEX IF NOT EXISTS ix_task_run_flow_run_id ON task_run(flow_run_id);
4. Stop any running Prefect server instances, then start a fresh server with:
bash
prefect server start
The key changes here are:
- Increased connection pool size and overflow limits
- Added connection pool recycling
- Added database indexes for common queries
- Increased various timeout values
- Reduced the default limit for API requests
Can you try these changes and let me know if you still see the same error when viewing flow runs? Also, it would be helpful to know:
1. How many tasks are typically in the flow runs you're trying to view?
2. Are you seeing the error with all flow runs or just specific ones?
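If you want to see which of those indexes already exist before running the CREATE INDEX statements, a quick query against pg_indexes will list them; credentials are placeholders:
python
# List existing indexes on the Prefect tables touched by the CREATE INDEX statements above.
# Credentials and database name are placeholders.
import asyncio
import asyncpg

TABLES = ["flow_run_state", "log", "task_run"]

async def main() -> None:
    conn = await asyncpg.connect(
        host="127.0.0.1", user="username", password="password",
        database="prefect", ssl=False,
    )
    try:
        rows = await conn.fetch(
            "SELECT tablename, indexname FROM pg_indexes "
            "WHERE tablename = ANY($1) ORDER BY tablename, indexname",
            TABLES,
        )
        for row in rows:
            print(f"{row['tablename']}: {row['indexname']}")
    finally:
        await conn.close()

asyncio.run(main())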
a
@Marvin Yes I am still receiving the error below, which triggers a whole bunch of other errors. 1. The one flow run I'm viewing has 2 tasks, but it also happens when I view a flow with no tasks. 2. I am seeing the error with all flow runs.
14:10:43.907 | ERROR   | prefect.server.services.cancellationcleanup - Unexpected error in: TimeoutError()
Traceback (most recent call last):
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connection.py", line 2421, in connect
    return await connect_utils._connect(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<20 lines>...
    )
    ^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 1049, in _connect
    conn = await _connect_addr(
           ^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
    )
    ^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 886, in _connect_addr
    return await __connect_addr(params, True, *args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 931, in __connect_addr
    tr, pr = await connector
             ^^^^^^^^^^^^^^^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 810, in _create_ssl_connection
    do_ssl_upgrade = await pr.on_data
                     ^^^^^^^^^^^^^^^^
asyncio.exceptions.CancelledError
m
thought for 14.6 seconds
The fact that this happens with all flow runs, even those with no tasks, and that it's specifically a `CancelledError` during connection handling suggests this might be an asyncpg connection issue specific to Windows. Let's try a different approach:
1. First, let's modify the PostgreSQL connection settings to use a different connection style. Create a new `.prefect/config.toml` file with these settings:
toml
[server]
host = "127.0.0.1"
port = "4200"

[server.database]
connection_url = "<postgresql+asyncpg://username:password@localhost:5432/prefect?prepared_statement_cache_size=0&statement_cache_size=0&ssl=false>"
timeout = 60
pool_size = 5
max_overflow = 10
pool_pre_ping = false
pool_recycle = 300
echo = false

[server.services]
scheduler.loop_seconds = 20
pause_grace_period_seconds = 30

[server.api]
connect_timeout = 300
command_timeout = 300
2. In PowerShell, set these environment variables:
powershell
$env:PREFECT_API_DATABASE_TIMEOUT="300"
$env:PREFECT_API_DATABASE_CONNECTION_POOL_SIZE="5"
$env:PREFECT_API_DATABASE_MAX_OVERFLOW="10"
$env:PREFECT_API_DATABASE_POOL_RECYCLE="300"
$env:PREFECT_API_DATABASE_PREPARED_STATEMENT_CACHE_SIZE="0"
$env:PREFECT_API_DATABASE_STATEMENT_CACHE_SIZE="0"
$env:PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS="20"
3. Add this to your PostgreSQL configuration (`postgresql.conf`):
postgresql
tcp_keepalives_idle = 300
tcp_keepalives_interval = 10
tcp_keepalives_count = 6
statement_timeout = 300000  # 5 minutes in milliseconds
idle_in_transaction_session_timeout = 300000  # 5 minutes in milliseconds
4. Then try this diagnostic approach:
- Stop all Prefect services
- Clear any existing Prefect server processes
- Start fresh with minimal services
powershell
# Stop any running Prefect processes
Get-Process | Where-Object {$_.ProcessName -like "*prefect*"} | Stop-Process -Force

# Clear any temporary files
Remove-Item "$env:TEMP\prefect*" -Force -Recurse -ErrorAction SilentlyContinue

# Start the server with minimal services
prefect server start --dev
The key changes in this approach:
- Disabled prepared statement caching (which can cause issues on Windows)
- Reduced connection pool size to minimize connection management complexity
- Added TCP keepalive settings to maintain connections
- Added explicit timeouts for PostgreSQL
- Using dev mode to help with debugging
Can you try these changes and let me know:
1. Does the error still occur immediately when viewing a flow run?
2. Are there any different error messages in the server logs?
3. What version of PostgreSQL are you using exactly?
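Since the central change here is disabling prepared-statement caching, you can sanity-check that the driver accepts that option by connecting with asyncpg directly; statement_cache_size is a real asyncpg parameter, while the credentials below are placeholders:
python
# Connect with asyncpg's prepared-statement cache disabled, mirroring the
# prepared_statement_cache_size=0 / statement_cache_size=0 idea above.
# Credentials and database name are placeholders.
import asyncio
import asyncpg

async def main() -> None:
    conn = await asyncpg.connect(
        host="127.0.0.1",
        user="username",
        password="password",
        database="prefect",
        ssl=False,
        statement_cache_size=0,  # disable asyncpg's prepared-statement cache
        timeout=10,
    )
    try:
        # Run a couple of simple queries; with the cache disabled these are not reused prepared statements.
        print(await conn.fetchval("SELECT current_database()"))
        print(await conn.fetchval("SELECT now()"))
    finally:
        await conn.close()

asyncio.run(main())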
a
@Marvin can the prefect server toml be in the directory of my .venv where I'm starting the server from?
m
thought for 9.0 seconds
Yes, absolutely! The Prefect server will look for the `config.toml` file in several locations, with the following precedence (from highest to lowest priority):
1. The path specified by the `PREFECT_SERVER_API_CONFIG` environment variable
2. The current working directory (where you run the server from)
3. The `.prefect` directory in the current working directory
4. The `.prefect` folder in your user's home directory
So you can place the `config.toml` in any of these locations:
D:\Tasks\prefect-server\config.toml
D:\Tasks\prefect-server\.prefect\config.toml
C:\Users\YourUsername\.prefect\config.toml
For your case, since you're running from the virtual environment directory, you can put it directly in:
D:\Tasks\prefect-server\config.toml
This is often preferred during development as it keeps the configuration with your project. Just make sure to:
1. Stop any running Prefect server instances
2. Create the config.toml in your project directory
3. Start the server again with `prefect server start`
Would you like me to provide the configuration content again for your config.toml?
a
@Marvin I made those changes. These are some of the errors I get.
14:38:03.342 | ERROR   | prefect.server.events.triggers - Error running periodic evaluation
Traceback (most recent call last):
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connection.py", line 2421, in connect
    return await connect_utils._connect(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<20 lines>...
    )
    ^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 1049, in _connect
    conn = await _connect_addr(
           ^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
    )
    ^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 886, in _connect_addr
    return await __connect_addr(params, True, *args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 931, in __connect_addr
    tr, pr = await connector
             ^^^^^^^^^^^^^^^
  File "D:\Tasks\prefect-server\.venv\Lib\site-packages\asyncpg\connect_utils.py", line 802, in _create_ssl_connection
    tr, pr = await loop.create_connection(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<2 lines>...
        host, port)
        ^^^^^^^^^^^
  File "C:\Program Files\Python313\Lib\asyncio\base_events.py", line 1193, in create_connection
    transport, protocol = await self._create_connection_transport(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<2 lines>...
        ssl_shutdown_timeout=ssl_shutdown_timeout)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\Python313\Lib\asyncio\base_events.py", line 1226, in _create_connection_transport
    await waiter
asyncio.exceptions.CancelledError

14:38:04.079 | ERROR   | asyncio - Future exception was never retrieved
future: <Future finished exception=ConnectionError('unexpected connection_lost() call')>
m
thought for 16.9 seconds
These errors suggest an issue with the async connection handling, particularly with SSL/TLS connections being cancelled. Let's try a different connection approach:
1. Update your `config.toml` with these modified settings:
toml
[server]
host = "127.0.0.1"
port = "4200"

[server.database]
# Note the different connection URL format
connection_url = "<postgresql+asyncpg://username:password@127.0.0.1:5432/prefect?ssl=false&connect_timeout=10>"
timeout = 300
pool_size = 5
max_overflow = 10
pool_pre_ping = true
pool_recycle = 300
echo = false

[server.api]
connect_timeout = 300
command_timeout = 300

[server.services]
scheduler.loop_seconds = 60
pause_grace_period_seconds = 60

[logging]
level = "DEBUG"
2. In your PostgreSQL `pg_hba.conf` file, ensure you have these entries at the top (before any other entries):
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# IPv6 local connections:
host    all             all             ::1/128                 trust
3. In your `postgresql.conf`, make sure you have:
postgresql
listen_addresses = '127.0.0.1'
max_connections = 100
ssl = off
4. Before starting the server, set these environment variables in PowerShell:
powershell
$env:PREFECT_SERVER_DATABASE_TIMEOUT="300"
$env:PREFECT_SERVER_DATABASE_CONNECTION_RETRY_ATTEMPTS="3"
$env:PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS="60"
$env:PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS="60"
$env:PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_LOOP_SECONDS="60"
5. Then try this startup sequence:
powershell
# Stop any existing Prefect processes
Get-Process | Where-Object {$_.ProcessName -like "*prefect*"} | Stop-Process -Force

# Clear any existing connections in PostgreSQL (run in psql or your preferred PostgreSQL client)
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'prefect';

# Start Prefect server with minimal services
prefect server start --dev
Key changes in this approach:
- Using `127.0.0.1` instead of `localhost` to avoid potential DNS resolution issues
- Explicitly disabled SSL
- Added connection retry attempts
- Increased service loop intervals to reduce connection frequency
- Added explicit trust authentication for local connections
- Enabled debug logging to get more detailed error information
Can you try these changes and let me know:
1. Do you still see the same connection errors?
2. Are there any new messages in the debug logs?
3. Can you confirm your PostgreSQL version and whether it was installed as a Windows service or standalone?
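Because the failures happen while new database connections are being opened, a crude way to reproduce the symptom outside Prefect is to open several asyncpg connections concurrently, roughly mimicking the burst of queries the flow-run page triggers. A hypothetical sketch, with placeholder credentials and an arbitrary concurrency of 10:
python
# Open several asyncpg connections concurrently to see whether connection setup
# itself times out or gets cancelled under mild concurrency.
# Credentials, database name, and the concurrency level are placeholders/assumptions.
import asyncio
import time
import asyncpg

CONCURRENCY = 10  # arbitrary; roughly mimics a burst of API queries

async def open_one(i: int) -> str:
    start = time.perf_counter()
    try:
        conn = await asyncpg.connect(
            host="127.0.0.1", user="username", password="password",
            database="prefect", ssl=False, timeout=10,
        )
        await conn.fetchval("SELECT 1")
        await conn.close()
        return f"connection {i}: ok in {time.perf_counter() - start:.2f}s"
    except Exception as exc:  # report rather than raise, so all attempts finish
        return f"connection {i}: FAILED after {time.perf_counter() - start:.2f}s ({exc!r})"

async def main() -> None:
    results = await asyncio.gather(*(open_one(i) for i in range(CONCURRENCY)))
    print("\n".join(results))

asyncio.run(main())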