# ask-marvin
Nicholas Pini:
@Marvin I'm self-hosting a Prefect 3 server. Sometimes I get this error:
```
sqlalchemy.exc.IntegrityError: (sqlalchemy.dialects.postgresql.asyncpg.IntegrityError) <class 'asyncpg.exceptions.UniqueViolationError'>: duplicate key value violates unique constraint "uq_task_run__flow_run_id_task_key_dynamic_key"
DETAIL:  Key (flow_run_id, task_key, dynamic_key)=(53202dd3-9e17-4f81-a3a3-c02badc2c20f, run_inference-c4df095a, 655c42ec-0a45-4e22-b341-430123bfcd73) already exists.
[SQL: INSERT INTO task_run (flow_run_id, task_key, dynamic_key, flow_run_run_count, empirical_policy, task_inputs, tags, labels, state_id, name, state_type, state_name, state_timestamp, run_count, expected_start_time, start_time, total_run_time, id, created, updated) VALUES ($1::UUID, $2::VARCHAR, $3::VARCHAR, $4::INTEGER, $5, $6, $7::JSONB, $8::JSONB, $9::UUID, $10::VARCHAR, $11::state_type, $12::VARCHAR, $13::TIMESTAMP WITH TIME ZONE, $14::INTEGER, $15::TIMESTAMP WITH TIME ZONE, $16::TIMESTAMP WITH TIME ZONE, $17::INTERVAL, $18::UUID, $19::TIMESTAMP WITH TIME ZONE, $20::TIMESTAMP WITH TIME ZONE) ON CONFLICT (id) DO UPDATE SET flow_run_id = $21::UUID, task_key = $22::VARCHAR, dynamic_key = $23::VARCHAR, flow_run_run_count = $24::INTEGER, empirical_policy = $25, task_inputs = $26, tags = $27::JSONB, labels = $28::JSONB, state_id = $29::UUID, name = $30::VARCHAR, state_type = $31::state_type, state_name = $32::VARCHAR, state_timestamp = $33::TIMESTAMP WITH TIME ZONE, run_count = $34::INTEGER, expected_start_time = $35::TIMESTAMP WITH TIME ZONE, start_time = $36::TIMESTAMP WITH TIME ZONE, total_run_time = $37::INTERVAL, id = $38::UUID, updated = $39::TIMESTAMP WITH TIME ZONE WHERE task_run.state_timestamp < $40::TIMESTAMP WITH TIME ZONE]
[parameters: ('53202dd3-9e17-4f81-a3a3-c02badc2c20f', 'run_inference-c4df095a', '655c42ec-0a45-4e22-b341-430123bfcd73', 1, '{"max_retries": 0, "retry_delay_seconds": 0.0, "retries": 0, "retry_delay": 0, "retry_jitter_factor": null}', '{"stack_inference": []}', '[]', '{}', '0f4423ae-705d-4989-844e-d095a51a0347', 'run_inference-655', 'RUNNING', 'Running', DateTime(2025, 7, 21, 13, 57, 7, 759284, tzinfo=Timezone('UTC')), 1, DateTime(2025, 7, 21, 13, 57, 7, 758721, tzinfo=Timezone('UTC')), DateTime(2025, 7, 21, 13, 57, 7, 759284, tzinfo=Timezone('UTC')), datetime.timedelta(0), 'fdfbff0a-e30d-4665-8899-cfa4d2e1cdfd', DateTime(2025, 7, 21, 13, 57, 7, 768571, tzinfo=Timezone('UTC')), DateTime(2025, 7, 21, 13, 57, 7, 786763, tzinfo=Timezone('UTC')), '53202dd3-9e17-4f81-a3a3-c02badc2c20f', 'run_inference-c4df095a', '655c42ec-0a45-4e22-b341-430123bfcd73', 1, '{"max_retries": 0, "retry_delay_seconds": 0.0, "retries": 0, "retry_delay": 0, "retry_jitter_factor": null}', '{"stack_inference": []}', '[]', '{}', '0f4423ae-705d-4989-844e-d095a51a0347', 'run_inference-655', 'RUNNING', 'Running', DateTime(2025, 7, 21, 13, 57, 7, 759284, tzinfo=Timezone('UTC')), 1, DateTime(2025, 7, 21, 13, 57, 7, 758721, tzinfo=Timezone('UTC')), DateTime(2025, 7, 21, 13, 57, 7, 759284, tzinfo=Timezone('UTC')), datetime.timedelta(0), 'fdfbff0a-e30d-4665-8899-cfa4d2e1cdfd', DateTime(2025, 7, 21, 13, 57, 7, 769855, tzinfo=Timezone('UTC')), DateTime(2025, 7, 21, 13, 57, 7, 759284, tzinfo=Timezone('UTC')))]
(Background on this error at: https://sqlalche.me/e/20/gkpj)
```
What is it?
Marvin:
*thought for 34.8 seconds*
Based on the error message and related GitHub issues, I can explain what's happening. The error you're seeing is a database constraint violation, specifically a unique constraint violation in PostgreSQL. The constraint `uq_task_run__flow_run_id_task_key_dynamic_key` ensures that there can't be multiple task runs with the same combination of:

- `flow_run_id`
- `task_key`
- `dynamic_key`

In your case, the server is trying to create a task run with a combination of these values that already exists in the database:
```
flow_run_id: 53202dd3-9e17-4f81-a3a3-c02badc2c20f
task_key:    run_inference-c4df095a
dynamic_key: 655c42ec-0a45-4e22-b341-430123bfcd73
```
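If you want to confirm the duplicate row on your side, here is a minimal read-only inspection sketch against the Prefect database, assuming direct access to it; the connection DSN is a placeholder you would adjust for your deployment:

```python
import asyncio
import uuid

import asyncpg  # the PostgreSQL driver the Prefect server uses, per the traceback


async def main() -> None:
    # Placeholder DSN; point it at your self-hosted Prefect database.
    conn = await asyncpg.connect("postgresql://prefect:secret@localhost:5432/prefect")
    try:
        # Look up task runs matching the triple from the error message.
        rows = await conn.fetch(
            """
            SELECT id, name, state_type, state_name, created
            FROM task_run
            WHERE flow_run_id = $1 AND task_key = $2 AND dynamic_key = $3
            """,
            uuid.UUID("53202dd3-9e17-4f81-a3a3-c02badc2c20f"),
            "run_inference-c4df095a",
            "655c42ec-0a45-4e22-b341-430123bfcd73",
        )
        for row in rows:
            print(dict(row))
    finally:
        await conn.close()


asyncio.run(main())
```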
This issue often occurs in high-concurrency situations or when there are race conditions in the Prefect server. It's related to a known issue where the Prefect server can experience deadlocks with PostgreSQL databases, particularly:

1. When running multiple concurrent flows
2. When the server is under heavy load
3. When there are race conditions in task run creation

To mitigate this issue, you could try:

1. Upgrading to the latest version of Prefect if you haven't already
2. Reviewing your database configuration to ensure it's properly tuned for your workload
3. Reducing concurrent execution of the same tasks within the same flow run, if possible
4. Adding retry logic around your flow executions if they're failing due to these errors (see the sketch below)

If you're seeing this frequently, it would also help to:

1. Monitor your PostgreSQL server's performance metrics
2. Check the PostgreSQL logs for any related warnings or errors
3. Consider adjusting your database's connection pool settings

There's an active discussion about this in PrefectHQ/prefect#17767 where similar issues are being tracked and discussed. Would you like me to research any specific aspect of this issue in more detail?
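As a minimal sketch of the retry suggestion above, using Prefect's built-in `retries` / `retry_delay_seconds` options; the task and flow names here are placeholders, not from the thread:

```python
from prefect import flow, task


# Retry a failed task run twice, 10 seconds apart.
@task(retries=2, retry_delay_seconds=10)
def run_item(item: int) -> int:
    return item * 2


# Retry the whole flow run once if it ends in a failed state.
@flow(retries=1, retry_delay_seconds=60)
def inference_flow(items: list[int]) -> list[int]:
    return [run_item(i) for i in items]


if __name__ == "__main__":
    inference_flow([1, 2, 3])
```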
Another user:
@Nicholas Pini Hello, did you find a way to solve this issue?
Nicholas Pini:
Not really. My guess is that the database must have enough resources to deal with Prefect requests. You can also increase the timeout on the Prefect server using an env variable (can't remember which one right now, but it's in the docs). Another possibility is that one of my flows takes a very big dictionary as an argument, which causes slowdowns, so keeping flow parameters small may help.
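On that last point, a minimal sketch of passing a small reference instead of a large dictionary as a flow parameter; the flow name and file path here are hypothetical:

```python
import json
from pathlib import Path

from prefect import flow


@flow
def inference_flow(config_path: str) -> None:
    # Flow parameters are serialized and stored by the Prefect server, so a
    # large dict passed directly inflates every API call. Passing a small
    # reference and loading the data inside the flow keeps the payload tiny.
    config = json.loads(Path(config_path).read_text())
    print(f"loaded {len(config)} config entries")


if __name__ == "__main__":
    inference_flow("config.json")  # hypothetical path
```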
Another user:
Thank you for your response. I’ve discovered that there have been significant changes related to the database, which appear to be causing the issue. I’ll proceed with updating Prefect to version 3.4.14 and see if that resolves it. I’ll keep you posted.