# ask-marvin
@Marvin Have error with our self hosted prefect server communication. This is the error we get: prefect.exceptions.ScriptError: Script at flow encountered an exception: PrefectHTTPStatusError("Server error '500 Internal Server Error' for url 'http://prefect:4200/api/block_types/slug/rcloneconfigfileblock'\nResponse: {'exception_message': 'Internal Server Error'}\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500")
thought for 305.8 seconds
Thanks for sharing the error. A 500 from `/api/block_types/slug/<slug>` on a self-hosted server usually points to one of these:
- Wrong or non-existent block type slug (should return 404, but some older server versions 500 on bad input)
- Unregistered custom block type
- Version mismatch between client and server (e.g., 3.x client vs 2.x server, or mixed patch levels)
- Database/migration issue on the server

A few quick checks and fixes:

1) Verify you're using the correct block type slug
- In Prefect 3.x, a block type slug is derived from the block's human-friendly name: lowercase, hyphen-separated. For a class named `RCloneConfigFileBlock`, the slug is likely `rclone-config-file` (not `rcloneconfigfileblock`).
- List all block types the server knows:
```bash
prefect block type ls
```
- Inspect the one you think you’re using:
```bash
prefect block type inspect rclone-config-file
```
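To see why guessing the slug from a class name is unreliable, here is an illustrative sketch of the typical hyphenation convention. This is an assumption for illustration only, not Prefect's exact algorithm; the authoritative slug is whatever `prefect block type ls` reports:

```python
import re


def guess_block_slug(class_name: str) -> str:
    """Illustrative guess at a block type slug: hyphenate CamelCase word
    boundaries, lowercase, and drop a trailing 'Block' suffix.
    NOT Prefect's exact rule; always confirm against the server."""
    hyphenated = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "-", class_name).lower()
    return hyphenated.removesuffix("-block")


print(guess_block_slug("RCloneConfigFileBlock"))  # rclone-config-file
```

Note how far this lands from the `rcloneconfigfileblock` slug in the original error, which suggests the failing code is constructing the slug by hand.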
If you don't see your block type in the list, it isn't registered yet.

2) Register your block types (custom or from a collection)
- If this is a custom block in a Python file:
```bash
prefect block register -f path/to/your_blocks.py
```
- If it’s in a module:
```bash
prefect block register -m your_module_name
```
After registering, confirm the slug with:
```bash
prefect block type ls | grep rclone
```
3) Sanity check the endpoint with a known core block
- Try a known slug like `secret` to confirm the API works:
```bash
curl http://prefect:4200/api/block_types/slug/secret
```
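That curl check can also be scripted so the status code maps directly to a diagnosis. A minimal stdlib-only sketch (the hostname and port are the ones from the error above):

```python
import urllib.error
import urllib.request


def diagnose(status: int) -> str:
    """Translate an HTTP status from the block-type endpoint into a likely cause."""
    if status == 200:
        return "ok: block type exists"
    if status == 404:
        return "bad slug or unregistered block type"
    if status >= 500:
        return "server-side problem: check API/DB logs"
    return f"unexpected status {status}"


def probe_block_type(base_url: str, slug: str) -> str:
    """GET /api/block_types/slug/<slug> and diagnose the response status."""
    url = f"{base_url}/api/block_types/slug/{slug}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return diagnose(resp.status)
    except urllib.error.HTTPError as exc:
        return diagnose(exc.code)


# Example (requires a reachable server):
# print(probe_block_type("http://prefect:4200", "rclone-config-file"))
```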
If that works but your slug 500s, it's likely a bad slug or an unregistered/malformed block type.

4) Confirm client/server versions and API URL
- Ensure both your flow runtime and your local CLI point at the self-hosted server:
```bash
echo $PREFECT_API_URL
# should be http://prefect:4200/api
```
- Check Prefect versions on both the server and where your flow runs:
```bash
prefect version
```
If versions are far apart, align/upgrade and restart the server.

5) Check server logs for the stack trace
- If you're using Docker Compose:
```bash
docker compose logs -f server api
```
Please share the stack trace around the 500; it will pinpoint whether it's a DB/migration error, slug parsing issue, or something else.

Notes about slugs and loading blocks
- In code, you typically won't use the slug directly. You'll load the block via its class:
```python
from your_module import RCloneConfigFileBlock

cfg = RCloneConfigFileBlock.load("my-config-name")
```
- Behind the scenes, Prefect uses the block type slug derived from the class name. That's why registering the block type is critical.

A couple of follow-up questions to help narrow this down:
- Which Prefect version are you running on the server and in your flow environment?
- Is the rclone block a custom block? If so, can you share its class name and how you're registering it?
- Can you paste the server log snippet for the 500?
- What does `prefect block type ls` show for rclone-related slugs?

Once we have those, I can give you a precise fix.
@Marvin this only happens occasionally
thought for 115.1 seconds
Got it: if it's only occasional, that usually points to an environment/runtime issue rather than a bad slug every time. The most common intermittent causes of 500s on `/api/block_types/slug/<slug>` in self-hosted setups:
- API container restarting or not fully ready (migrations/DB not ready) when your flow hits it
- Postgres connection exhaustion or transient DB errors (the 500 bubbles up from the API)
- Multiple API replicas on mismatched Prefect versions behind a load balancer
- Race conditions while registering the same block type from multiple places at once
- DNS/load-balancer flaps (e.g., hitting a restarting pod)

Quick triage checklist

1) Verify the block type exists and the slug you use
- Run these once against your server:
```bash
prefect block type ls
prefect block type inspect rclone-config-file
```
If you don’t see it, register once at deploy/startup time and avoid concurrent registration:
```bash
prefect block register -f path/to/your_blocks.py
# or
prefect block register -m your_module
```
2) Check API and DB stability around the time of failures
- Look for container restarts and logs around the 500:
```bash
docker compose ps
docker compose logs -f server api
# or for K8s
kubectl get pods -n <ns>
kubectl logs deploy/prefect-api -n <ns>
```
- Check Postgres logs for "remaining connection slots are reserved", timeouts, or transient errors. If you see them, increase DB max_connections and/or the API's pool sizing, or reduce concurrency.

3) Ensure versions and configuration are consistent
- Confirm server and client Prefect versions match (or are very close):
```bash
prefect version
```
- Confirm your flow runtime points to the same self-hosted API:
```bash
echo $PREFECT_API_URL
# should be http://prefect:4200/api (no trailing slash)
```
- If you run multiple API replicas, make sure they run the same image tag and share the same (migrated) DB.

4) Add readiness and retries
- If using Kubernetes, add a readiness probe for the API and don't start runners until /api/health is OK.
- Wrap block lookups with retries to smooth brief blips:
```python
import time


def load_block_with_retry(block_cls, name, attempts=5, delay=2):
    """Load a Prefect block by class and document name, retrying transient failures."""
    for i in range(1, attempts + 1):
        try:
            return block_cls.load(name)
        except Exception:
            if i == attempts:
                raise  # out of attempts: surface the last error
            time.sleep(delay)
            delay *= 2  # exponential backoff: 2s, 4s, 8s, ...
```
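The readiness idea can be applied at startup too: gate flow submission until the API answers. A minimal sketch with an injectable probe, so the actual health check (e.g. a GET of /api/health returning 200) stays swappable and testable:

```python
import time
from typing import Callable


def wait_for_api(probe: Callable[[], bool], attempts: int = 30, delay: float = 2.0) -> bool:
    """Poll `probe` until it returns True or attempts run out.

    `probe` should return True when the API is healthy, for example by
    GETting /api/health and checking for a 200 response.
    """
    for i in range(attempts):
        if probe():
            return True
        if i < attempts - 1:
            time.sleep(delay)
    return False
```

Call this before starting runners so a briefly-restarting API container doesn't surface as an intermittent 500 in your flows.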
A few clarifying questions so I can pinpoint this:
- How are you deploying the server (Docker Compose vs Kubernetes), and how many API replicas?
- Prefect versions for the server and the flow runtime?
- Do the API or DB pods/containers show restarts around the failure times?
- What do you see in the API logs for the 500 stack trace, and do Postgres logs show connection errors/timeouts?
- Output of:
```bash
prefect block type ls | grep -i rclone
prefect block type inspect rclone-config-file
```
Once we have those details, I can provide a targeted fix (pool sizing, readiness, registration strategy, or version alignment).