<@ULVA73B9P> is there a summary of 503 errors on p...
# ask-marvin
m
@Marvin is there a summary of 503 errors on prefect api calls? i feel this happens intermittently and it's quite annoying to have to re-run the same job that passes subsequently. like what can i do on my end to help mitigate this?
m
Alright, so you're getting annoying 503 errors on Prefect API calls that force you to re-run jobs. Lovely. To help with this, I'll see if there's a summary about such errors and any mitigations you can apply. First, I'll need to know your version of Prefect. Can you let me know if you're using Prefect 2.x or 3.x?
n
to interject here, marvin isn't going to know anything useful
m
k
running prefect 2
n
503s mean something screwed up server side
so i assume this is cloud?
m
yes
n
it'd be super helpful if you could share what you're doing when you get them and what exactly they look like
m
before running a flow, i'm checking if there are any currently running or pending flows, and not starting the flow if there are -- i do this by hitting the /flow_runs/filter API
it works most of the time but sometimes it doesn't and i get this 503 error
but then when i run it again, it passes
i guess just keep retrying
back off with jitter or whatever
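A minimal sketch of that retry-with-jitter idea (a generic helper, not Prefect-specific: you'd pass in a function that hits /flow_runs/filter and raises on a 503; the exception type and delay values here are illustrative assumptions):

```python
import random
import time


def call_with_retries(fn, retry_on=(RuntimeError,), max_attempts=5,
                      base_delay=0.5, max_delay=10.0):
    """Call fn(), retrying on the given exception types with
    exponential backoff plus full jitter between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts:
                raise
            # cap the exponential curve, then sleep a random
            # amount up to that cap ("full jitter")
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))
```

For the use case above, `fn` would be a small wrapper that POSTs to /flow_runs/filter and raises when the response status is 503.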
n
hmm. yeah, but 503s mean there's something on the cloud side to fix -- thank you for explaining! we will look into this. I'll also mention you may want to check out global concurrency limits for something like this: you can use them for anything, whether you're limiting flows (deployments or not) or normal python operations. @Marvin please give a short gcl example and link docs
m
Alright, let's get you an example of using global concurrency limits (GCL), and I'll link the relevant docs for you. First, I need to know your Prefect version to give the correct documentation link. Is it Prefect 2.x or 3.x?
n
@Marvin 2.x
m
Here's how you can use Global Concurrency Limits (GCL) in Prefect 2.x:

### Creating a Concurrency Limit
```bash
prefect gcl create my-concurrency-limit --limit 5 --slot-decay-per-second 1.0
```

### Inspecting a Concurrency Limit
```bash
prefect gcl inspect my-concurrency-limit
```

### Updating a Concurrency Limit
```bash
prefect gcl update my-concurrency-limit --limit 10
prefect gcl update my-concurrency-limit --disable
```

### Deleting a Concurrency Limit
```bash
prefect gcl delete my-concurrency-limit
```

You can refer to the complete details in the Global Concurrency Limits guide. This should help you manage the concurrency of your tasks and potentially mitigate the 503 errors you're encountering.
m
thanks @Nate i'll take a look at this
👍 1
c
Hi @mondras, yes we use the 503 status code to signal a few things, in particular database timeouts. Is it possible you're seeing either a 5s or a 9-10s latency on these requests? That's where our timeouts generally kick in. A few questions to help me get to the bottom of it: 1) are you using the Prefect HTTP client (like the one from `get_client()`)? 2) would you DM me your account and workspace ID so I can take a closer look at the requests? 3) what filter criteria are you using? For what it's worth, I really like what Nate is suggesting here about using concurrency limits; they are much better tuned for this kind of use case.
❤️ 1
m
i'm just using requests package
c
Thanks for sharing that, @mondras. Yep, so I checked on the requests that are returning 503, and they are database timeouts querying your set of flow runs. This can happen occasionally due to locking and large volumes, and I'd recommend checking your filter criteria to make sure they are as narrow as possible. Also, if you're paging over very long results (passing high `offset` values), that can slow down queries too. First, I'd encourage you to use the Prefect HTTP client if you can, because it has built-in retries and jitter that should make things more reliable for you overall. I'd also encourage you to check out the other concurrency limiting mechanisms we have to prevent flows from running at the same time, rather than implementing it yourself. We've got tag-based concurrency as well as an API/SDK for you to limit concurrency yourself.
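If you do stay on the plain `requests` package in the meantime, a session-level retry policy gives you automatic retries on 503s without hand-rolling a loop (a sketch, assuming `requests` with urllib3 >= 1.26; note urllib3's built-in backoff is exponential but unjittered):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def make_retrying_session(total=5, backoff_factor=0.5):
    """Build a requests.Session that transparently retries 503
    responses with exponential backoff."""
    retry = Retry(
        total=total,
        backoff_factor=backoff_factor,
        status_forcelist=[503],
        # /flow_runs/filter is a POST, which urllib3 won't retry by default
        allowed_methods={"GET", "POST"},
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    session.mount("http://", HTTPAdapter(max_retries=retry))
    return session
```

You'd then call `session.post(...)` against the filter endpoint exactly as before; the adapter handles the retries.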
m
thanks @Chris Guidry; i'll take a look at that
appreciate the help
🙌 1