# prefect-community

Vadym Dytyniak

07/25/2022, 12:31 PM
Hi. We use KubernetesRun and see debug logs like this in Prefect Cloud Logs:
```
Event: 'FailedScheduling' on pod 'prefect-job-79c82cd5-w6nfd'
	Message: 0/7 nodes are available: 2 node(s) had taint {dask.corp.com/component: scheduler}, that the pod didn't tolerate, 5 node(s) didn't match Pod's node affinity/selector
```
Do you have ideas how to disable them?

Rob Freedy

07/25/2022, 10:26 PM
Hey Vadym!! Are you looking to change the log level entirely or just looking to disable that specific log?

Vadym Dytyniak

07/26/2022, 7:53 AM
Hey. We don't set debug level and still see these events.

Rob Freedy

07/26/2022, 1:45 PM
Can you try adding this environment variable (env_vars in the agent) to your KubernetesAgent?
"log_level":"INFO"
I believe these logs are appearing because of the log level configured in the agent. https://docs.prefect.io/api/latest/agent/kubernetes.html#kubernetesagent

Vadym Dytyniak

07/26/2022, 2:34 PM
That's why I don't understand why I see these logs - we don't configure the log level in the agent. One important thing: the level shown next to the log is not `agent`, but `k8s_infra`.

Rob Freedy

07/26/2022, 3:55 PM
I believe you are seeing this log because of this line in the KubernetesAgent, which logs debug statements in the manage_jobs function: https://github.com/PrefectHQ/prefect/blob/16322bdc56065f1d48ddad0285cf98edfb055a85/src/prefect/agent/kubernetes/agent.py#L285
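As a generic illustration (plain stdlib `logging`, not Prefect's actual agent code, and the `k8s_infra` logger name here is only borrowed from the level label in the thread), this is why raising the effective level above DEBUG would suppress those event lines:

```python
import logging

# Stand-in for a logger like the one labeled "k8s_infra" in the thread.
logger = logging.getLogger("k8s_infra")
logger.addHandler(logging.StreamHandler())

# At DEBUG, debug-level records such as the FailedScheduling events are emitted.
logger.setLevel(logging.DEBUG)
assert logger.isEnabledFor(logging.DEBUG)

# Raised to INFO, the same logger.debug(...) calls are filtered out.
logger.setLevel(logging.INFO)
assert not logger.isEnabledFor(logging.DEBUG)
```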

Vadym Dytyniak

07/26/2022, 4:12 PM
I think you are right. But it should still depend on the agent's log level, right?

Rob Freedy

07/26/2022, 4:20 PM
I believe so. You can set the agent log levels through the start command `--log-level`: https://docs.prefect.io/api/latest/cli/agent.html#kubernetes-start
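Based on the linked CLI docs, the start command would look something like this (exact invocation depends on your deployment):

```shell
prefect agent kubernetes start --log-level INFO
```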

Vadym Dytyniak

07/26/2022, 5:01 PM
Thank you!

Rob Freedy

07/26/2022, 5:07 PM
My pleasure!!