Zach Marinov
12/23/2024, 6:52 PM

Marvin
12/23/2024, 6:52 PM
```yaml
env:
  - name: PREFECT_LOGGING_FORMAT
    value: '{"level": "%(levelname)s", "message": "%(message)s", "timestamp": "%(asctime)s"}'
  - name: PREFECT_LOGGING_HANDLERS_DEFAULT_FORMATTER
    value: "json"
```
2. If you're also seeing logs from the Prefect server itself, you might need to set:
```yaml
env:
  - name: PREFECT_SERVER_LOGGING_FORMAT
    value: '{"level": "%(levelname)s", "message": "%(message)s", "timestamp": "%(asctime)s"}'
```
Proper JSON formatting will help DataDog interpret the log levels correctly instead of mapping everything to ERROR.
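To see what DataDog would receive, the JSON format string above can be exercised with Python's stdlib logging (the logger name below is made up for the demo). Note one caveat: %-style templating does no JSON escaping, so a message containing a double quote would produce invalid JSON.

```python
import io
import json
import logging

# The same %-style format string used in the env vars above
fmt = '{"level": "%(levelname)s", "message": "%(message)s", "timestamp": "%(asctime)s"}'

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter(fmt, datefmt="%Y-%m-%dT%H:%M:%S"))

log = logging.getLogger("datadog-demo")  # hypothetical logger name
log.handlers = [handler]
log.propagate = False
log.setLevel(logging.INFO)

log.warning("disk usage high")

# Each emitted line is a standalone JSON object with a parseable level field
parsed = json.loads(buf.getvalue())
print(parsed["level"])
```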
Important notes with a hint of robotic melancholy:
- Make sure to escape quotes properly in your YAML configuration
- The exact format might need tweaking depending on your DataDog configuration
- You can find these settings in the Prefect UI under Work Pools → Your K8s Work Pool → Edit Base Job Template
For more information about logging configuration, you can refer to:
- The Prefect logging documentation: https://docs-3.prefect.io/v3/develop/logging
- The work pools configuration documentation: https://docs-3.prefect.io/v3/deploy/infrastructure-concepts/work-pools
If this doesn't solve your problem, you might need to check your DataDog agent configuration as well. But let's start with these changes and see if they help your logging situation... though I must say, in my experience, log formatting is just another reminder of the inherent chaos of the universe.
Would you like me to help you with any specific part of implementing these changes?

Zach Marinov
12/23/2024, 7:56 PM
PREFECT_LOGGING_FORMAT and PREFECT_LOGGING_HANDLERS_DEFAULT_FORMATTER
Marvin
12/23/2024, 7:56 PM
a. Using a logging.yml file:
- Place it in your PREFECT_HOME directory (default is ~/.prefect)
- Or specify a custom path using PREFECT_LOGGING_SETTINGS_PATH
b. Using environment variables of the form:
PREFECT_LOGGING_[PATH]_[TO]_[KEY]=value
The default format for task and flow run records is:
"%(asctime)s.%(msecs)03d | %(levelname)-7s | Task run %(task_run_name)r - %(message)s"
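For contrast with the JSON format, the default format above can be rendered with stdlib logging to show why DataDog treats these lines as unstructured text (the logger name and task_run_name value below are made up for the demo; Prefect supplies task_run_name on its own records):

```python
import io
import logging

# Prefect's default task-run format string, as quoted above
fmt = "%(asctime)s.%(msecs)03d | %(levelname)-7s | Task run %(task_run_name)r - %(message)s"

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter(fmt))

log = logging.getLogger("prefect-demo")  # hypothetical logger name
log.handlers = [handler]
log.propagate = False
log.setLevel(logging.INFO)

# `extra` injects task_run_name as a record attribute, mimicking Prefect
log.info("Finished in state Completed()", extra={"task_run_name": "my-task"})

# A pipe-delimited plain-text line, not JSON -- DataDog has no level field to parse
line = buf.getvalue().strip()
print(line)
```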
I apologize, but I need to retract my previous suggestion about PREFECT_LOGGING_FORMAT and PREFECT_LOGGING_HANDLERS_DEFAULT_FORMATTER, as they weren't explicitly documented. Instead, here's what you can do:
1. Create a custom logging.yml file with your JSON formatting configuration
2. Or use the documented environment variable pattern:
```yaml
env:
  - name: PREFECT_LOGGING_FORMATTERS_JSON_FORMAT
    value: '{"level": "%(levelname)s", "message": "%(message)s", "timestamp": "%(asctime)s"}'
```
For the most reliable solution, I'd recommend creating a custom logging.yml file that you can mount into your Kubernetes pods. Would you like me to help you craft the proper logging.yml configuration for your JSON logging needs?
Mutters under breath At least the documentation prevents us from perpetuating misinformation, even if it means admitting our initial assumptions were incorrect...
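A minimal dictConfig-style logging.yml sketch of the kind Marvin is offering to craft; the logger, handler, and formatter names here are assumptions and should be checked against the default logging.yml shipped with your Prefect version:

```yaml
# Hypothetical sketch -- verify key names against Prefect's default logging.yml
version: 1
formatters:
  json:
    format: '{"level": "%(levelname)s", "message": "%(message)s", "timestamp": "%(asctime)s"}'
handlers:
  console:
    class: logging.StreamHandler
    formatter: json
loggers:
  prefect:
    level: INFO
    handlers: [console]
```

Mounted via a ConfigMap, the file's path would then be passed to the pods through PREFECT_LOGGING_SETTINGS_PATH, as described above.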