Netanel Malka
06/13/2024, 11:03 AM
from logging import Logger, LoggerAdapter

from prefect_kubernetes.credentials import KubernetesCredentials
from prefect_kubernetes.jobs import KubernetesJob

# Job, JobFlowConfig, k8s_job_definition, default_namespace and HOUR_IN_SECONDS
# are project-specific helpers imported elsewhere.


async def k8s_namespace_job(logger: Logger | LoggerAdapter, job_params: JobFlowConfig):
    job: Job = k8s_job_definition(logger=logger, job_name=job_params.job_name)
    logger.info(f"K8S job definition: {job.as_json()}")
    default_credentials = KubernetesCredentials()
    prefect_k8s_job = KubernetesJob(v1_job=job.as_body(),
                                    namespace=default_namespace(),
                                    credentials=default_credentials,
                                    interval_seconds=job_params.interval_seconds,
                                    delete_after_completion=True,
                                    timeout_seconds=HOUR_IN_SECONDS * 12)
    j8s_job_run = await prefect_k8s_job.trigger()
    await j8s_job_run.wait_for_completion()
    pod_logs_result = await j8s_job_run.fetch_result()
    # Get the logs from the job pod, assuming there is only one pod
    pod_logs: str = str(next(iter(pod_logs_result.values())))
    logger.info(pod_logs)
It seems that the pod dies before I can fetch all of its logs.
Therefore, only a fragment of the pod logs is displayed in the Prefect UI.
I couldn't figure out how to keep the pod alive long enough.
Increasing the job's interval_seconds does not solve the problem:
• The logs should be streamed as the job runs (a log-streaming sketch follows below)
• The duration of the jobs varies, so I'm not sure what value to use
Any suggestions?
Thanks.
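For reference, a minimal sketch of streaming a pod's logs line by line while the pod is still running, using the official kubernetes Python client's watch API. The pod_name and namespace arguments are placeholders and the lookup of the job's pod is left out; this only illustrates the streaming approach asked about above, not code from the thread.

from kubernetes import client, config, watch

def stream_pod_logs(pod_name: str, namespace: str) -> None:
    # Assumes a kubeconfig is available; inside a cluster, use
    # config.load_incluster_config() instead.
    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    # Watch().stream() follows the pod's log endpoint and yields lines as they
    # are produced, instead of fetching everything once after completion.
    w = watch.Watch()
    for line in w.stream(core_v1.read_namespaced_pod_log,
                         name=pod_name,
                         namespace=namespace,
                         follow=True):
        print(line)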