Hi! My flow creates a K8s job, and this function is called by the flow:
```python
from logging import Logger, LoggerAdapter

from prefect_kubernetes.credentials import KubernetesCredentials
from prefect_kubernetes.jobs import KubernetesJob

# Job, JobFlowConfig, k8s_job_definition, default_namespace, and
# HOUR_IN_SECONDS are helpers defined elsewhere in our project.


async def k8s_namespace_job(logger: Logger | LoggerAdapter, job_params: JobFlowConfig):
    job: Job = k8s_job_definition(logger=logger, job_name=job_params.job_name)

    logger.info(f"K8S job definition: {job.as_json()}")

    default_credentials = KubernetesCredentials()
    prefect_k8s_job = KubernetesJob(v1_job=job.as_body(),
                                    namespace=default_namespace(),
                                    credentials=default_credentials,
                                    interval_seconds=job_params.interval_seconds,
                                    delete_after_completion=True,
                                    timeout_seconds=HOUR_IN_SECONDS * 12)

    k8s_job_run = await prefect_k8s_job.trigger()

    await k8s_job_run.wait_for_completion()

    pod_logs_result = await k8s_job_run.fetch_result()

    # Get the logs from the job pod assuming there is only one pod
    pod_logs: str = str(next(iter(pod_logs_result.values())))
    logger.info(pod_logs)
```
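
For context, the flow itself just wires this function up, roughly like the sketch below (the flow name and parameter wiring are placeholders, not our exact code; `JobFlowConfig` is our own config model):

```python
from prefect import flow, get_run_logger

# Hypothetical flow wrapper showing how k8s_namespace_job is invoked.
@flow
async def namespace_job_flow(job_params: JobFlowConfig):
    logger = get_run_logger()
    await k8s_namespace_job(logger=logger, job_params=job_params)
```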
It seems that the pod dies before I get all of its logs, so only a fragment of the pod logs shows up in the Prefect UI. I couldn't figure out how to keep the pod alive longer. Increasing the job's `interval_seconds` doesn't solve the problem:
• The logs should be streamed while the pod is running (roughly the behavior sketched below)
• The duration of the jobs varies, so I'm not sure what value to use
Any suggestions? Thanks.
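
For clarity, this is roughly the streaming behavior I'm after, a minimal sketch using the plain `kubernetes` client rather than `prefect_kubernetes` (the pod name and namespace below are hypothetical placeholders):

```python
from kubernetes import client, config, watch

# Minimal sketch: follow a pod's logs line by line while it is running.
# "my-job-pod" and "my-namespace" are placeholders.
config.load_kube_config()  # or config.load_incluster_config() in-cluster
core_v1 = client.CoreV1Api()

log_watch = watch.Watch()
for line in log_watch.stream(
    core_v1.read_namespaced_pod_log,
    name="my-job-pod",
    namespace="my-namespace",
):
    print(line)  # each log line arrives before the pod terminates
```

If `KubernetesJob` / `KubernetesJobRun` can do something equivalent, that would be ideal.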
Bumping 🙂