# ask-community
Thomas Nyegaard-Signori:
Hey Community, I am launching some Kubernetes pods using the RunNamespacedJob task and I am trying to get the logs from these task pods to show up in the Prefect UI (currently running a Prefect server instance on an Azure VM with in-cluster agents). I end up seeing
Started following logs for <task pod>
and then nothing more, other than the completion and deletion logs. Meanwhile, if I check the Kubernetes job pod logs (kubectl logs -n my-namespace prefect-job-...), I see the logs of the task pod showing up nicely; they are just not being sent back to the Agent/Server. Anyone have a similar experience and know of that one flag I'm missing?
Kevin Kho:
Hey @Thomas Nyegaard-Signori, you would need to forward the logs with the ReadNamespacedPodLogs task.
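A minimal sketch of that wiring, assuming the Prefect 1.x Kubernetes task library; the pod name and namespace are placeholders, and the package-level import is used since the exact submodule can vary by version:

import prefect
from prefect import Flow
from prefect.tasks.kubernetes import ReadNamespacedPodLogs

# Placeholder pod/namespace; on_log_entry is called once per streamed log line,
# and the task's Prefect logger is available from context at runtime.
read_logs = ReadNamespacedPodLogs(
    pod_name="my-task-pod",
    namespace="my-namespace",
    kubernetes_api_key_secret=None,
    on_log_entry=lambda line: prefect.context.get("logger").info(line),
)

with Flow("forward-pod-logs") as flow:
    read_logs()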
Thomas Nyegaard-Signori:
Ah okay, I thought I was missing some setup-specific configuration, thanks @Kevin Kho.
Hey @Kevin Kho, sorry to beat this horse just a tiny bit more, but I just want to be absolutely sure. In the screenshot, the log lines below the red line get logged and shown in the UI (they come from the job pod), but the lines above do not (those originate from the task pod). Do you know why the latter would not get logged while the lines below the red line would? I am just trying to figure out why these logs show up on the job pod and still do not get logged. Hope that makes sense.
Kevin Kho:
Hey @Thomas Nyegaard-Signori, no worries at all! I asked another team member for help to replicate. Is this after using the ReadNamespacedPodLogs task?
Thomas Nyegaard-Signori:
Yes! The logs are on the pod running the flow itself; I am having a hard time figuring out the correct terminology. I am just curious, since the task logs that I am very interested in appear in the pod logs (using kubectl) but are not passed on to the UI.
Here is the flow, if it's any help:
from prefect import Flow, Parameter, task
from prefect.run_configs import KubernetesRun
from prefect.tasks.kubernetes.job import RunNamespacedJob, ReadNamespacedPodLogs
from prefect.storage import Azure
from prefect.utilities.notifications import slack_notifier
import os

@task
def generate_body(db_uri, user, pw):
    params = {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": "name"},
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {
                            "name": "name",
                            "image": "",
                            "resources": {"requests": {"cpu": 2, "memory": "8Gi"}},
                            "dnsPolicy": "Default",
                            "env": [
                                {"name": "DB_URI", "value": db_uri},
                                {"name": "FTP_PASSWORD", "value": pw},
                                {"name": "FTP_USERNAME", "value": user},
                            ],
                        }
                    ],
                    "restartPolicy": "Never",
                    "imagePullSecrets": [{"name": "dockerhub-cred"}],
                },
            },
            "backoffLimit": 0,
        },
    }
    return params

job = RunNamespacedJob(
    namespace="prefect-zone",
    kubernetes_api_key_secret=None,
    log_level="info",
    delete_job_after_completion=True,
    state_handlers=[slack_notifier],
)

"""
log_job = ReadNamespacedPodLogs(
    pod_name="cmems-historical",
    namespace="prefect-zone",
    kubernetes_api_key_secret=None,
)
"""

with Flow(
    "CMEMS historical",
    storage=Azure(
        container="prefect-flows",
        stored_as_script=True,
        blob_name="historical-flow-script",
        connection_string="",
    ),
    run_config=KubernetesRun(
        image="prefecthq/prefect:latest-python3.8",
        env={"PREFECT__CLOUD__USE_LOCAL_SECRETS": True},
    ),
) as flow:
    db_uri = Parameter("db_uri")
    user = Parameter("user")
    pw = Parameter("pw")
    params = generate_body(db_uri, user, pw)
    job(body=params, log_level="info")
    # log_job(on_log_entry=lambda log_line: flow.logger.info(log_line))

flow.register(project_name="")
Mariia Kerimova:
I'm looking at it
Thomas Nyegaard-Signori:
I am hosting a Prefect server on an Azure VM and running an AKS cluster with an in-cluster agent, if that helps to narrow anything down.
Mariia Kerimova:
Hello Thomas. I'm still a bit confused about the logic around ReadNamespacedPodLogs. I'm going to raise this question internally and get back to you; in the meantime you could try the following:
log_job = ReadNamespacedPodLogs(
    pod_name="cmems-historical",
    namespace="mariia-testing",
    kubernetes_api_key_secret=None,
)

@task(log_stdout=True)
def print_logs(log_line):
    print(log_line)

with Flow(
    "CMEMS historical",
) as flow:
    db_uri = Parameter("db_uri")
    user = Parameter("user")
    pw = Parameter("pw")
    params = generate_body(db_uri, user, pw)
    job(body=params, log_level="info")
    logs = log_job()
    print_logs(logs)
flow.register(project_name="")
Thomas Nyegaard-Signori:
Ah, I see now that I was too quick. The printout has nothing to do with ReadNamespacedPodLogs; it only shows the logs from RunNamespacedJob on the job node. Sorry for the confusion… just to reiterate, the printout was from before ReadNamespacedPodLogs was added, and the script will generate the same logs if run as is.
Hey @Mariia Kerimova, wanted to hear if you had heard anything new. Sorry for being pushy!
Mariia Kerimova:
Hello Thomas! You are not being pushy at all. I'm still working on this task, and involved another engineer to debug the issue. I'll update you asap.
I still don't have an answer for you, but I opened an issue in the Prefect repo, so we will not lose it here in the thread.
Marko Jamedzija:
Hello 🙂 I have the same problem. I see only
Started following logs for xxxxxx
and then the next line is
Job xxxxxx has been completed.
The logs from the job pod are missing. Interestingly though, when I go to all the logs for the flow, they are present there; they are only missing in the log view for the specific task.
Mariia Kerimova:
> they are only missing in the log view for the specific task.
Thank you Marko for your feedback. Just to double check: you can see the pod logs directly from the pod (kubectl logs ...), but not in the UI, right? Just want to make sure we are seeing the same bug, and not another one 🙂
Marko Jamedzija:
Yes, I see it in k8s with kubectl logs ..., but not in the UI, as you mentioned.
And you’re welcome 🙂
Hello @Mariia Kerimova 🙂 Do you guys have any updates on this? Will it be worked on in the near future? Thanks!
Kevin Kho:
Hey sorry @Marko Jamedzija, we did some digging but haven’t had bandwidth to work on this issue yet (contributions most welcome!). Unfortunately, we don’t have a clear timeline for this.
Marko Jamedzija:
Thanks for the answer Kevin 🙂