# ask-community
p
Hello, we are having some issues with logging on k8s and Dask losing some messages. This is even occurring on a simple flow. The odd thing is that if I put a small delay (5 sec) after what should be logged, it appears in the log, along with anything before it. Has anyone else noticed this?   Sample task that will not print a message to the log:
Copy code
from prefect import task

@task(log_stdout=True)
def print_msg():
    print("Will it work?")
Sample task that will print a message to the log:
Copy code
import time

from prefect import task

@task(log_stdout=True)
def print_msg():
    print("Will it work? Yup!")
    time.sleep(5)
  Any help appreciated,   P
k
Hey @Peter Peter, messages shouldn’t be lost. There is a possibility of a delay before they show up in the UI. Do these not show up in the UI after a while?
p
Thanks for the quick response @Kevin Kho. No, they never show up. But they do show up when there is a small delay after.
k
Looks like there is a chance of a race condition here. A team member will look into it. Do you experience this also when you run Flows without Dask? Does it always happen for the first flow?
p
If I run it locally there are no issues with logging. What do you mean by first flow? If you mean task: if I have a number of tasks that print to the log, it will not print them if there is no delay. If I have 3 print tasks and the second has a delay (after the print), then the first will print along with the second message. See the sketch below.
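For reference, a minimal sketch of that three-task scenario (task names and messages are illustrative):
Copy code
import time

from prefect import Flow, task


@task(log_stdout=True)
def print_first():
    print("first message")


@task(log_stdout=True)
def print_second():
    print("second message")
    time.sleep(5)  # per the report above, messages printed before this delay do show up


@task(log_stdout=True)
def print_third():
    print("third message")


with Flow("three_print_tasks") as flow:
    print_first()
    print_second()
    print_third()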
k
Sorry I meant the first `task`*
p
It seems to happen if the flow finishes too fast. It will happen on the first, second, .... I first noticed it when a task failed and there was no actual stack trace and none of the logging I had put in place.
k
Got it, we’ll have a team member look into this.
p
@Kevin Kho Thanks! Keep me posted if you don't mind
k
Sure. I don’t have any updates for now, but a team member was running some tests on this.
p
@Kevin Kho Any update on this?
k
Will check
z
Hey @Peter Peter -- do the logs never show up or do they eventually populate in the UI?
p
@Zanie If I add a delay in the task they will show up. It seems to be the logging at the end of the flow (crash or graceful finish) that is missing.
z
How are you viewing these logs?
p
The Prefect UI
z
And if you refresh the page after the flow run is complete they do not appear still? The flow run state can be reported as complete before logs finish propagating.
p
No, they do not appear later or the next day. Refreshing doesn't help.
z
If you use a `logger` instance instead of logging stdout, do you see the same behavior?
I've never seen this so it may be quite specific to your setup. Some additional information on your executor configuration may be useful as well.
p
I will need to check whether it happens when using the logger as well.
z
Thanks! Just trying to narrow down where things are going wrong.
p
Via the logger they still do not show up.
z
Which dask executor and scheduler are you using?
p
Copy code
executor = DaskExecutor(
    cluster_class=lambda: KubeCluster(make_pod_spec(image='___theImageName___')),
    adapt_kwargs={"minimum": 1, "maximum": 1},
)
z
👍
Does this also occur when not using a `KubeCluster` dask cluster?
p
Still using `DaskExecutor`?
z
Yeah, just with a local dask cluster
(that the executor creates still)
p
With `executor = DaskExecutor(adapt_kwargs={"minimum": 1, "maximum": 1})` logging is still missing.
z
I cannot reproduce this with a simple flow
If you use
Copy code
from prefect import Flow, task
from prefect.executors import DaskExecutor

@task(log_stdout=True)
def print_msg():
    print("Will it work?")

with Flow("test", executor=DaskExecutor()) as flow:
    print_msg()
do you see the same behavior?
I'm using
prefect register -p flow.py --project test
and
prefect run --name test --execute
but I also cannot reproduce this when using an agent.
p
In my version it says `-p` is not a valid argument. Could this issue have been fixed in recent versions of Prefect?
we are using 0.14.6
z
Ah, the register/run CLIs have been overhauled since then.
p
I have been registering using `flow.register()` and running from the UI.
z
I'm not sure what's causing your issue so I can't say if a new version will have resolved it. At the very least, you should try upgrading to the newest version within 0.14.x. I'd recommend moving to 0.15 when you get the chance 🙂
The logging behavior should be the same whether you use the CLI or not.
p
The above sample does not execute correctly on Kubernetes. This is the sample I am using.
Copy code
from prefect import task
from prefect.storage import Docker
from prefect.run_configs import KubernetesRun
from prefect.executors import DaskExecutor
from prefect import Flow
import prefect


@task(log_stdout=True)
def print_first():
    print("_First Message!")
    logger = prefect.context.get("logger")
    logger.info("1234!")
    logger.error("5678!")


with Flow("log_flow_delay_after_3rd") as flow:
    a1 = print_first()

env_vars = {
    "PREFECT__LOGGING__LEVEL": "DEBUG",
}

storage = Docker(registry_url="FILLIN",
                 image_name="IMAGENAME",
                 image_tag="0.1.36",
                 local_image=False,
                 env_vars=env_vars)

executor = DaskExecutor(adapt_kwargs={"minimum": 1, "maximum": 1})

flow.executor = executor

flow.run_config = KubernetesRun(
    labels=["DASK"],
    cpu_request=1,
    memory_request=1,
)

storage.add_flow(flow)
flow.storage = storage

flow.register(project_name="log_test", build=False)

storage.build(push=True)
z
Can you clarify: Does your / my sample work without Kubernetes?
Are you seeing this behavior on Kubernetes without a Dask Executor?
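e.g. something along these lines, keeping the rest of the flow setup as-is (a sketch):
Copy code
from prefect.executors import LocalExecutor

# swap out the DaskExecutor to test on Kubernetes without Dask
flow.executor = LocalExecutor()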
p
If I run that locally I do see all the log entries on the screen. I see the same behaviour without the Dask Executor.
z
So just to clarify, this is only an issue with `KubernetesRun`?
p
Just tested on a local Prefect server and no issues. Seems like the issue is only in Kubernetes.
z
Great thanks! I'll try to reproduce and get back to you.
p
@Zanie Were you able to reproduce this issue?
z
Sorry Peter, this slipped off my radar.
Let me see if I can find someone to help me reproduce; I've been very busy lately.
We've run this example in our cluster and the logs appear as they should 😕
t
Hi Peter. I've registered this flow on 0.14.6 with 1 minor change to the base_image and I still see the logs in Prefect Cloud. In my Docker storage I used `base_image = "prefecthq/prefect:0.14.6-python3.8"`, but I don't believe this to be the problem. Anything interesting about the cluster it's running in?
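Roughly like this, as a sketch (registry/image names are placeholders; only base_image differs from your snippet):
Copy code
storage = Docker(registry_url="FILLIN",
                 image_name="IMAGENAME",
                 image_tag="0.1.36",
                 base_image="prefecthq/prefect:0.14.6-python3.8",
                 local_image=False,
                 env_vars=env_vars)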
p
Hello, not sure if there is anything interesting about the cluster. They are supposed to be updating the version of Prefect to the latest. I will test on that instance when it is complete.