# prefect-server
t
Is it OK to assume that when following the scaling out section of the tutorial (https://docs.prefect.io/core/tutorial/06-parallel-execution.html#scaling-out), when using a remote `DaskExecutor`, logs from tasks will not be mirrored locally and we will have to build a custom logger for this? This limitation isn't explicitly documented, which is confusing.
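For context, a minimal sketch of the setup under discussion, assuming Prefect 0.x ("Core") and a Dask scheduler already running remotely; the scheduler address, flow name, and task are placeholders:

```python
# Minimal sketch of the tutorial's "scaling out" setup, assuming Prefect 0.x.
# On some older 0.x versions the import is `from prefect.engine.executors import DaskExecutor`.
import prefect
from prefect import Flow, task
from prefect.executors import DaskExecutor

@task
def say_hello():
    # This logs fine with a LocalExecutor, but on a remote Dask worker the
    # record is emitted on the worker and is not mirrored back locally.
    prefect.context.get("logger").info("hello from a task")

with Flow("dask-logging-example") as flow:
    say_hello()

# Placeholder address: point at your remote Dask scheduler (e.g. on EKS).
flow.run(executor=DaskExecutor(address="tcp://dask-scheduler:8786"))
```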
k
Hey @Tom Forbes, is the question whether, when using the DaskExecutor, logs will show up in Prefect? Scheduler logs will, but worker logs will not, because Dask workers don't have the environment variables set and the logger gets restarted when it gets deserialized on the worker.
f
So in this case I'm assuming you would advise gathering the workers' stdout from whatever the Dask workers are running on (in our case, with @Tom Forbes, EKS)? I'd be curious to know how this affects people's workflows in real life. Like, I'm testing with a LocalExecutor, and then I want to run the same Flow on some kind of remote dev cluster, and suddenly I've lost all the outputs/stacktraces.
k
Hey, so my wording earlier was just a bit unclear, so I want to clarify (though I think you got it): neither scheduler nor worker logs are shipped to Prefect, only Prefect logs. Any Dask logs need to be managed separately from Prefect (or with a worker/scheduler plugin of some kind that ships them back to Prefect).
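A hedged sketch of the worker-plugin idea mentioned above: Dask's `WorkerPlugin` hook lets you attach a logging handler on every worker as it starts, so task log records emitted on workers can be shipped to a sink of your choosing. This is not a built-in Prefect feature; the handler and scheduler address below are placeholders:

```python
# Hedged sketch: forward worker-side Prefect log records via a Dask WorkerPlugin.
# The StreamHandler is a stand-in -- swap it for a handler that writes to your
# own sink (a file, CloudWatch, an HTTP endpoint, ...).
import logging
from distributed import Client
from distributed.diagnostics.plugin import WorkerPlugin

class LogForwardingPlugin(WorkerPlugin):
    def setup(self, worker):
        handler = logging.StreamHandler()  # placeholder sink
        handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(message)s"))
        # Prefect 0.x task loggers live under the "prefect" namespace.
        logging.getLogger("prefect").addHandler(handler)

# Register the plugin on a running cluster (address is a placeholder).
client = Client("tcp://dask-scheduler:8786")
client.register_worker_plugin(LogForwardingPlugin())
```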
c
Adding on here - a benefit of using either Prefect Server or Prefect Cloud is that all Prefect logs (as Kevin clarified) are shipped to a central location automatically; runs not orchestrated through one of those services are stateless.
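To make that last point concrete, a minimal sketch assuming Prefect 0.x: a bare `flow.run()` persists nothing once the process exits, while registering the flow and running it through Server or Cloud via an agent stores Prefect logs centrally. The project name is a placeholder:

```python
# Stateless: no state or logs are persisted anywhere after the process exits.
flow.run()

# Orchestrated: register against a backend, then run via an agent so Prefect
# logs are shipped to Server/Cloud. Shell steps (run once beforehand):
#   $ prefect backend server        # or: prefect backend cloud
#   $ prefect create project demo
#   $ prefect agent local start
flow.register(project_name="demo")  # "demo" is a placeholder project name
```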