Thread
#prefect-community
    Josh
    3 months ago
    @Anna Geller or @Kevin Kho Any update on this issue? We’re running into the same issue with no logs appearing when using Prefect with a LocalDaskExecutor on a Docker Agent in “processes” mode https://github.com/PrefectHQ/prefect/issues/5769
    Kevin Kho
    3 months ago
    We haven’t had bandwidth to investigate this thoroughly yet as we get 2.0 out of beta. I can try digging into it myself today and see what I find.
    Ping me tomorrow if I don’t get back to you.
    I am still looking and haven’t found the cause yet, but a good workaround is a Dask LocalCluster, if you’re open to using that. Example below:
    from prefect import task, Flow
    from prefect.run_configs import ECSRun, DockerRun
    from prefect.storage import S3
    from prefect.executors import LocalDaskExecutor, DaskExecutor
    import prefect
    import time
    
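    # log_stdout=True forwards print() output from the task to the Prefect logger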
    @task(log_stdout=True)
    def abc(x):
        time.sleep(5)
        prefect.context.logger.info(x)
        print(x)
        return "hello"
    
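    # With no address or cluster_class given, DaskExecutor spins up a local
    # distributed cluster: here 4 worker processes with 1 thread each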
    with Flow("ecs_test", run_config=DockerRun(image="prefecthq/prefect:1.2.0-python3.7", env={"PREFECT__LOGGING__LOG_LEVEL": "DEBUG"}), 
                          executor = DaskExecutor(cluster_kwargs={"n_workers": 4, "threads_per_worker": 1})) as flow:
        abc(1)
        abc(2)
    
    flow.storage = S3(bucket="coiled-prefect")
    flow.register("databricks")
    Josh
    3 months ago
    Is it possible to have n_workers or threads_per_worker set dynamically at flow run time based on agent env variables?
    Kevin Kho
    3 months ago
    Actually that is only possible for DaskExecutor and not LocalDaskExecutor. You can see this
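    (For reference: a minimal sketch of the dynamic pattern being pointed at here, assuming Prefect 1.x, where DaskExecutor accepts a callable cluster_class that is only invoked when the flow run starts. The environment variable names below are made up for illustration.)
    import os

    from distributed import LocalCluster
    from prefect.executors import DaskExecutor

    def make_cluster():
        # This callable runs when the flow run starts, so it sees the environment
        # set by the agent / run config rather than the registration environment.
        # DASK_N_WORKERS and DASK_THREADS_PER_WORKER are hypothetical names.
        return LocalCluster(
            n_workers=int(os.environ.get("DASK_N_WORKERS", "4")),
            threads_per_worker=int(os.environ.get("DASK_THREADS_PER_WORKER", "1")),
        )

    # Attach to the flow defined earlier; the cluster size is then resolved at run time
    flow.executor = DaskExecutor(cluster_class=make_cluster)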