Thread
#prefect-community
    tas

    2 months ago
    Hi there! Is there documentation on how to set up Prefect Server for a dask cluster that runs on the same machine but in Docker containers (so the scheduler is in one container and its workers are in separate containers)? I created that cluster using our codebase, but whenever I run the flow from the Prefect UI, the task reaches the dask scheduler, then a worker, and then fails during run initialization with "requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=4200): Max retries exceeded with url" (it fails with that on the dask worker). The task runs fine when I run it locally with flow.run(). Is there a config var that would point it outside the dask container?
    The issue comes down to Prefect trying to communicate using "localhost". If I run the dask cluster without containers, it's fine.
    Anna Geller

    2 months ago
    Hard to give a single answer; we have tons of resources on that here: https://discourse.prefect.io/tag/dask. This is the best resource I can point you to atm for Prefect 1.0: https://discourse.prefect.io/t/how-can-i-configure-my-flow-to-run-with-dask/45
    tas

    2 months ago
    That's what I was looking for, thanks!
    I've dug through all of that and more of your GitHub issues, but I still can't make my containerized dask worker use a different host than localhost for communicating with the Prefect server. Am I missing something trivial here?
    Full traceback from dask worker:
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.8/site-packages/prefect/engine/cloud/task_runner.py", line 154, in initialize_run
        task_run_info = self.client.get_task_run_info(
      File "/opt/conda/lib/python3.8/site-packages/prefect/client/client.py", line 1494, in get_task_run_info
        result = self.graphql(mutation)  # type: Any
      File "/opt/conda/lib/python3.8/site-packages/prefect/client/client.py", line 443, in graphql
        result = self.post(
      File "/opt/conda/lib/python3.8/site-packages/prefect/client/client.py", line 398, in post
        response = self._request(
      File "/opt/conda/lib/python3.8/site-packages/prefect/client/client.py", line 647, in _request
        response = self._send_request(
      File "/opt/conda/lib/python3.8/site-packages/prefect/client/client.py", line 497, in _send_request
        response = session.post(
      File "/opt/conda/lib/python3.8/site-packages/requests/sessions.py", line 590, in post
        return self.request('POST', url, data=data, json=json, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/requests/sessions.py", line 542, in request
        resp = self.send(prep, **send_kwargs)
      File "/opt/conda/lib/python3.8/site-packages/requests/sessions.py", line 655, in send
        r = adapter.send(request, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/requests/adapters.py", line 516, in send
        raise ConnectionError(e, request=request)
    requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=4200): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x407abf5550>: Failed to establish a new connection: [Errno 111] Connection refused'))
    [2022-07-18 14:56:14+0200] INFO - prefect.CloudTaskRunner | Task 'step2': Finished task run for task with final state: 'Pending'
    /opt/conda/lib/python3.8/site-packages/prefect/utilities/logging.py:147: UserWarning: Failed to write logs with error: ConnectionError(MaxRetryError("HTTPConnectionPool(host='localhost', port=4200): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x40775569a0>: Failed to establish a new connection: [Errno 111] Connection refused'))")), Pending log length: 5,988, Max batch log length: 4,000,000, Queue size: 2
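    For reference, the usual workaround here is to override the API endpoint the Prefect 1.x client reads from its config, which environment variables of the form `PREFECT__<section>__<key>` can set per container. A minimal sketch, assuming the worker containers can reach the server at a hypothetical hostname `prefect-server` (e.g. a Docker network alias, or `host.docker.internal` on Docker Desktop):

    ```python
    import os

    # These must be set before the dask worker process creates a Prefect
    # Client; in practice you would pass them as `-e` flags to `docker run`
    # for each worker container rather than setting them in code.
    os.environ["PREFECT__BACKEND"] = "server"

    # Replaces the default endpoint http://localhost:4200 seen in the
    # traceback above. "prefect-server" is a placeholder hostname.
    os.environ["PREFECT__CLOUD__API"] = "http://prefect-server:4200"

    print(os.environ["PREFECT__CLOUD__API"])  # http://prefect-server:4200
    ```

    This is a sketch of the config-override mechanism, not a definitive fix; the right hostname depends on how the containers are networked.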
    tas

    2 months ago
    Ah, alright, so Prefect uses the config on the host and shares it with the dask worker (in this case). Thanks a lot Anna, all good now!