# prefect-server
Greetings! Our team has used Prefect for 5 months and it is a very cool tool, thank you very much for it! Unfortunately, we have hit an issue that we cannot fix. It seems that we misunderstand the architecture or something else. I hope that our case will be useful for other teams using Prefect. We have an issue with running a remotely registered Flow. [details moved to the thread]
It seems that this issue comes from the line `flow = storage.get_flow(...)`
Thank you for the very detailed writeup. I may have to ask you to put some details in the thread so as not to crowd the main channel.
So maybe I should explain the process that leads to this error.
DETAILS: Our environment: we are using a local Prefect Server. We also have a Docker container on the same local machine as the server, which runs a single local agent via:
```python
from prefect.agent.local import LocalAgent

if __name__ == "__main__":
    agent = LocalAgent(name="the_agent", labels=[...])
    agent.start()  # start polling the server for flow runs
```
Our code lives on the container's file system. The way we register our flows at the server is the following: we run a Python script in the agent container with these environment variables set:
- `PREFECT__BACKEND=server`
- `PREFECT__SERVER__HOST=${PREFECT__SERVER__HOST}`
- `PREFECT__SERVER__PORT=${PREFECT__SERVER__PORT}`

The script lives in `/<home dir of a container>/reg/`:
```python
import contextlib

from prefect import Client
from prefect.utilities.exceptions import ClientError

from . import common


def main():
    project = "dixy_reg"

    client = Client()
    # creating a tenant that already exists raises ClientError
    with contextlib.suppress(ClientError):
        client.create_tenant(name="default", slug="default")

    client.register(common.init_flow(), project_name=project)


if __name__ == "__main__":
    main()
```
We just activate the conda env and run `python -m reg.register`, and it works. Now we want to create a GitLab CI stage that will test our new code. We share the upcoming code from GitLab with the agent container, and we want to re-register the flows the way we did before, but this time we are in a different container. We use almost the same code, except that we explicitly specify the `api_server` param of `prefect.Client`. After registration we get a new version of the flow at the server. But when we try to run it, we are faced with the following error:
```
Failed to load and execute Flow's environment: ModuleNotFoundError("No module named '/home/pipeline/'")
```
This prefix also appears in the result of the flow's `serialize()` method:
```python
{'path': None,
 'secrets': [],
 'directory': '/home/pipeline/.prefect/flows',
 'stored_as_script': False,
 'flows': {'check_version': '/home/pipeline/.prefect/flows/check-version/2021-09-21t17-43-15-413079-00-00'},
 '__version__': '0.15.2',
 'type': 'Local'}
```
But when I print the serialized storage from the `/reg/` script listed above, I see the same thing:
```python
('storage', {'stored_as_script': False, 'directory': '/home/pipeline/.prefect/flows', 'path': None, 'flows': {'check_version': '/home/pipeline/.prefect/flows/check-version/2021-09-22t09-37-37-140541-00-00'}, 'secrets': [], '__version__': '0.15.2', 'type': 'Local'})
```
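A quick way to see what the two serialized storages do and do not tell you is to diff them directly. This is a minimal sketch with the two outputs above pasted in as plain dicts; the point is that matching metadata says nothing about whether the pickle file exists on the agent's machine:

```python
# The serialized Local storage dict from the first registration, pasted in.
registered = {
    'stored_as_script': False,
    'directory': '/home/pipeline/.prefect/flows',
    'path': None,
    'flows': {'check_version': '/home/pipeline/.prefect/flows/check-version/2021-09-21t17-43-15-413079-00-00'},
    'secrets': [],
    '__version__': '0.15.2',
    'type': 'Local',
}
# The dict printed from the /reg/ script: identical except the timestamped file.
reprinted = dict(registered, flows={
    'check_version': '/home/pipeline/.prefect/flows/check-version/2021-09-22t09-37-37-140541-00-00',
})

# Only the timestamped flow-file path differs...
differing = {k for k in registered if registered[k] != reprinted[k]}
print(differing)  # -> {'flows'}

# ...so identical-looking metadata does not prove the pickle file is
# actually present on the filesystem where the agent runs.
```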
So it seems that the issue is somewhere else. Does anybody have an idea of what is going wrong with our scheme of remote flow registration?
@Kevin Kho My mind is not fully English-compatible. Are you asking me to move the details into a thread?
Yes, if possible. Thank you! I am typing a bit of a long answer for you.
When you register a Flow with Prefect, it places the flow in a Storage. When the agent is told to run a Flow, it picks the Flow up from Storage and executes it. In more concrete terms, I can register with S3 Storage, and then the agent will load the Flow from S3. The default storage is Local Storage: Prefect takes your flow, serializes it, and saves it inside the `.prefect/flows` folder under your home directory. So if you register a Flow from Computer A, it lives on the hard drive of Computer A, and only an agent on Computer A can run it. You will see the error you got if an agent on Computer B tries to run the flow. Going to your setup, I think one container is registering the flow and saving it locally inside that container, but you might be trying to run the Flow from a machine or container that does not hold the Flow locally.
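The failure mode above can be mimicked without Prefect at all. In this hypothetical sketch, `pickle` stands in for cloudpickle and temp directories stand in for `.prefect/flows` on two different machines:

```python
import os
import pickle
import tempfile

# "Computer A": registration serializes the flow to a local path,
# and only that path string is recorded with the server.
flows_dir = tempfile.mkdtemp()  # stands in for ~/.prefect/flows on A
flow_path = os.path.join(flows_dir, "check-version")
with open(flow_path, "wb") as f:
    pickle.dump({"name": "check_version"}, f)  # stands in for the pickled flow

# "Computer B": the agent receives the same path from the server,
# but the file was never written to its own filesystem.
other_dir = tempfile.mkdtemp()  # stands in for ~/.prefect/flows on B
missing_path = os.path.join(other_dir, "check-version")
try:
    with open(missing_path, "rb") as f:
        pickle.load(f)
except FileNotFoundError:
    print("agent on another machine cannot load the flow")
```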
The easiest thing to do is to use a Storage like S3 that is accessible from multiple machines. Does that make sense?
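For reference, switching the registration script to a shared storage looks roughly like this in the Prefect 0.15 API. This is a hedged configuration sketch, not tested against a live server: the bucket name is hypothetical, and `common.init_flow()` is the helper from the registration script earlier in the thread:

```python
from prefect import Client
from prefect.storage import S3

from reg import common  # the registration helpers from the script above


def main():
    # The pickled flow is uploaded to the bucket at registration time,
    # so any agent with access to the bucket can download and run it,
    # regardless of which container did the registering.
    flow = common.init_flow()
    flow.storage = S3(bucket="my-prefect-flows")  # hypothetical bucket name

    client = Client()
    client.register(flow, project_name="dixy_reg")


if __name__ == "__main__":
    main()
```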
Thank you for your explanation! Yes, it makes sense. We will probably start by copying the cloudpickle file from one Local Storage to another. Do you think the error message could be improved in this case? Its source is the absence of the cloudpickle file, not an import problem.
Yeah, it certainly can be for Local Storage specifically. I will bring it up to the team.