When using the python `Deployment` object and spec...
# ask-community
i
When using the Python `Deployment` object and specifying a remote S3 flow storage block and Kubernetes infrastructure, we are seeing strange behavior on flow execution. The deployment pushes the flow to S3 storage as expected (confirmed by the storage block in the Prefect Cloud UI being referenced in the deployment UI), but the run errors out with the following:
Flow could not be retrieved from deployment...FileNotFoundError: [Errno 2] No such file or directory: '/My/Local/Path/my_project/flow.py'
where the path is the absolute path on the machine that applied the deployment, whereas the path in the S3 bucket is just `bucketname://flow.py`. Here is the code we are using, if anyone has any ideas:
```python
from prefect.deployments import Deployment
from prefect.infrastructure import KubernetesJob
from prefect.filesystems import S3
from my_project.flow import entrypoint

# Run flows as Kubernetes Jobs in the "prefect2" namespace
infrastructure = KubernetesJob(namespace="prefect2")

# Build the deployment from the flow, storing code in the saved S3 block
deployment = Deployment.build_from_flow(
    flow=entrypoint,
    name="my_deployment",
    work_queue_name="default",
    storage=S3.load("default-block"),
    infrastructure=infrastructure,
)
deployment.apply()
```
k
Did you get this error after running an agent?
i
Hi Khuyen - this is the error on the pod/container running the actual flow; the agent successfully starts a new pod for the flow.
k
Hmm. That is a weird error. It could be that the agent picked up another deployment (not the one that uses S3). Can you double check the name of the work queue and see if your agent is running on the right work queue?
i
Double-checked, and the agent is also running on the “default” queue specified in our `Deployment` object. There are no other deployments scheduled in this environment; I’m just kicking this off as a one-off run from the Run tab in the Cloud UI, and I can see it fail in real time.
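For context, an agent is pointed at a queue with the standard Prefect 2 command (queue name taken from the deployment above):
```shell
# Start a Prefect 2 agent polling the "default" work queue
prefect agent start -q default
```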
This is what the deployment shows on its page in the Cloud UI (screenshot attached).
k
Thank you for checking. Do you mind taking a screenshot of the path of your S3 storage as well? Just to double-check.
i
Sure
This is the flow as it appears in the root directory of the storage block/bucket referenced by the deployment.
d
This is the same cause as this issue: https://github.com/PrefectHQ/prefect/issues/6469
🙏 1
i
It seems like the container is actually trying to pull from S3(?), because when I remove s3fs from the flow container’s image, I get an s3fs-missing/remote-filesystem error.
Thanks for that link, David. It looks like I’m getting the same behavior (including the CLI build/apply working when run from the same folder).
d
My workaround is to fix the entrypoint before calling `apply`:
```python
from pathlib import Path

def fix_entrypoint(entrypoint: str) -> str:
    # Workaround until https://github.com/PrefectHQ/prefect/issues/6469 is resolved:
    # convert the absolute flow path into one relative to the working directory
    flow_path, flow = entrypoint.split(':')
    flow_path = Path(flow_path).relative_to(Path('.').absolute())
    return f"{flow_path}:{flow}"

d = Deployment.build_from_flow(...)
d.entrypoint = fix_entrypoint(d.entrypoint)
d.apply()
```
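For illustration, applying this to the path from the error above (assuming `apply` was run from `/My/Local/Path/my_project`, which is a guess based on the traceback and the file sitting at the bucket root):
```python
from pathlib import Path

# Illustration only: the working directory is assumed, not confirmed in the thread
entrypoint = "/My/Local/Path/my_project/flow.py:entrypoint"
flow_path, flow = entrypoint.split(":")
relative = Path(flow_path).relative_to("/My/Local/Path/my_project")
print(f"{relative}:{flow}")  # -> flow.py:entrypoint, matching the bucket-root layout
```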
🙌 2
i
Thank you David! Can you describe what is being passed as the `entrypoint` argument in this function?
k
Thank you for raising this issue and for the workaround. I’ll talk to the team about this.
i
Thank you Khuyen. Would it help if I also opened up an issue on github?
k
Yes, that would be very helpful. Thank you. I’ll link to your issue from the existing one.
🙏 1
We will fix this as soon as possible. For now, I recommend that you use the CLI to create the deployment.
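For reference, the CLI route looks roughly like this (a sketch: flags per the Prefect 2 CLI of the time, run from the project root; the relative entrypoint path assumes the bucket-root layout shown earlier, and the generated YAML name follows the flow name):
```shell
# Build the deployment from the flow's *relative* path, reusing the S3 block
prefect deployment build flow.py:entrypoint \
    -n my_deployment \
    -q default \
    -sb s3/default-block \
    -i kubernetes-job
# Apply the generated manifest
prefect deployment apply entrypoint-deployment.yaml
```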
👍 1
i
Hi Khuyen — I opened an issue here: https://github.com/PrefectHQ/prefect/issues/6482
:gratitude-thank-you: 1
k
Thank you for the descriptive issue! We will try to fix this ASAP
👍 1
d
> Thank you David! Can you describe what is being passed as the `entrypoint` argument in this function?
The original `entrypoint` as set by `Deployment.build_from_flow`. It is an absolute path, and the workaround converts it to a path relative to the working directory, which is what `build_from_flow` uses when uploading the files to your target storage.