# prefect-community
I’m also seeing an issue with `'PrefectSecret' object has no attribute 'secret_name'`, which started around November 11. Possibly there was a change to my execution environment, like using a newer/different Prefect agent at that point. Do I need to reinstall the Prefect agent?
Ah, yeah. There was a change to the implementation of `PrefectSecret` objects. Generally we recommend always running Prefect flows with the Prefect version they were registered with to avoid issues. In this case you'd either need to downgrade back to the version your flow was registered with, or re-register your flow with the latest version of Prefect.
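A stdlib-only sketch of why pickle-based flows are version-sensitive (the class names and attributes below are hypothetical, standing in for the real `PrefectSecret` change): a pickle effectively freezes each object's attribute dict, so state written by an old implementation can end up rehydrated into a new implementation that expects different attribute names.

```python
import pickle

# Old implementation: state stored under `secret_name`.
class SecretV1:
    def __init__(self, name):
        self.secret_name = name

# A pickled flow freezes each object's attribute dict at registration time.
frozen_state = pickle.dumps(SecretV1("db-password").__dict__)

# New implementation: refactored to store `name` and read it in run().
class SecretV2:
    def run(self):
        return self.name

# Rehydrate the old state onto the new class (what unpickling does in spirit):
restored = SecretV2()
restored.__dict__.update(pickle.loads(frozen_state))

try:
    restored.run()
except AttributeError as exc:
    # The new code looks for an attribute the old state never had.
    error_message = str(exc)
```

This is the same failure mode in miniature: the attribute error comes not from either version being broken, but from old pickled state meeting new class code.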
re-registering the flow did not seem to fix it, although perhaps I was re-registering with the wrong Prefect version
To clarify, the important thing is that the Prefect version the flow was registered with matches the Prefect version the flow runs with. The version the agent runs shouldn't matter, unless you're using the local agent (which uses the same python environment to run flows).
If you use script-based storage (rather than pickle-based storage, which is the default) this requirement is much less strict. See https://docs.prefect.io/core/idioms/file-based.html for more information.
I see. For some simple pipelines I was using local execution; I understand using the Kubernetes environment would not have this issue.
oh, this seems to be storage specific, not execution specific
I am already using Docker storage, but I do not have `stored_as_script=True` set
You shouldn't need to change execution environments to fix this - for local execution, just ensure that when you upgrade Prefect you also re-register your flows (or register local flows with `stored_as_script=True`).
If I use Docker storage, is it critical that I set `stored_as_script=True`?
No, the version of prefect will be frozen as part of the image.
That makes sense
I was able to get past that error. It seems the issue here related to using the `DaskExecutor`, which does use the Prefect version deployed throughout the cluster.
Ah, do you have a long-running Dask cluster that flow runs connect to? If you're using a local Dask executor (e.g. `DaskExecutor()` with no kwargs) things should work fine (not that we'd recommend one over the other, just trying to understand your deployment).
I had been using a long-running Dask cluster in order to get some other things working, but that approach seems to have been causing more problems than it was solving, so I am switching this pipeline over to using
(in order to add a sidecar container to my job)
The pipeline is running fine now, although it seems the Kubernetes job is hanging around after the flow exits
seems like the flow container exited (good) but my sidecar didn't, so the job doesn't get torn down. Probably some Kubernetes configuration I need to look for so the job doesn't wait for the sidecar
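One hedged sketch of a possible fix, assuming a recent Kubernetes (1.28+ with native sidecar support; all names below are hypothetical, not from the original deployment): declaring the sidecar as an init container with `restartPolicy: Always` makes Kubernetes treat it as a sidecar that is shut down automatically once the main containers finish, so the Job can complete and be torn down.

```yaml
# Hypothetical Job spec fragment (Kubernetes 1.28+ native sidecars).
apiVersion: batch/v1
kind: Job
metadata:
  name: flow-run            # hypothetical name
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
        - name: sidecar     # e.g. a proxy the flow container talks to
          image: my-sidecar:latest
          restartPolicy: Always   # marks this as a native sidecar
      containers:
        - name: flow        # the main (flow) container
          image: my-flow:latest
```

On older clusters the usual workaround is the reverse handshake: have the sidecar watch for the flow container's exit (e.g. via a shared `emptyDir` sentinel file) and shut itself down.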