# prefect-community
m
Hey community! Thank you for your software 🙂 I have an issue with setting up my Orion workflow on Kubernetes. I am following this doc, but I'm stuck at the Configure storage part for my GCP bucket. The doc suggests calling
prefect storage create
in the CLI, but my 2.0b13 doesn't have that command; it returns Error: No such command 'storage'. Should I create the storage using this doc instead? There is no example for GCP; I found this one in the community. But there is no
from prefect.blocks.storage import GoogleCloudStorageBlock
import in my venv 😞
🙌 1
🙏 1
also, I don't have
from prefect.flow_runners import KubernetesFlowRunner
since I upgraded from 2.0b11 to 2.0b13. The Kubernetes setup flow seems to be a bit broken now.
m
Storage is deprecated since version 2.0b11, so you should indeed move to file systems. I don't have any experience with GCP buckets, but it should be analogous to the AWS S3 example (with gs:// instead of s3:// as stated here, since Prefect uses fsspec underneath). For the second question, you should use KubernetesJob from prefect.infrastructure, as flow runners are deprecated too. The community example you mentioned won't work either, since DeploymentSpec has been deprecated since 2.0b9.
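Untested sketch, just translating the S3 example to GCS (the bucket name is a placeholder, and gcsfs has to be installed for the gs:// scheme to resolve):

from prefect.filesystems import RemoteFileSystem

# gs:// should work because RemoteFileSystem delegates to fsspec,
# which routes the scheme to gcsfs
gcs = RemoteFileSystem(basepath="gs://my-bucket/prefect-flows")
gcs.write_path("healthcheck.txt", b"ok")  # quick smoke test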
๐Ÿ™ 1
If you run into any issues, it might be useful to look at the release notes or check the #announcements channel, as new versions are being released rapidly.
🙌 1
🔥 1
m
wow! the deprecations happen too fast 🙂 Thank you, trying your solution
unfortunately, the release notes are quite outdated too. The Deployment.flow_runner field is still present in the Python code, but it does not work at runtime. Should I set KubernetesJob in the infrastructure field instead? The UI does not seem to parse it, as the Flow Runner field shows None.
Should I just wait for the team to update the docs?
m
This worked for me on 2.0b12:
# module paths as of the 2.0b12 beta (they may shift between releases)
from prefect.client import OrionClient
from prefect.deployments import Deployment, FlowScript
from prefect.infrastructure import KubernetesJob
from prefect.packaging import DockerPackager
from prefect.software import PythonEnvironment

client = OrionClient(ORION_URL)  # ORION_URL: your Orion API URL

d = Deployment(
    flow=FlowScript(path="./etl.py"),
    name="test",
    tags=["orion"],
    infrastructure=KubernetesJob(stream_output=True, namespace="orion"),
    packager=DockerPackager(
        python_environment=PythonEnvironment(python_version="3.8"),
        registry_url="xxxx",  # your container registry
    ),
)

d.create(client=client)
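Note that ORION_URL and registry_url are placeholders. As far as I understand the beta, DockerPackager bakes the flow into an image with the given Python version, and the KubernetesJob infrastructure makes the agent launch a Job from that image, so you still need an agent pointed at a work queue for runs to actually execute.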
m
thx, checking
r
@Mike Kovetsky This is how I did it using RemoteFileSystem. You must install this package into your environment for it to work: pip install gcsfs
# imports as in the example above, plus:
# from prefect.filesystems import RemoteFileSystem
# from prefect.packaging import FilePackager
Deployment(
    name="example",
    flow=example_flow,  # your flow object, defined elsewhere
    infrastructure=KubernetesJob(customizations=resource_limits),
    packager=FilePackager(filesystem=RemoteFileSystem(basepath="gcs://<BUCKET_NAME>")),
)
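If the job can't authenticate to the bucket, RemoteFileSystem forwards extra settings to fsspec/gcsfs. A sketch, assuming you authenticate with a service-account key file (the path is a placeholder):

from prefect.filesystems import RemoteFileSystem

# "token" is a gcsfs option; it can point at a service-account JSON key
fs = RemoteFileSystem(
    basepath="gcs://<BUCKET_NAME>",
    settings={"token": "/secrets/gcp-key.json"},  # placeholder path
)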
๐Ÿ™ 1
๐Ÿš€ 2
m
now fighting with
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host storage.googleapis.com:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)')]
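That error usually means Python can't find a local CA bundle (common with some macOS/pyenv installs). One workaround I'm trying, assuming certifi is installed, is pointing the SSL stack at certifi's bundle before anything opens a connection:

import os
import certifi

# make OpenSSL (and therefore aiohttp) use certifi's CA bundle;
# must be set before the first HTTPS connection is opened
os.environ["SSL_CERT_FILE"] = certifi.where()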