# prefect-community
t
On the Orion beta, I can’t see any route to adding GCP credentials to a `KubernetesFlowRunner` instance other than building the keys into the image itself. Has anyone else found a solution for this?
z
We don’t allow custom job specifications, but will in the future. At that point, you could load them as secrets.
t
The `GOOGLE_APPLICATION_CREDENTIALS` env var is just a path to the JSON file containing the creds, so they need to already be on the filesystem. There doesn’t seem to be a way to customise the `KubernetesFlowRunner` Job template in Orion as there was in Prefect 1, so you can’t mount the secret into the container. Hopefully I’ve missed some config option?
@Zanie those messages crossed in the air. Thanks for responding, so is it just not possible right now?
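(Editor's note, not part of the thread: a minimal sketch of the credentials behaviour described above. Google's client libraries resolve Application Default Credentials from the file that `GOOGLE_APPLICATION_CREDENTIALS` points to, so the JSON key has to exist on the pod's filesystem; on GKE with Workload Identity the same call falls back to the metadata server and no key file is needed. The file path below is hypothetical.)
```python
import os

import google.auth
from google.cloud import storage

# Hypothetical local path to a service-account key file; inside a pod this
# would have to be a path that actually exists in the container.
os.environ.setdefault("GOOGLE_APPLICATION_CREDENTIALS", "/secrets/gcp/key.json")

# google.auth.default() reads the env var (or the metadata server under
# Workload Identity) and returns credentials plus the associated project.
credentials, project = google.auth.default()
client = storage.Client(credentials=credentials, project=project)
print(f"Authenticated against project {project}")
```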
z
Not without patching:
```python
def _create_and_start_job(self, flow_run: FlowRun) -> str:
        k8s_env = [
            {"name": k, "value": v}
            for k, v in self._get_environment_variables().items()
        ]

        job_settings = dict(
            metadata={
                "generateName": self._slugify_flow_run_name(flow_run),
                "namespace": self.namespace,
                "labels": self._get_labels(flow_run),
            },
            spec={
                "template": {
                    "spec": {
                        "restartPolicy": self.restart_policy.value,
                        "containers": [
                            {
                                "name": "job",
                                "image": self.image,
                                "command": self._get_start_command(flow_run),
                                "env": k8s_env,
                            }
                        ],
                    }
                },
                "backoff_limit": 4,
            },
        )
```
I can see if I can sneak it into the next release
🙌 1
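(Editor's note, a hedged sketch rather than an official workaround: one way the hardcoded `job_settings` dict above could be patched to mount a GCP service-account key from a Kubernetes Secret. The secret name, key file name, and mount path are made up for illustration and are not part of the Orion API.)
```python
def add_gcp_credentials(job_settings: dict) -> dict:
    """Sketch: mount a GCP key from a Kubernetes Secret into the flow-run pod.

    Assumes a Secret named "gcp-credentials" containing a "key.json" entry;
    both names and the mount path are hypothetical.
    """
    pod_spec = job_settings["spec"]["template"]["spec"]
    container = pod_spec["containers"][0]

    # Expose the Secret as a volume and mount it read-only in the job container.
    pod_spec["volumes"] = [
        {"name": "gcp-creds", "secret": {"secretName": "gcp-credentials"}}
    ]
    container["volumeMounts"] = [
        {"name": "gcp-creds", "mountPath": "/secrets/gcp", "readOnly": True}
    ]

    # Point Application Default Credentials at the mounted key file.
    container["env"].append(
        {"name": "GOOGLE_APPLICATION_CREDENTIALS", "value": "/secrets/gcp/key.json"}
    )
    return job_settings
```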
t
That’d be amazing, it’s one of the few things I can’t find a usable way around, especially since the Kubernetes flow runner requires remote storage and so needs to communicate with GCS.
It’s fine when I deploy since I use Workload Identity, but locally I’d have to set up a totally different storage system. The docs mention a local KV store for testing, but it isn’t referenced anywhere else.