# prefect-community
b
Thanks for the new release today! Very exciting. I'm trying to use the new Python `Deployment` class. Previously, I defined a `KubernetesJob` infrastructure block in a separate Python script and used the `.save()` method to register the block in the UI. It looks like this was removed in 2.1... How can I do this now?
Moreover, I'm trying to understand what I pass to the `Deployment` class's `infrastructure` parameter. With the `storage` parameter, I just load the `storage` block I previously defined. Can I do the same with the `infrastructure` block? If so, how do I load an existing `KubernetesJob` block? Or is the new workflow to pass the `KubernetesJob` dictionary directly into the `Deployment` class? If that's the case, how do I tie it to a work queue?
In general, I'm a big fan of the `KubernetesJob` block. I don't mind whether it's an actual block or just a Python dictionary I pass into the deployment; either way, it's been extremely helpful for managing compute per deployment. I'd love to streamline it with CI/CD. I know that's in the works, and I'm eager to see best practices for this and implement them.
a
You can totally do the same with the `KubernetesJob` block as you do with the storage block. It would be:

```python
KubernetesJob.load("name")
```

I think for CI/CD, the CLI experience is much easier, but it's up to you.
✅ 1
I'd say give it a try, see how it goes, and if something doesn't work, report back and we'll try to help out.
🙌 2
🙏 2
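Putting those pieces together, a minimal sketch of a Prefect 2.x deployment script that loads both a storage block and a `KubernetesJob` infrastructure block and ties the deployment to a work queue might look like this (the flow, block names, and work-queue name here are hypothetical examples, not names from this thread):

```python
# Hypothetical Prefect 2.x deployment script; block and queue names are examples.
from prefect.deployments import Deployment
from prefect.filesystems import S3
from prefect.infrastructure import KubernetesJob

from my_project.flows import my_flow  # your flow

# Load previously saved blocks -- the same pattern works for storage
# and infrastructure alike.
storage = S3.load("my-storage")         # example storage block name
infra = KubernetesJob.load("prod-k8s")  # example infrastructure block name

deployment = Deployment.build_from_flow(
    flow=my_flow,
    name="my-flow-k8s",
    work_queue_name="kubernetes",  # ties the deployment to a work queue
    storage=storage,
    infrastructure=infra,
)
deployment.apply()  # registers the deployment with the Prefect API
```

This is a deployment/config fragment that needs a running Prefect API to apply, so treat it as a sketch rather than a drop-in script.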
i
Sorry to piggyback on this, but on the same topic: is there planned support for reading a Kubernetes job manifest into a `deployment build` command using the Prefect CLI, similarly to what you're doing here with `KubernetesJob` blocks, Anna? Or is the recommended method to use blocks for more advanced use cases? https://github.com/anna-geller/dataflow-ops/blob/main/blocks/kubernetes-job/infra_from_yaml_manifest.py
Or would the CLI solution be to separately build and save a block remotely somewhere, then read that block back into the `deployment build` CLI command?
a
The typical workflow would be to create a block first and then pass a reference to it to the `deployment build` CLI.
πŸ™ 1
You can totally create a block from CI/CD, and there is even an `overwrite=True` flag, which makes it easy to put into CI/CD without it failing because the block already exists:
👀 1

```python
k8s_job.save("yourname", overwrite=True)
```
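For the manifest question above, one way a CI/CD step could combine the two ideas is to build the `KubernetesJob` block from a YAML manifest and save it idempotently. A minimal sketch, assuming Prefect 2.x; the file path, namespace, image tag, and block name are all hypothetical:

```python
# Hypothetical CI/CD step: build a KubernetesJob block from a YAML manifest
# and save it so later `deployment build --infra-block ...` calls can load it.
from prefect.infrastructure import KubernetesJob

# Read a customized Kubernetes job manifest checked into the repo.
job = KubernetesJob.job_from_file("job.yaml")  # example path

k8s_job = KubernetesJob(
    job=job,
    namespace="prefect",                  # example namespace
    image="my-registry/my-flow:latest",   # example image tag from the CI build
)

# overwrite=True makes the save idempotent across repeated CI runs.
k8s_job.save("prod-k8s", overwrite=True)
```

This needs a Prefect API connection to save the block, so it's a sketch of the pattern rather than a tested script.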
i
Thanks Anna, that's super helpful! In the case of a CI/CD pipeline, would the idea then be to build a custom `KubernetesJob`, save it as a block from the machine running the pipeline, and then pass that block to the `--infra-block` flag in the CLI when running `deployment build`?
🙌 1
a
I believe the easiest approach would be to treat infrastructure blocks as generic "building blocks" that can optionally be adjusted on a per-deployment basis with `--override` flags. This shows the initial setup for blocks: https://github.com/anna-geller/dataflow-ops/blob/main/.github/workflows/ecs_prefect_agent.yml. Then you can set `--override` per deployment from CI when needed, e.g. when you build a custom image as part of that CI run and pass the image tag to the `deployment build` command with `--override image=xxx`.
🙌 1
b
Excellent! Thank you both for the ideas! Really appreciate it.
Also, this is sweet: https://github.com/anna-geller/dataflow-ops/blob/main/deploy_cli/hello.bash#L2. I really appreciate you creating this resource!
πŸ™ 1
πŸ™Œ 1