# ask-community
a
Hey folks! small question: I am looking into workers and writing my first `prefect.yaml`. I want to create multiple deployments with the same image. I don't want the image to be built each time, so I am looking for a way to pull a specific image from GCR for each deployment. Here I don't see any option to specify `pull` to pull a specific image - am I missing something?
k
you can specify the image used for your deployments as a job variable. so in each deployment definition,
```yaml
work_pool:
  job_variables:
    image: your/image:tag
```
if you're dynamically building and naming the image during CI, you can use env vars too
```yaml
work_pool:
  job_variables:
    image: "{{ $MY_IMAGE }}"  # quoted so the templating braces stay valid YAML
```
a
Thanks @Kevin Grismore! how will it connect to GCR?
k
wherever your flows run will need to have the right permissions to pull images
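For context: if the flow-run pods live in GKE, the node service account or Workload Identity usually covers this already. On other clusters, one option is an image pull secret attached to the service account the pods use - a minimal sketch, all names hypothetical:
```yaml
# Hypothetical names throughout. The secret would be created from a GCP service
# account key with Artifact Registry Reader, e.g.:
#   kubectl create secret docker-registry gcr-pull-secret \
#     --docker-server=europe-west1-docker.pkg.dev \
#     --docker-username=_json_key --docker-password="$(cat key.json)"
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prefect-worker-jobs
imagePullSecrets:
  - name: gcr-pull-secret
```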
a
you mean wherever I run `prefect deploy`?
k
you mean for building and pushing the image? or just creating deployments?
all the deployment knows about is the name of the image as a string
a
My flow is:
• In CI, build & push the image to GCR
• Still in CI, run `prefect deploy`
does that sound reasonable?
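As a rough sketch of what that CI job could look like (CircleCI-style, matching the env vars that appear later in this thread; the job name, image variable, and registry auth step are placeholders):
```yaml
# hypothetical CircleCI job; registry auth (gcloud auth configure-docker) and
# PREFECT_API_URL / PREFECT_API_KEY are assumed to be set up elsewhere
jobs:
  build-and-deploy:
    docker:
      - image: cimg/python:3.11
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Build and push the flow image to GCR
          command: |
            docker build -t "$MY_IMAGE" .
            docker push "$MY_IMAGE"
      - run:
          name: Create Prefect deployments from prefect.yaml
          command: prefect --no-prompt deploy --all
```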
@Kevin Grismore When running `prefect deploy --all`, I get asked:
```
Would you like your workers to pull your flow code from a remote storage
location when running this flow? [y/n] (y)
```
I am not sure why - I want the worker to pull the image like I mentioned. This is my `prefect.yaml`:
```yaml
build: null
push: null
pull: null

# the definitions section allows you to define reusable components for your deployments
definitions:
  work_pool: &common_work_pool
    name: "correlation-engine"
    job_variables:
      image: "europe-west1-docker.pkg.dev/.../.../app:..."

# the deployments section allows you to provide configuration for deploying flows
deployments:
- name: "prod_test_run_all_tenant_spaces"
  schedule:
    cron: "0 * * * *" # every hour
  entrypoint: "test.py:run_all_tenants_spaces"
  work_pool: *common_work_pool
```
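One way to avoid that prompt altogether, rather than suppressing it, is to replace `pull: null` with an explicit pull step pointing at the directory where the flow code already lives inside the image - a sketch, assuming the code is baked in at `/opt/prefect`:
```yaml
# replaces `pull: null`; the directory is an assumption - use whatever path
# your Dockerfile copies the flow code to
pull:
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect
```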
k
sorry, was on a call
what you have looks reasonable, I think you may need to do `prefect deploy --no-prompt --all`
a
Thanks @Kevin Grismore, giving this a go!
Hey @Kevin Grismore, now I am getting this in the pod that the job created:
```
Normal   Created   46s   kubelet   Created container prefect-job
Warning  Failed    42s   kubelet   Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "prefect": executable file not found in $PATH: unknown
```
b
hey @Adam Gold - we're doing something very similar to yours. Is this a custom docker image, where your `test.py` is contained inside your image at `/opt/prefect`? Something like:
```python
from prefect import flow

@flow
def run_all_tenants_spaces(...):
    ...
```
I'm curious how `parameters` are passed into the flow run if you're using a custom docker image - doesn't seem documented anywhere. @Kevin Grismore Separately, on `prefect==2.19.8`, I'm getting `No such option: --no-prompt`
a
Hey Brian, `prefect --no-prompt deploy` should fix it
I am using a custom image, not sure how `entrypoint` works and how the prefect worker calls the flow `run_all_tenants_spaces`
b
i see, thanks. yeah, the `entrypoint` is tripping me up. it's a mandatory field in `prefect.yaml`, but when you run `prefect deploy`, prefect seems to look in the local working directory for the `filepath:func` to deploy, when instead I expected it to use the entrypoint from the image's filesystem only at runtime?
a
It looks like the worker is running the image entrypoint as a script, but we need it to run as a full package with `poetry`. Is there any way to change the command to `poetry run …` so it can run with all the dependencies? @Kevin Grismore
k
you don't need an `entrypoint` or `command` in your dockerfile because prefect will set the command for you. the default is `prefect flow-run execute`, which runs some python that imports and executes the flow function described in your deployment's `entrypoint`.
if you want a custom command for your containers, you can set it on the work pool or deployment, but it should always end with `&& prefect flow-run execute`
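Applied to the `poetry` case above, that could look like the following in the deployment's job variables (a sketch; whether you need `poetry run` depends on whether `prefect` is installed into the project's virtualenv rather than globally):
```yaml
work_pool:
  job_variables:
    # assumption: `poetry run` puts the project's virtualenv (and its `prefect`
    # executable) on the path, avoiding the "executable file not found" error above
    command: "poetry run prefect flow-run execute"
```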
a
Thanks @Kevin Grismore it works now 🙂 For local development - would you recommend setting up a local cluster? Also - how can I make the job aware of config maps and secrets in the cluster and add them to its env vars?
k
have you ever visited the advanced tab on your k8s work pool's edit page? there's a JSON-ified job manifest in there
and yeah, I use a local cluster too
a
I don’t want to hardcode the env vars - I want the pods to take them from a config map and a secret in the cluster
or do you mean I can just use the kubernetes spec there?
k
you can use the kubernetes spec in there, and you can add new template variables too
you can add any variables you want, then set them on the work pool, deployment, or individual flow runs
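Concretely, that might look like the excerpt below in the work pool's base job template (the UI stores it as JSON; shown here as YAML). `config_map_ref` and `secret_ref` are custom template variables added to the pool's variables schema - the same names that show up in the `prefect.yaml` a few messages down:
```yaml
# excerpt of a Kubernetes work pool base job template (sketch)
job_manifest:
  apiVersion: batch/v1
  kind: Job
  spec:
    template:
      spec:
        containers:
          - name: prefect-job
            image: "{{ image }}"
            # pull env vars from in-cluster resources instead of hardcoding them
            envFrom:
              - configMapRef:
                  name: "{{ config_map_ref }}"
              - secretRef:
                  name: "{{ secret_ref }}"
```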
a
Awesome! thanks @Kevin Grismore One last question (I hope 😄 ) - how do you differentiate values for different environments??
I have this definition:
```yaml
definitions:
  work_pool: &common_work_pool
    name: "correlation-engine"
    job_variables:
      image: "europe-west1-docker.pkg.dev/{{ $GOOGLE_PROJECT_ID }}/{{ $CIRCLE_PROJECT_REPONAME }}/app:{{ $CIRCLE_SHA1 }}"
      config_map_ref: "{{ $CIRCLE_PROJECT_REPONAME }}-configmap"
      secret_ref: "{{ $CIRCLE_PROJECT_REPONAME }}-secret"
      env:
        CORE_URL: "http://core"
```
How can I have different values for local/prod?
k
what's your approach for differentiating? branches in your repo?
a
No, we use `helm`, so different values.yaml files
actually, now that I can use config maps - I can still use helm for that.
k
yeah, the most straightforward way is to make different deployments
or have your deployment creation templated such that it gets a different name and the rest of the vars are different based on some condition
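For reference, the "different deployments" approach can reuse the shared anchor from earlier - a sketch with hypothetical local values (note YAML merge keys (`<<`) are shallow, so an overridden `job_variables` replaces the anchor's mapping entirely):
```yaml
deployments:
- name: "local_test_run_all_tenant_spaces"
  entrypoint: "test.py:run_all_tenants_spaces"
  work_pool:
    <<: *common_work_pool
    job_variables:  # replaces the anchor's job_variables wholesale
      image: "app:dev"
      config_map_ref: "app-local-configmap"
      secret_ref: "app-local-secret"
- name: "prod_test_run_all_tenant_spaces"
  schedule:
    cron: "0 * * * *"
  entrypoint: "test.py:run_all_tenants_spaces"
  work_pool: *common_work_pool
```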