<@ULVA73B9P> I want to set a hard-coded job variab...
# ask-marvin
j
@Marvin I want to set a hard-coded job variable in my deployment. How can I access job variables in my flow? Is the only way to set them as part of `env`, and use `os.environ.get()`?
m
In Prefect 3.x, you can set job variables for your deployment using the `job_variables` argument in the `.deploy()` method. These variables can be accessed in your flow runs and are visible under the Configuration tab in the Prefect UI. You can override them when running flows via the UI, CLI, or automations. Here's a concise approach to setting and accessing job variables:
1. Set job variables: define them in your deployment using the `job_variables` argument when deploying your flow.
2. Access job variables: you can access these variables in your flow using the Prefect runtime API, which allows you to retrieve configuration and context dynamically during a flow run.
For more detailed steps and examples, refer to the Prefect documentation on customizing deployments. If you were thinking about using environment variables, that's certainly a common approach, but Prefect's job variables provide a more integrated way to manage such configurations within the Prefect ecosystem.
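A minimal sketch of what that could look like, assuming a work pool named `my-pool` and an environment variable named `MY_SETTING` (both placeholders); job variables under `env` are injected into the flow run's execution environment, so `os.environ.get()` works for reading them:
```python
import os

from prefect import flow


@flow
def my_flow():
    # Read the value injected via the deployment's job variables.
    # MY_SETTING is a placeholder name; use your own.
    setting = os.environ.get("MY_SETTING", "default-value")
    print(f"MY_SETTING = {setting}")


if __name__ == "__main__":
    # Hard-code the job variable on the deployment itself.
    # Depending on your work pool type you may also need an image= argument
    # or flow.from_source(...) before calling .deploy().
    my_flow.deploy(
        name="my-deployment",
        work_pool_name="my-pool",
        job_variables={"env": {"MY_SETTING": "hard-coded-value"}},
    )
```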
n
I would say if you want to smuggle some value into a flow, then Prefect variables are the way. You could use `job_variables` to put them in the env, but I'd say this is more a use case for variables.
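For reference, a minimal sketch of reading and writing a Prefect variable programmatically (the variable name `common_setting` is a placeholder; this assumes the `Variable.get`/`Variable.set` interface from Prefect 3.x):
```python
from prefect import flow
from prefect.variables import Variable


@flow
def my_flow():
    # Read a Prefect variable at runtime; the name is a placeholder.
    value = Variable.get("common_setting", default="fallback")
    print(f"common_setting = {value}")


if __name__ == "__main__":
    # Variables can also be set programmatically (or via the UI / CLI / API).
    Variable.set("common_setting", "some-value", overwrite=True)
    my_flow()
```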
j
That certainly would work. I was concerned at first that I would have to manually change them in the UI, but there is an API to programmatically change variables. While I have you here, if you have any ideas for my use case, I would appreciate it. I intend to create many deployments/flows that all have a common set of parameters supplied to them, let's say `[a, b]`. I want to do some prechecks so that I never supply `a` and `b` at the same time. I can make a function to do this and call it at the start of every flow, but I find that a bit ugly, for lack of a better term. Instead, I thought I could create a common flow X that is run before any other flow, and use X as the deployed flow that calls my true flow. The environment variable Y would be changed per deployment, so that X calls the flow indicated by Y after performing the prechecks.
This could be a bit of a convoluted approach just to avoid calling a function a few times, but if my prechecks/flows eventually start expanding a lot, then calling it everywhere could create a lot of bloat.
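Roughly, that wrapper idea might look something like this (the env var name `TARGET_DEPLOYMENT`, the deployment name format, and the precheck itself are all placeholders), using `run_deployment` to start the real flow once the prechecks pass:
```python
import os

from prefect import flow
from prefect.deployments import run_deployment


def prechecks(a: int | None = None, b: str | None = None) -> None:
    # Hypothetical precheck: never allow a and b at the same time.
    if a is not None and b is not None:
        raise ValueError("Supply either a or b, not both")


@flow
def wrapper_flow(a: int | None = None, b: str | None = None):
    prechecks(a=a, b=b)
    # TARGET_DEPLOYMENT is a placeholder, set per deployment via
    # job_variables={"env": {...}}, e.g. "my-flow/my-deployment".
    target = os.environ["TARGET_DEPLOYMENT"]
    run_deployment(name=target, parameters={"a": a, "b": b})
```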
n
> I intend to create many deployments/flows that all have a common set of parameters supplied to them, let's say `[a, b]`. I want to do some prechecks so that I never supply `a` and `b` at the same time. I can make a function to do this and call it at the start of every flow, but I find that a bit ugly, for lack of a better term.
might be lacking context, but this sounds like a case for a pydantic model:
from prefect import flow
from pydantic import BaseModel, model_validator


class Params(BaseModel):
    a: int | None = None
    b: str | None = None

    @model_validator(mode="after")
    def assert_xor(self):
        # Reject the case where both a and b are supplied.
        if self.a is not None and self.b is not None:
            raise ValueError("dont do that")
        return self


@flow
def foo(params: Params): ...
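For example, constructing the model with both fields set raises immediately; since Prefect validates flow-run parameters against the type hints by default, the same check should fire before the flow body runs:
```python
Params(a=1)             # ok
Params(b="hello")       # ok
Params(a=1, b="hello")  # raises pydantic.ValidationError ("dont do that")
```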
j
I see, so I can define this class and use it as my params. If I deployed a flow like this, would `a` and `b` show up as individual parameters?
n
yep you get a nice form and we interpret enums and literals and submodels and all that, i had this example laying around
[image: screenshot of the deployment parameter form the Prefect UI generates from a pydantic model]
i.e. you get a field in the form for each field on the model, and the UI should give the right input box for the type of each field
j
Oh wow, you can get a drop-down, that's really cool. I was wondering how that could be done. This is still a little bit different, though, as in my use case I want to be able to supply an arbitrary string of my choice.
n
you'd be free to have a freeform `str` field as well
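Something like this (field names and choices are just illustrative) would give a drop-down for the `Literal` field and a freeform text box for the `str` field in the deployment form:
```python
from typing import Literal

from pydantic import BaseModel


class Params(BaseModel):
    # Should render as a drop-down of the allowed choices.
    mode: Literal["fast", "thorough"] = "fast"
    # Should render as a freeform text input.
    label: str | None = None
```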
j
Yeah, I was just thinking that to be the case as well
Thank you, this has been very educational
n
catjam