oleander-whirlwind
10/06/2025, 1:12 PM

Marvin
10/06/2025, 1:16 PM
1) Separate virtualenvs with Process work pools
# Create two virtualenvs and install Prefect + your deps in each
python3.10 -m venv .venv-py310
source .venv-py310/bin/activate
pip install -U prefect <deps-for-env-a>
python3.11 -m venv .venv-py311
source .venv-py311/bin/activate
pip install -U prefect <deps-for-env-b>
# Create two process work pools
prefect work-pool create py310 --type process
prefect work-pool create py311 --type process
# Start a worker from each env
# in .venv-py310
prefect worker start --pool py310
# in .venv-py311
prefect worker start --pool py311
# Deploy flows to the appropriate pool
from prefect import flow
@flow
def my_flow():
    ...
my_flow.deploy(name="my-flow-py310", work_pool_name="py310")
my_flow.deploy(name="my-flow-py311", work_pool_name="py311")
2) Separate container images (Docker or Kubernetes)
- When to use: You want fully isolated, reproducible environments via containers.
- How it works: Build two images (each with its own Python/deps) and set the image per deployment. A single worker can run both as it pulls the specified image.
Docker example:
# Create a Docker work pool
uv run --with 'prefect[docker]' prefect work-pool create docker-pool --type docker
# Start a Docker worker
uv run --with 'prefect[docker]' prefect worker start --pool docker-pool
# Deploy flows, pinning different images
from prefect import flow
@flow
def my_flow():
    ...

my_flow.deploy(
    name="etl-py310",
    work_pool_name="docker-pool",
    image="yourrepo/etl:py310",  # image contains py310 + deps
)
my_flow.deploy(
    name="etl-py311",
    work_pool_name="docker-pool",
    image="yourrepo/etl:py311",  # image contains py311 + deps
)
Kubernetes is similar: create a Kubernetes work pool, run a k8s worker, and set the container image per deployment.
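For example, a minimal sketch of that route (the pool name, deployment name, and image tag here are illustrative; the worker machine needs prefect[kubernetes] available):
# Create a Kubernetes work pool and start a worker
uv run --with 'prefect[kubernetes]' prefect work-pool create k8s-pool --type kubernetes
uv run --with 'prefect[kubernetes]' prefect worker start --pool k8s-pool
Then pin a different image per deployment, exactly as in the Docker example:
my_flow.deploy(
    name="etl-py310-k8s",
    work_pool_name="k8s-pool",
    image="yourrepo/etl:py310",
)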
Helpful docs:
- Managing work pools and workers
- Kubernetes integration
- Background on the model: Work pools and workers overview
Questions to tailor this for you:
- Are your “two environments” different Python versions, different dependency sets, or both?
- Do you prefer local virtualenvs or containerized execution (Docker/Kubernetes)?
- Are you running on Prefect Cloud or a local Prefect server?
I can share exact commands/Dockerfiles once I know which route you prefer.

oleander-whirlwind
10/06/2025, 1:18 PM

Marvin
10/06/2025, 1:20 PM
1) Create two virtualenvs, one per Python version
# Env A
python3.10 -m venv .venv-py310
source .venv-py310/bin/activate
pip install -U prefect <deps-for-env-a>
# Env B
python3.11 -m venv .venv-py311
source .venv-py311/bin/activate
pip install -U prefect <deps-for-env-b>
2) Create two Process work pools (run once)
prefect work-pool create py310 --type process
prefect work-pool create py311 --type process
3) Start a worker in each env (two terminals)
# Terminal 1
source .venv-py310/bin/activate
prefect worker start --pool py310
# Terminal 2
source .venv-py311/bin/activate
prefect worker start --pool py311
4) One Python script to define flows, deploy them to the right pools, and trigger runs
# launcher.py
from prefect import flow
from prefect.deployments.flow_runs import run_deployment
@flow
def flow_a():
    print("Hello from Env A (py310)")

@flow
def flow_b():
    print("Hello from Env B (py311)")

if __name__ == "__main__":
    # One-time (or re-runnable) deployment to the matching pools
    flow_a.deploy(name="flow-a-py310", work_pool_name="py310")
    flow_b.deploy(name="flow-b-py311", work_pool_name="py311")

    # Kick off runs; these will execute where their workers are (in different envs)
    ra = run_deployment("flow-a/flow-a-py310")
    rb = run_deployment("flow-b/flow-b-py311")
    print("Submitted runs:", ra.id, rb.id)
Run python launcher.py from any environment. The flows won’t execute in the launcher’s env—they’ll run in the env of the worker assigned to each pool.
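Note that run_deployment waits for each run to finish by default; if you want the launcher to fire and forget, pass timeout=0 (a small sketch using the same deployment names as above):
# Return as soon as the runs are created instead of waiting for completion
ra = run_deployment("flow-a/flow-a-py310", timeout=0)
rb = run_deployment("flow-b/flow-b-py311", timeout=0)
print("Submitted runs:", ra.id, rb.id)  # runs are still pending at this point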
Alternative: Docker (single worker, different images)
- Build two images, e.g., yourrepo/app:py310 and yourrepo/app:py311 (hypothetical Dockerfiles are sketched after this list).
- Create a Docker work pool and start a Docker worker:
uv run --with 'prefect[docker]' prefect work-pool create docker-pool --type docker
uv run --with 'prefect[docker]' prefect worker start --pool docker-pool
- In your code, set the image per deployment:
flow_a.deploy(name="flow-a-py310", work_pool_name="docker-pool", image="yourrepo/app:py310")
flow_b.deploy(name="flow-b-py311", work_pool_name="docker-pool", image="yourrepo/app:py311")
- Then trigger both with run_deployment as above.
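If it helps, a hypothetical pair of image builds (the prefecthq/prefect base-image tags and file names are assumptions; check Docker Hub for the tags matching your Prefect version):
# Dockerfile.py310 (hypothetical)
FROM prefecthq/prefect:3-python3.10
COPY requirements.txt .
RUN pip install -r requirements.txt
# Dockerfile.py311 is identical except: FROM prefecthq/prefect:3-python3.11
# Build both tags
docker build -f Dockerfile.py310 -t yourrepo/app:py310 .
docker build -f Dockerfile.py311 -t yourrepo/app:py311 .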
Docs for reference:
- Managing work pools and workers
If you prefer, I can tailor the script for Docker (including example Dockerfiles) or confirm the exact run_deployment names after you share your flow names.

oleander-whirlwind
10/06/2025, 1:23 PM

Marvin
10/06/2025, 1:23 PM
For local dev, the simplest route is to run the env-B code in a subprocess with the env B Python. No deployments or workers required.
Setup (done once):
# Create two envs with different numpy versions
python3 -m venv .venv-a
python3 -m venv .venv-b
# Env A: where you run Prefect and your main script
. .venv-a/bin/activate
pip install -U prefect numpy==1.26
deactivate
# Env B: used only by the subprocess task
. .venv-b/bin/activate
pip install numpy==2.0
deactivate
One script example (run it from env A):
# dev_two_envs.py
import os
import sys
import json
import subprocess
import textwrap

from prefect import flow, task

def venv_python(venv_path: str) -> str:
    # Cross-platform path to the venv's python executable
    return os.path.join(venv_path, "Scripts" if os.name == "nt" else "bin", "python")

ENV_A = os.path.abspath(".venv-a")
ENV_B = os.path.abspath(".venv-b")
PY_A = venv_python(ENV_A)  # not strictly needed if you're already in env A
PY_B = venv_python(ENV_B)

@task
def task_in_env_a(values):
    # Runs in the current interpreter (env A)
    import numpy as np

    arr = np.array(values)
    return float(arr.sum())

@task
def task_in_env_b(values):
    # Runs code in env B by calling its python in a subprocess
    code = textwrap.dedent("""
        import sys, json
        import numpy as np

        payload = json.loads(sys.stdin.read())
        arr = np.array(payload["values"])
        out = {
            "mean": float(arr.mean()),
            "numpy_version": np.__version__,
        }
        print(json.dumps(out))
    """).strip()
    proc = subprocess.run(
        [PY_B, "-c", code],
        input=json.dumps({"values": values}),
        text=True,
        capture_output=True,
        check=True,
    )
    return json.loads(proc.stdout)

@flow
def my_flow():
    vals = [1, 2, 3, 4]
    # Task A uses numpy from env A
    sum_result = task_in_env_a.submit(vals)
    # Task B uses numpy from env B via subprocess
    mean_result = task_in_env_b.submit(vals)
    return {
        "sum_env_a": sum_result.result(),
        "mean_env_b": mean_result.result(),
    }

if __name__ == "__main__":
    # Activate env A before running this script:
    #   . .venv-a/bin/activate
    #   python dev_two_envs.py
    results = my_flow()
    print(results)
Notes and tips:
- You cannot import two different NumPy versions in the same interpreter. The subprocess boundary is what gives you isolation.
- The subprocess code reads JSON from stdin and writes JSON to stdout so it’s easy to pass inputs/outputs.
- Env B does not need Prefect installed unless you want to run Prefect inside that subprocess. It only needs the packages required for that task (e.g., NumPy).
- For large payloads, pass file paths instead of piping big JSON blobs (a sketch follows these notes).
- If you’re a conda user, you can swap the subprocess command to:
["conda", "run", "-n", "envB", "python", "-c", code]
If you’d like, I can adapt this to use a separate module/script file for the env-B code (sometimes cleaner), or show a Docker-based variation for local dev.