Arthur Shi
07/10/2025, 9:57 PM
Marvin
07/10/2025, 9:58 PM
1. Setting PREFECT_API_URL
a. Using an environment variable:
bash
export PREFECT_API_URL="http://your-server-address:4200/api"
b. Using the Prefect CLI:
bash
prefect config set PREFECT_API_URL="http://your-server-address:4200/api"
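To confirm which API URL your client will actually use, you can inspect the active settings (standard Prefect CLI; output varies by version):
bash
prefect config view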
2. Where to Run Workflows
You can run your workflows from any machine that can reach your Prefect server. Here's the typical setup:
a. Development Machine: You can run flows directly during development using:
python
if __name__ == "__main__":
    my_flow()  # calling a @flow-decorated function runs it locally and reports state to the configured API
b. Production Environment: For production, you'll want to:
1. Create a deployment using flow.from_source() and .deploy()
2. Set up a work pool in your server UI or via CLI (see the sketch after this list)
3. Run a worker that connects to your work pool to execute flows
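For example, a simple process-type work pool can be created from the CLI (a minimal sketch; the pool name matches the example deployment below):
bash
prefect work-pool create "my-work-pool" --type process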
Example deployment setup:
python
from prefect import flow

@flow
def my_flow():
    # your flow code here
    pass

if __name__ == "__main__":
    my_flow.from_source(
        source=".",  # local path; can also be a git repository URL
        entrypoint="path/to/flow.py:my_flow"
    ).deploy(
        name="my-deployment",
        work_pool_name="my-work-pool"
    )
3. Running Workers
After setting up your deployment, start a worker:
bash
prefect worker start -p "my-work-pool"
The worker will poll your self-hosted server for new flow runs and execute them when scheduled.
Key points to remember:
- Your server, client, and workers all need network connectivity to each other
- Workers need access to any resources your flows require (databases, APIs, etc.)
- You can have multiple workers across different machines all connecting to the same server
- Make sure your PREFECT_API_URL is set correctly on all machines that need to communicate with the server (see the quick check after this list)
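A quick way to check that a machine can reach the server is to query its health endpoint (a minimal sketch, assuming the default port and the standard /api/health route; the address is a placeholder):
bash
curl http://your-server-address:4200/api/health  # a successful response means the API is reachable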
For more details, you can check out the documentation on:
- Self-hosted server setup
- Server concepts
Arthur Shi
07/10/2025, 10:03 PM
Marvin
07/10/2025, 10:03 PM
Running your flow directly as a script has some real advantages for ad hoc work:
1. Simplicity: just run python script.py and you're done
2. Quick Iteration: Faster for development and testing
3. Local Debugging: Easier to use debuggers and step through code
4. No Infrastructure Overhead: Don't need to wait for worker provisioning
My recommendation:
- Use direct script execution during development and debugging
- Use deployments/work pools when:
  - Others need to run your flows
  - You need consistent execution environments
  - You want better observability and management features
  - The flow needs specific infrastructure or resource requirements
Even for ad hoc runs, deployments can be worth the small extra setup time if you value the additional control and visibility they provide. You can still trigger them manually through the UI or CLI when needed, while maintaining all the benefits of proper infrastructure management and observability.
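For reference, an ad hoc run of the example deployment above can be triggered from the CLI like this (assuming the default flow name my-flow derived from the my_flow function):
bash
prefect deployment run "my-flow/my-deployment"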