Kurt Sys (Vintecc)
10/04/2025, 11:04 AMfrom prefect import flow
from prefect.testing.utilities import prefect_test_harness

@flow
def my_favorite_flow():
    return 42

def test_my_favorite_flow():
    with prefect_test_harness(server_startup_timeout=120):
        assert my_favorite_flow() == 42
results in:
======================== 1 failed in 123.19s (0:02:03) =========================
--- Logging error ---
Traceback (most recent call last):
File "/home/vintecc/py310/lib/python3.10/site-packages/prefect/logging/handlers.py", line 355, in emit
self.console.print(message, soft_wrap=True)
File "/home/vintecc/py310/lib/python3.10/site-packages/rich/console.py", line 1697, in print
with self:
File "/home/vintecc/py310/lib/python3.10/site-packages/rich/console.py", line 870, in __exit__
self._exit_buffer()
File "/home/vintecc/py310/lib/python3.10/site-packages/rich/console.py", line 826, in _exit_buffer
self._check_buffer()
File "/home/vintecc/py310/lib/python3.10/site-packages/rich/console.py", line 2038, in _check_buffer
self._write_buffer()
File "/home/vintecc/py310/lib/python3.10/site-packages/rich/console.py", line 2107, in _write_buffer
self.file.write(text)
ValueError: I/O operation on closed file.
Call stack:
File "/home/vintecc/py310/lib/python3.10/site-packages/prefect/server/api/server.py", line 960, in stop
subprocess_server_logger.info(
File "/usr/lib/python3.10/logging/__init__.py", line 1477, in info
self._log(INFO, msg, args, **kwargs)
File "/usr/lib/python3.10/logging/__init__.py", line 1624, in _log
self.handle(record)
File "/usr/lib/python3.10/logging/__init__.py", line 1634, in handle
self.callHandlers(record)
File "/usr/lib/python3.10/logging/__init__.py", line 1696, in callHandlers
hdlr.handle(record)
File "/usr/lib/python3.10/logging/__init__.py", line 968, in handle
self.emit(record)
File "/home/vintecc/py310/lib/python3.10/site-packages/prefect/logging/handlers.py", line 361, in emit
self.handleError(record)
Message: 'Stopping temporary server on http://127.0.0.1:8799'
Arguments: ()
Finished running tests!
Marvin
10/04/2025, 11:04 AMMarvin
10/04/2025, 11:08 AMpip install -U 'prefect>=3.4.22'
prefect version
Release notes: 3.4.22 notes.
If you still want to confirm the environment:
- Check your pydantic/Python versions:
python -c "import pydantic, sys; print('pydantic', pydantic.__version__, 'python', sys.version)"
- Try starting the server manually to see startup logs:
prefect server start --log-level DEBUG
If this fails or hangs, we’ll see the actual reason in the logs (e.g., database config, port conflict, etc.).
Notes and workarounds:
- For unit tests that don’t need API/state isolation, you can test flows without the harness:
from prefect import flow
@flow
def my_favorite_flow():
    return 42

def test_my_favorite_flow():
    assert my_favorite_flow() == 42
- If you do need the harness, this should work once the server starts correctly:
from prefect import flow
from prefect.testing.utilities import prefect_test_harness
@flow
def my_favorite_flow():
    return 42

def test_my_favorite_flow():
    with prefect_test_harness(server_startup_timeout=120):
        assert my_favorite_flow() == 42
Could you share the output of:
- prefect version
- python -c "import pydantic; print(pydantic.__version__)"
And if upgrading doesn’t fix it, please run prefect server start --log-level DEBUG
and paste the first error or stall point. That’ll tell us exactly why the ephemeral server can’t start in your test harness.Kurt Sys (Vintecc)
10/04/2025, 11:43 AMpython -c "import pydantic, sys; print('pydantic', pydantic.__version__, 'python', sys.version)"
pydantic 2.11.9 python 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0]
Running the server runs fine
prefect server start
___ ___ ___ ___ ___ ___ _____
| _ \ _ \ __| __| __/ __|_ _|
| _/ / _|| _|| _| (__ | |
|_| |_|_\___|_| |___\___| |_|
Configure Prefect to communicate with the server with:
prefect config set PREFECT_API_URL=http://127.0.0.1:4200/api
View the API reference documentation at http://127.0.0.1:4200/docs
Check out the dashboard at http://127.0.0.1:4200
Marvin
10/04/2025, 11:43 AMKurt Sys (Vintecc)
10/04/2025, 11:44 AMprefect version
Version: 3.4.22
API version: 0.8.4
Python version: 3.10.12
Git commit: 6a6d114f
Built: Fri, Oct 03, 2025 06:15 PM
OS/Arch: linux/x86_64
Profile: ephemeral
Server type: server
Pydantic version: 2.11.9
Server:
Database: sqlite
SQLite version: 3.37.2
Integrations:
prefect-kubernetes: 0.6.5
prefect-docker: 0.6.6
Marvin
10/04/2025, 11:44 AMMarvin
10/04/2025, 11:45 AMprefect config set PREFECT_API_URL=http://127.0.0.1:4200/api
prefect config view
- Or via environment variable (shell):
export PREFECT_API_URL=http://127.0.0.1:4200/api
Windows PowerShell:
setx PREFECT_API_URL "http://127.0.0.1:4200/api"
2) Quick connectivity checks
- From the API docs: OpenAPI docs and try GET /health
- Or curl:
curl http://127.0.0.1:4200/api/health
- Or Python:
python - << 'PY'
import asyncio
from prefect.client.orchestration import get_client

async def main():
    async with get_client() as c:
        # api_healthcheck() returns None when the API is healthy,
        # or the exception it hit while trying to reach it
        assert await c.api_healthcheck() is None
        print("API is healthy")

asyncio.run(main())
PY
3) Create a work pool and start a worker (process type)
- Create the pool:
prefect work-pool create my-pool -t process
- Start a worker that pulls runs for that pool:
prefect worker start --pool my-pool
4) Deploy or run a simple flow
- Minimal flow (ad-hoc run; will appear in the UI if your API URL is set):
from prefect import flow
@flow
def hello(name="world"):
    print(f"hello, {name}")

if __name__ == "__main__":
    hello()
- Code-based deployment (Prefect 3.x):
from prefect import flow
@flow
def hello(name="world"):
    print(f"hello, {name}")

if __name__ == "__main__":
    flow.from_source(
        source="file://.",  # your repo or local directory
        entrypoint="hello.py:hello",  # path:callable
    ).deploy(
        name="hello-deploy",
        work_pool_name="my-pool",
        tags=["dev"],
    )
Note: In Prefect 3.x, use flow.from_source(...).deploy(...) (not Deployment.build_from_flow and not prefect deployment build).
If you run into “it works in CLI but not in Python” issues, double-check you’re using the same environment:
- CLI path:
which prefect
- Python sees Prefect:
python -c "import prefect, sys; print('prefect', prefect.__version__, 'python', sys.version)"
- If needed, run the server via the Python module to force the same interpreter:
python -m prefect server start
Would you like me to help you create the pool/worker and a minimal deployment now, or are you seeing a specific error after starting the server?Kurt Sys (Vintecc)
10/04/2025, 11:46 AMpytest-xdist
is installed. is that an issue?Marvin
10/04/2025, 11:46 AMMarvin
10/04/2025, 11:47 AMHaving pytest-xdist installed is harmless by itself; it only matters if your tests actually run in parallel (e.g., with -n auto). If you do run your Prefect tests in parallel, a few caveats to avoid flaky behavior:
- Don’t start multiple Prefect servers at once on the same ports. Start one server per test session (see the fixture sketch after this list), or run those tests serially.
- If your tests create named Prefect resources (work pools, blocks, deployments), give them unique names per worker to avoid collisions. Example:
import os
wid = os.environ.get("PYTEST_XDIST_WORKER", "gw0")
pool_name = f"my-pool-{wid}"
- If tests start workers or long-lived processes, prefer serializing those tests (e.g., run them with -n 0) so you don’t have multiple workers competing for the same queue or binding to the same ports.
- Keep “unit tests” of flows (calling the function directly) parallel if you like, and reserve serial execution for tests that interact with the server, work pools, or deployments.
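For the one-server-per-session pattern, a session-scoped autouse fixture (this mirrors the pattern in Prefect's own testing docs) is the usual fix — a minimal sketch:
# conftest.py — every test in the session shares this one temporary server
import pytest
from prefect.testing.utilities import prefect_test_harness

@pytest.fixture(autouse=True, scope="session")
def prefect_test_fixture():
    with prefect_test_harness():
        yield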
If you’re not invoking pytest with xdist parallelism, its presence won’t affect Prefect at all. Are you seeing a specific error that makes you suspect xdist? If so, share the command and output and I’ll help narrow it down.Kurt Sys (Vintecc)
10/04/2025, 11:48 AMMarvin
10/04/2025, 11:48 AMMarvin
10/04/2025, 11:49 AMephemeral
- Integrations installed: prefect-docker 0.6.6, prefect-kubernetes 0.6.5
How can I help you next? For example, if you’re connecting your CLI to this server or setting up pools/workers, here are quick checks and commands:
- Create/switch profiles and set your API URL
prefect profile ls
prefect profile create my-server
prefect profile use my-server
prefect config set PREFECT_API_URL=<your-server-api-url>
prefect config view
Docs: Settings and profiles
- List/create work pools and start a worker
prefect work-pool ls
prefect work-pool create my-docker --type docker
prefect work-pool create my-k8s --type kubernetes
# Start a worker for a pool
prefect worker start --pool my-docker
# or
prefect worker start --pool my-k8s
Tip: You already have the Docker/K8s integrations installed, so the above should work as-is.
Docs overview: Prefect 3 docs (see Work Pools/Workers sections)
- Deploy flows (3.x)
- From a project directory:
prefect deploy ./path/to/flow.py:my_flow -n my-deployment --pool my-docker
- From code (for remote source repos), use flow.from_source(...).deploy(...) in your Python:
from prefect import flow
@flow
def my_flow():
    return "hello"

deployment = my_flow.from_source(
    source="https://github.com/<owner>/<repo>",
    entrypoint="path/to/flow.py:my_flow",
).deploy(
    name="my-deployment",
    work_pool_name="my-docker",
)
Docs: Connecting and deploying (3.x)
If you share what you’re trying to do (e.g., point CLI to your server, create a Docker/K8s pool, start a worker, or deploy a flow), I’ll tailor the exact steps.Marvin
10/04/2025, 11:49 AM1) Find where parallel args could come from
- Check these VS Code settings:
- python.testing.pytestEnabled
- python.testing.pytestArgs
- python.testing.cwd
- python.envFile (env vars passed to tests)
- Check your workspace file:
- .vscode/settings.json (project) and your User settings
- Check config files that can inject arguments:
- pyproject.toml, pytest.ini, or tox.ini for something like:
[tool.pytest.ini_options] # in pyproject.toml
addopts = "-n auto"
or
[pytest] # in pytest.ini / tox.ini
addopts = -n auto
- Check env vars that VS Code may load (via python.envFile or devcontainer):
- PYTEST_ADDOPTS (will append args)
- PYTEST_DISABLE_PLUGIN_AUTOLOAD (disables plugin autoload if set to 1)
- Look at the Test output:
- View > Output > select “Python Test Log” (or “Testing”)
- If xdist is active you’ll see gw0/gw1 worker output or messaging about “numprocesses”.
2) Ways to ensure xdist is not used for Prefect tests
Pick the approach that fits your workflow:
- Remove/avoid parallel flags
- Delete -n auto (or any -n ...) from:
- python.testing.pytestArgs in VS Code settings
- addopts in `pytest.ini`/`pyproject.toml`/`tox.ini`
- PYTEST_ADDOPTS env var
- Explicitly disable the xdist plugin
- One-off via args (good for VS Code):
// .vscode/settings.json
{
"python.testing.pytestEnabled": true,
"python.testing.pytestArgs": ["-p", "no:xdist", "tests"]
}
- Project-wide via config:
[pytest]
addopts = -p no:xdist
- Environment variable (works well in devcontainers):
// .devcontainer/devcontainer.json
{
"containerEnv": { "PYTEST_ADDOPTS": "-p no:xdist" }
}
or in your VS Code env file:
# .env
PYTEST_ADDOPTS=-p no:xdist
and in settings:
"python.envFile": "${workspaceFolder}/.env"
- If you sometimes want xdist for other tests
- Mark Prefect “integration/e2e” tests and run them separately without xdist:
# test file
import pytest
pytestmark = pytest.mark.serial
# pytest.ini
[pytest]
markers =
    serial: run without xdist
Then in VS Code, have:
- Normal runs: no special marker (can use xdist if you want)
- Prefect runs: add args to exclude parallelism or add -p no:xdist and -m serial
Quick verification
- In the devcontainer terminal:
echo "$PYTEST_ADDOPTS"
pytest --version
pytest -q --help | sed -n '1,80p' # if you see -n/xdist options, the plugin is available
- Run tests once from VS Code and check the output for any xdist “gw0/gw1” lines; if none, you’re not parallelizing.
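As an extra guard, serial-only tests can skip themselves when xdist is active — a small sketch using the PYTEST_XDIST_WORKER variable that xdist sets in its worker processes (the require_serial helper is hypothetical, not a Prefect or pytest API):
import os
import pytest

def require_serial():
    # xdist sets PYTEST_XDIST_WORKER (gw0, gw1, ...) only in its workers
    if os.environ.get("PYTEST_XDIST_WORKER"):
        pytest.skip("must run without pytest-xdist (use -p no:xdist or -n 0)")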
If you share your current .vscode/settings.json
and any pytest.ini/pyproject.toml
snippets, I can suggest the minimal change to disable xdist just for the Prefect-related runs.Kurt Sys (Vintecc)
10/04/2025, 11:51 AMpytest python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py
=============================================================================================================================== test session starts ===============================================================================================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.6.0
rootdir: /workspace
configfile: pytest.ini
plugins: devtools-0.12.2, docker-3.2.3, anyio-4.11.0, asyncio-0.23.8, xdist-3.8.0, postgresql-7.0.2, dash-3.2.0, hydra-core-1.3.2, Faker-37.8.0, cov-6.2.1
asyncio: mode=strict
collected 1 item
python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py
F [100%]
==================================================================================================================================== FAILURES =====================================================================================================================================
______________________________________________________________________________________________________________________________ test_my_favorite_flow ______________________________________________________________________________________________________________________________
def test_my_favorite_flow():
> with prefect_test_harness(server_startup_timeout=120):
python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py:11:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3.10/contextlib.py:135: in __enter__
return next(self.gen)
/home/vintecc/py310/lib/python3.10/site-packages/prefect/testing/utilities.py:168: in prefect_test_harness
test_server.start(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <prefect.server.api.server.SubprocessASGIServer object at 0x77ee36386200>, timeout = 120
def start(self, timeout: Optional[int] = None) -> None:
"""
Start the server in a separate process. Safe to call multiple times; only starts
the server once.
Args:
timeout: The maximum time to wait for the server to start
"""
if not self.running:
if self.port is None:
self.port = self.find_available_port()
assert self.port is not None, "Port must be provided or available"
help_message = (
f"Starting temporary server on {self.address}\nSee "
"https://docs.prefect.io/v3/concepts/server#how-to-guides "
"for more information on running a dedicated Prefect server."
)
subprocess_server_logger.info(help_message)
try:
self.running = True
self.server_process = self._run_uvicorn_command()
atexit.register(self.stop)
with httpx.Client() as client:
response = None
elapsed_time = 0
max_wait_time = (
timeout
or PREFECT_SERVER_EPHEMERAL_STARTUP_TIMEOUT_SECONDS.value()
)
while elapsed_time < max_wait_time:
if self.server_process.poll() == 3:
self.port = self.find_available_port()
self.server_process = self._run_uvicorn_command()
continue
try:
response = client.get(f"{self.api_url}/health")
except httpx.ConnectError:
pass
else:
if response.status_code == 200:
break
time.sleep(0.1)
elapsed_time += 0.1
if response:
response.raise_for_status()
if not response:
error_message = "Timed out while attempting to connect to ephemeral Prefect API server."
if self.server_process.poll() is not None:
error_message += f" Ephemeral server process exited with code {self.server_process.returncode}."
if self.server_process.stdout:
error_message += (
f" stdout: {self.server_process.stdout.read()}"
)
if self.server_process.stderr:
error_message += (
f" stderr: {self.server_process.stderr.read()}"
)
> raise RuntimeError(error_message)
E RuntimeError: Timed out while attempting to connect to ephemeral Prefect API server.
/home/vintecc/py310/lib/python3.10/site-packages/prefect/server/api/server.py:921: RuntimeError
------------------------------------------------------------------------------------------------------------------------------ Captured stderr call -------------------------------------------------------------------------------------------------------------------------------
13:49:16.879 | INFO | prefect - Starting temporary server on http://127.0.0.1:8787
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
-------------------------------------------------------------------------------------------------------------------------------- Captured log call --------------------------------------------------------------------------------------------------------------------------------
INFO prefect:server.py:881 Starting temporary server on http://127.0.0.1:8787
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
Marvin
Marvin
10/04/2025, 11:52 AMKurt Sys (Vintecc)
10/04/2025, 11:53 AMpytest python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py
=============================================================================================================================== test session starts ===============================================================================================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.6.0
rootdir: /workspace
configfile: pytest.ini
plugins: devtools-0.12.2, docker-3.2.3, anyio-4.11.0, asyncio-0.23.8, xdist-3.8.0, postgresql-7.0.2, dash-3.2.0, hydra-core-1.3.2, Faker-37.8.0, cov-6.2.1
asyncio: mode=strict
collected 1 item
python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py
F [100%]
== FAILURES ==
__ test_my_favorite_flow __
def test_my_favorite_flow():
> with prefect_test_harness(server_startup_timeout=120):
python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py:11:
_ _
/usr/lib/python3.10/contextlib.py:135: in __enter__
return next(self.gen)
/home/vintecc/py310/lib/python3.10/site-packages/prefect/testing/utilities.py:168: in prefect_test_harness
test_server.start(
_ _
self = <prefect.server.api.server.SubprocessASGIServer object at 0x77ee36386200>, timeout = 120
def start(self, timeout: Optional[int] = None) -> None:
...
> raise RuntimeError(error_message)
E RuntimeError: Timed out while attempting to connect to ephemeral Prefect API server.
/home/vintecc/py310/lib/python3.10/site-packages/prefect/server/api/server.py:921: RuntimeError
-- Captured stderr call --
13:49:16.879 | INFO | prefect - Starting temporary server on http://127.0.0.1:8787
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
-- Captured log call --
INFO prefect:server.py:881 Starting temporary server on http://127.0.0.1:8787
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
Marvin
10/04/2025, 11:53 AMMarvin
10/04/2025, 11:55 AMprefect_test_harness spins up a temporary Prefect API at http://127.0.0.1:8787 and waits for /health.
- Your test timed out connecting to that ephemeral API, so the server likely didn’t bind or your process couldn’t reach 127.0.0.1:8787 (see the probe sketch below).
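For reference, this is roughly what the harness does internally — a miniature of the poll loop from the server.py source quoted above, usable as a standalone probe (port and timeout here are assumptions):
import time, httpx

deadline = time.monotonic() + 120
while time.monotonic() < deadline:
    try:
        # same health route the harness polls
        if httpx.get("http://127.0.0.1:8787/api/health", timeout=2).status_code == 200:
            print("healthy")
            break
    except httpx.TransportError:
        pass
    time.sleep(0.5)
else:
    print("timed out waiting for /api/health")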
Quick things to try (most to least likely)
1) Ensure proxies are not intercepting localhost
- In devcontainers, proxy vars can cause httpx to route 127.0.0.1 through a proxy.
env | grep -i proxy
# if you see HTTP_PROXY / HTTPS_PROXY, set:
export NO_PROXY=127.0.0.1,localhost
# for permanence, add to devcontainer.json or your .env used by VS Code
2) Make sure nothing else is using port 8787
ss -lntp | grep :8787 || true
lsof -iTCP:8787 -sTCP:LISTEN -P -n || true
# kill any stray process if found
3) Disable xdist for this test run (even if you didn’t pass -n, VS Code or configs can add it)
- One-off:
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py
- Or set in VS Code or env:
# .vscode/settings.json
"python.testing.pytestArgs": ["-p", "no:xdist", "python/fraio/fraio_orchestrator/tests"]
# or
export PYTEST_ADDOPTS="-p no:xdist"
4) Get more diagnostics
- Increase logging:
export PREFECT_LOGGING_LEVEL=DEBUG
pytest -q -p no:xdist path/to/test.py -k test_my_favorite_flow -vv
- Try starting the same server class manually to catch immediate errors:
python - << 'PY'
from prefect.server.api.server import SubprocessASGIServer
s = SubprocessASGIServer(host="127.0.0.1", port=8787)
s.start(timeout=60)
print("started")
s.stop()
print("stopped")
PY
If the manual start fails or the port is already in use, that’s your root cause. If proxies were set and adding NO_PROXY fixes it, that’s the culprit.
Also, to ensure VS Code isn’t sneaking in parallelism:
- Check .vscode/settings.json and pytest config files for -n or addopts, and the env var PYTEST_ADDOPTS.
- You can force-disable the plugin project-wide:
# pytest.ini
[pytest]
addopts = -p no:xdist
Docs for testing flows: Test workflows in Prefect 3
If you can share:
- Output of env | grep -i proxy
- Result of ss -lntp | grep :8787
- Any addopts in your pytest.ini/pyproject.toml
…I’ll pinpoint the minimal fix.Kurt Sys (Vintecc)
10/04/2025, 12:20 PM(py310) /workspace 815-poc-workflow-orchestration ± env | grep -i proxy
(py310) ✘ /workspace 815-poc-workflow-orchestration ± ss -lntp | grep :8787
LISTEN 0 511 127.0.0.1:8787 0.0.0.0:*
no addopts.
export PREFECT_LOGGING_LEVEL=DEBUG
pytest -q -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow -vv
========================================================================================================= test session starts =========================================================================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.6.0 -- /home/vintecc/py310/bin/python3
cachedir: .pytest_cache
rootdir: /workspace
configfile: pytest.ini
plugins: devtools-0.12.2, docker-3.2.3, anyio-4.11.0, asyncio-0.23.8, xdist-3.8.0, postgresql-7.0.2, dash-3.2.0, hydra-core-1.3.2, Faker-37.8.0, cov-6.2.1
asyncio: mode=strict
collected 1 item
python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py::test_my_favorite_flow
and after that, the timeout error again.Marvin
10/04/2025, 12:20 PMMarvin
10/04/2025, 12:23 PMsudo ss -lntp | grep :8787 || true
sudo lsof -iTCP:8787 -sTCP:LISTEN -P -n || true
- Hit the health endpoint directly:
curl -v http://127.0.0.1:8787/api/health
If this returns non-200 or hangs, the server on 8787 isn’t serving the API correctly and the harness is timing out waiting for it.
Make sure localhost isn’t proxied
- Even if your proxy vars are empty, explicitly set NO_PROXY:
export NO_PROXY=127.0.0.1,localhost
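To prove or disprove the proxy theory in one line, httpx can be told to ignore proxy environment variables entirely via its trust_env flag (a sketch; it assumes something really is listening on 8787):
import httpx

# trust_env=False makes httpx ignore HTTP(S)_PROXY/NO_PROXY completely
print(httpx.get("http://127.0.0.1:8787/api/health", trust_env=False, timeout=5).status_code)
If this call succeeds while the default one times out, the proxy settings are the culprit.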
Two reliable workarounds
A) Use a different port just for tests (skip the harness, start the same server class yourself)
- This avoids any 8787 conflicts and gives you clearer errors
# conftest.py
import os
import pytest
from prefect.server.api.server import SubprocessASGIServer
@pytest.fixture(scope="session", autouse=True)
def local_prefect_api():
    port = 8788  # choose a free port
    srv = SubprocessASGIServer(port=port)
    srv.start(timeout=120)
    os.environ["PREFECT_API_URL"] = f"http://127.0.0.1:{port}/api"
    try:
        yield
    finally:
        srv.stop()
Then run your test without prefect_test_harness. If this works, the issue is specific to the harness’ default port/use.
B) Point tests at a dedicated server you start up separately
- Start it once:
prefect server start
# default API at http://127.0.0.1:4200/api
- In your test env:
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist ...
This bypasses the ephemeral server entirely.
If you prefer to keep using prefect_test_harness, try freeing 8787 (kill whatever is bound), then re-run. But if something in your devcontainer periodically occupies 8787, option A (custom fixture on another port) is the most robust.
Send me:
- The output of the sudo lsof/ss commands above
- The response from curl http://127.0.0.1:8787/api/health
and I’ll pinpoint whether it’s a port collision or the API process failing during startup.
Reference: Testing workflows in Prefect 3Kurt Sys (Vintecc)
10/04/2025, 12:26 PMMarvin
10/04/2025, 12:26 PMMarvin
10/04/2025, 12:26 PMprefect_test_harness starts an API at http://127.0.0.1:8787 and waits for /api/health.
- Your test times out. You do have something listening on 127.0.0.1:8787, but we don’t know if it’s the Prefect API or another process.
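One quick way to fingerprint whatever is answering on 8787 before digging further (a sketch, nothing Prefect-specific; the routes are just guesses):
import httpx

# hit a few routes and print status + Server header to identify the listener
for path in ("/api/health", "/health", "/"):
    try:
        r = httpx.get(f"http://127.0.0.1:8787{path}", timeout=3)
        print(path, r.status_code, r.headers.get("server"), r.text[:80])
    except Exception as e:
        print(path, "->", type(e).__name__, e)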
Targeted checks
1) See what’s actually bound to 8787
sudo ss -lntp | grep :8787 || true
sudo lsof -iTCP:8787 -sTCP:LISTEN -P -n || true
Please paste the command names/PIDs you see.
2) Does the health endpoint respond?
curl -v http://127.0.0.1:8787/api/health
- 200 OK with JSON => server is up; then the harness timing out is odd and we’ll inspect logs.
- 404/hang/connection refused => something else is on 8787, or the API process failed to start.
3) Show raw server logs during the test
Run once with stdout/stderr not captured and more logging:
export PREFECT_LOGGING_LEVEL=DEBUG
pytest -s -vv -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
If logs are still sparse, try isolating pytest plugins:
PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 PREFECT_LOGGING_LEVEL=DEBUG \
pytest -s -vv -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
This helps rule out interference from third-party plugins.
Two quick workarounds while we debug
A) Minimal script outside pytest to confirm the harness itself
python - << 'PY'
import os, time, httpx
from prefect.testing.utilities import prefect_test_harness

os.environ["PREFECT_LOGGING_LEVEL"] = "DEBUG"
try:
    with prefect_test_harness(server_startup_timeout=120):
        print("Harness started")
        r = httpx.get("http://127.0.0.1:8787/api/health", timeout=5)
        print("Health:", r.status_code, r.text)
        time.sleep(1)
    print("Harness stopped")
except Exception as e:
    print("Harness failed:", e)
    raise
PY
- If this succeeds, the issue is pytest/plugin-related.
- If it fails the same way, it’s the server start or port conflict.
B) Bypass the harness and use a dedicated server or a custom port
- Start a dedicated server once:
prefect server start
# API at http://127.0.0.1:4200/api
- Point the test to it:
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
Or start the same embedded server on a different port in a session fixture:
# conftest.py
import os, pytest
from prefect.server.api.server import SubprocessASGIServer
@pytest.fixture(scope="session", autouse=True)
def local_prefect_api():
    port = 8788
    srv = SubprocessASGIServer(port=port)
    srv.start(timeout=120)
    os.environ["PREFECT_API_URL"] = f"http://127.0.0.1:{port}/api"
    try:
        yield
    finally:
        srv.stop()
Docs for reference: Testing workflows in Prefect 3
If you can share:
- The lsof/ss output for 8787
- The result of curl http://127.0.0.1:8787/api/health
- Any additional logs from the -s -vv run
…I’ll pinpoint the root cause and we’ll get your simplest test working.Kurt Sys (Vintecc)
10/04/2025, 12:30 PMMarvin
10/04/2025, 12:31 PMMarvin
10/04/2025, 12:32 PM# Show the owning process
ps -Af | grep 8787 | grep -v grep || true
# or (if available)
lsof -iTCP:8787 -sTCP:LISTEN -P -n || true
# or
ss -lntp | grep :8787 || true
If it’s not a Prefect uvicorn process, kill it and retry the test.
2) Verify the health endpoint
curl -v http://127.0.0.1:8787/api/health
- 200 OK: server is healthy; the timeout would be surprising — paste response.
- 404/hang: not Prefect or server crashed; proceed to step 3.
3) Run the same server class manually with debug logs
This isolates Prefect from pytest/plugins and shows server errors directly.
python - << 'PY'
import os, time, httpx
from prefect.server.api.server import SubprocessASGIServer

os.environ["PREFECT_LOGGING_LEVEL"] = "DEBUG"
srv = SubprocessASGIServer(port=8787)
try:
    srv.start(timeout=120)
    print("Server started")
    r = httpx.get("http://127.0.0.1:8787/api/health", timeout=5)
    print("Health:", r.status_code, r.text)
finally:
    srv.stop()
    print("Server stopped")
PY
- If this fails, the console will show the underlying crash (e.g., sqlite permission, missing asset, etc.). Share the output.
- If this succeeds but pytest still fails, it’s pytest/plugin interference — go to step 4.
4) Rule out plugin interference quickly
Run the single test with plugins disabled and no xdist:
PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 PREFECT_LOGGING_LEVEL=DEBUG \
pytest -s -vv -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
If this works, we’ll re-enable only the plugins you need.
5) Temporary unblock: bypass the harness
If you need to move now, point tests at a dedicated server (we’ll still fix the harness):
prefect server start # in one terminal (API at http://127.0.0.1:4200/api)
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
Please paste:
- The owning process from step 1
- The result of curl to /api/health
- The full output of the manual SubprocessASGIServer run
With that, I can pinpoint whether it’s a port collision or a server init crash and give you the exact fix. For reference, Prefect’s test guide (which you used) is here: Testing workflows in Prefect 3.Kurt Sys (Vintecc)
10/04/2025, 12:41 PMprefect_test_harness has issues starting the server. This should work out of the box, and it doesn't. So I'm missing something trivial, the question is: what!?Marvin
10/04/2025, 12:41 PMMarvin
10/04/2025, 12:42 PM# See what’s set
env | grep '^PREFECT_' || true
# Temporarily clear anything set
unset PREFECT_API_URL PREFECT_API_KEY PREFECT_PROFILE PREFECT_HOME \
PREFECT_SERVER_API_HOST PREFECT_SERVER_API_PORT \
PREFECT_SERVER_DATABASE_CONNECTION_URL
# Use a clean profile (not 'ephemeral')
prefect profile create dev || true
prefect profile use dev
prefect config view
2) Ensure the temp dir is writable (devcontainers sometimes set TMPDIR oddly)
echo "$TMPDIR"
export TMPDIR=/tmp
3) Run the harness outside pytest to surface real startup errors (no plugin interference)
python - << 'PY'
import os, time, httpx
from prefect.testing.utilities import prefect_test_harness

os.environ["PREFECT_LOGGING_LEVEL"] = "DEBUG"
with prefect_test_harness(server_startup_timeout=120):
    print("harness started")
    r = httpx.get("http://127.0.0.1:8787/api/health", timeout=5)
    print("health:", r.status_code, r.text)
print("harness stopped")
PY
- If this fails, the stdout will usually show the exact crash (e.g., permissions, missing env, sqlite error). Paste that and I’ll decode it.
4) Strip pytest plugin interference just for this run
Some plugins tweak event loops and HTTP behavior. Disable autoload once:
PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 PREFECT_LOGGING_LEVEL=DEBUG \
pytest -s -vv -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
5) As an immediate unblock, bypass the harness
- Start a dedicated server once:
prefect server start
# API at http://127.0.0.1:4200/api
- Point the test at it:
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
Why these help
- Active profile “ephemeral” or forced Prefect env can collide with the harness’ own ephemeral settings.
- TMPDIR misconfiguration/permissions in devcontainers can cause the server subprocess to crash during DB init.
- Pytest plugins (asyncio/anyio/cov/docker) occasionally interfere with subprocess startup or IO — disabling autoload isolates that quickly.
If the standalone harness script still times out, the console output will contain the root cause. Share that and I’ll give you the exact fix. Reference: Testing workflows in Prefect 3Kurt Sys (Vintecc)
10/04/2025, 12:59 PM(py310) /workspace 815-poc-workflow-orchestration ± python - << 'PY'
import os, time, httpx
from prefect.testing.utilities import prefect_test_harness
os.environ["PREFECT_LOGGING_LEVEL"] = "DEBUG"
with prefect_test_harness(server_startup_timeout=120):
    print("harness started")
    r = httpx.get("http://127.0.0.1:8787/api/health", timeout=5)
    print("health:", r.status_code, r.text)
print("harness stopped")
PY
14:58:17.692 | DEBUG | prefect.profiles - Using profile 'dev'
and
PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 PREFECT_LOGGING_LEVEL=DEBUG \
pytest -s -vv -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
== test session starts ==
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.6.0 -- /home/vintecc/py310/bin/python3
cachedir: .pytest_cache
rootdir: /workspace
configfile: pytest.ini
collecting ... 14:58:46.633 | DEBUG | prefect.profiles - Using profile 'dev'
collected 1 item
python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py::test_my_favorite_flow 14:58:47.702 | INFO | prefect - Starting temporary server on http://127.0.0.1:8726
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
FAILED
== FAILURES ==
__test_my_favorite_flow __
@contextlib.contextmanager
def map_httpcore_exceptions() -> typing.Iterator[None]:
...
> yield
/home/vintecc/py310/lib/python3.10/site-packages/httpx/_transports/default.py:101:
_ _
/home/...
/usr/lib/python3.10/contextlib.py:153: in __exit__
self.gen.throw(typ, value, traceback)
_ _
map = {<class 'TimeoutError'>: <class 'httpcor...
E httpx.ReadTimeout: timed out
/home/vintecc/py310/lib/python3.10/site-packages/httpx/_transports/default.py:118: ReadTimeout
-- Captured log call --
INFO prefect:server.py:881 Starting temporary server on http://127.0.0.1:8726
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
== short test summary info ==
FAILED python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py::test_my_favorite_flow - httpx.ReadTimeout: timed out
== 1 failed in 7.53s ==
14:58:52.969 | INFO | prefect - Stopping temporary server on http://127.0.0.1:8726
14:58:53.029 | DEBUG | prefect.profiles - Using profile 'dev'
(py310) ✘ /workspace 815-poc-workflow-orchestration ±
Marvin
10/04/2025, 1:01 PMMarvin
10/04/2025, 1:03 PMenv | grep '^PREFECT_' || true
unset PREFECT_API_URL PREFECT_API_KEY PREFECT_PROFILE PREFECT_HOME \
PREFECT_SERVER_API_HOST PREFECT_SERVER_API_PORT \
PREFECT_SERVER_DATABASE_CONNECTION_URL
prefect profile create dev || true
prefect profile use dev
prefect config view
2) Ensure the temp directory is writable and simple
- Devcontainers sometimes set TMPDIR oddly; force /tmp.
echo "$TMPDIR"
export TMPDIR=/tmp
3) Sanity-check aiosqlite can create/use a DB in TMPDIR
python - << 'PY'
import asyncio, aiosqlite, os, tempfile

path = os.path.join(tempfile.gettempdir(), "prefect-ephemeral.db")

async def main():
    async with aiosqlite.connect(path) as db:
        await db.execute("select 1")
        await db.commit()
    print("aiosqlite ok at:", path)

asyncio.run(main())
PY
4) Start the same server class directly to surface the real error
- This bypasses pytest/plugins and shows startup failures.
python - << 'PY'
import os, httpx
from prefect.server.api.server import SubprocessASGIServer

os.environ["PREFECT_LOGGING_LEVEL"] = "DEBUG"
srv = SubprocessASGIServer()  # let it pick a free port
try:
    srv.start(timeout=120)
    print("started on:", srv.api_url if hasattr(srv, "api_url") else "unknown")
    # try health
    url = getattr(srv, "api_url", "http://127.0.0.1:4200/api")
    print("health GET:", url + "/health")
    r = httpx.get(url + "/health", timeout=5)
    print("health:", r.status_code, r.text)
finally:
    srv.stop()
    print("stopped")
PY
- If this throws, the traceback will show the exact startup problem (e.g., permission/DB/migration). Paste it and I’ll give the precise fix.
5) If you need an immediate unblock
- Run a dedicated server and point tests at it (no harness):
prefect server start
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
Why these steps
- Stray PREFECT_* env or an active “ephemeral”/custom profile can conflict with the harness’ own ephemeral settings.
- TMPDIR/permissions in devcontainers frequently trip the ephemeral SQLite DB creation.
- The direct SubprocessASGIServer run will reveal the underlying error message that the harness swallows behind a timeout.
Run steps 1–4. If step 4 fails, please share the traceback/stdout — that will tell us exactly what’s preventing the server from starting.Kurt Sys (Vintecc)
10/04/2025, 1:07 PM(py310) /workspace 815-poc-workflow-orchestration ± python - << 'PY'
import os, httpx
from prefect.server.api.server import SubprocessASGIServer
os.environ["PREFECT_LOGGING_LEVEL"] = "DEBUG"
srv = SubprocessASGIServer() # let it pick a free port
try:
    srv.start(timeout=120)
    print("started on:", srv.api_url if hasattr(srv, "api_url") else "unknown")
    # try health
    url = getattr(srv, "api_url", "http://127.0.0.1:4200/api")
    print("health GET:", url + "/health")
    r = httpx.get(url + "/health", timeout=5)
    print("health:", r.status_code, r.text)
finally:
    srv.stop()
    print("stopped")
PY
15:04:19.257 | DEBUG | prefect.profiles - Using profile 'dev'
stopped
Traceback (most recent call last):
File "<stdin>", line 6, in <module>
File "/home/vintecc/py310/lib/python3.10/site-packages/prefect/server/api/server.py", line 921, in start
raise RuntimeError(error_message)
RuntimeError: Timed out while attempting to connect to ephemeral Prefect API server.
Now, let's try to solve it, instead of confirming the timeout issueMarvin
10/04/2025, 1:07 PMMarvin
10/04/2025, 1:08 PMpython - << 'PY'
import sys, subprocess, os
print("parent sys.executable:", sys.executable)
cp = subprocess.run([sys.executable, "-c",
"import sys, prefect; print('child sees prefect', prefect.__version__, 'python', sys.version)"],
capture_output=True, text=True)
print(cp.returncode, cp.stdout, cp.stderr)
PY
If the child cannot import Prefect, activate the same venv in your test run or adjust PATH so the child uses the venv’s python.
2) Reset Prefect settings and use a clean, writable home
Stray PREFECT_* settings or an unwritable HOME can break the ephemeral server.
# Clear conflicting settings for the current shell
unset PREFECT_API_URL PREFECT_API_KEY PREFECT_PROFILE PREFECT_SERVER_API_HOST \
PREFECT_SERVER_API_PORT PREFECT_SERVER_DATABASE_CONNECTION_URL
# Use a clean, writable Prefect home and an explicit SQLite file
export PREFECT_HOME=/tmp/prefect-home
mkdir -p "$PREFECT_HOME"
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db
# Keep temp files in /tmp (some devcontainer TMPDIRs cause issues)
export TMPDIR=/tmp
# Optional: keep server logs noisy
export PREFECT_LOGGING_LEVEL=DEBUG
3) Start the same server class again after the reset
python - << 'PY'
import os, httpx
from prefect.server.api.server import SubprocessASGIServer
print("PREFECT_HOME:", os.environ.get("PREFECT_HOME"))
print("DB URL:", os.environ.get("PREFECT_SERVER_DATABASE_CONNECTION_URL"))
srv = SubprocessASGIServer()
try:
    srv.start(timeout=120)
    url = getattr(srv, "api_url", "http://127.0.0.1:4200/api")
    r = httpx.get(url + "/health", timeout=5)
    print("health:", r.status_code, r.text)
finally:
    srv.stop()
    print("stopped")
PY
4) Disable pytest plugin interference just for this test run
Even though the failure reproduces outside pytest for you, this helps others reading the thread and is safe to keep:
PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 pytest -s -vv -p no:xdist path/to/test.py -k test_my_favorite_flow
5) If you need to move now, bypass the harness
Start a dedicated server and point tests at it:
prefect server start
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist path/to/test.py -k test_my_favorite_flow
Why these help
- The harness runs in a child process. If that process does not inherit your venv, Prefect isn’t importable and the server crashes instantly — you only see a timeout.
- Devcontainers sometimes set TMPDIR/HOME to locations that are not writable the way the server expects during SQLite/migration init. Pointing PREFECT_HOME and the DB to /tmp avoids subtle permission/mount issues (see the probe after this list).
- Clearing PREFECT_* avoids profile/env collisions with the harness’s own ephemeral settings.
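A cheap way to confirm or rule out the writability theory before re-running (a sketch; the path mirrors the exports above):
import os, tempfile

# probe that the Prefect home both exists and is writable
home = os.environ.get("PREFECT_HOME", "/tmp/prefect-home")
os.makedirs(home, exist_ok=True)
with tempfile.NamedTemporaryFile(dir=home) as f:
    f.write(b"ok")
print("writable:", home)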
If it still times out after steps 1–3, please paste:
- The output of the “child sees prefect” check (step 1)
- The printed PREFECT_HOME and DB URL and the full stdout/stderr from the SubprocessASGIServer script in step 3
With those, I can give you a precise one-line fix. For reference: Testing workflows in Prefect 3Kurt Sys (Vintecc)
10/04/2025, 1:15 PM✗ python -m pytest -q python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py::test_my_favorite_flow -s
15:13:23.299 | INFO | prefect - Starting temporary server on http://127.0.0.1:8515
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
15:13:29.323 | INFO | Flow run 'gregarious-sidewinder' - Beginning flow run 'gregarious-sidewinder' for flow 'my-favorite-flow'
15:13:29.326 | INFO | Flow run 'gregarious-sidewinder' - View at http://127.0.0.1:8515/runs/flow-run/c8d49e24-6812-4380-8ae4-cdea392a298f
15:13:29.340 | INFO | Flow run 'gregarious-sidewinder' - Finished in state Completed()
15:13:29.352 | INFO | prefect - Stopping temporary server on http://127.0.0.1:8515
.
1 passed in 9.13s
So let's focus on what can be wrong inside the devcontainerMarvin
10/04/2025, 1:15 PMMarvin
10/04/2025, 1:16 PMpython - << 'PY'
import sys, subprocess
print("parent:", sys.executable)
cp = subprocess.run(
[sys.executable, "-c", "import sys, prefect; print('child:', sys.executable, 'prefect', prefect.__version__)"],
capture_output=True, text=True
)
print("rc:", cp.returncode)
print("stdout:\n", cp.stdout)
print("stderr:\n", cp.stderr)
PY
If rc != 0 or it can’t import prefect, your venv isn’t being used by the child. We’ll fix PATH/venv activation.
2) Force a clean, writable home and DB location
This removes TMP/HOME/DB permission issues that are common in devcontainers.
export PREFECT_HOME=/tmp/prefect-home
mkdir -p "$PREFECT_HOME"
export TMPDIR=/tmp
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db
export PREFECT_LOGGING_LEVEL=DEBUG
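To double-check the overrides are actually picked up, you can print them back through Prefect's settings objects — a sketch, assuming these setting objects are importable from prefect.settings as in other 3.x code paths:
python - << 'PY'
from prefect.settings import PREFECT_HOME, PREFECT_SERVER_DATABASE_CONNECTION_URL

# both should reflect the exports above
print("home:", PREFECT_HOME.value())
print("db url:", PREFECT_SERVER_DATABASE_CONNECTION_URL.value())
PY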
3) Start the same server app in-process to get the real stack trace
This avoids the silent timeout and will print why startup fails.
python - << 'PY'
import os, uvicorn
from prefect.server.api.server import create_app
os.environ["PREFECT_LOGGING_LEVEL"]="DEBUG"
os.environ["PREFECT_HOME"]=os.environ.get("PREFECT_HOME", "/tmp/prefect-home")
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"]=os.environ.get(
"PREFECT_SERVER_DATABASE_CONNECTION_URL",
"sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db",
)
app = create_app(ephemeral=True)
# Run with a single worker to keep it simple; Ctrl+C to stop if it runs
uvicorn.run(app, host="127.0.0.1", port=8899, log_level="debug")
PY
- If it throws, you’ll see the exact error (e.g., SQLite path/permissions, migration error, missing dependency). Paste that error and I’ll give you the one-line fix.
- If it runs, open http://127.0.0.1:8899/api/health in the container to confirm. Then the issue is specific to the subprocess startup; proceed to step 4.
4) Force a clean subprocess start method
On some container setups, a forked child can hang during ASGI/uvicorn startup. Forcing “spawn” often fixes it.
python - << 'PY'
import os, httpx, multiprocessing as mp
mp.set_start_method("spawn", force=True)
from prefect.server.api.server import SubprocessASGIServer
os.environ["PREFECT_LOGGING_LEVEL"]="DEBUG"
os.environ["PREFECT_HOME"]="/tmp/prefect-home"
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"]="sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db"
srv = SubprocessASGIServer()
try:
    srv.start(timeout=120)
    url = getattr(srv, "api_url", "http://127.0.0.1:4200/api")
    r = httpx.get(url + "/health", timeout=5)
    print("health:", r.status_code, r.text)
finally:
    srv.stop()
    print("stopped")
PY
5) Immediate unblock for your tests
If you need to proceed while we pinpoint the root cause:
- Start a dedicated server in the devcontainer:
prefect server start
- Point tests to it:
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py::test_my_favorite_flow -s
Why these steps help
- Child import check catches venv/path mismatches (very common in devcontainers).
- Explicit PREFECT_HOME/TMPDIR/DB avoids container filesystem quirks that break SQLite/migrations.
- Running the app in-process surfaces the exact exception; we won’t have to guess.
- Forcing spawn avoids rare fork-related deadlocks on some container bases.
Please share:
- The output of the child-import test (step 1)
- The first error/traceback you see from the in-process uvicorn run (step 3)Marvin
10/04/2025, 1:16 PMKurt Sys (Vintecc)
10/04/2025, 1:28 PMpython - << 'PY'
import os, uvicorn
from prefect.server.api.server import create_app
os.environ["PREFECT_LOGGING_LEVEL"]="DEBUG"
os.environ["PREFECT_HOME"]=os.environ.get("PREFECT_HOME", "/tmp/prefect-home")
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"]=os.environ.get(
"PREFECT_SERVER_DATABASE_CONNECTION_URL",
"sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db",
)
app = create_app(ephemeral=True)
# Run with a single worker to keep it simple; Ctrl+C to stop if it runs
uvicorn.run(app, host="127.0.0.1", port=8899, log_level="debug")
PY
INFO: Started server process [351156]
INFO: Waiting for application startup.
15:27:01.782 | DEBUG | prefect.profiles - Using profile 'ephemeral'
but:
curl http://127.0.0.1:8899/api/health
hangsMarvin
10/04/2025, 1:28 PMMarvin
10/04/2025, 1:29 PMpython - << 'PY'
import sys, subprocess
print("parent:", sys.executable)
cp = subprocess.run(
[sys.executable, "-c", "import sys, prefect; print('child:', sys.executable, 'prefect', prefect.__version__)"],
capture_output=True, text=True
)
print("rc:", cp.returncode)
print("stdout:\n", cp.stdout)
print("stderr:\n", cp.stderr)
PY
If rc != 0 or it can’t import prefect, the server subprocess dies immediately. We’ll fix the venv path.
2) Ensure SQLite async driver is present
python - << 'PY'
try:
    import aiosqlite, sqlite3, sqlalchemy, alembic
    print("aiosqlite", aiosqlite.__version__, "sqlite", sqlite3.sqlite_version, "sqlalchemy", sqlalchemy.__version__, "alembic", alembic.__version__)
except Exception as e:
    print("IMPORT ERROR:", e)
    raise
PY
If aiosqlite is missing, install it inside the devcontainer:
pip install aiosqlite
3) Use a clean, writable Prefect home and DB file
Make sure the directory exists before starting the server.
export PREFECT_HOME=/tmp/prefect-home
mkdir -p "$PREFECT_HOME"
export TMPDIR=/tmp
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db
export PREFECT_LOGGING_LEVEL=DEBUG
4) Try API-only app to isolate UI startup
If API-only works, the hang is in UI mounting/copy.
python - << 'PY'
import os, uvicorn
from prefect.server.api.server import create_api_app
os.environ["PREFECT_LOGGING_LEVEL"]="DEBUG"
os.environ["PREFECT_HOME"]="/tmp/prefect-home"
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"]="sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db"
app = create_api_app()
uvicorn.run(app, host="127.0.0.1", port=8899, log_level="debug")
PY
Then:
curl -v http://127.0.0.1:8899/health
Note: with API-only, health is at /health (not /api/health).
5) If API-only works, try full app again after ensuring the directory exists
Your earlier run didn’t mkdir /tmp/prefect-home before starting. Do that, then:
python - << 'PY'
import os, uvicorn
from prefect.server.api.server import create_app
os.environ["PREFECT_LOGGING_LEVEL"]="DEBUG"
os.environ["PREFECT_HOME"]="/tmp/prefect-home"
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"]="sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db"
app = create_app(ephemeral=True)
uvicorn.run(app, host="127.0.0.1", port=8899, log_level="debug")
PY
Health for the full app is at:
curl -v http://127.0.0.1:8899/api/health
6) If it still hangs, force a fresh process start method
Some container bases misbehave with forked children.
python - << 'PY'
import os, httpx, multiprocessing as mp
mp.set_start_method("spawn", force=True)
from prefect.server.api.server import SubprocessASGIServer
os.environ["PREFECT_LOGGING_LEVEL"]="DEBUG"
os.environ["PREFECT_HOME"]="/tmp/prefect-home"
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"]="sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db"
srv = SubprocessASGIServer()
try:
    srv.start(timeout=120)
    url = getattr(srv, "api_url")
    print("api:", url)
    r = httpx.get(url + "/health", timeout=5)
    print("health:", r.status_code, r.text)
finally:
    srv.stop()
PY
Immediate unblock for tests
- Since “prefect server start” works in the container, you can point tests at it and proceed:
prefect server start # separate terminal
export PREFECT_API_URL=http://127.0.0.1:4200/apiMarvin
10/04/2025, 1:29 PMKurt Sys (Vintecc)
10/04/2025, 2:33 PMprefect server start --host 0.0.0.0 --no-ui --no-scheduler --log-level DEBUG
Configure Prefect to communicate with the server with:
prefect config set PREFECT_API_URL=<http://0.0.0.0:4200/api>
View the API reference documentation at <http://0.0.0.0:4200/docs>
The dashboard is disabled. Set `PREFECT_UI_ENABLED=1` to re-enable it.
INFO: Started server process [326410]
INFO: Waiting for application startup.
16:29:51.966 | DEBUG | prefect.server.services.cancellationcleanup - Starting CancellationCleanup
16:29:51.967 | DEBUG | prefect.server.services.cancellationcleanup - About to run CancellationCleanup...
16:29:52.099 | DEBUG | prefect.server.services.foreman - Starting Foreman
16:29:52.099 | DEBUG | prefect.server.services.foreman - About to run Foreman...
16:29:52.101 | DEBUG | prefect.server.services.marklateruns - Starting MarkLateRuns
16:29:52.101 | DEBUG | prefect.server.services.marklateruns - About to run MarkLateRuns...
16:29:52.102 | DEBUG | prefect.server.services.failexpiredpauses - Starting FailExpiredPauses
16:29:52.102 | DEBUG | prefect.server.services.failexpiredpauses - About to run FailExpiredPauses...
16:29:52.103 | DEBUG | prefect.server.services.repossessor - Starting Repossessor
16:29:52.103 | DEBUG | prefect.server.services.repossessor - About to run Repossessor...
16:29:52.104 | DEBUG | prefect.server.services.task_run_recorder - TaskRunRecorder started
16:29:52.104 | DEBUG | prefect.server.services.telemetry - Starting Telemetry
16:29:52.104 | DEBUG | prefect.server.services.telemetry - About to run Telemetry...
16:29:52.105 | DEBUG | prefect.server.events.services.event_persister - Event persister started
16:29:52.105 | DEBUG | prefect.server.services.proactivetriggers - Starting ProactiveTriggers
16:29:52.105 | DEBUG | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:29:52.106 | DEBUG | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 16:29:52.106096+02:00
16:29:52.106 | DEBUG | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:29:52.106 | DEBUG | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 16:29:52.106577+02:00
16:29:52.106 | DEBUG | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:29:52.106 | DEBUG | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 16:29:52.106986+02:00
...
Never actually starts.
Marvin
10/04/2025, 2:33 PMMarvin
10/04/2025, 2:34 PMexport PREFECT_LOGGING_LEVEL=DEBUG
export PREFECT_HOME=/tmp/prefect-home
rm -rf /tmp/prefect-home && mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
2) Reset the database and start the API-only server
prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- You should see “Application startup complete.” and be able to GET:
curl -v <http://127.0.0.1:4200/api/health>
3) If step 2 still hangs, confirm the SQLite driver is present
python - << 'PY'
try:
    import aiosqlite, sqlite3, sqlalchemy, alembic
    print("aiosqlite", aiosqlite.__version__, "sqlite", sqlite3.sqlite_version, "sqlalchemy", sqlalchemy.__version__, "alembic", alembic.__version__)
except Exception as e:
    print("IMPORT ERROR:", e)
    raise
PY
- If aiosqlite is missing, install it:
pip install aiosqlite
4) If it’s still stuck, run the API-only app in-process to surface the real error
- This bypasses the subprocess wrapper and prints any migration/DB errors directly:
python - << 'PY'
import os, uvicorn
from prefect.server.api.server import create_api_app
os.environ["PREFECT_LOGGING_LEVEL"]="DEBUG"
os.environ["PREFECT_HOME"]="/tmp/prefect-home"
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"]="sqlite+aiosqlite:////tmp/prefect-home/server.db"
app = create_api_app()
uvicorn.run(app, host="127.0.0.1", port=8899, log_level="debug")
PY
Then:
curl -v <http://127.0.0.1:8899/health>
Why this works
- Devcontainers commonly mount your workspace over a bind/NFS volume. SQLite file locks can stall on those mounts. Using /tmp (container-local) sidesteps that entirely; a quick way to check your mounts is shown after this list.
- A stale or partially-initialized DB can also block startup; prefect server database reset -y
clears it.
- If the aiosqlite driver is absent or mismatched, startup can hang; step 3 confirms the deps.
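If you want to verify the mount theory, a quick Linux-only check of which filesystem backs each path (the paths are illustrative; adjust to your workspace):
python - << 'PY'
# Match each path against /proc/mounts to find its backing filesystem
paths = ["/tmp", "/workspaces", "/home"]
mounts = []
with open("/proc/mounts") as f:
    for line in f:
        fields = line.split()
        mounts.append((fields[1], fields[2]))  # (mount point, fs type)
for p in paths:
    mp, fstype = max((m for m in mounts if p.startswith(m[0])), key=lambda m: len(m[0]))
    print(f"{p} -> {fstype} (mounted at {mp})")
PY
Anything backed by nfs, 9p, virtiofs, or an overlay bind is a candidate for SQLite locking stalls; tmpfs or ext4 under /tmp is safe.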
If you run those steps and it still doesn’t start, please paste:
- The output of the database reset command
- The first 30 lines after prefect server start …
with DEBUG
- The output from the API-only run (step 4), including any traceback
We’ll use that to pinpoint the exact blocker and give you a one-liner fix. Links: Settings and profiles, Testing workflows.
Kurt Sys (Vintecc)
10/04/2025, 2:36 PMprefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
I get the same issue again:
The dashboard is disabled. Set `PREFECT_UI_ENABLED=1` to re-enable it.
INFO: Started server process [328375]
INFO: Waiting for application startup.
16:35:35.220 | DEBUG | prefect.server.services.cancellationcleanup - Starting CancellationCleanup
16:35:35.221 | DEBUG | prefect.server.services.cancellationcleanup - About to run CancellationCleanup...
16:35:35.224 | DEBUG | prefect.server.services.foreman - Starting Foreman
16:35:35.224 | DEBUG | prefect.server.services.foreman - About to run Foreman...
16:35:35.225 | DEBUG | prefect.server.services.marklateruns - Starting MarkLateRuns
16:35:35.226 | DEBUG | prefect.server.services.marklateruns - About to run MarkLateRuns...
16:35:35.227 | DEBUG | prefect.server.services.failexpiredpauses - Starting FailExpiredPauses
16:35:35.227 | DEBUG | prefect.server.services.failexpiredpauses - About to run FailExpiredPauses...
16:35:35.227 | DEBUG | prefect.server.services.repossessor - Starting Repossessor
16:35:35.228 | DEBUG | prefect.server.services.repossessor - About to run Repossessor...
16:35:35.228 | DEBUG | prefect.server.services.recentdeploymentsscheduler - Starting RecentDeploymentsScheduler
16:35:35.228 | DEBUG | prefect.server.services.recentdeploymentsscheduler - About to run RecentDeploymentsScheduler...
16:35:35.230 | DEBUG | prefect.server.services.scheduler - Starting Scheduler
16:35:35.230 | DEBUG | prefect.server.services.scheduler - About to run Scheduler...
16:35:35.231 | DEBUG | prefect.server.services.task_run_recorder - TaskRunRecorder started
16:35:35.232 | DEBUG | prefect.server.services.telemetry - Starting Telemetry
16:35:35.232 | DEBUG | prefect.server.services.telemetry - About to run Telemetry...
16:35:35.233 | DEBUG | prefect.server.events.services.event_persister - Event persister started
16:35:35.233 | DEBUG | prefect.server.services.proactivetriggers - Starting ProactiveTriggers
16:35:35.233 | DEBUG | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:35:35.233 | DEBUG | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 16:35:35.233795+02:00
16:35:35.234 | DEBUG | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:35:35.234 | DEBUG | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 16:35:35.234253+02:00
16:35:35.234 | DEBUG | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:35:35.234 | DEBUG | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 16:35:35.234648+02:00
...
So, inside the devcontainer, it doesn't start.
Marvin
10/04/2025, 2:37 PMMarvin
10/04/2025, 2:39 PMexport PREFECT_LOGGING_LEVEL=DEBUG
export PREFECT_HOME=/tmp/prefect-home
rm -rf /tmp/prefect-home && mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
2) Start the API with NO SERVICES
- If this responds, we’ve proven a service is hanging startup.
prefect server database reset -y
prefect server start --host 127.0.0.1 --no-services --no-ui --log-level DEBUG
# In another shell:
curl -v <http://127.0.0.1:4200/api/health>
3) If that works, re-enable services but disable analytics/telemetry
- Telemetry calling out through a proxy-less container is a common hang.
export PREFECT_SERVER_ANALYTICS_ENABLED=0
prefect server start --host 127.0.0.1 --no-ui --analytics-off --log-level DEBUG
curl -v <http://127.0.0.1:4200/api/health>
4) If you want the harness to work without services (most tests don’t need them), set this before pytest
export PREFECT_API_SERVICES_ENABLED=0
export PREFECT_SERVER_ANALYTICS_ENABLED=0
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py::test_my_favorite_flow -s
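If you'd rather not depend on the shell, the same variables can go in a conftest.py so every pytest run inherits them (a sketch; the assignments must run before Prefect spawns the test server):
# conftest.py at the root of your test tree
import os

# setdefault keeps any explicit shell overrides intact
os.environ.setdefault("PREFECT_API_SERVICES_ENABLED", "0")
os.environ.setdefault("PREFECT_SERVER_ANALYTICS_ENABLED", "0")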
5) Double-check the child interpreter can import Prefect
- If the server subprocess can’t import Prefect, it dies and you see only a timeout.
python - << 'PY'
import sys, subprocess
cp = subprocess.run([sys.executable, "-c", "import prefect; print(prefect.__version__)"], capture_output=True, text=True)
print("rc:", cp.returncode, "out:", cp.stdout, "err:", cp.stderr)
PY
6) Confirm the async SQLite driver is present
python - << 'PY'
import aiosqlite, sqlite3
print("aiosqlite", aiosqlite.__version__, "sqlite", sqlite3.sqlite_version)
PY
If aiosqlite is missing, install it in the devcontainer:
pip install aiosqlite
Outcome guide:
- If “--no-services” starts and health is OK, a service is hanging startup; keeping services off for tests is fine. Otherwise disable just analytics (step 3).
- If even “--no-services” hangs, the issue is deeper (DB/driver/permissions). The clean /tmp DB + aiosqlite check in steps 1 and 6 should address that; if not, share the first 30 lines after startup at DEBUG and we’ll pinpoint the exact blocker.
Reference: Testing workflows in Prefect 3
Kurt Sys (Vintecc)
10/04/2025, 2:43 PM16:40:06.489 | DEBUG | prefect.server.events.services.event_persister - Event persister started
16:40:06.489 | DEBUG | prefect.server.services.proactivetriggers - Starting ProactiveTriggers
16:40:06.489 | DEBUG | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:40:06.490 | DEBUG | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 14:40:11.489737+00:00
16:40:06.490 | INFO | prefect.server.events.services.triggers - ReactiveTriggers starting with unique consumer name: reactive-triggers-kurtsys-ThinkPad-P16v-Gen-1-920427-701abdf6
16:40:06.490 | DEBUG | prefect.server.events.triggers - Loading automations
16:40:06.491 | INFO | prefect.server.events.actions - Starting action message handler
16:40:06.491 | DEBUG | prefect.server.events.services.actions - Actions started
INFO: Application startup complete.
This is on the devcontainer:
16:41:20.185 | DEBUG | prefect.server.events.services.event_persister - Event persister started
16:41:20.186 | DEBUG | prefect.server.services.proactivetriggers - Starting ProactiveTriggers
16:41:20.186 | DEBUG | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:41:20.186 | DEBUG | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 16:41:20.186571+02:00
16:41:20.186 | DEBUG | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
...
what's the problem inside the devcontainer?
Marvin
10/04/2025, 2:43 PMMarvin
10/04/2025, 2:45 PMexport PREFECT_HOME=/tmp/prefect-home
mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG
prefect server database reset -y
prefect server start --host 127.0.0.1 --no-services --no-ui --log-level DEBUG
- In another shell:
curl -v <http://127.0.0.1:4200/api/health>
If this responds, the webserver is fine and a service is the blocker.
2) Disable analytics/telemetry and start services
- This is the most frequent culprit in containers with restricted egress:
export PREFECT_SERVER_ANALYTICS_ENABLED=0
prefect server start --host 127.0.0.1 --no-ui --analytics-off --log-level DEBUG
- Check for “Application startup complete.” and:
curl -v <http://127.0.0.1:4200/api/health>
3) Make this permanent in your devcontainer/test env
- In devcontainer.json:
"containerEnv": {
"PREFECT_HOME": "/tmp/prefect-home",
"PREFECT_SERVER_DATABASE_CONNECTION_URL": "sqlite+aiosqlite:////tmp/prefect-home/server.db",
"PREFECT_SERVER_ANALYTICS_ENABLED": "0",
"PREFECT_LOGGING_LEVEL": "DEBUG"
}
- For pytest runs (so prefect_test_harness won’t hang):
export PREFECT_SERVER_ANALYTICS_ENABLED=0
pytest -p no:xdist ...
Optional sanity checks
- Confirm async SQLite driver is present:
python - << 'PY'
import aiosqlite, sqlite3
print("aiosqlite", aiosqlite.__version__, "sqlite", sqlite3.sqlite_version)
PY
- Confirm the server subprocess can import Prefect (venv is consistent):
python - << 'PY'
import sys, subprocess
cp = subprocess.run([sys.executable, "-c", "import prefect; print(prefect.__version__)"], capture_output=True, text=True)
print("rc:", cp.returncode, "out:", cp.stdout, "err:", cp.stderr)
PY
If:
- “--no-services” works but “--analytics-off” still hangs, we can selectively disable other services next (but analytics is the usual fix).
- even “--no-services” hangs, something deeper (filesystem/driver/permissions) is still off; but you’ve already moved the DB to /tmp and reset it, so that’s unlikely.
Once “--analytics-off” allows “Application startup complete.”, your prefect_test_harness should work as long as you export PREFECT_SERVER_ANALYTICS_ENABLED=0
in the test environment.
Want me to provide a minimal .env and .vscode/settings.json snippet to make this automatic for your devcontainer and VS Code Python tests?
Kurt Sys (Vintecc)
10/04/2025, 2:48 PMMarvin
10/04/2025, 2:48 PMMarvin
10/04/2025, 2:49 PMEach background service can be toggled individually via settings in the PREFECT_API_SERVICES_* namespace.
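To see which of these flags are already set in your current shell before bisecting:
env | grep '^PREFECT_API_SERVICES_' | sort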
Try this bisect approach inside the devcontainer:
1) Start with everything off, then enable in groups
export PREFECT_HOME=/tmp/prefect-home
mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG
# Disable all services
export PREFECT_API_SERVICES_ENABLED=0
prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
# Should say "Application startup complete."
2) Enable services gradually to find the culprit
- Group 1: core scheduling loop
export PREFECT_API_SERVICES_ENABLED=1
export PREFECT_API_SERVICES_SCHEDULER_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- Group 2: late runs + pause expirations
export PREFECT_API_SERVICES_LATE_RUNS_ENABLED=1
export PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- Group 3: repossessor + cancellation cleanup + task run recorder
export PREFECT_API_SERVICES_REPOSSESSOR_ENABLED=1
export PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED=1
export PREFECT_API_SERVICES_TASK_RUN_RECORDER_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- Group 4: event system
export PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED=1
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=1
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
If a start hangs after enabling a group, the culprit is within that group; then toggle one-by-one within that group to find the exact service.
Most common offenders in restricted containers:
- TRIGGERS or EVENT_LOGGER can perform outbound calls (telemetry/analytics); if your container has no egress or DNS/proxy misconfig, they can hang.
- SCHEDULER rarely hangs, but it’s essential; if it hangs, it’s likely DB-related rather than networking.
For pytest/prefect_test_harness
Once you’ve identified the problematic service(s), export the corresponding env var(s) before running tests so the ephemeral server inherits them, e.g.:
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
pytest -p no:xdist ...
If you want, share which service flips it from “Application startup complete.” to “stuck,” and I’ll propose a minimal permanent configuration for your devcontainer.Kurt Sys (Vintecc)
10/04/2025, 2:54 PMMarvin
10/04/2025, 2:54 PMMarvin
10/04/2025, 2:54 PMprefect config view | grep -E 'PREFECT_API_URL|source'
- If you see http://0.0.0.0:4200/api, change it to a routable address from inside the container, e.g.:
prefect config set PREFECT_API_URL=<http://127.0.0.1:4200/api>
# or, if you’re binding to a different interface/port, use that reachable address
- Also ensure localhost is not proxied:
export NO_PROXY=127.0.0.1,localhost
Then retry a full start with services:
prefect server start --host 127.0.0.1 --log-level DEBUG
# Expect to see: "Application startup complete."
curl -v <http://127.0.0.1:4200/api/health>
Notes
- The CLI banner can print 0.0.0.0 when you pass --host 0.0.0.0, but 0.0.0.0 is a bind address, not a destination: do not point clients (including the server’s own services) at 0.0.0.0; always use an address you can connect to (127.0.0.1 inside the container, or the container’s IP/hostname as appropriate). A small illustration follows these notes.
- If your container has restricted egress, you can still disable analytics to be safe:
export PREFECT_SERVER_ANALYTICS_ENABLED=0
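To make the bind-vs-connect distinction concrete, a minimal socket sketch (8123 is just an arbitrary free port):
python - << 'PY'
import socket
# A server binds 0.0.0.0 to listen on every interface...
srv = socket.socket()
srv.bind(("0.0.0.0", 8123))
srv.listen(1)
# ...but a client has to dial a concrete address, e.g. loopback
cli = socket.create_connection(("127.0.0.1", 8123), timeout=2)
print("connected via:", cli.getpeername())
cli.close()
srv.close()
PY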
Once PREFECT_API_URL is corrected, prefect_test_harness should work in the devcontainer without --no-services. If it still doesn’t, paste the output of:
- prefect config view (just the PREFECT_API_URL line)
- The first lines after “Waiting for application startup.” at DEBUG
…but I’m confident the 0.0.0.0 API URL is the blocker here.
Kurt Sys (Vintecc)
10/04/2025, 2:55 PMprefect config view | grep -E 'PREFECT_API_URL|source'
PREFECT_API_URL='<http://127.0.0.1:4200/api>' (from profile)
Marvin
10/04/2025, 2:55 PMMarvin
10/04/2025, 2:56 PMexport PREFECT_HOME=/tmp/prefect-home
mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG
export NO_PROXY=127.0.0.1,localhost
# If your container has restricted egress, keep analytics off
export PREFECT_SERVER_ANALYTICS_ENABLED=0
2) Enable services except the event ones and verify startup completes
# Turn services on globally
export PREFECT_API_SERVICES_ENABLED=1
# Keep event-related services off first
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
# event_persister is usually fine; leave it on to start
export PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED=1
prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
# Expect "Application startup complete."
3) If that works, re-enable event services one-by-one to find the culprit
- Re-enable triggers:
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- If still fine, re-enable event logger:
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- If either re-enable causes the hang, you’ve found the offending service.
4) If it still hangs even with all three event services off
Then the hang is elsewhere; test core services one at a time:
- Scheduler only:
export PREFECT_API_SERVICES_ENABLED=0
export PREFECT_API_SERVICES_SCHEDULER_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- Then add late_runs, pause_expirations, repossessor, cancellation_cleanup, task_run_recorder one-by-one:
export PREFECT_API_SERVICES_LATE_RUNS_ENABLED=1
# ...then next one, etc.
Useful visibility
- See which services Prefect thinks are enabled:
prefect config view --show-defaults | grep PREFECT_API_SERVICES_
Once you’ve identified the offending service(s), keep them disabled in your devcontainer and tests:
- For CLI/server:
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
prefect server start --host 127.0.0.1 --no-ui
- For pytest/prefect_test_harness:
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
pytest -p no:xdist ...
If you report back which exact service causes the stall, I’ll suggest a minimal permanent config and potential root cause (e.g., network egress/proxy/DNS quirks specific to that service) so you can decide whether to keep it off in dev or tweak networking to allow it.
Kurt Sys (Vintecc)
10/04/2025, 2:58 PMWith PREFECT_API_SERVICES_ENABLED on, it fails. So let's see if we can debug the network?
Marvin
10/04/2025, 2:58 PMMarvin
10/04/2025, 2:59 PMexport PREFECT_HOME=/tmp/prefect-home
mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG
export NO_PROXY=127.0.0.1,localhost
export PREFECT_SERVER_ANALYTICS_ENABLED=0
prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
# Then
curl -v <http://127.0.0.1:4200/api/health>
If this reaches “Application startup complete.” and health returns 200, telemetry was the blocker.
Make it sticky (devcontainer/tests)
- Add to your devcontainer env (or .env used by VS Code):
PREFECT_HOME=/tmp/prefect-home
PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
PREFECT_SERVER_ANALYTICS_ENABLED=0
NO_PROXY=127.0.0.1,localhost
- For pytest/prefect_test_harness:
export PREFECT_SERVER_ANALYTICS_ENABLED=0
pytest -p no:xdist ...
If it still hangs after disabling analytics, next suspects are the event services. You can keep services on and just turn off the event ones:
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
# event_persister is usually fine; leave it on
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
But in most devcontainers, disabling analytics alone resolves this. Once confirmed, your prefect_test_harness should work in the container as well.
Kurt Sys (Vintecc)
10/04/2025, 3:02 PMexport PREFECT_HOME=/tmp/prefect-home
mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG
export NO_PROXY=127.0.0.1,localhost
export PREFECT_SERVER_ANALYTICS_ENABLED=0
prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
doesn't complete the startup.
Marvin
10/04/2025, 3:02 PMMarvin
10/04/2025, 3:03 PMexport PREFECT_HOME=/tmp/prefect-home
mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG
# Networking sanity
export NO_PROXY=127.0.0.1,localhost,::1,$(hostname)
# Services: ON globally, but disable event-related ones
export PREFECT_API_SERVICES_ENABLED=1
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
# event_persister can stay on; it’s usually fine
export PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED=1
prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
Then:
curl -v <http://127.0.0.1:4200/api/health>
- If this now reaches “Application startup complete.”, the triggers/event_logger path was the culprit.
2) If it still hangs, try scheduler-only to see if core services are fine
export PREFECT_API_SERVICES_ENABLED=0
export PREFECT_API_SERVICES_SCHEDULER_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- If scheduler-only works, add services back one-by-one:
- late_runs, pause_expirations, repossessor, cancellation_cleanup, task_run_recorder
- This pinpoints the exact one.
3) Force local loopback (avoid IPv6/proxy pitfalls) and fast-fail network calls
- Keep clients on IPv4 only and fast-fail if any callback goes out-of-container:
export PREFECT_API_URL=<http://127.0.0.1:4200/api>
export PREFECT_API_REQUEST_TIMEOUT=5
export NO_PROXY=127.0.0.1,localhost,::1,$(hostname)
4) For tests (prefect_test_harness), apply the same env
Add these to your devcontainer env or the shell running pytest:
export PREFECT_HOME=/tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
export NO_PROXY=127.0.0.1,localhost,::1,$(hostname)
Optional network debug
- See where the server is trying to connect during startup:
# In one shell
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
# In another shell, watch outbound loopback attempts
ss -tnp state syn-sent | grep 4200 || true
ss -tnp | grep 'python' || true
Why this helps
- Triggers/event logger open streaming connections back to the API; in containers with odd proxy/DNS/IPv6 settings, those can hang.
- NO_PROXY including ::1 and $(hostname) avoids proxying loopback or hostname->loopback.
- PREFECT_API_REQUEST_TIMEOUT avoids indefinite hangs and will surface an error if a service keeps trying to call out.
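You can also confirm how the proxy environment treats loopback using only the stdlib:
python - << 'PY'
import urllib.request
# Proxy variables that env-respecting HTTP clients would honor
print("proxies:", urllib.request.getproxies())
# Whether these hosts bypass the proxy under the NO_PROXY rules
for host in ("127.0.0.1", "localhost"):
    print(host, "bypassed:", bool(urllib.request.proxy_bypass(host)))
PY
Expect an empty proxies dict, or bypassed: True for both hosts; anything else means loopback traffic is being proxied.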
Please run step 1 first and tell me:
- Do you get “Application startup complete.” with triggers+event_logger disabled?
If yes, we’ll make that the default in your devcontainer and re-enable them later if needed. If not, tell me which single service (from step 2) causes the hang.
Kurt Sys (Vintecc)
10/04/2025, 3:09 PMusing profile 'local'
- how did I get that profile?
Marvin
10/04/2025, 3:10 PMMarvin
10/04/2025, 3:11 PMprefect profile populate-defaults
prefect profile ls
prefect profile use local
prefect profile inspect local
- You or a tool set PREFECT_PROFILE=local in your environment
The active profile is chosen in this order:
1) CLI flag: prefect --profile local ...
2) Env var: PREFECT_PROFILE=local
3) The “active” entry in your profiles file
- Your profiles file was updated
Profiles live at:
- Linux/macOS: ~/.prefect/profiles.toml
- Or a custom path if PREFECT_PROFILES_PATH is set
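For reference, that file is plain TOML; a minimal version with an active 'local' profile looks roughly like this (values illustrative):
active = "local"

[profiles.local]
PREFECT_API_URL = "http://127.0.0.1:4200/api"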
Quickly check what happened:
# Show active profile and where each value comes from
prefect config view
# List and inspect profiles
prefect profile ls
prefect profile inspect local
# See if PREFECT_PROFILE env var is set
env | grep ^PREFECT_PROFILE
If you want to make “local” your stable default in the devcontainer:
- Add to your container’s env:
PREFECT_PROFILE=local
- Or run once:
prefect profile use local
Docs: Settings and profiles
Kurt Sys (Vintecc)
10/04/2025, 3:16 PMexport PREFECT_HOME=/tmp/prefect-home
mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG
# Networking sanity
export NO_PROXY=127.0.0.1,localhost,::1,$(hostname)
# Services: ON globally, but disable event-related ones
export PREFECT_API_SERVICES_ENABLED=1
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
# event_persister can stay on; it's usually fine
export PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED=1
prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG > prefect.log 2>&1
Marvin
10/04/2025, 3:16 PMMarvin
10/04/2025, 3:17 PMexport PREFECT_HOME=/tmp/prefect-home
rm -rf /tmp/prefect-home && mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG
export NO_PROXY=127.0.0.1,localhost,::1,$(hostname)
export PREFECT_API_URL=<http://127.0.0.1:4200/api>
export PREFECT_SERVER_ANALYTICS_ENABLED=0
export PREFECT_API_REQUEST_TIMEOUT=5
# Helper to clear all per-service flags
unset $(env | awk -F= '/^PREFECT_API_SERVICES_/ {print $1}')
services=(CANCELLATION_CLEANUP EVENT_PERSISTER EVENT_LOGGER FOREMAN LATE_RUNS SCHEDULER PAUSE_EXPIRATIONS REPOSSESSOR TASK_RUN_RECORDER TRIGGERS)
for s in "${services[@]}"; do
echo "=== Testing $s ==="
# Disable global services, then enable just one
export PREFECT_API_SERVICES_ENABLED=0
unset $(env | awk -F= '/^PREFECT_API_SERVICES_/ {print $1}')
export PREFECT_API_SERVICES_${s}_ENABLED=1
# Fresh DB each run to avoid cross-run effects
rm -f /tmp/prefect-home/server.db
timeout 30s prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG > /tmp/prefect_${s}.log 2>&1
if grep -q "Application startup complete" /tmp/prefect_${s}.log; then
echo "$s: OK"
else
echo "$s: HANG"
tail -n +1 /tmp/prefect_${s}.log | sed -n '1,120p'
fi
done
What this tells us
- The first service that prints HANG is the culprit. The last lines in that log usually show which component is stalling. Share the first failing service and the last ~50 lines of its log.
If you want to try a quick network-focused run first
- Keep all services enabled but make network failures fail fast, and ensure loopback is not proxied:
export NO_PROXY=127.0.0.1,localhost,::1,$(hostname)
export PREFECT_API_REQUEST_TIMEOUT=5
export PREFECT_SERVER_ANALYTICS_ENABLED=0
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG > prefect.log 2>&1
tail -n 200 prefect.log
Common outcomes
- TRIGGERS or EVENT_LOGGER hangs in restricted networks. If so, leave them off in dev:
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
- If SCHEDULER or FOREMAN hangs, it’s typically DB/loopback config; but you’ve already moved DB to /tmp and set NO_PROXY, so we’ll inspect that log to see the exact call that’s stalling.
Once we know which service is the blocker, I’ll give you the minimal permanent env for your devcontainer and for pytest so prefect_test_harness works reliably.
Kurt Sys (Vintecc)
10/04/2025, 3:27 PMexport PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
(there are still a lot of logs, now about Repossessor, but I guess that's fine). So, can we figure out what the issue is?
Also, it should be ok if I just add this env variable to the devcontainer?
Marvin
10/04/2025, 3:27 PMMarvin
10/04/2025, 3:27 PMexport PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
- Keep this in your devcontainer config so both CLI and prefect_test_harness inherit it. Example devcontainer.json:
"containerEnv": {
"PREFECT_HOME": "/tmp/prefect-home",
"PREFECT_SERVER_DATABASE_CONNECTION_URL": "sqlite+aiosqlite:////tmp/prefect-home/server.db",
"PREFECT_API_SERVICES_TRIGGERS_ENABLED": "0",
"PREFECT_SERVER_ANALYTICS_ENABLED": "0",
"NO_PROXY": "127.0.0.1,localhost,::1,${containerWorkspaceFolderBasename}"
}
Or in a .env that VS Code loads:
PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
PREFECT_SERVER_ANALYTICS_ENABLED=0
If you want to dig into why Triggers hangs in your container
- Quick SSE connectivity check (API-only run is fine):
- Start server (with services enabled but triggers off):
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- In another shell, try streaming events:
curl -N -H "Accept: text/event-stream" <http://127.0.0.1:4200/api/events/stream?topics=*>
If this hangs or errors unexpectedly, it points at loopback/proxy/DNS behavior in the container. Ensuring NO_PROXY includes 127.0.0.1, localhost, ::1, and the container hostname usually helps:
export NO_PROXY=127.0.0.1,localhost,::1,$(hostname)
- Force quick failures instead of indefinite waits while testing:
export PREFECT_API_REQUEST_TIMEOUT=5
If Triggers starts after this, a retry loop was waiting on a blocked connection.
- Confirm the server is using IPv4 loopback consistently:
export PREFECT_API_URL=<http://127.0.0.1:4200/api>
What you lose with triggers disabled
- Reactive automations (event-driven actions). Core orchestration (scheduling, runs, logs, UI) still works.
If you’d like, share a short log snippet from a start attempt with only Triggers enabled (and DEBUG logging) and I can try to spot the exact step it stalls on. Otherwise, adding PREFECT_API_SERVICES_TRIGGERS_ENABLED=0 to your devcontainer is a pragmatic, safe fix for local development and testing.
Kurt Sys (Vintecc)
10/04/2025, 3:31 PMcurl -N -H "Accept: text/event-stream" <http://127.0.0.1:4200/api/events/stream>\?topics\=\*
{"detail":"Not Found"}%
Right, but can I trigger a flow now? I guess not?
Marvin
10/04/2025, 3:31 PMMarvin
10/04/2025, 3:32 PMfrom prefect import flow
@flow
def my_flow():
return "hello"
if __name__ == "__main__":
my_flow()
- Run a deployment via CLI
1) Create a deployment (either with CLI or code). For CLI:
prefect deploy ./path/to/flow.py:my_flow -n my-deploy --pool my-pool
2) Start a worker for your pool:
prefect work-pool create my-pool --type process # once
prefect worker start --pool my-pool # keep this running
3) Trigger the deployment:
prefect deployment run my_flow/my-deploy --watch
- Run a deployment via code (3.x style)
from prefect import flow

@flow
def my_flow():
    return "hello"

my_flow.from_source(
    source="file://.",  # or your repo
    entrypoint="path/to/flow.py:my_flow",
).deploy(
    name="my-deploy",
    work_pool_name="my-pool",
)
Notes
- Your curl to /api/events/stream returning 404 is expected if you hit API-only or the endpoint path differs across modes — but it doesn’t affect running flows.
- With TRIGGERS disabled you just won’t get reactive event-driven automations; scheduled runs, manual triggers, logging, results, UI, etc., remain fully functional.
- Make sure your CLI points at the server:
prefect config set PREFECT_API_URL=<http://127.0.0.1:4200/api>
If you want, I can give you a minimal end-to-end snippet right now that:
- Creates a pool
- Starts a worker
- Creates a deployment
- Triggers it and watches logs
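For local development there's also a shorter path than pool + worker: flow.serve(), which registers a deployment and polls for runs in a single process. A sketch (names illustrative):
from prefect import flow

@flow
def my_flow():
    return "hello"

if __name__ == "__main__":
    # Registers a deployment named my-flow/dev and starts polling for runs;
    # trigger it from another shell with: prefect deployment run 'my-flow/dev'
    my_flow.serve(name="dev")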
Or share your flow’s path/name and I’ll tailor the exact commands.
Kurt Sys (Vintecc)
10/04/2025, 3:35 PMMarvin
10/04/2025, 3:35 PMMarvin
10/04/2025, 3:36 PMKurt Sys (Vintecc)
10/04/2025, 3:39 PMMarvin
10/04/2025, 3:40 PMMarvin
10/04/2025, 3:41 PMPREFECT_HOME=/tmp/prefect-home
PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
PREFECT_API_URL=<http://127.0.0.1:4200/api>
PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
PREFECT_SERVER_ANALYTICS_ENABLED=0
NO_PROXY=127.0.0.1,localhost,::1,${containerWorkspaceFolderBasename}
PREFECT_LOGGING_LEVEL=DEBUG
This will:
- Keep everything running
- Disable the Triggers service only in dev (fine unless you actively test automations)
If you want to debug Triggers specifically
- Force quick network error reporting:
export PREFECT_API_REQUEST_TIMEOUT=5
- Start with Triggers only to capture logs:
export PREFECT_API_SERVICES_ENABLED=0
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=1
prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG > triggers.log 2>&1
tail -n 200 triggers.log
If it’s a proxy/loopback issue, you’ll typically see httpx timeouts or connection errors. If you share the last ~100 lines from that log, I can pinpoint the exact failure.
What you can still do now
- Manual runs and scheduled deployments work normally with Triggers disabled:
- Create a pool, start a worker, deploy, and trigger with:
prefect work-pool create my-pool --type process
prefect worker start --pool my-pool
prefect deploy ./path/to/flow.py:my_flow -n my-deploy --pool my-pool
prefect deployment run my_flow/my-deploy --watch
Docs on event-driven automations if/when you re-enable Triggers: Event triggers in Prefect 3