<@ULVA73B9P> the most basic test fails when I run ...
# ask-marvin
k
@Marvin the most basic test fails when I run it, from https://docs.prefect.io/v3/how-to-guides/workflows/test-workflows
Copy code
from prefect import flow
from prefect.testing.utilities import prefect_test_harness

@flow
def my_favorite_flow():
    return 42


def test_my_favorite_flow():
    with prefect_test_harness(server_startup_timeout=120):
        assert my_favorite_flow() == 42
results in:
Copy code
======================== 1 failed in 123.19s (0:02:03) =========================
--- Logging error ---
Traceback (most recent call last):
  File "/home/vintecc/py310/lib/python3.10/site-packages/prefect/logging/handlers.py", line 355, in emit
    self.console.print(message, soft_wrap=True)
  File "/home/vintecc/py310/lib/python3.10/site-packages/rich/console.py", line 1697, in print
    with self:
  File "/home/vintecc/py310/lib/python3.10/site-packages/rich/console.py", line 870, in __exit__
    self._exit_buffer()
  File "/home/vintecc/py310/lib/python3.10/site-packages/rich/console.py", line 826, in _exit_buffer
    self._check_buffer()
  File "/home/vintecc/py310/lib/python3.10/site-packages/rich/console.py", line 2038, in _check_buffer
    self._write_buffer()
  File "/home/vintecc/py310/lib/python3.10/site-packages/rich/console.py", line 2107, in _write_buffer
    self.file.write(text)
ValueError: I/O operation on closed file.
Call stack:
  File "/home/vintecc/py310/lib/python3.10/site-packages/prefect/server/api/server.py", line 960, in stop
    subprocess_server_logger.info(
  File "/usr/lib/python3.10/logging/__init__.py", line 1477, in info
    self._log(INFO, msg, args, **kwargs)
  File "/usr/lib/python3.10/logging/__init__.py", line 1624, in _log
    self.handle(record)
  File "/usr/lib/python3.10/logging/__init__.py", line 1634, in handle
    self.callHandlers(record)
  File "/usr/lib/python3.10/logging/__init__.py", line 1696, in callHandlers
    hdlr.handle(record)
  File "/usr/lib/python3.10/logging/__init__.py", line 968, in handle
    self.emit(record)
  File "/home/vintecc/py310/lib/python3.10/site-packages/prefect/logging/handlers.py", line 361, in emit
    self.handleError(record)
Message: 'Stopping temporary server on http://127.0.0.1:8799'
Arguments: ()
Finished running tests!
m
thought for 213.7 seconds
Thanks for sharing the details; that's frustrating for such a simple test. What's happening here is likely two separate things:
- The test harness's temporary API server did not come up within your 120s timeout, so the context manager exited with a failure (that's why the test takes ~123s).
- During teardown, Prefect's Rich console handler tries to log "Stopping temporary server…" after pytest has already closed stdout, which triggers the "I/O operation on closed file" noise. This logging error is a known issue and not the root cause of the failure. See issue #14845.
The real problem is the server failing to start in your environment. A likely cause (given the Python 3.10 path in your traceback) is a compatibility issue with pydantic 2.10 on older Prefect versions. We recently fixed server startup errors with pydantic 2.10 on Python 3.9/3.10. Upgrading Prefect usually resolves this immediately:
Copy code
pip install -U 'prefect>=3.4.22'
prefect version
Release notes: 3.4.22 notes.
If you still want to confirm the environment:
- Check your pydantic/Python versions:
Copy code
python -c "import pydantic, sys; print('pydantic', pydantic.__version__, 'python', sys.version)"
- Try starting the server manually to see startup logs:
Copy code
prefect server start --log-level DEBUG
If this fails or hangs, we'll see the actual reason in the logs (e.g., database config, port conflict).
Notes and workarounds:
- For unit tests that don't need API/state isolation, you can test flows without the harness:
Copy code
from prefect import flow

@flow
def my_favorite_flow():
    return 42

def test_my_favorite_flow():
    assert my_favorite_flow() == 42
- If you do need the harness, this should work once the server starts correctly:
Copy code
from prefect import flow
from prefect.testing.utilities import prefect_test_harness

@flow
def my_favorite_flow():
    return 42

def test_my_favorite_flow():
    with prefect_test_harness(server_startup_timeout=120):
        assert my_favorite_flow() == 42
Could you share the output of:
- prefect version
- python -c "import pydantic; print(pydantic.__version__)"
And if upgrading doesn't fix it, please run prefect server start --log-level DEBUG and paste the first error or stall point. That will tell us exactly why the ephemeral server can't start in your test harness.
k
@Marvin I confirmed the environment without installing anything new; prefect is/should already be installed:
Copy code
python -c "import pydantic, sys; print('pydantic', pydantic.__version__, 'python', sys.version)"
pydantic 2.11.9 python 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0]
Running the server works fine:
Copy code
prefect server start

 ___ ___ ___ ___ ___ ___ _____
| _ \ _ \ __| __| __/ __|_   _|
|  _/   / _|| _|| _| (__  | |
|_| |_|_\___|_| |___\___| |_|

Configure Prefect to communicate with the server with:

    prefect config set PREFECT_API_URL=http://127.0.0.1:4200/api

View the API reference documentation at http://127.0.0.1:4200/docs

Check out the dashboard at http://127.0.0.1:4200
m
thought for 116.4 seconds
k
@Marvin and prefect:
Copy code
prefect version
Version:              3.4.22
API version:          0.8.4
Python version:       3.10.12
Git commit:           6a6d114f
Built:                Fri, Oct 03, 2025 06:15 PM
OS/Arch:              linux/x86_64
Profile:              ephemeral
Server type:          server
Pydantic version:     2.11.9
Server:
  Database:           sqlite
  SQLite version:     3.37.2
Integrations:
  prefect-kubernetes: 0.6.5
  prefect-docker:     0.6.6
m
thought for 300.2 seconds
Nice; that confirms you have a compatible environment (pydantic 2.x, Python 3.10) and your local Prefect server is running. If your goal is to run flows against this local server, here are the next steps to wire things up and verify:
1) Point your CLI and Python runs at the local server
- Using Prefect config:
Copy code
prefect config set PREFECT_API_URL=http://127.0.0.1:4200/api
prefect config view
- Or via environment variable (shell):
Copy code
export PREFECT_API_URL=http://127.0.0.1:4200/api
Windows PowerShell:
Copy code
setx PREFECT_API_URL "http://127.0.0.1:4200/api"
2) Quick connectivity checks
- From the API docs: open the OpenAPI docs and try GET /health
- Or curl:
Copy code
curl http://127.0.0.1:4200/api/health
- Or Python:
Copy code
python - << 'PY'
import asyncio
from prefect.client.orchestration import get_client

async def main():
    async with get_client() as client:
        response = await client.hello()  # GET /hello; a simple connectivity check
        print("Connected:", response.status_code, "API at", client.api_url)

asyncio.run(main())
PY
3) Create a work pool and start a worker (process type)
- Create the pool:
Copy code
prefect work-pool create my-pool -t process
- Start a worker that pulls runs for that pool:
Copy code
prefect worker start --pool my-pool
4) Deploy or run a simple flow
- Minimal flow (ad-hoc run; will appear in the UI if your API URL is set):
Copy code
from prefect import flow

@flow
def hello(name="world"):
    print(f"hello, {name}")

if __name__ == "__main__":
    hello()
- Code-based deployment (Prefect 3.x):
Copy code
from prefect import flow

@flow
def hello(name="world"):
    print(f"hello, {name}")

if __name__ == "__main__":
    flow.from_source(
        source="file://.",                 # your repo or local directory
        entrypoint="hello.py:hello",       # path:callable
    ).deploy(
        name="hello-deploy",
        work_pool_name="my-pool",
        tags=["dev"],
    )
Note: In Prefect 3.x, use flow.from_source(...).deploy(...), not Deployment.build_from_flow and not prefect deployment build.
If you run into "it works in CLI but not in Python" issues, double-check you're using the same environment:
- CLI path:
Copy code
which prefect
- Python sees Prefect:
Copy code
python -c "import prefect, sys; print('prefect', prefect.__version__, 'python', sys.version)"
- If needed, run the server via the Python module to force the same interpreter:
Copy code
python -m prefect server start
Would you like me to help you create the pool/worker and a minimal deployment now, or are you seeing a specific error after starting the server?
k
@Marvin I'm having a devcontainer, in which
pytest-xdist
is installed. is that an issue?
m
thought for 28.0 seconds
Short answer: just having pytest-xdist installed is not an issue for Prefect. It only comes into play when you actually run pytest with parallel workers (e.g., -n auto). If you do run your Prefect tests in parallel, a few caveats to avoid flaky behavior:
- Don't start multiple Prefect servers at once on the same ports. Start one server per test session (see the fixture sketch at the end of this message), or run those tests serially.
- If your tests create named Prefect resources (work pools, blocks, deployments), give them unique names per worker to avoid collisions. Example:
Copy code
import os
wid = os.environ.get("PYTEST_XDIST_WORKER", "gw0")
pool_name = f"my-pool-{wid}"
- If tests start workers or long-lived processes, prefer serializing those tests (e.g., run them with -n 0) so you don't have multiple workers competing for the same queue or binding to the same ports.
- Keep "unit tests" of flows (calling the function directly) parallel if you like, and reserve serial execution for tests that interact with the server, work pools, or deployments.
If you're not invoking pytest with xdist parallelism, its presence won't affect Prefect at all. Are you seeing a specific error that makes you suspect xdist? If so, share the command and output and I'll help narrow it down.
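If several tests need the harness, Prefect's testing docs show a session-scoped autouse fixture so the whole session shares one temporary server instead of paying the startup cost per test; a minimal version:
Copy code
# conftest.py
import pytest
from prefect.testing.utilities import prefect_test_harness

@pytest.fixture(autouse=True, scope="session")
def prefect_test_fixture():
    # one ephemeral API server and database for the entire test session
    with prefect_test_harness():
        yield
Note that under xdist each worker process is its own pytest session and would start its own harness, so serial execution remains the safer choice for these tests.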
k
@Marvin ok, that might be an issue, since I'm running them in VS Code, so:
1. can I check how VS Code is configured when it comes to tests?
2. can I make sure xdist isn't used when running the prefect tests?
m
thought for 33.6 seconds
Thanks! I see:
- Prefect 3.4.22
- Server (self-hosted) with SQLite
- Profile: ephemeral
- Integrations installed: prefect-docker 0.6.6, prefect-kubernetes 0.6.5
How can I help you next? For example, if you're connecting your CLI to this server or setting up pools/workers, here are quick checks and commands:
- Create/switch profiles and set your API URL
Copy code
prefect profile ls
prefect profile create my-server
prefect profile use my-server
prefect config set PREFECT_API_URL=<your-server-api-url>
prefect config view
Docs: Settings and profiles
- List/create work pools and start a worker
Copy code
prefect work-pool ls
prefect work-pool create my-docker --type docker
prefect work-pool create my-k8s --type kubernetes

# Start a worker for a pool
prefect worker start --pool my-docker
# or
prefect worker start --pool my-k8s
Tip: You already have the Docker/K8s integrations installed, so the above should work as-is. Docs overview: Prefect 3 docs (see the Work Pools/Workers sections)
- Deploy flows (3.x)
- From a project directory:
Copy code
prefect deploy ./path/to/flow.py:my_flow -n my-deployment --pool my-docker
- From code (for remote source repos), use flow.from_source(...).deploy(...) in your Python:
Copy code
from prefect import flow

@flow
def my_flow():
    return "hello"

deployment = my_flow.from_source(
    source="<https://github.com/<owner>/<repo>>",
    entrypoint="path/to/flow.py:my_flow",
).deploy(
    name="my-deployment",
    work_pool_name="my-docker",
)
Docs: Connecting and deploying (3.x)
If you share what you're trying to do (e.g., point the CLI at your server, create a Docker/K8s pool, start a worker, or deploy a flow), I'll tailor the exact steps.
Great questions.
1) How to see how VS Code is running your tests
- Open Settings and search "pytest". Check:
  - python.testing.pytestEnabled
  - python.testing.pytestArgs
  - python.testing.cwd
  - python.envFile (env vars passed to tests)
- Check your workspace file: .vscode/settings.json (project) and your User settings
- Check config files that can inject arguments: pyproject.toml, pytest.ini, or tox.ini, for something like:
Copy code
[tool.pytest.ini_options]  # in pyproject.toml
addopts = "-n auto"
or
Copy code
[pytest]                   # in pytest.ini / tox.ini
addopts = -n auto
- Check env vars that VS Code may load (via python.envFile or the devcontainer):
  - PYTEST_ADDOPTS (appends args to every run)
  - PYTEST_DISABLE_PLUGIN_AUTOLOAD (disables plugin autoload if set to 1)
- Look at the Test output: View > Output > select "Python Test Log" (or "Testing"). If xdist is active you'll see gw0/gw1 worker output or messaging about "numprocesses".
2) Ways to ensure xdist is not used for Prefect tests
Pick the approach that fits your workflow:
- Remove/avoid parallel flags: delete -n auto (or any -n ...) from:
  - python.testing.pytestArgs in VS Code settings
  - addopts in pytest.ini/pyproject.toml/tox.ini
  - the PYTEST_ADDOPTS env var
- Explicitly disable the xdist plugin
- One-off via args (good for VS Code):
Copy code
// .vscode/settings.json
{
  "python.testing.pytestEnabled": true,
  "python.testing.pytestArgs": ["-p", "no:xdist", "tests"]
}
- Project-wide via config:
Copy code
[pytest]
addopts = -p no:xdist
- Environment variable (works well in devcontainers):
Copy code
// .devcontainer/devcontainer.json
{
  "containerEnv": { "PYTEST_ADDOPTS": "-p no:xdist" }
}
or in your VS Code env file:
Copy code
# .env
PYTEST_ADDOPTS=-p no:xdist
and in settings:
Copy code
"python.envFile": "${workspaceFolder}/.env"
- If you sometimes want xdist for other tests: mark Prefect "integration/e2e" tests and run them separately without xdist:
Copy code
# test file
import pytest
pytestmark = pytest.mark.serial
Copy code
[pytest]
markers =
    serial: run without xdist
Then in VS Code, have:
- Normal runs: no special marker (can use xdist if you want)
- Prefect runs: add args to exclude parallelism, e.g. -p no:xdist and -m serial
Quick verification
- In the devcontainer terminal:
Copy code
echo "$PYTEST_ADDOPTS"
pytest --version
pytest -q --help | sed -n '1,80p'   # if you see -n/xdist options, the plugin is available
- Run tests once from VS Code and check the output for any xdist "gw0/gw1" lines; if none, you're not parallelizing.
If you share your current .vscode/settings.json and any pytest.ini/pyproject.toml snippets, I can suggest the minimal change to disable xdist just for the Prefect-related runs.
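If you adopt the serial marker above, a small conftest guard can also make misconfiguration loud instead of flaky. This is a hypothetical helper, not a Prefect or VS Code feature; it relies on pytest-xdist setting PYTEST_XDIST_WORKER (gw0, gw1, …) in each worker's environment:
Copy code
# conftest.py -- skip serial-marked tests if they land on an xdist worker
import os
import pytest

def pytest_runtest_setup(item):
    if "serial" in item.keywords and os.environ.get("PYTEST_XDIST_WORKER"):
        pytest.skip("marked serial; run without xdist (-p no:xdist)")
That way a stray -n auto can't silently parallelize the Prefect tests; they show up as skipped instead.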
k
@Marvin if I run only 1 test, that shouldn't be an issue?
Copy code
pytest python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py 
=============================================================================================================================== test session starts ===============================================================================================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.6.0
rootdir: /workspace
configfile: pytest.ini
plugins: devtools-0.12.2, docker-3.2.3, anyio-4.11.0, asyncio-0.23.8, xdist-3.8.0, postgresql-7.0.2, dash-3.2.0, hydra-core-1.3.2, Faker-37.8.0, cov-6.2.1
asyncio: mode=strict
collected 1 item                                                                                                                                                                                                                                                                  

python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py 
F                                                                                                                                                                                                    [100%]

==================================================================================================================================== FAILURES =====================================================================================================================================
______________________________________________________________________________________________________________________________ test_my_favorite_flow ______________________________________________________________________________________________________________________________

    def test_my_favorite_flow():
>       with prefect_test_harness(server_startup_timeout=120):

python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py:11: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3.10/contextlib.py:135: in __enter__
    return next(self.gen)
/home/vintecc/py310/lib/python3.10/site-packages/prefect/testing/utilities.py:168: in prefect_test_harness
    test_server.start(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <prefect.server.api.server.SubprocessASGIServer object at 0x77ee36386200>, timeout = 120

    def start(self, timeout: Optional[int] = None) -> None:
        """
        Start the server in a separate process. Safe to call multiple times; only starts
        the server once.
    
        Args:
            timeout: The maximum time to wait for the server to start
        """
        if not self.running:
            if self.port is None:
                self.port = self.find_available_port()
            assert self.port is not None, "Port must be provided or available"
            help_message = (
                f"Starting temporary server on {self.address}\nSee "
                "<https://docs.prefect.io/v3/concepts/server#how-to-guides> "
                "for more information on running a dedicated Prefect server."
            )
            subprocess_server_logger.info(help_message)
            try:
                self.running = True
                self.server_process = self._run_uvicorn_command()
                atexit.register(self.stop)
                with httpx.Client() as client:
                    response = None
                    elapsed_time = 0
                    max_wait_time = (
                        timeout
                        or PREFECT_SERVER_EPHEMERAL_STARTUP_TIMEOUT_SECONDS.value()
                    )
                    while elapsed_time < max_wait_time:
                        if self.server_process.poll() == 3:
                            self.port = self.find_available_port()
                            self.server_process = self._run_uvicorn_command()
                            continue
                        try:
                            response = client.get(f"{self.api_url}/health")
                        except httpx.ConnectError:
                            pass
                        else:
                            if response.status_code == 200:
                                break
                        time.sleep(0.1)
                        elapsed_time += 0.1
                    if response:
                        response.raise_for_status()
                    if not response:
                        error_message = "Timed out while attempting to connect to ephemeral Prefect API server."
                        if self.server_process.poll() is not None:
                            error_message += f" Ephemeral server process exited with code {self.server_process.returncode}."
                        if self.server_process.stdout:
                            error_message += (
                                f" stdout: {self.server_process.stdout.read()}"
                            )
                        if self.server_process.stderr:
                            error_message += (
                                f" stderr: {self.server_process.stderr.read()}"
                            )
>                       raise RuntimeError(error_message)
E                       RuntimeError: Timed out while attempting to connect to ephemeral Prefect API server.

/home/vintecc/py310/lib/python3.10/site-packages/prefect/server/api/server.py:921: RuntimeError
------------------------------------------------------------------------------------------------------------------------------ Captured stderr call -------------------------------------------------------------------------------------------------------------------------------
13:49:16.879 | INFO    | prefect - Starting temporary server on http://127.0.0.1:8787
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
-------------------------------------------------------------------------------------------------------------------------------- Captured log call --------------------------------------------------------------------------------------------------------------------------------
INFO     prefect:server.py:881 Starting temporary server on http://127.0.0.1:8787
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
m
Your message was too long, here's your message at the allowed limit: (truncated echo of the message above)
k
@Marvin ok, shorter:
Copy code
pytest python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py 
=============================================================================================================================== test session starts ===============================================================================================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.6.0
rootdir: /workspace
configfile: pytest.ini
plugins: devtools-0.12.2, docker-3.2.3, anyio-4.11.0, asyncio-0.23.8, xdist-3.8.0, postgresql-7.0.2, dash-3.2.0, hydra-core-1.3.2, Faker-37.8.0, cov-6.2.1
asyncio: mode=strict
collected 1 item                                                                                                                                                                                                                                                                  

python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py 
F                                                                                                                                                                                                    [100%]

==================================================================================================================================== FAILURES =====================================================================================================================================
______________________________________________________________________________________________________________________________ test_my_favorite_flow ______________________________________________________________________________________________________________________________

    def test_my_favorite_flow():
>       with prefect_test_harness(server_startup_timeout=120):

python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py:11: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3.10/contextlib.py:135: in __enter__
    return next(self.gen)
/home/vintecc/py310/lib/python3.10/site-packages/prefect/testing/utilities.py:168: in prefect_test_harness
    test_server.start(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <prefect.server.api.server.SubprocessASGIServer object at 0x77ee36386200>, timeout = 120

    def start(self, timeout: Optional[int] = None) -> None:
        ...
                        if self.server_process.stderr:
                            error_message += (
                                f" stderr: {self.server_process.stderr.read()}"
                            )
>                       raise RuntimeError(error_message)
E                       RuntimeError: Timed out while attempting to connect to ephemeral Prefect API server.

/home/vintecc/py310/lib/python3.10/site-packages/prefect/server/api/server.py:921: RuntimeError
------------------------------------------------------------------------------------------------------------------------------ Captured stderr call -------------------------------------------------------------------------------------------------------------------------------
13:49:16.879 | INFO    | prefect - Starting temporary server on http://127.0.0.1:8787
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
-------------------------------------------------------------------------------------------------------------------------------- Captured log call --------------------------------------------------------------------------------------------------------------------------------
INFO     prefect:server.py:881 Starting temporary server on http://127.0.0.1:8787
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
m
Your message was too long, here's your message at the allowed limit: (truncated echo of the message above)
k
@Marvin again, shorter:
Copy code
pytest python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py 
=============================================================================================================================== test session starts ===============================================================================================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.6.0
rootdir: /workspace
configfile: pytest.ini
plugins: devtools-0.12.2, docker-3.2.3, anyio-4.11.0, asyncio-0.23.8, xdist-3.8.0, postgresql-7.0.2, dash-3.2.0, hydra-core-1.3.2, Faker-37.8.0, cov-6.2.1
asyncio: mode=strict
collected 1 item                                                                                                                                                                                                                                                                  

python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py 
F                                                                                                                                                                                                    [100%]

== FAILURES ==
__ test_my_favorite_flow __

    def test_my_favorite_flow():
>       with prefect_test_harness(server_startup_timeout=120):

python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py:11: 
 _ _
/usr/lib/python3.10/contextlib.py:135: in __enter__
    return next(self.gen)
/home/vintecc/py310/lib/python3.10/site-packages/prefect/testing/utilities.py:168: in prefect_test_harness
    test_server.start(
 _ _

self = <prefect.server.api.server.SubprocessASGIServer object at 0x77ee36386200>, timeout = 120

    def start(self, timeout: Optional[int] = None) -> None:
        ...
>                       raise RuntimeError(error_message)
E                       RuntimeError: Timed out while attempting to connect to ephemeral Prefect API server.

/home/vintecc/py310/lib/python3.10/site-packages/prefect/server/api/server.py:921: RuntimeError
-- Captured stderr call --
13:49:16.879 | INFO    | prefect - Starting temporary server on http://127.0.0.1:8787
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
-- Captured log call --
INFO     prefect:server.py:881 Starting temporary server on http://127.0.0.1:8787
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
m
thought for 129.1 seconds
Thanks for the detailed output; that helps.
What your failure means:
- prefect_test_harness spins up a temporary Prefect API at http://127.0.0.1:8787 and waits for /health.
- Your test timed out connecting to that ephemeral API, so the server likely didn't bind or your process couldn't reach 127.0.0.1:8787.
Quick things to try (most to least likely):
1) Ensure proxies are not intercepting localhost
- In devcontainers, proxy vars can cause httpx to route 127.0.0.1 through a proxy (a proxy-free probe is sketched at the end of this message).
Copy code
env | grep -i proxy
# if you see HTTP_PROXY / HTTPS_PROXY, set:
export NO_PROXY=127.0.0.1,localhost
# for permanence, add to devcontainer.json or your .env used by VS Code
2) Make sure nothing else is using port 8787
Copy code
ss -lntp | grep :8787 || true
lsof -iTCP:8787 -sTCP:LISTEN -P -n || true
# kill any stray process if found
3) Disable xdist for this test run (even if you didn't pass -n, VS Code or configs can add it)
- One-off:
Copy code
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py
- Or set in VS Code or env:
Copy code
# .vscode/settings.json
"python.testing.pytestArgs": ["-p", "no:xdist", "python/fraio/fraio_orchestrator/tests"]
# or
export PYTEST_ADDOPTS="-p no:xdist"
4) Get more diagnostics
- Increase logging:
Copy code
export PREFECT_LOGGING_LEVEL=DEBUG
pytest -q -p no:xdist path/to/test.py -k test_my_favorite_flow -vv
- Try starting the same server class manually to catch immediate errors:
Copy code
python - << 'PY'
from prefect.server.api.server import SubprocessASGIServer
s = SubprocessASGIServer(host="127.0.0.1", port=8787)
s.start(timeout=60)
print("started")
s.stop()
print("stopped")
PY
If the manual start fails or the port is already in use, that's your root cause. If proxies were set and adding NO_PROXY fixes it, that's the culprit.
Also, to ensure VS Code isn't sneaking in parallelism:
- Check .vscode/settings.json and pytest config files for -n or addopts, and the env var PYTEST_ADDOPTS.
- You can force-disable the plugin project-wide:
Copy code
# pytest.ini
[pytest]
addopts = -p no:xdist
Docs for testing flows: Test workflows in Prefect 3
If you can share:
- the output of env | grep -i proxy
- the result of ss -lntp | grep :8787
- any addopts in your pytest.ini/pyproject.toml
…I'll pinpoint the minimal fix.
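Since httpx is what the harness itself uses for health polling, one more probe may help isolate the proxy theory: repeat the health check with trust_env=False (a real httpx option that makes the client ignore HTTP(S)_PROXY/NO_PROXY entirely). The 8787 port below assumes the default from your earlier run:
Copy code
python - << 'PY'
import httpx

# trust_env=False: no environment proxy variables are consulted at all
with httpx.Client(trust_env=False, timeout=5) as client:
    r = client.get("http://127.0.0.1:8787/api/health")
    print(r.status_code, r.text)
PY
If this succeeds while the harness times out, proxy handling is the culprit and NO_PROXY is the fix.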
k
@Marvin
Copy code
(py310)  /workspace   815-poc-workflow-orchestration ±  env | grep -i proxy
(py310)  ✘  /workspace   815-poc-workflow-orchestration ±  ss -lntp | grep :8787
LISTEN 0      511             127.0.0.1:8787       0.0.0.0:*
no addopts.
Copy code
export PREFECT_LOGGING_LEVEL=DEBUG
pytest -q -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow -vv 
========================================================================================================= test session starts =========================================================================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.6.0 -- /home/vintecc/py310/bin/python3
cachedir: .pytest_cache
rootdir: /workspace
configfile: pytest.ini
plugins: devtools-0.12.2, docker-3.2.3, anyio-4.11.0, asyncio-0.23.8, xdist-3.8.0, postgresql-7.0.2, dash-3.2.0, hydra-core-1.3.2, Faker-37.8.0, cov-6.2.1
asyncio: mode=strict
collected 1 item                                                                                                                                                                                                                      

python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py::test_my_favorite_flow
and after that, the timeout error again.
m
thought for 138.9 seconds
Thanks; that narrows it down. xdist is off, yet we still time out, and we can see 127.0.0.1:8787 is LISTENing. Next, quick checks and two practical workarounds.
Validate what's on 8787 and whether it's healthy
- Identify the process bound to 8787 (run with sudo to see PIDs inside the devcontainer):
Copy code
sudo ss -lntp | grep :8787 || true
sudo lsof -iTCP:8787 -sTCP:LISTEN -P -n || true
- Hit the health endpoint directly:
Copy code
curl -v http://127.0.0.1:8787/api/health
If this returns non-200 or hangs, the server on 8787 isn't serving the API correctly and the harness is timing out waiting for it.
Make sure localhost isn't proxied
- Even if your proxy vars are empty, explicitly set NO_PROXY:
Copy code
export NO_PROXY=127.0.0.1,localhost
Two reliable workarounds
A) Use a different port just for tests (skip the harness, start the same server class yourself)
- This avoids any 8787 conflicts and gives you clearer errors:
Copy code
# conftest.py
import os
import pytest
from prefect.server.api.server import SubprocessASGIServer

@pytest.fixture(scope="session", autouse=True)
def local_prefect_api():
    port = 8788  # choose a free port
    srv = SubprocessASGIServer(port=port)
    srv.start(timeout=120)
    os.environ["PREFECT_API_URL"] = f"<http://127.0.0.1:{port}/api>"
    try:
        yield
    finally:
        srv.stop()
Then run your test without prefect_test_harness. If this works, the issue is specific to the harness's default port/use.
B) Point tests at a dedicated server you start separately
- Start it once:
Copy code
prefect server start
# default API at http://127.0.0.1:4200/api
- In your test env:
Copy code
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist ...
This bypasses the ephemeral server entirely.
If you prefer to keep using prefect_test_harness, try freeing 8787 (kill whatever is bound), then re-run. But if something in your devcontainer periodically occupies 8787, option A (custom fixture on another port) is the most robust.
Send me:
- the output of the sudo lsof/ss commands above
- the response from curl http://127.0.0.1:8787/api/health
and I'll pinpoint whether it's a port collision or the API process failing during startup. Reference: Testing workflows in Prefect 3
k
@Marvin haha, that reference to testing workflows, that's exactly where it fails 😉. Really, I just can't make the simplest example work. So let's figure out what's going wrong?
m
thought for 35.6 seconds
Got it; let's isolate why the ephemeral server isn't responding.
What we know:
- prefect_test_harness starts an API at http://127.0.0.1:8787 and waits for /api/health.
- Your test times out. You do have something listening on 127.0.0.1:8787, but we don't know if it's the Prefect API or another process.
Targeted checks
1) See what's actually bound to 8787
Copy code
sudo ss -lntp | grep :8787 || true
sudo lsof -iTCP:8787 -sTCP:LISTEN -P -n || true
Please paste the command names/PIDs you see.
2) Does the health endpoint respond?
Copy code
curl -v http://127.0.0.1:8787/api/health
- 200 OK with JSON: the server is up; then the harness timing out is odd and we'll inspect logs.
- 404/hang/connection refused: something else is on 8787, or the API process failed to start.
3) Show raw server logs during the test
Run once with stdout/stderr not captured and more logging:
Copy code
export PREFECT_LOGGING_LEVEL=DEBUG
pytest -s -vv -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
If logs are still sparse, try isolating pytest plugins:
Copy code
PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 PREFECT_LOGGING_LEVEL=DEBUG \
pytest -s -vv -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
This helps rule out interference from third-party plugins.
Two quick workarounds while we debug
A) Minimal script outside pytest to confirm the harness itself:
Copy code
python - << 'PY'
import os, time, httpx
from prefect.testing.utilities import prefect_test_harness
os.environ["PREFECT_LOGGING_LEVEL"] = "DEBUG"
try:
    with prefect_test_harness(server_startup_timeout=120):
        print("Harness started")
        r = httpx.get("<http://127.0.0.1:8787/api/health>", timeout=5)
        print("Health:", r.status_code, r.text)
        time.sleep(1)
    print("Harness stopped")
except Exception as e:
    print("Harness failed:", e)
    raise
PY
- If this succeeds, the issue is pytest/plugin-related.
- If it fails the same way, it's the server start or a port conflict.
B) Bypass the harness and use a dedicated server or a custom port
- Start a dedicated server once:
Copy code
prefect server start
# API at http://127.0.0.1:4200/api
- Point the test to it:
Copy code
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
Or start the same embedded server on a different port in a session fixture:
Copy code
# conftest.py
import os, pytest
from prefect.server.api.server import SubprocessASGIServer

@pytest.fixture(scope="session", autouse=True)
def local_prefect_api():
    port = 8788
    srv = SubprocessASGIServer(port=port)
    srv.start(timeout=120)
    os.environ["PREFECT_API_URL"] = f"<http://127.0.0.1:{port}/api>"
    try:
        yield
    finally:
        srv.stop()
Docs for reference: Testing workflows in Prefect 3
If you can share:
- the lsof/ss output for 8787
- the result of curl http://127.0.0.1:8787/api/health
- any additional logs from the -s -vv run
…I'll pinpoint the root cause and we'll get your simplest test working.
k
@Marvin I already used that reference: it does not work! I already shared most of the stuff, please try to figure out why the server is not starting properly. This is the main issue!
m
thought for 100.5 seconds
Thanks for sticking with this; let's zero in on why the ephemeral API on 127.0.0.1:8787 isn't answering health checks even though something is listening there.
Most likely causes:
- Another process is already bound to 8787 (not Prefect), so the harness keeps polling /api/health and times out.
- The harness's server process starts but crashes during init (e.g., DB path/permissions), so the socket remains but the app never serves /api/health.
Targeted checks and fixes
1) Identify what's on 8787 and kill it if it's not Prefect
Copy code
# Show the owning process
ps -Af | grep 8787 | grep -v grep || true
# or (if available)
lsof -iTCP:8787 -sTCP:LISTEN -P -n || true
# or
ss -lntp | grep :8787 || true
If it's not a Prefect uvicorn process, kill it and retry the test.
2) Verify the health endpoint
Copy code
curl -v http://127.0.0.1:8787/api/health
- 200 OK: the server is healthy; the timeout would be surprising, so paste the response.
- 404/hang: not Prefect, or the server crashed; proceed to step 3.
3) Run the same server class manually with debug logs
This isolates Prefect from pytest/plugins and shows server errors directly.
Copy code
python - << 'PY'
import os, time, httpx
from prefect.server.api.server import SubprocessASGIServer

os.environ["PREFECT_LOGGING_LEVEL"] = "DEBUG"

srv = SubprocessASGIServer(port=8787)
try:
    srv.start(timeout=120)
    print("Server started")
    r = httpx.get("<http://127.0.0.1:8787/api/health>", timeout=5)
    print("Health:", r.status_code, r.text)
finally:
    srv.stop()
    print("Server stopped")
PY
- If this fails, the console will show the underlying crash (e.g., a sqlite permission error or missing asset). Share the output.
- If this succeeds but pytest still fails, it's pytest/plugin interference; go to step 4.
4) Rule out plugin interference quickly
Run the single test with plugins disabled and no xdist:
Copy code
PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 PREFECT_LOGGING_LEVEL=DEBUG \
pytest -s -vv -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
If this works, we'll re-enable only the plugins you need.
5) Temporary unblock: bypass the harness
If you need to move now, point tests at a dedicated server (we'll still fix the harness):
Copy code
prefect server start   # in one terminal (API at http://127.0.0.1:4200/api)
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
Please paste:
- the owning process from step 1
- the result of curl to /api/health
- the full output of the manual SubprocessASGIServer run
With that, I can pinpoint whether it's a port collision or a server init crash and give you the exact fix. For reference, Prefect's test guide (which you used) is here: Testing workflows in Prefect 3.
k
@Marvin stop focussing on that port. Whatever port, prefect_test_harness has issues starting the server. This should work out of the box, and it doesn't. So I'm missing something trivial, the question is: what!?
m
thought for 87.8 seconds
Totally hear you; let's focus on the trivial-but-common culprits that break the ephemeral server under pytest in devcontainers.
Top 5 things that usually fix this
1) Clear conflicting Prefect env and use a normal profile (avoid "ephemeral" as your active profile)
The harness relies on its own ephemeral settings; a forced profile/env can interfere.
Copy code
# See what’s set
env | grep '^PREFECT_' || true

# Temporarily clear anything set
unset PREFECT_API_URL PREFECT_API_KEY PREFECT_PROFILE PREFECT_HOME \
      PREFECT_SERVER_API_HOST PREFECT_SERVER_API_PORT \
      PREFECT_SERVER_DATABASE_CONNECTION_URL

# Use a clean profile (not 'ephemeral')
prefect profile create dev || true
prefect profile use dev
prefect config view
2) Ensure the temp dir is writable (devcontainers sometimes set TMPDIR oddly)
Copy code
echo "$TMPDIR"
export TMPDIR=/tmp
3) Run the harness outside pytest to surface real startup errors (no plugin interference)
Copy code
python - << 'PY'
import os, time, httpx
from prefect.testing.utilities import prefect_test_harness
os.environ["PREFECT_LOGGING_LEVEL"] = "DEBUG"
with prefect_test_harness(server_startup_timeout=120):
    print("harness started")
    r = httpx.get("<http://127.0.0.1:8787/api/health>", timeout=5)
    print("health:", r.status_code, r.text)
print("harness stopped")
PY
- If this fails, the stdout will usually show the exact crash (e.g., permissions, missing env, a sqlite error). Paste that and I'll decode it.
4) Strip pytest plugin interference just for this run
Some plugins tweak event loops and HTTP behavior. Disable autoload once:
Copy code
PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 PREFECT_LOGGING_LEVEL=DEBUG \
pytest -s -vv -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
5) As an immediate unblock, bypass the harness
- Start a dedicated server once:
Copy code
prefect server start
# API at http://127.0.0.1:4200/api
- Point the test at it:
Copy code
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
Why these help:
- An active "ephemeral" profile or forced Prefect env can collide with the harness's own ephemeral settings.
- TMPDIR misconfiguration/permissions in devcontainers can cause the server subprocess to crash during DB init (a quick writability check is sketched below).
- Pytest plugins (asyncio/anyio/cov/docker) occasionally interfere with subprocess startup or IO; disabling autoload isolates that quickly.
If the standalone harness script still times out, the console output will contain the root cause. Share that and I'll give you the exact fix. Reference: Testing workflows in Prefect 3
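For the TMPDIR point in step 2, here is a stdlib-only writability check; it only verifies the directory Python's tempfile resolves, and that the harness uses the same location is an assumption:
Copy code
python - << 'PY'
import os, tempfile

# tempfile honors TMPDIR, which is what step 2 above adjusts
with tempfile.NamedTemporaryFile(prefix="prefect-check-") as f:
    f.write(b"ok")
    f.flush()
    print("temp dir is writable:", os.path.dirname(f.name))
PY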
k
@Marvin
Copy code
(py310)  /workspace   815-poc-workflow-orchestration ±  python - << 'PY'
import os, time, httpx
from prefect.testing.utilities import prefect_test_harness
os.environ["PREFECT_LOGGING_LEVEL"] = "DEBUG"
with prefect_test_harness(server_startup_timeout=120):
    print("harness started")
    r = httpx.get("<http://127.0.0.1:8787/api/health>", timeout=5)
    print("health:", r.status_code, r.text)
print("harness stopped")
PY

14:58:17.692 | DEBUG   | prefect.profiles - Using profile 'dev'
and
Copy code
PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 PREFECT_LOGGING_LEVEL=DEBUG \
pytest -s -vv -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
== test session starts ==
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.6.0 -- /home/vintecc/py310/bin/python3
cachedir: .pytest_cache
rootdir: /workspace
configfile: pytest.ini
collecting ... 14:58:46.633 | DEBUG   | prefect.profiles - Using profile 'dev'
collected 1 item                                                                                                                                                                                                                      

python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py::test_my_favorite_flow 14:58:47.702 | INFO    | prefect - Starting temporary server on http://127.0.0.1:8726
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
FAILED

== FAILURES ==
__test_my_favorite_flow __

    @contextlib.contextmanager
    def map_httpcore_exceptions() -> typing.Iterator[None]:
...
>           yield

/home/vintecc/py310/lib/python3.10/site-packages/httpx/_transports/default.py:101: 
_ _
/home/...
/usr/lib/python3.10/contextlib.py:153: in __exit__
    self.gen.throw(typ, value, traceback)
_ _

map = {<class 'TimeoutError'>: <class 'httpcor...
E           httpx.ReadTimeout: timed out

/home/vintecc/py310/lib/python3.10/site-packages/httpx/_transports/default.py:118: ReadTimeout
-- Captured log call --
INFO     prefect:server.py:881 Starting temporary server on http://127.0.0.1:8726
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
== short test summary info ==
FAILED python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py::test_my_favorite_flow - httpx.ReadTimeout: timed out
== 1 failed in 7.53s ==
14:58:52.969 | INFO    | prefect - Stopping temporary server on http://127.0.0.1:8726
14:58:53.029 | DEBUG   | prefect.profiles - Using profile 'dev'
(py310)  ✘  /workspace   815-poc-workflow-orchestration ± 
m
Your message was too long, here's your message at the allowed limit: (truncated echo of the message above)
thought for 74.8 seconds
I hear you; let's ignore the port and fix why the ephemeral server fails to come up. In practice, the failures I see most often in devcontainers are due to environment/profile conflicts or temp-dir/SQLite permission issues. Here's a tight checklist that usually resolves it:
1) Reset Prefect env and use a clean profile
- The harness brings its own ephemeral config; stray PREFECT_* env vars can break it.
Copy code
env | grep '^PREFECT_' || true

unset PREFECT_API_URL PREFECT_API_KEY PREFECT_PROFILE PREFECT_HOME \
      PREFECT_SERVER_API_HOST PREFECT_SERVER_API_PORT \
      PREFECT_SERVER_DATABASE_CONNECTION_URL

prefect profile create dev || true
prefect profile use dev
prefect config view
2) Ensure the temp directory is writable and simple - Devcontainers sometimes set TMPDIR oddly; force /tmp.
Copy code
echo "$TMPDIR"
export TMPDIR=/tmp
3) Sanity-check aiosqlite can create/use a DB in TMPDIR
Copy code
python - << 'PY'
import asyncio, aiosqlite, os, tempfile
path = os.path.join(tempfile.gettempdir(), "prefect-ephemeral.db")
async def main():
    async with aiosqlite.connect(path) as db:
        await db.execute("select 1")
        await db.commit()
    print("aiosqlite ok at:", path)
asyncio.run(main())
PY
4) Start the same server class directly to surface the real error - This bypasses pytest/plugins and shows startup failures.
Copy code
python - << 'PY'
import os, httpx
from prefect.server.api.server import SubprocessASGIServer
os.environ["PREFECT_LOGGING_LEVEL"] = "DEBUG"
srv = SubprocessASGIServer()  # let it pick a free port
try:
    srv.start(timeout=120)
    print("started on:", srv.api_url if hasattr(srv, "api_url") else "unknown")
    # try health
    url = getattr(srv, "api_url", "http://127.0.0.1:4200/api")
    print("health GET:", url + "/health")
    r = httpx.get(url + "/health", timeout=5)
    print("health:", r.status_code, r.text)
finally:
    srv.stop()
    print("stopped")
PY
- If this throws, the traceback will show the exact startup problem (e.g., permission/DB/migration). Paste it and I’ll give the precise fix. 5) If you need an immediate unblock - Run a dedicated server and point tests at it (no harness):
Copy code
prefect server start
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py -k test_my_favorite_flow
Why these steps - Stray PREFECT_* env or an active “ephemeral”/custom profile can conflict with the harness’ own ephemeral settings. - TMPDIR/permissions in devcontainers frequently trip the ephemeral SQLite DB creation. - The direct SubprocessASGIServer run will reveal the underlying error message that the harness swallows behind a timeout. Run steps 1–4. If step 4 fails, please share the traceback/stdout — that will tell us exactly what’s preventing the server from starting.
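If you'd rather bake steps 1–2 into the test suite itself, here's a minimal session-scoped fixture — a sketch that assumes the documented prefect_test_harness import path and that your tests don't depend on any pre-set PREFECT_* variables:
Copy code
import os

import pytest
from prefect.testing.utilities import prefect_test_harness


@pytest.fixture(autouse=True, scope="session")
def prefect_harness():
    # Assumption: the suite itself needs none of these; drop strays so the
    # harness can build its own ephemeral configuration.
    for key in [k for k in os.environ if k.startswith("PREFECT_")]:
        os.environ.pop(key, None)
    # Same harness call as in the failing test, shared across the session.
    with prefect_test_harness(server_startup_timeout=120):
        yield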
k
@Marvin ok, so again, the same thing:
Copy code
(py310)  /workspace   815-poc-workflow-orchestration ±  python - << 'PY'
import os, httpx
from prefect.server.api.server import SubprocessASGIServer
os.environ["PREFECT_LOGGING_LEVEL"] = "DEBUG"
srv = SubprocessASGIServer()  # let it pick a free port
try:
    srv.start(timeout=120)
    print("started on:", srv.api_url if hasattr(srv, "api_url") else "unknown")
    # try health
    url = getattr(srv, "api_url", "http://127.0.0.1:4200/api")
    print("health GET:", url + "/health")
    r = httpx.get(url + "/health", timeout=5)
    print("health:", r.status_code, r.text)
finally:
    srv.stop()
    print("stopped")
PY
15:04:19.257 | DEBUG   | prefect.profiles - Using profile 'dev'
stopped
Traceback (most recent call last):
  File "<stdin>", line 6, in <module>
  File "/home/vintecc/py310/lib/python3.10/site-packages/prefect/server/api/server.py", line 921, in start
    raise RuntimeError(error_message)
RuntimeError: Timed out while attempting to connect to ephemeral Prefect API server.
Now, let's try to solve it, instead of confirming the timeout issue
m
thought for 107.8 seconds
Thanks for pushing on this — you’re right, the core problem is the ephemeral Prefect API subprocess never becomes healthy. Let’s fix the typical root causes that block SubprocessASGIServer in devcontainers/venvs. What most often fixes this in practice 1) Ensure the child subprocess can import Prefect with the same interpreter The harness spawns a child Python process. If that child can’t import Prefect, the server dies immediately and you only see a timeout. - Verify the exact interpreter and that a child process sees Prefect:
Copy code
python - << 'PY'
import sys, subprocess, os
print("parent sys.executable:", sys.executable)
cp = subprocess.run([sys.executable, "-c",
    "import sys, prefect; print('child sees prefect', prefect.__version__, 'python', sys.version)"],
    capture_output=True, text=True)
print(cp.returncode, cp.stdout, cp.stderr)
PY
If the child cannot import Prefect, activate the same venv in your test run or adjust PATH so the child uses the venv’s python. 2) Reset Prefect settings and use a clean, writable home Stray PREFECT_* settings or an unwritable HOME can break the ephemeral server.
Copy code
# Clear conflicting settings for the current shell
unset PREFECT_API_URL PREFECT_API_KEY PREFECT_PROFILE PREFECT_SERVER_API_HOST \
      PREFECT_SERVER_API_PORT PREFECT_SERVER_DATABASE_CONNECTION_URL

# Use a clean, writable Prefect home and an explicit SQLite file
export PREFECT_HOME=/tmp/prefect-home
mkdir -p "$PREFECT_HOME"
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db

# Keep temp files in /tmp (some devcontainer TMPDIRs cause issues)
export TMPDIR=/tmp

# Optional: keep server logs noisy
export PREFECT_LOGGING_LEVEL=DEBUG
3) Start the same server class again after the reset
Copy code
python - << 'PY'
import os, httpx
from prefect.server.api.server import SubprocessASGIServer
print("PREFECT_HOME:", os.environ.get("PREFECT_HOME"))
print("DB URL:", os.environ.get("PREFECT_SERVER_DATABASE_CONNECTION_URL"))
srv = SubprocessASGIServer()
try:
    srv.start(timeout=120)
    url = getattr(srv, "api_url", "http://127.0.0.1:4200/api")
    r = httpx.get(url + "/health", timeout=5)
    print("health:", r.status_code, r.text)
finally:
    srv.stop()
    print("stopped")
PY
4) Disable pytest plugin interference just for this test run Even though the failure reproduces outside pytest for you, this helps others reading the thread and is safe to keep:
Copy code
PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 pytest -s -vv -p no:xdist path/to/test.py -k test_my_favorite_flow
5) If you need to move now, bypass the harness Start a dedicated server and point tests at it:
Copy code
prefect server start
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist path/to/test.py -k test_my_favorite_flow
Why these help - The harness runs in a child process. If that process does not inherit your venv, Prefect isn’t importable and the server crashes instantly — you only see a timeout. - Devcontainers sometimes set TMPDIR/HOME to locations that are not writable the way the server expects during SQLite/migration init. Pointing PREFECT_HOME and the DB to /tmp avoids subtle permission/mount issues. - Clearing PREFECT_* avoids profile/env collisions with the harness’s own ephemeral settings. If it still times out after steps 1–3, please paste: - The output of the “child sees prefect” check (step 1) - The printed PREFECT_HOME and DB URL and the full stdout/stderr from the SubprocessASGIServer script in step 3 With those, I can give you a precise one-line fix. For reference: Testing workflows in Prefect 3
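If you prefer to pin the clean home/DB from Python instead of shell exports, a minimal sketch (same /tmp paths as above; the variables must be set before the harness spawns its child server):
Copy code
import os
from pathlib import Path

from prefect.testing.utilities import prefect_test_harness

home = Path("/tmp/prefect-home")
home.mkdir(parents=True, exist_ok=True)
os.environ["PREFECT_HOME"] = str(home)
# Three slashes plus the absolute path gives sqlite+aiosqlite:////tmp/... overall.
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"] = f"sqlite+aiosqlite:///{home}/ephemeral.db"

with prefect_test_harness(server_startup_timeout=120):
    print("harness is up")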
k
@Marvin so, let's take another route: I can run it outside my devcontainer, but it fails inside my devcontainer:
Copy code
✗ python -m pytest -q python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py::test_my_favorite_flow -s

15:13:23.299 | INFO    | prefect - Starting temporary server on http://127.0.0.1:8515
See https://docs.prefect.io/v3/concepts/server#how-to-guides for more information on running a dedicated Prefect server.
15:13:29.323 | INFO    | Flow run 'gregarious-sidewinder' - Beginning flow run 'gregarious-sidewinder' for flow 'my-favorite-flow'
15:13:29.326 | INFO    | Flow run 'gregarious-sidewinder' - View at http://127.0.0.1:8515/runs/flow-run/c8d49e24-6812-4380-8ae4-cdea392a298f
15:13:29.340 | INFO    | Flow run 'gregarious-sidewinder' - Finished in state Completed()
15:13:29.352 | INFO    | prefect - Stopping temporary server on http://127.0.0.1:8515
.
1 passed in 9.13s
So let's focus on what can be wrong inside the devcontainer
m
thought for 53.3 seconds
Got it — since it works on your host but not inside the devcontainer, something about the container environment is blocking the server subprocess from becoming healthy. Let’s zero-in on the few things that typically break SubprocessASGIServer in containers and get concrete signal from the server process. Do these inside the devcontainer: 1) Verify the child process can import Prefect with the same interpreter If the child can’t import Prefect, the server dies immediately and you only see a timeout.
Copy code
python - << 'PY'
import sys, subprocess
print("parent:", sys.executable)
cp = subprocess.run(
    [sys.executable, "-c", "import sys, prefect; print('child:', sys.executable, 'prefect', prefect.__version__)"],
    capture_output=True, text=True
)
print("rc:", cp.returncode)
print("stdout:\n", cp.stdout)
print("stderr:\n", cp.stderr)
PY
If rc != 0 or it can’t import prefect, your venv isn’t being used by the child. We’ll fix PATH/venv activation. 2) Force a clean, writable home and DB location This removes TMP/HOME/DB permission issues that are common in devcontainers.
Copy code
export PREFECT_HOME=/tmp/prefect-home
mkdir -p "$PREFECT_HOME"
export TMPDIR=/tmp
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db
export PREFECT_LOGGING_LEVEL=DEBUG
3) Start the same server app in-process to get the real stack trace This avoids the silent timeout and will print why startup fails.
Copy code
python - << 'PY'
import os, uvicorn
from prefect.server.api.server import create_app
os.environ["PREFECT_LOGGING_LEVEL"]="DEBUG"
os.environ["PREFECT_HOME"]=os.environ.get("PREFECT_HOME", "/tmp/prefect-home")
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"]=os.environ.get(
    "PREFECT_SERVER_DATABASE_CONNECTION_URL",
    "sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db",
)
app = create_app(ephemeral=True)
# Run with a single worker to keep it simple; Ctrl+C to stop if it runs
uvicorn.run(app, host="127.0.0.1", port=8899, log_level="debug")
PY
- If it throws, you’ll see the exact error (e.g., SQLite path/permissions, migration error, missing dependency). Paste that error and I’ll give you the one-line fix. - If it runs, open http://127.0.0.1:8899/api/health in the container to confirm. Then the issue is specific to the subprocess startup; proceed to step 4. 4) Force a clean subprocess start method On some container setups, a forked child can hang during ASGI/uvicorn startup. Forcing “spawn” often fixes it.
Copy code
python - << 'PY'
import os, httpx, multiprocessing as mp
mp.set_start_method("spawn", force=True)

from prefect.server.api.server import SubprocessASGIServer
os.environ["PREFECT_LOGGING_LEVEL"]="DEBUG"
os.environ["PREFECT_HOME"]="/tmp/prefect-home"
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"]="sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db"

srv = SubprocessASGIServer()
try:
    srv.start(timeout=120)
    url = getattr(srv, "api_url", "http://127.0.0.1:4200/api")
    r = httpx.get(url + "/health", timeout=5)
    print("health:", r.status_code, r.text)
finally:
    srv.stop()
    print("stopped")
PY
5) Immediate unblock for your tests If you need to proceed while we pinpoint the root cause: - Start a dedicated server in the devcontainer:
Copy code
prefect server start
- Point tests to it:
Copy code
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py::test_my_favorite_flow -s
Why these steps help - Child import check catches venv/path mismatches (very common in devcontainers). - Explicit PREFECT_HOME/TMPDIR/DB avoids container filesystem quirks that break SQLite/migrations. - Running the app in-process surfaces the exact exception; we won’t have to guess. - Forcing spawn avoids rare fork-related deadlocks on some container bases. Please share: - The output of the child-import test (step 1) - The first error/traceback you see from the in-process uvicorn run (step 3)
- Whether forcing spawn (step 4) makes SubprocessASGIServer healthy With that, I can give you the precise fix for your devcontainer image/config.
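For context on the spawn workaround in step 4, a quick stdlib-only check shows which start method your container defaults to (on Linux it's normally "fork"; step 4 forces a fresh "spawn" interpreter for the child):
Copy code
import multiprocessing as mp

print("default start method:", mp.get_start_method())
print("available methods:", mp.get_all_start_methods())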
k
@Marvin
Copy code
python - << 'PY'
import os, uvicorn
from prefect.server.api.server import create_app
os.environ["PREFECT_LOGGING_LEVEL"]="DEBUG"
os.environ["PREFECT_HOME"]=os.environ.get("PREFECT_HOME", "/tmp/prefect-home")
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"]=os.environ.get(
    "PREFECT_SERVER_DATABASE_CONNECTION_URL",
    "sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db",
)
app = create_app(ephemeral=True)
# Run with a single worker to keep it simple; Ctrl+C to stop if it runs
uvicorn.run(app, host="127.0.0.1", port=8899, log_level="debug")
PY
INFO:     Started server process [351156]
INFO:     Waiting for application startup.
15:27:01.782 | DEBUG   | prefect.profiles - Using profile 'ephemeral'
but:
Copy code
curl http://127.0.0.1:8899/api/health
hangs
m
thought for 71.8 seconds
Perfect, now we have a clean repro inside the devcontainer: uvicorn says “Waiting for application startup.” and curl to /api/health hangs. That means the app’s startup event never completes. In containers, this is almost always one of these trivial issues: A) The child process can’t import Prefect or required DB drivers B) SQLite driver/deps missing (aiosqlite) C) Prefect home/DB path not writable/doesn’t exist D) The combined “API+UI” app’s startup is blocking (UI static copy) while the pure API app is fine Let’s check each quickly. 1) Confirm the child process sees Prefect
Copy code
python - << 'PY'
import sys, subprocess
print("parent:", sys.executable)
cp = subprocess.run(
    [sys.executable, "-c", "import sys, prefect; print('child:', sys.executable, 'prefect', prefect.__version__)"],
    capture_output=True, text=True
)
print("rc:", cp.returncode)
print("stdout:\n", cp.stdout)
print("stderr:\n", cp.stderr)
PY
If rc != 0 or it can’t import prefect, the server subprocess dies immediately. We’ll fix the venv path. 2) Ensure SQLite async driver is present
Copy code
python - << 'PY'
try:
    import aiosqlite, sqlite3, sqlalchemy, alembic
    print("aiosqlite", aiosqlite.__version__, "sqlite", sqlite3.sqlite_version, "sqlalchemy", sqlalchemy.__version__, "alembic", alembic.__version__)
except Exception as e:
    print("IMPORT ERROR:", e)
    raise
PY
If aiosqlite is missing, install it inside the devcontainer:
Copy code
pip install aiosqlite
3) Use a clean, writable Prefect home and DB file Make sure the directory exists before starting the server.
Copy code
export PREFECT_HOME=/tmp/prefect-home
mkdir -p "$PREFECT_HOME"
export TMPDIR=/tmp
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db
export PREFECT_LOGGING_LEVEL=DEBUG
4) Try API-only app to isolate UI startup If API-only works, the hang is in UI mounting/copy.
Copy code
python - << 'PY'
import os, uvicorn
from prefect.server.api.server import create_api_app
os.environ["PREFECT_LOGGING_LEVEL"]="DEBUG"
os.environ["PREFECT_HOME"]="/tmp/prefect-home"
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"]="sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db"
app = create_api_app()
uvicorn.run(app, host="127.0.0.1", port=8899, log_level="debug")
PY
Then:
Copy code
curl -v http://127.0.0.1:8899/health
Note: with API-only, health is at /health (not /api/health). 5) If API-only works, try full app again after ensuring the directory exists Your earlier run didn’t mkdir /tmp/prefect-home before starting. Do that, then:
Copy code
python - << 'PY'
import os, uvicorn
from prefect.server.api.server import create_app
os.environ["PREFECT_LOGGING_LEVEL"]="DEBUG"
os.environ["PREFECT_HOME"]="/tmp/prefect-home"
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"]="sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db"
app = create_app(ephemeral=True)
uvicorn.run(app, host="127.0.0.1", port=8899, log_level="debug")
PY
Health for the full app is at:
Copy code
curl -v http://127.0.0.1:8899/api/health
6) If it still hangs, force a fresh process start method Some container bases misbehave with forked children.
Copy code
python - << 'PY'
import os, httpx, multiprocessing as mp
mp.set_start_method("spawn", force=True)
from prefect.server.api.server import SubprocessASGIServer

os.environ["PREFECT_LOGGING_LEVEL"]="DEBUG"
os.environ["PREFECT_HOME"]="/tmp/prefect-home"
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"]="sqlite+aiosqlite:////tmp/prefect-home/ephemeral.db"

srv = SubprocessASGIServer()
try:
    srv.start(timeout=120)
    url = getattr(srv, "api_url")
    print("api:", url)
    r = httpx.get(url + "/health", timeout=5)
    print("health:", r.status_code, r.text)
finally:
    srv.stop()
PY
Immediate unblock for tests - Since "prefect server start" works in the container, you can point tests at it and proceed:
Copy code
prefect server start
# in a separate terminal:
export PREFECT_API_URL=http://127.0.0.1:4200/api
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py::test_my_favorite_flow -s
Please share: - Output from the child-import test (step 1) - Output from the aiosqlite/alembic import check (step 2) - Whether the API-only app responds at /health after the mkdir and DB env changes (step 4) One of these usually reveals the simple root cause (missing aiosqlite, unwritable/absent PREFECT_HOME, or UI app startup blocking). Once we see which, I'll give you the precise one-liner fix to make prefect_test_harness work in your devcontainer.
k
@Marvin after a bit more debugging: in the devcontainer, the application doesn't start properly:
Copy code
prefect server start --host 0.0.0.0 --no-ui --no-scheduler --log-level DEBUG

Configure Prefect to communicate with the server with:

    prefect config set PREFECT_API_URL=http://0.0.0.0:4200/api

View the API reference documentation at http://0.0.0.0:4200/docs

The dashboard is disabled. Set `PREFECT_UI_ENABLED=1` to re-enable it.



INFO:     Started server process [326410]
INFO:     Waiting for application startup.
16:29:51.966 | DEBUG   | prefect.server.services.cancellationcleanup - Starting CancellationCleanup
16:29:51.967 | DEBUG   | prefect.server.services.cancellationcleanup - About to run CancellationCleanup...
16:29:52.099 | DEBUG   | prefect.server.services.foreman - Starting Foreman
16:29:52.099 | DEBUG   | prefect.server.services.foreman - About to run Foreman...
16:29:52.101 | DEBUG   | prefect.server.services.marklateruns - Starting MarkLateRuns
16:29:52.101 | DEBUG   | prefect.server.services.marklateruns - About to run MarkLateRuns...
16:29:52.102 | DEBUG   | prefect.server.services.failexpiredpauses - Starting FailExpiredPauses
16:29:52.102 | DEBUG   | prefect.server.services.failexpiredpauses - About to run FailExpiredPauses...
16:29:52.103 | DEBUG   | prefect.server.services.repossessor - Starting Repossessor
16:29:52.103 | DEBUG   | prefect.server.services.repossessor - About to run Repossessor...
16:29:52.104 | DEBUG   | prefect.server.services.task_run_recorder - TaskRunRecorder started
16:29:52.104 | DEBUG   | prefect.server.services.telemetry - Starting Telemetry
16:29:52.104 | DEBUG   | prefect.server.services.telemetry - About to run Telemetry...
16:29:52.105 | DEBUG   | prefect.server.events.services.event_persister - Event persister started
16:29:52.105 | DEBUG   | prefect.server.services.proactivetriggers - Starting ProactiveTriggers
16:29:52.105 | DEBUG   | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:29:52.106 | DEBUG   | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 16:29:52.106096+02:00
16:29:52.106 | DEBUG   | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:29:52.106 | DEBUG   | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 16:29:52.106577+02:00
16:29:52.106 | DEBUG   | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:29:52.106 | DEBUG   | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 16:29:52.106986+02:00
...
Never actually starts.
m
thought for 61.4 seconds
Thanks — focusing squarely on why the API app never finishes startup inside the devcontainer. This pattern (Uvicorn “Waiting for application startup.” and no /api/health) almost always comes down to the DB path/driver in containers. The quickest way to resolve it is to move the server’s SQLite DB off any bind-mounted workspace and into a clean, writable tmp dir, then reset the DB. Try this inside the devcontainer: 1) Force a clean Prefect home and SQLite DB in /tmp - This avoids file-lock quirks on bind mounts and ensures permissions are OK.
Copy code
export PREFECT_LOGGING_LEVEL=DEBUG
export PREFECT_HOME=/tmp/prefect-home
rm -rf /tmp/prefect-home && mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
2) Reset the database and start the API-only server
Copy code
prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- You should see “Application startup complete.” and be able to GET:
Copy code
curl -v http://127.0.0.1:4200/api/health
3) If step 2 still hangs, confirm the SQLite driver is present
Copy code
python - << 'PY'
try:
    import aiosqlite, sqlite3, sqlalchemy, alembic
    print("aiosqlite", aiosqlite.__version__, "sqlite", sqlite3.sqlite_version, "sqlalchemy", sqlalchemy.__version__, "alembic", alembic.__version__)
except Exception as e:
    print("IMPORT ERROR:", e)
    raise
PY
- If aiosqlite is missing, install it:
Copy code
pip install aiosqlite
4) If it’s still stuck, run the API-only app in-process to surface the real error - This bypasses the subprocess wrapper and prints any migration/DB errors directly:
Copy code
python - << 'PY'
import os, uvicorn
from prefect.server.api.server import create_api_app
os.environ["PREFECT_LOGGING_LEVEL"]="DEBUG"
os.environ["PREFECT_HOME"]="/tmp/prefect-home"
os.environ["PREFECT_SERVER_DATABASE_CONNECTION_URL"]="sqlite+aiosqlite:////tmp/prefect-home/server.db"
app = create_api_app()
uvicorn.run(app, host="127.0.0.1", port=8899, log_level="debug")
PY
Then:
Copy code
curl -v http://127.0.0.1:8899/health
Why this works - Devcontainers commonly mount your workspace over a bind/NFS volume. SQLite file locks can stall on those mounts. Using /tmp (container-local) sidesteps that entirely. - A stale or partially-initialized DB can also block startup;
prefect server database reset -y
clears it. - If the aiosqlite driver is absent or mismatched, startup can hang; step 3 confirms the deps. If you run those steps and it still doesn’t start, please paste: - The output of the database reset command - The first 30 lines after
prefect server start …
with DEBUG - The output from the API-only run (step 4), including any traceback We’ll use that to pinpoint the exact blocker and give you a one-liner fix. Links: Settings and profiles, Testing workflows.
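To make the bind-mount hypothesis under "Why this works" testable, here's a small timing probe — the /workspace path is an assumption, adjust it to your actual mount — that creates a SQLite file and takes a write lock on the mounted workspace versus container-local /tmp:
Copy code
import sqlite3
import time

# Assumed paths: "/workspace" is the bind mount, "/tmp" is container-local.
for path in ("/workspace/.lock-probe.db", "/tmp/lock-probe.db"):
    t0 = time.monotonic()
    con = sqlite3.connect(path, timeout=5, isolation_level=None)  # autocommit mode
    con.execute("BEGIN IMMEDIATE")  # take the write lock explicitly
    con.execute("CREATE TABLE IF NOT EXISTS probe (x INTEGER)")
    con.execute("COMMIT")
    con.close()
    print(f"{path}: {time.monotonic() - t0:.3f}s")
If the workspace path is dramatically slower (or errors), that's strong evidence for keeping the server DB in /tmp.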
k
@Marvin in this step:
Copy code
prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
I get the same issue again:
Copy code
The dashboard is disabled. Set `PREFECT_UI_ENABLED=1` to re-enable it.

INFO:     Started server process [328375]
INFO:     Waiting for application startup.
16:35:35.220 | DEBUG   | prefect.server.services.cancellationcleanup - Starting CancellationCleanup
16:35:35.221 | DEBUG   | prefect.server.services.cancellationcleanup - About to run CancellationCleanup...
16:35:35.224 | DEBUG   | prefect.server.services.foreman - Starting Foreman
16:35:35.224 | DEBUG   | prefect.server.services.foreman - About to run Foreman...
16:35:35.225 | DEBUG   | prefect.server.services.marklateruns - Starting MarkLateRuns
16:35:35.226 | DEBUG   | prefect.server.services.marklateruns - About to run MarkLateRuns...
16:35:35.227 | DEBUG   | prefect.server.services.failexpiredpauses - Starting FailExpiredPauses
16:35:35.227 | DEBUG   | prefect.server.services.failexpiredpauses - About to run FailExpiredPauses...
16:35:35.227 | DEBUG   | prefect.server.services.repossessor - Starting Repossessor
16:35:35.228 | DEBUG   | prefect.server.services.repossessor - About to run Repossessor...
16:35:35.228 | DEBUG   | prefect.server.services.recentdeploymentsscheduler - Starting RecentDeploymentsScheduler
16:35:35.228 | DEBUG   | prefect.server.services.recentdeploymentsscheduler - About to run RecentDeploymentsScheduler...
16:35:35.230 | DEBUG   | prefect.server.services.scheduler - Starting Scheduler
16:35:35.230 | DEBUG   | prefect.server.services.scheduler - About to run Scheduler...
16:35:35.231 | DEBUG   | prefect.server.services.task_run_recorder - TaskRunRecorder started
16:35:35.232 | DEBUG   | prefect.server.services.telemetry - Starting Telemetry
16:35:35.232 | DEBUG   | prefect.server.services.telemetry - About to run Telemetry...
16:35:35.233 | DEBUG   | prefect.server.events.services.event_persister - Event persister started
16:35:35.233 | DEBUG   | prefect.server.services.proactivetriggers - Starting ProactiveTriggers
16:35:35.233 | DEBUG   | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:35:35.233 | DEBUG   | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 16:35:35.233795+02:00
16:35:35.234 | DEBUG   | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:35:35.234 | DEBUG   | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 16:35:35.234253+02:00
16:35:35.234 | DEBUG   | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:35:35.234 | DEBUG   | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 16:35:35.234648+02:00
...
So, inside the devcontainer, it doesn't start
m
thought for 83.8 seconds
Thanks — now we know this is specific to your devcontainer and happens even with CLI. The app is stuck in “Waiting for application startup”, which means something in startup is blocking. In containers, the usual culprits are background services (especially telemetry/analytics) or SQLite on mounted volumes. Let’s isolate fast. Do these inside the devcontainer: 1) Force a clean, local DB and home - Avoids bind-mount locking issues.
Copy code
export PREFECT_LOGGING_LEVEL=DEBUG
export PREFECT_HOME=/tmp/prefect-home
rm -rf /tmp/prefect-home && mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
2) Start the API with NO SERVICES - If this responds, we’ve proven a service is hanging startup.
Copy code
prefect server database reset -y
prefect server start --host 127.0.0.1 --no-services --no-ui --log-level DEBUG
# In another shell:
curl -v http://127.0.0.1:4200/api/health
3) If that works, re-enable services but disable analytics/telemetry - Telemetry calling out through a proxy-less container is a common hang.
Copy code
export PREFECT_SERVER_ANALYTICS_ENABLED=0
prefect server start --host 127.0.0.1 --no-ui --analytics-off --log-level DEBUG
curl -v http://127.0.0.1:4200/api/health
4) If you want the harness to work without services (most tests don’t need them), set this before pytest
Copy code
export PREFECT_API_SERVICES_ENABLED=0
export PREFECT_SERVER_ANALYTICS_ENABLED=0
pytest -p no:xdist python/fraio/fraio_orchestrator/tests/fraio_orchestrator/test_dummy.py::test_my_favorite_flow -s
5) Double-check the child interpreter can import Prefect - If the server subprocess can’t import Prefect, it dies and you see only a timeout.
Copy code
python - << 'PY'
import sys, subprocess
cp = subprocess.run([sys.executable, "-c", "import prefect; print(prefect.__version__)"], capture_output=True, text=True)
print("rc:", cp.returncode, "out:", cp.stdout, "err:", cp.stderr)
PY
6) Confirm the async SQLite driver is present
Copy code
python - << 'PY'
import aiosqlite, sqlite3
print("aiosqlite", aiosqlite.__version__, "sqlite", sqlite3.sqlite_version)
PY
If aiosqlite is missing, install it in the devcontainer:
Copy code
pip install aiosqlite
Outcome guide: - If “--no-services” starts and health is OK, a service is hanging startup; keeping services off for tests is fine. Otherwise disable just analytics (step 3). - If even “--no-services” hangs, the issue is deeper (DB/driver/permissions). The clean /tmp DB + aiosqlite check in steps 1 and 6 should address that; if not, share the first 30 lines after startup at DEBUG and we’ll pinpoint the exact blocker. Reference: Testing workflows in Prefect 3
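If you'd rather not babysit curl across these runs, a small polling helper — a sketch assuming the default 4200 port and the httpx client already in your environment — gives a hard pass/fail instead of hanging:
Copy code
import time

import httpx


def wait_healthy(url: str = "http://127.0.0.1:4200/api/health", deadline: float = 30.0) -> bool:
    """Poll the health endpoint until it returns 200 or the deadline passes."""
    start = time.monotonic()
    while time.monotonic() - start < deadline:
        try:
            if httpx.get(url, timeout=2).status_code == 200:
                return True
        except httpx.HTTPError:
            pass  # server not accepting connections yet
        time.sleep(0.5)
    return False


print("healthy:", wait_healthy())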
k
@Marvin ok, so let's compare, this is on the host:
Copy code
16:40:06.489 | DEBUG | prefect.server.events.services.event_persister - Event persister started
16:40:06.489 | DEBUG | prefect.server.services.proactivetriggers - Starting ProactiveTriggers
16:40:06.489 | DEBUG | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:40:06.490 | DEBUG | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 14:40:11.489737+00:00
16:40:06.490 | INFO | prefect.server.events.services.triggers - ReactiveTriggers starting with unique consumer name: reactive-triggers-kurtsys-ThinkPad-P16v-Gen-1-920427-701abdf6
16:40:06.490 | DEBUG | prefect.server.events.triggers - Loading automations
16:40:06.491 | INFO | prefect.server.events.actions - Starting action message handler
16:40:06.491 | DEBUG | prefect.server.events.services.actions - Actions started
INFO: Application startup complete.
This is on the devcontainer:
Copy code
16:41:20.185 | DEBUG | prefect.server.events.services.event_persister - Event persister started
16:41:20.186 | DEBUG | prefect.server.services.proactivetriggers - Starting ProactiveTriggers
16:41:20.186 | DEBUG | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
16:41:20.186 | DEBUG | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-10-04 16:41:20.186571+02:00
16:41:20.186 | DEBUG | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
1
what's the problem inside the devcontainer?
m
thought for 105.5 seconds
Thanks for the side‑by‑side logs — that’s the key. On your host, startup proceeds past the events services (ReactiveTriggers / Actions) and you see “Application startup complete.” Inside the devcontainer, it never reaches those lines and remains in “Waiting for application startup.” Given you’ve already moved the DB to /tmp and reset it, the most common remaining cause is a background service hanging during startup — typically telemetry/analytics trying to reach the internet from a container without egress or with DNS/proxy issues. Try these inside the devcontainer: 1) Prove it’s a service hang - Start server with services disabled (webserver only):
Copy code
export PREFECT_HOME=/tmp/prefect-home
mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG

prefect server database reset -y
prefect server start --host 127.0.0.1 --no-services --no-ui --log-level DEBUG
- In another shell:
Copy code
curl -v http://127.0.0.1:4200/api/health
If this responds, the webserver is fine and a service is the blocker. 2) Disable analytics/telemetry and start services - This is the most frequent culprit in containers with restricted egress:
Copy code
export PREFECT_SERVER_ANALYTICS_ENABLED=0
prefect server start --host 127.0.0.1 --no-ui --analytics-off --log-level DEBUG
- Check for “Application startup complete.” and:
Copy code
curl -v http://127.0.0.1:4200/api/health
3) Make this permanent in your devcontainer/test env - In devcontainer.json:
Copy code
"containerEnv": {
  "PREFECT_HOME": "/tmp/prefect-home",
  "PREFECT_SERVER_DATABASE_CONNECTION_URL": "sqlite+aiosqlite:////tmp/prefect-home/server.db",
  "PREFECT_SERVER_ANALYTICS_ENABLED": "0",
  "PREFECT_LOGGING_LEVEL": "DEBUG"
}
- For pytest runs (so prefect_test_harness won’t hang):
Copy code
export PREFECT_SERVER_ANALYTICS_ENABLED=0
pytest -p no:xdist ...
Optional sanity checks - Confirm async SQLite driver is present:
Copy code
python - << 'PY'
import aiosqlite, sqlite3
print("aiosqlite", aiosqlite.__version__, "sqlite", sqlite3.sqlite_version)
PY
- Confirm the server subprocess can import Prefect (venv is consistent):
Copy code
python - << 'PY'
import sys, subprocess
cp = subprocess.run([sys.executable, "-c", "import prefect; print(prefect.__version__)"], capture_output=True, text=True)
print("rc:", cp.returncode, "out:", cp.stdout, "err:", cp.stderr)
PY
If: - “--no-services” works but “--analytics-off” still hangs, we can selectively disable other services next (but analytics is the usual fix). - Even “--no-services” hangs, then something deeper (filesystem/driver) is still off; but you’ve already moved DB to /tmp and reset, so that’s unlikely. Once “--analytics-off” allows “Application startup complete.”, your prefect_test_harness should work as long as you export
PREFECT_SERVER_ANALYTICS_ENABLED=0
in the test environment. Want me to provide a minimal .env and .vscode/settings.json snippet to make this automatic for your devcontainer and VS Code Python tests?
k
@Marvin aha, with --no-services, it seems to run. Now, which service(s) can be the cause? Can we switch them off one by one?
m
thought for 50.9 seconds
Yes — we can disable services selectively to find the blocker and keep a stable setup in your devcontainer. The Prefect server services you see in logs map to these toggles under ServerServicesSettings: - cancellation_cleanup - event_persister - event_logger - foreman - late_runs - scheduler - pause_expirations - repossessor - task_run_recorder - triggers You can disable them via environment variables before starting the server (or running tests). Each service has an enabled flag under the
PREFECT_API_SERVICES_*
namespace. Try this bisect approach inside the devcontainer: 1) Start with everything off, then enable in groups
Copy code
export PREFECT_HOME=/tmp/prefect-home
mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG

# Disable all services
export PREFECT_API_SERVICES_ENABLED=0

prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
# Should say "Application startup complete."
2) Enable services gradually to find the culprit - Group 1: core scheduling loop
Copy code
export PREFECT_API_SERVICES_ENABLED=1
export PREFECT_API_SERVICES_SCHEDULER_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- Group 2: late runs + pause expirations
Copy code
export PREFECT_API_SERVICES_LATE_RUNS_ENABLED=1
export PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- Group 3: repossessor + cancellation cleanup + task run recorder
Copy code
export PREFECT_API_SERVICES_REPOSSESSOR_ENABLED=1
export PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED=1
export PREFECT_API_SERVICES_TASK_RUN_RECORDER_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- Group 4: event system
Copy code
export PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED=1
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=1
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
If a start hangs after enabling a group, the culprit is within that group; then toggle one-by-one within that group to find the exact service. Most common offenders in restricted containers: - TRIGGERS or EVENT_LOGGER can perform outbound calls (telemetry/analytics); if your container has no egress or DNS/proxy misconfig, they can hang. - SCHEDULER rarely hangs, but it’s essential; if it hangs, it’s likely DB-related rather than networking. For pytest/prefect_test_harness Once you’ve identified the problematic service(s), export the corresponding env var(s) before running tests so the ephemeral server inherits them, e.g.:
Copy code
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
pytest -p no:xdist ...
If you want, share which service flips it from “Application startup complete.” to “stuck,” and I’ll propose a minimal permanent configuration for your devcontainer.
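Before each bisect run, it's worth dumping exactly which service flags are set so a stale export doesn't skew the result — a trivial sketch:
Copy code
import os

flags = {k: v for k, v in os.environ.items() if k.startswith("PREFECT_API_SERVICES_")}
for name in sorted(flags):
    print(f"{name}={flags[name]}")
if not flags:
    print("(no PREFECT_API_SERVICES_* flags set)")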
k
@Marvin the moment I add one of the services, it seems to fail!?
m
thought for 36.6 seconds
I think we’ve found the root cause: inside your devcontainer, your PREFECT_API_URL is almost certainly set to http://0.0.0.0:4200/api. 0.0.0.0 is a bind address, not a connect address. When any server “service” starts, it calls back into the API using PREFECT_API_URL; if that’s 0.0.0.0, those HTTP calls hang — which is exactly why: - “--no-services” works (no callbacks) - Enabling any single service hangs startup Verify and fix - Check current value:
Copy code
prefect config view | grep -E 'PREFECT_API_URL|source'
- If you see http://0.0.0.0:4200/api, change it to a routable address from inside the container, e.g.:
Copy code
prefect config set PREFECT_API_URL=http://127.0.0.1:4200/api
# or, if you’re binding to a different interface/port, use that reachable address
- Also ensure localhost is not proxied:
Copy code
export NO_PROXY=127.0.0.1,localhost
Then retry a full start with services:
Copy code
prefect server start --host 127.0.0.1 --log-level DEBUG
# Expect to see: "Application startup complete."
curl -v http://127.0.0.1:4200/api/health
Notes - The CLI banner can print 0.0.0.0 when you pass --host 0.0.0.0. Do not set clients (including the server’s own services) to 0.0.0.0 — always use an address you can connect to (127.0.0.1 inside the container, or the container’s IP/hostname as appropriate). - If your container has restricted egress, you can still disable analytics to be safe:
Copy code
export PREFECT_SERVER_ANALYTICS_ENABLED=0
Once PREFECT_API_URL is corrected, prefect_test_harness should work in the devcontainer without --no-services. If it still doesn’t, paste the output of: - prefect config view (just the PREFECT_API_URL line) - The first lines after “Waiting for application startup.” at DEBUG …but I’m confident the 0.0.0.0 API URL is the blocker here.
k
@Marvin
Copy code
prefect config view | grep -E 'PREFECT_API_URL|source'
PREFECT_API_URL='http://127.0.0.1:4200/api' (from profile)
m
thought for 37.8 seconds
Great sleuthing. The diff in your logs is telling: on the host the sequence proceeds to “ReactiveTriggers starting…” and “Actions started” then “Application startup complete.” In the devcontainer it never reaches the reactive triggers/actions lines — it stalls during service startup. Given: - “--no-services” works - Any single service enabled hangs - You’ve already moved the DB to /tmp and reset it - PREFECT_API_URL is 127.0.0.1 (good) The most common blocker in containers is the events/telemetry path (triggers/actions/event logger) trying to talk back to the API or out to the network and getting stuck. Let’s isolate and keep your setup stable. Do this inside the devcontainer: 1) Ensure clean local DB/home and no proxy shenanigans
Copy code
export PREFECT_HOME=/tmp/prefect-home
mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG
export NO_PROXY=127.0.0.1,localhost
# If your container has restricted egress, keep analytics off
export PREFECT_SERVER_ANALYTICS_ENABLED=0
2) Enable services except the event ones and verify startup completes
Copy code
# Turn services on globally
export PREFECT_API_SERVICES_ENABLED=1
# Keep event-related services off first
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
# event_persister is usually fine; leave it on to start
export PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED=1

prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
# Expect "Application startup complete."
3) If that works, re-enable event services one-by-one to find the culprit - Re-enable triggers:
Copy code
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- If still fine, re-enable event logger:
Copy code
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- If either re-enable causes the hang, you’ve found the offending service. 4) If it still hangs even with all three event services off Then the hang is elsewhere; test core services one at a time: - Scheduler only:
Copy code
export PREFECT_API_SERVICES_ENABLED=0
export PREFECT_API_SERVICES_SCHEDULER_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- Then add late_runs, pause_expirations, repossessor, cancellation_cleanup, task_run_recorder one-by-one:
Copy code
export PREFECT_API_SERVICES_LATE_RUNS_ENABLED=1
# ...then next one, etc.
Useful visibility - See which services Prefect thinks are enabled:
Copy code
prefect config view --show-defaults | grep PREFECT_API_SERVICES_
Once you’ve identified the offending service(s), keep them disabled in your devcontainer and tests: - For CLI/server:
Copy code
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
prefect server start --host 127.0.0.1 --no-ui
- For pytest/prefect_test_harness:
Copy code
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
pytest -p no:xdist ...
If you report back which exact service causes the stall, I’ll suggest a minimal permanent config and potential root-cause (e.g., network egress/proxy/DNS quirks specific to that service) so you can decide whether to keep it off in dev or tweak networking to allow it.
k
@Marvin well, with
PREFECT_API_SERVICES_ENABLED
on, it fails. So let's see if we can debug the network?
m
thought for 64.1 seconds
Bingo — this is almost certainly telemetry/analytics blocking in your devcontainer. Why this pattern lines up - --no-services works because it skips all background services, including telemetry. - As soon as you enable any service, Prefect also brings telemetry online. In containers with no egress or a required proxy, telemetry’s first outbound call can hang startup before “Application startup complete.” Quick fix - Turn analytics off and start with services enabled:
Copy code
export PREFECT_HOME=/tmp/prefect-home
mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG
export NO_PROXY=127.0.0.1,localhost
export PREFECT_SERVER_ANALYTICS_ENABLED=0

prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
# Then
curl -v http://127.0.0.1:4200/api/health
If this reaches “Application startup complete.” and health returns 200, telemetry was the blocker. Make it sticky (devcontainer/tests) - Add to your devcontainer env (or .env used by VS Code):
Copy code
PREFECT_HOME=/tmp/prefect-home
PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
PREFECT_SERVER_ANALYTICS_ENABLED=0
NO_PROXY=127.0.0.1,localhost
- For pytest/prefect_test_harness:
Copy code
export PREFECT_SERVER_ANALYTICS_ENABLED=0
pytest -p no:xdist ...
If it still hangs after disabling analytics, next suspects are the event services. You can keep services on and just turn off the event ones:
Copy code
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
# event_persister is usually fine; leave it on
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
But in most devcontainers, disabling analytics alone resolves this. Once confirmed, your prefect_test_harness should work in the container as well.
k
@Marvin well:
Copy code
export PREFECT_HOME=/tmp/prefect-home
mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG
export NO_PROXY=127.0.0.1,localhost
export PREFECT_SERVER_ANALYTICS_ENABLED=0

prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
doesn't complete the startup
m
thought for 63.6 seconds
Good find. Since “--no-services” works but any service causes a hang, let’s confirm it’s the events/telemetry path and fix your devcontainer defaults. Do this inside the devcontainer: 1) Disable just the triggers/events services first - Keep services ON, but turn off the likely blockers. Also extend NO_PROXY to include ::1 and the container hostname.
Copy code
export PREFECT_HOME=/tmp/prefect-home
mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG

# Networking sanity
export NO_PROXY=127.0.0.1,localhost,::1,$(hostname)

# Services: ON globally, but disable event-related ones
export PREFECT_API_SERVICES_ENABLED=1
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
# event_persister can stay on; it’s usually fine
export PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED=1

prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
Then:
Copy code
curl -v http://127.0.0.1:4200/api/health
- If this now reaches “Application startup complete.”, the triggers/event_logger path was the culprit. 2) If it still hangs, try scheduler-only to see if core services are fine
Copy code
export PREFECT_API_SERVICES_ENABLED=0
export PREFECT_API_SERVICES_SCHEDULER_ENABLED=1
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- If scheduler-only works, add services back one-by-one: - late_runs, pause_expirations, repossessor, cancellation_cleanup, task_run_recorder - This pinpoints the exact one. 3) Force local loopback (avoid IPv6/proxy pitfalls) and fast-fail network calls - Keep clients on IPv4 only and fast-fail if any callback goes out-of-container:
Copy code
export PREFECT_API_URL=http://127.0.0.1:4200/api
export PREFECT_API_REQUEST_TIMEOUT=5
export NO_PROXY=127.0.0.1,localhost,::1,$(hostname)
4) For tests (prefect_test_harness), apply the same env Add these to your devcontainer env or the shell running pytest:
Copy code
export PREFECT_HOME=/tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
export NO_PROXY=127.0.0.1,localhost,::1,$(hostname)
Optional network debug - See where the server is trying to connect during startup:
Copy code
# In one shell
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG

# In another shell, watch outbound loopback attempts
ss -tnp state syn-sent | grep 4200 || true
ss -tnp | grep 'python' || true
Why this helps - Triggers/event logger open streaming connections back to the API; in containers with odd proxy/DNS/IPv6 settings, those can hang. - NO_PROXY including ::1 and $(hostname) avoids proxying loopback or hostname->loopback. - PREFECT_API_REQUEST_TIMEOUT avoids indefinite hangs and will surface an error if a service keeps trying to call out. Please run step 1 first and tell me: - Do you get “Application startup complete.” with triggers+event_logger disabled? If yes, we’ll make that the default in your devcontainer and re-enable them later if needed. If not, tell me which single service (from step 2) causes the hang.
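One more quick look before step 1: list any proxy variables in the environment that could intercept loopback traffic (stdlib only, no assumptions about Prefect internals):
Copy code
import os

proxy_keys = ("http_proxy", "https_proxy", "all_proxy", "no_proxy")
found = {k: v for k, v in os.environ.items() if k.lower() in proxy_keys}
for key in sorted(found):
    print(f"{key}={found[key]}")
if not found:
    print("(no proxy variables set)")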
k
@Marvin I'm not sure how, but now, it seems to work with
using profile 'local'
- how did I get that profile?
m
thought for 65.2 seconds
Likely one of these happened: - You ran prefect profile populate-defaults This seeds built-in profiles (including “local”). Once populated, you can switch to it:
Copy code
prefect profile populate-defaults
prefect profile ls
prefect profile use local
prefect profile inspect local
- You or a tool set PREFECT_PROFILE=local in your environment The active profile is chosen in this order: 1) CLI flag: prefect --profile local ... 2) Env var: PREFECT_PROFILE=local 3) The “active” entry in your profiles file - Your profiles file was updated Profiles live at: - Linux/macOS: ~/.prefect/profiles.toml - Or a custom path if PREFECT_PROFILES_PATH is set Quickly check what happened:
Copy code
# Show active profile and where each value comes from
prefect config view

# List and inspect profiles
prefect profile ls
prefect profile inspect local

# See if PREFECT_PROFILE env var is set
env | grep ^PREFECT_PROFILE
If you want to make “local” your stable default in the devcontainer: - Add to your container’s env:
Copy code
PREFECT_PROFILE=local
- Or run once:
Copy code
prefect profile use local
Docs: Settings and profiles
k
@Marvin aah, I was mistaken: I still have the issue with this command:
Copy code
export PREFECT_HOME=/tmp/prefect-home
mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG

# Networking sanity
export NO_PROXY=127.0.0.1,localhost,::1,$(hostname)

# Services: ON globally, but disable event-related ones
export PREFECT_API_SERVICES_ENABLED=1
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
# event_persister can stay on; it's usually fine
export PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED=1

prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG > prefect.log 2>&1
m
thought for 46.0 seconds
Thanks — since “no-services” works but enabling services hangs, let’s identify the exact service and confirm if it’s network-related. Run this inside the devcontainer to test each service in isolation - It disables all services, then enables one at a time and checks for “Application startup complete.”. It also puts the DB/home in /tmp and avoids proxies for loopback.
Copy code
export PREFECT_HOME=/tmp/prefect-home
rm -rf /tmp/prefect-home && mkdir -p /tmp/prefect-home
export PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
export PREFECT_LOGGING_LEVEL=DEBUG
export NO_PROXY=127.0.0.1,localhost,::1,$(hostname)
export PREFECT_API_URL=http://127.0.0.1:4200/api
export PREFECT_SERVER_ANALYTICS_ENABLED=0
export PREFECT_API_REQUEST_TIMEOUT=5

# Helper to clear all per-service flags
unset $(env | awk -F= '/^PREFECT_API_SERVICES_/ {print $1}')

services=(CANCELLATION_CLEANUP EVENT_PERSISTER EVENT_LOGGER FOREMAN LATE_RUNS SCHEDULER PAUSE_EXPIRATIONS REPOSSESSOR TASK_RUN_RECORDER TRIGGERS)

for s in "${services[@]}"; do
  echo "=== Testing $s ==="
  # Disable global services, then enable just one
  unset $(env | awk -F= '/^PREFECT_API_SERVICES_/ {print $1}')
  export PREFECT_API_SERVICES_ENABLED=0
  export PREFECT_API_SERVICES_${s}_ENABLED=1

  # Fresh DB each run to avoid cross-run effects
  rm -f /tmp/prefect-home/server.db

  timeout 30s prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG > /tmp/prefect_${s}.log 2>&1
  if grep -q "Application startup complete" /tmp/prefect_${s}.log; then
    echo "$s: OK"
  else
    echo "$s: HANG"
    tail -n +1 /tmp/prefect_${s}.log | sed -n '1,120p'
  fi
done
What this tells us - The first service that prints HANG is the culprit. The last lines in that log usually show which component is stalling. Share the first failing service and the last ~50 lines of its log. If you want to try a quick network-focused run first - Keep all services enabled but make network failures fail fast, and ensure loopback is not proxied:
Copy code
export NO_PROXY=127.0.0.1,localhost,::1,$(hostname)
export PREFECT_API_REQUEST_TIMEOUT=5
export PREFECT_SERVER_ANALYTICS_ENABLED=0
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG > prefect.log 2>&1
tail -n 200 prefect.log
Common outcomes - TRIGGERS or EVENT_LOGGER hangs in restricted networks. If so, leave them off in dev:
Copy code
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
export PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED=0
- If SCHEDULER or FOREMAN hangs, it’s typically DB/loopback config; but you’ve already moved DB to /tmp and set NO_PROXY, so we’ll inspect that log to see the exact call that’s stalling. Once we know which service is the blocker, I’ll give you the minimal permanent env for your devcontainer and for pytest so prefect_test_harness works reliably.
k
@Marvin ok, this seems to be the biggest issue:
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
(there are still a lot of logs, now about
Repossessor
, but I guess that's fine). So, can we figure out what the issue is? Also, it should be OK if I just add this env variable to the devcontainer?
m
thought for 30.6 seconds
Nice work narrowing it down. The culprit is the Triggers service. That service powers reactive automations by subscribing to the server’s event stream; if that subscription blocks (e.g., due to container networking or proxy/DNS quirks), the entire “application startup” never completes. Everything else (flows, scheduler, repossessor, etc.) can run fine without it. Is it safe to disable in your devcontainer? - Yes. If you’re not using reactive automations, set:
Copy code
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
- Keep this in your devcontainer config so both CLI and prefect_test_harness inherit it. Example devcontainer.json:
Copy code
"containerEnv": {
  "PREFECT_HOME": "/tmp/prefect-home",
  "PREFECT_SERVER_DATABASE_CONNECTION_URL": "sqlite+aiosqlite:////tmp/prefect-home/server.db",
  "PREFECT_API_SERVICES_TRIGGERS_ENABLED": "0",
  "PREFECT_SERVER_ANALYTICS_ENABLED": "0",
  "NO_PROXY": "127.0.0.1,localhost,::1,${containerWorkspaceFolderBasename}"
}
Or in a .env that VS Code loads:
Copy code
PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
PREFECT_SERVER_ANALYTICS_ENABLED=0
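If you run tests via prefect_test_harness, you can apply the same setting from a session-scoped pytest fixture so the temporary server also skips the Triggers service. A minimal sketch (fixture name is illustrative; it assumes the temporary server inherits the process environment):
Copy code
# conftest.py - a sketch: disable the Triggers service before the harness
# boots its temporary server
import os

import pytest

from prefect.testing.utilities import prefect_test_harness


@pytest.fixture(autouse=True, scope="session")
def prefect_harness():
    # Must be set before the temporary server process starts
    os.environ["PREFECT_API_SERVICES_TRIGGERS_ENABLED"] = "0"
    with prefect_test_harness(server_startup_timeout=120):
        yield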
If you want to dig into why Triggers hangs in your container
- Quick SSE connectivity check (an API-only run is fine). Start the server (with services enabled but Triggers off):
Copy code
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG
- In another shell, try streaming events:
Copy code
curl -N -H "Accept: text/event-stream" "http://127.0.0.1:4200/api/events/stream?topics=*"
If this hangs or errors unexpectedly, it points at loopback/proxy/DNS behavior in the container. Ensuring NO_PROXY includes 127.0.0.1, localhost, ::1, and the container hostname usually helps:
Copy code
export NO_PROXY=127.0.0.1,localhost,::1,$(hostname)
- Force quick failures instead of indefinite waits while testing:
Copy code
export PREFECT_API_REQUEST_TIMEOUT=5
If Triggers starts after this, a retry loop was waiting on a blocked connection.
- Confirm the server is using IPv4 loopback consistently:
Copy code
export PREFECT_API_URL=http://127.0.0.1:4200/api
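If you want a quick loopback sanity check from Python (the same path a hanging service would take), probing the server’s health endpoint works too; a sketch assuming the default port and httpx, which ships as a Prefect dependency:
Copy code
# probe the API over IPv4 loopback; a proxy or timeout error here mirrors
# what a stalled service would experience
import httpx

resp = httpx.get("http://127.0.0.1:4200/api/health", timeout=5)
print(resp.status_code, resp.text)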
What you lose with triggers disabled
- Reactive automations (event-driven actions). Core orchestration (scheduling, runs, logs, UI) still works.
If you’d like, share a short log snippet from a start attempt with only Triggers enabled (and DEBUG logging) and I can try to spot the exact step it stalls on. Otherwise, adding PREFECT_API_SERVICES_TRIGGERS_ENABLED=0 to your devcontainer is a pragmatic, safe fix for local development and testing.
k
@Marvin
Copy code
curl -N -H "Accept: text/event-stream" http://127.0.0.1:4200/api/events/stream\?topics\=\*

{"detail":"Not Found"}%
Right. But can I still trigger a flow now? I guess not?
m
thought for 31.8 seconds
You can absolutely trigger and run flows without the Triggers service. Triggers is only for reactive automations (event-driven actions); core orchestration still works.
Ways to run flows now
- Ad-hoc run (no deployment needed). If your code calls the flow directly, it will execute locally and still report to the server if PREFECT_API_URL points at it:
Copy code
from prefect import flow

@flow
def my_flow():
    return "hello"

if __name__ == "__main__":
    my_flow()
- Run a deployment via the CLI
1) Create a deployment (either with the CLI or code). For the CLI:
Copy code
prefect deploy ./path/to/flow.py:my_flow -n my-deploy --pool my-pool
2) Start a worker for your pool:
Copy code
prefect work-pool create my-pool --type process   # once
prefect worker start --pool my-pool               # keep this running
3) Trigger the deployment:
Copy code
prefect deployment run 'my-flow/my-deploy' --watch   # default flow name swaps underscores for dashes
- Run a deployment via code (3.x style)
Copy code
from prefect import flow

@flow
def my_flow():
    return "hello"

my_flow.from_source(
    source="file://.",                 # or your repo
    entrypoint="path/to/flow.py:my_flow",
).deploy(
    name="my-deploy",
    work_pool_name="my-pool",
)
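You can also trigger that deployment from Python instead of the CLI; a sketch using run_deployment, assuming the deployment created above and a worker polling my-pool:
Copy code
# trigger the deployment by "<flow name>/<deployment name>"
from prefect.deployments import run_deployment

flow_run = run_deployment(
    name="my-flow/my-deploy",
    timeout=0,  # return immediately; omit to wait for the run to finish
)
print(flow_run.id, flow_run.state)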
Notes
- Your curl to /api/events/stream returning 404 is expected if you hit an API-only server or the endpoint path differs across modes; it doesn’t affect running flows.
- With TRIGGERS disabled you just won’t get reactive event-driven automations; scheduled runs, manual triggers, logging, results, UI, etc., remain fully functional.
- Make sure your CLI points at the server:
Copy code
prefect config set PREFECT_API_URL=http://127.0.0.1:4200/api
If you want, I can give you a minimal end-to-end snippet right now that:
- Creates a pool
- Starts a worker
- Creates a deployment
- Triggers it and watches logs
Or share your flow’s path/name and I’ll tailor the exact commands.
k
@Marvin let's focus on what "reactive triggers" means: so manual triggers are one of them. Which kinds of triggers do work?
m
thought for 87.7 seconds
Great question. Short version: disabling the Triggers service only affects event-driven automations. It does not impact manual runs or time-based scheduling.
What still works with Triggers disabled
- Manual runs
  - Run a flow directly in code
  - Trigger a deployment via CLI or UI (prefect deployment run …)
- Scheduled runs
  - Cron/interval schedules on deployments
- Core orchestration
  - Workers pulling from pools, logging, states, results, UI
What relies on the Triggers service (breaks when disabled)
- Event-driven automations (reactive). Examples:
  - When a flow run fails, send a notification
  - When a flow run completes, trigger another deployment
  - When a task run enters a specific state, run an action
- Custom automations that subscribe to Prefect’s event stream
- These are created via the Automations API/UI and evaluate incoming events in real time. With Triggers off, those automations won’t fire; a sketch of what one looks like in code follows below.
Helpful docs to dig deeper
- Automations and triggers overview: Event triggers in Prefect 3
- Automations API reference: Automations API
If you’re not using automations today, you can safely keep PREFECT_API_SERVICES_TRIGGERS_ENABLED=0 in your devcontainer and everything else will work as expected. If you do plan to use automations, we can troubleshoot the Triggers service networking in your container separately.
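For concreteness, here is roughly what one of those reactive automations looks like when defined from Python; a sketch modeled on the Prefect 3 automations API, with illustrative event names, threshold, and action:
Copy code
# a reactive automation - exactly the kind of thing the Triggers service
# evaluates; it will not fire while that service is disabled
from datetime import timedelta

from prefect.automations import Automation
from prefect.events.actions import CancelFlowRun
from prefect.events.schemas.automations import EventTrigger

Automation(
    name="cancel-noisy-runs",          # illustrative name
    trigger=EventTrigger(
        expect={"my.custom.event"},    # fire on this event name (illustrative)
        posture="Reactive",            # react to events as they arrive
        threshold=3,                   # after three matching events...
        within=timedelta(seconds=10),  # ...within ten seconds
    ),
    actions=[CancelFlowRun()],         # cancel the flow run that emitted them
).create()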
k
@Marvin OK, so, what can be the issue these don't work, while all the rest seems to run fine?
m
thought for 117.2 seconds
Short answer
- The Triggers service opens a long-lived server-sent events (SSE) subscription back to your API. In your devcontainer, that subscription is hanging. Everything else (scheduler, repossessor, workers, manual/cron runs) uses short, regular HTTP calls and works fine.
- Yes, it’s safe to disable Triggers in your devcontainer. You’ll lose reactive automations, but manual runs and schedules are unaffected.
Why Triggers can hang in containers
- Proxy/DNS/loopback quirks: a request to 127.0.0.1/localhost gets routed through a proxy, or over IPv6 (::1) while the server only listens on IPv4 127.0.0.1.
- NO_PROXY is missing the hostname or ::1, so local calls still get proxied.
- The SSE stream endpoint is blocked or timed out by a network policy.
- Less common: a mismatched PREFECT_API_URL (e.g., 0.0.0.0) or a container that can’t resolve its own hostname.
Make your devcontainer stable
Add these to your devcontainer environment so the CLI and tests inherit them:
Copy code
PREFECT_HOME=/tmp/prefect-home
PREFECT_SERVER_DATABASE_CONNECTION_URL=sqlite+aiosqlite:////tmp/prefect-home/server.db
PREFECT_API_URL=http://127.0.0.1:4200/api
PREFECT_API_SERVICES_TRIGGERS_ENABLED=0
PREFECT_SERVER_ANALYTICS_ENABLED=0
NO_PROXY=127.0.0.1,localhost,::1,${containerWorkspaceFolderBasename}
PREFECT_LOGGING_LEVEL=DEBUG
This will:
- Keep everything running
- Disable the Triggers service only in dev (fine unless you actively test automations)
If you want to debug Triggers specifically
- Force quick network error reporting:
Copy code
export PREFECT_API_REQUEST_TIMEOUT=5
- Start with Triggers only to capture logs:
Copy code
export PREFECT_API_SERVICES_ENABLED=0
export PREFECT_API_SERVICES_TRIGGERS_ENABLED=1
prefect server database reset -y
prefect server start --host 127.0.0.1 --no-ui --log-level DEBUG > triggers.log 2>&1
tail -n 200 triggers.log
If it’s a proxy/loopback issue, you’ll typically see httpx timeouts or connection errors. If you share the last ~100 lines from that log, I can pinpoint the exact failure.
What you can still do now
- Manual runs and scheduled deployments work normally with Triggers disabled. Create a pool, start a worker, deploy, and trigger with:
Copy code
prefect work-pool create my-pool --type process
prefect worker start --pool my-pool
prefect deploy ./path/to/flow.py:my_flow -n my-deploy --pool my-pool
prefect deployment run 'my-flow/my-deploy' --watch   # flow name uses dashes
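For local experiments you can also skip the pool/worker pair entirely and serve the flow from one process; a sketch with illustrative names:
Copy code
# serve_flow.py - serves the flow as a long-running deployment from this
# process; no work pool or separate worker required
from prefect import flow


@flow
def my_flow():
    return "hello"


if __name__ == "__main__":
    # Blocks and executes runs triggered from the UI or CLI, e.g.:
    #   prefect deployment run 'my-flow/my-serve'
    my_flow.serve(name="my-serve")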
Docs on event-driven automations if/when you re-enable Triggers: Event triggers in Prefect 3