# marvin-ai
@Marvin if I am using the test harness and I am creating artifacts in my tests, what can I do to test that the artifacts are created as I expect? How can I retrieve them during my tests?
thought for 155.0 seconds
Short answer: yes—you can assert on artifacts created during tests. Run your flow inside a Prefect test harness, then query artifacts via the Prefect client and make assertions on what you expect (key, type, contents, etc.). Shocking levels of excitement all around. Here’s a minimal pattern (Prefect 3.x):
```python
import asyncio

from prefect import flow
from prefect.artifacts import create_markdown_artifact
from prefect.testing.utilities import prefect_test_harness
from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import ArtifactFilter, ArtifactFilterKey, FlowRunFilter, FlowRunFilterId


@flow
def demo_flow():
    create_markdown_artifact("# hello world", key="test-markdown", description="created in a test")
    return "done"


def test_artifact_created_and_retrieved():
    # Isolated ephemeral API/DB for the test
    with prefect_test_harness():
        # Run the flow and capture the flow run id from the returned state
        state = demo_flow(return_state=True)
        flow_run_id = state.state_details.flow_run_id

        async def _check():
            async with get_client() as client:
                # Option 1: fetch by key (recommended for deterministic tests)
                by_key = await client.read_artifacts(
                    artifact_filter=ArtifactFilter(
                        key=ArtifactFilterKey(any_=["test-markdown"])
                    )
                )

                # Option 2: fetch artifacts created by this flow run
                # (pass a FlowRunFilter through read_artifacts' flow_run_filter kwarg)
                by_run = await client.read_artifacts(
                    flow_run_filter=FlowRunFilter(
                        id=FlowRunFilterId(any_=[flow_run_id])
                    )
                )

                # Make assertions
                assert any(a.key == "test-markdown" for a in by_key)
                # You can also assert on type or contents depending on what you created
                md = next(a for a in by_key if a.key == "test-markdown")
                # For markdown artifacts, `md.data` should contain your content
                assert "hello world" in str(getattr(md, "data", ""))

                # Sanity: artifact exists for this flow run
                assert any(a.key == "test-markdown" for a in by_run)

        asyncio.run(_check())
```
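If you'd rather not open the harness inside every test, the same isolation can live in a shared pytest fixture. Here's a minimal sketch of a `conftest.py` along those lines (the fixture name is arbitrary; this assumes you run your suite with pytest):

```python
# conftest.py -- sketch: one ephemeral Prefect API/DB for the whole test session
import pytest

from prefect.testing.utilities import prefect_test_harness


@pytest.fixture(autouse=True, scope="session")
def prefect_test_fixture():
    # Every test in the session talks to the temporary test API, so individual
    # tests no longer need their own `with prefect_test_harness():` block.
    with prefect_test_harness():
        yield
```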
Tips:
- Give your artifacts a unique `key` in tests; it makes retrieval and assertions trivial (one way to generate unique keys is sketched below).
- The test harness isolates state, so listing “all artifacts” is safe if you prefer broad queries.
- You can assert on fields like `key`, `type`, `description`, and the artifact’s `data` payload (a small helper for those checks is sketched below).

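On the first tip, a hex suffix is an easy way to make a key unique per test while staying within the lowercase letters, numbers, and dashes that artifact keys allow (the key name here is just an illustration):

```python
import uuid

# Hypothetical per-test key: pass it to create_markdown_artifact(..., key=unique_key)
# in the flow under test, and to ArtifactFilterKey(any_=[unique_key]) when reading back.
unique_key = f"test-markdown-{uuid.uuid4().hex[:8]}"
```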
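On the last tip, you can bundle those field-level checks into a small helper; this is only a sketch (the "markdown" type string matches what `create_markdown_artifact` produces; link/table artifacts report their own types):

```python
from prefect.client.schemas.objects import Artifact


def assert_markdown_artifact(artifact: Artifact, key: str, expected_text: str) -> None:
    # Hypothetical helper: the same assertions as in _check(), collected in one place
    assert artifact.key == key
    assert artifact.type == "markdown"
    assert artifact.description is not None
    assert expected_text in str(artifact.data)
```

Called as `assert_markdown_artifact(md, "test-markdown", "hello world")` right after the reads in `_check()`.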

Related references from the research agent (for some light bedtime reading):
- Example harness setup in a repo: conftest.py example
- Artifacts API reference (3.x): Artifacts API

Want me to tailor this to your exact artifact type (markdown, link, table, etc.) or confirm the filter you prefer (by key vs. by flow run)? Also, are you on Prefect 3.x?