# ask-community
p
@Marvin, I am encountering an error similar to the one described here when my flow is deployed on Prefect Server OSS (latest version), installed with Helm on a managed Kubernetes cluster. The only difference from that question is that the parameter that fails validation for me is a datetime.date object. I get the same error when using a datetime.datetime. Here is my exact error:
Copy code
Task run failed with exception: PrefectHTTPStatusError('Client error \'409 Conflict\' for url \'<http://prefect-server.prefectoss.svc.cluster.local:4200/api/deployments/d6150396-f839-4084-8091-401f5a6c53ca/create_flow_run>\'\nResponse:
 {\'detail\': "Error creating flow run: Validation failed for field \'time_start\'. Failure reason: 1704067200.0 is not of type \'string\'"}\n
The bug doesn't occur when using Prefect Cloud. Why?
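For context, here is a simplified sketch of the kind of setup I mean (the flow and deployment names are placeholders; time_start is the real parameter name from the error above):
Copy code
from datetime import date

from prefect import flow

@flow
def my_flow(time_start: date):
    print(f"running for {time_start}")

# The deployment built from this flow is triggered with a date value for
# time_start (via a schedule / run_deployment / the UI), and the server
# rejects the serialized parameter with the 409 above.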
m
thought for 192.9 seconds
Short answer: this almost always points to a client/server version mismatch. You're likely running a Prefect 3.x client against a self-hosted Prefect Server 2.x (installed via Helm), which rejects the 3.x client's encoding of time fields. Prefect Cloud accepts it because Cloud is compatible with both 2.x and 3.x clients.

What's happening
- Your server is validating the request body and expects an ISO 8601 string for the time_start field, but it's receiving a float epoch like 1704067200.0. That's why you see "is not of type 'string'".
- This typically occurs when a Prefect 3.x client talks to a Prefect 2.x OSS server. The serialization and schema expectations differ across the major versions.
- Prefect Cloud runs a compatible API that handles both 2.x and 3.x clients, so you don't see the error there.

Official guidance
- Self-hosted Prefect Server must match the major version of the client you're using. If you use Prefect 3.x in your code/worker, your OSS server must also be Prefect 3.x.
- Docs note: Self-hosted Server (Prefect 3.x)

How to verify quickly
- Check your client version (inside your flow/worker image); a Python check you can run from inside the runtime is also sketched at the end of this message:
Copy code
prefect version
- Check the server image tag running in your cluster (look for the Prefect image):
Copy code
kubectl -n <namespace> get deploy -o wide | grep prefect
kubectl -n <namespace> describe pods | grep -i image:
If you see an image like prefecthq/prefect:2.x.y, that's a 2.x server.

How to fix
- Best fix: align versions.
  - If your code/worker uses Prefect 3.x, upgrade your Helm deployment to a Prefect 3.x server.
  - Start here: Prefect 3.x self-hosted server overview and Server how-to (3.x)
  - If you must stay on a Prefect 2 OSS server, run a Prefect 2.x client/worker for that environment (e.g., pin prefect<3 in your project image).
- Practical workaround (while you align versions): pass only JSON-serializable parameter types (e.g., strings) and parse inside the flow. For date/datetime parameters:
Copy code
# When triggering the run, send strings
parameters = {
    "start_date": my_date.isoformat(),        # date -> 'YYYY-MM-DD'
    "start_datetime": my_dt.isoformat(),      # datetime -> 'YYYY-MM-DDTHH:MM:SS[Z]'
}

# In your flow code
from datetime import date, datetime
from prefect import flow

@flow
def my_flow(start_date: str, start_datetime: str):
    d = date.fromisoformat(start_date)
    dt = datetime.fromisoformat(start_datetime)  # handle timezone as needed
    ...
This avoids non-JSON-native types and the server's strict validation of time fields.

A couple of clarifying questions so I can be 100% sure:
- What does prefect version print from your worker/flow runtime?
- What image tag is your Helm release running for the Prefect server (the prefecthq/prefect:<tag> you see via kubectl)?
- Which Helm chart/version did you install?
Once we confirm those, I can give you step-by-step upgrade or pinning instructions.
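P.S. If it's easier to check from inside the flow/worker runtime (rather than exec-ing into pods), a small snippet like this reports the client's side of the picture; the kubectl commands above are still what confirms the server image:
Copy code
# Run inside the flow/worker environment: shows the client version
# and which API URL it is configured to talk to.
import prefect
from prefect.settings import PREFECT_API_URL

print("Prefect client version:", prefect.__version__)
print("Configured API URL:", PREFECT_API_URL.value())
# Compare the major version here (2.x vs 3.x) with the server image tag
# from the kubectl output above.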
n
Hi @Pierre L, I think Marvin is half right. I bet this is about a version mismatch, but I doubt it's a major version mismatch. What version of Prefect server are you running?
p
@Marvin I did indeed have a worker using an old Prefect v2 image. After switching to the latest image, I have the following. In the worker:
Copy code
kubectl exec -it oss-prefect-worker-c6944b98d-hh9s5 -n prefectoss prefect version

Version:              3.4.19
API version:          0.8.4
Python version:       3.11.13
Git commit:           e7ae41cd
Built:                Fri, Sep 19, 2025 09:26 PM
OS/Arch:              linux/x86_64
Profile:              ephemeral
Server type:          server
Pydantic version:     2.11.9
Server:
  Database:           sqlite
  SQLite version:     3.46.1
Integrations:
  prefect-redis:      0.2.5
  prefect-kubernetes: 0.6.5
In the server:
Copy code
kubectl exec -it prefect-server-657d75c788-ttmlm -n prefectoss prefect version

Version:              3.4.25
API version:          0.8.4
Python version:       3.11.14
Git commit:           8a37e7b1
Built:                Fri, Oct 24, 2025 03:14 PM
OS/Arch:              linux/x86_64
Profile:              ephemeral
Server type:          ephemeral
Pydantic version:     2.12.3
Server:
  Database:           postgresql
  PostgreSQL version: 14.13
Integrations:
  prefect-redis:      0.2.6
kubectl -n prefectoss get deploy -o wide | grep prefect gives:
Copy code
oss-prefect-worker   1/1     1            1           28d   prefect-worker   prefecthq/prefect:3.4.19-python3.11-kubernetes   app.kubernetes.io/component=worker,app.kubernetes.io/instance=prefect-worker,app.kubernetes.io/name=oss-prefect-worker
prefect-server       1/1     1            1           29d   prefect-server   prefecthq/prefect:3.4.25-python3.11              app.kubernetes.io/component=server,app.kubernetes.io/instance=prefect-server,app.kubernetes.io/name=prefect-server
kubectl -n prefectoss describe pods | grep Image gives:
Copy code
Image:         prefecthq/prefect:3.4.19-python3.11-kubernetes
    Image ID:      docker.io/prefecthq/prefect@sha256:28bf360a9eace58b821373c8fd2bebf92e10701743fbb8dd8412b48f3d6c6802
    Image:         prefecthq/prefect:3.4.25-python3.11
    Image ID:      docker.io/prefecthq/prefect@sha256:22c60fc7a040ee457a225c3ef7bf70ec9665ffad6e7f503aa049aa142f05f92c
    Image:           docker.io/bitnamilegacy/postgresql:14.13.0
    Image ID:        docker.io/bitnamilegacy/postgresql@sha256:df6ec02e2b9ae0163c30a91a370d27bf32c1e588942f50f7323e4182ad89d08e
    Image:           docker.io/bitnamilegacy/redis:8.2.1
    Image ID:        docker.io/bitnamilegacy/redis@sha256:25bf63f3caf75af4628c0dfcf39859ad1ac8abe135be85e99699f9637b16dc28
but I get the same error.
m
thought for 855.9 seconds
Thanks for sharing the versions; that rules out the 2.x/3.x mismatch. Given both worker and server are on Prefect 3.x, the error is now squarely about how the parameter value is getting serialized on the way to the server.

What the error means
- The server is validating your deployment's parameter schema and sees a parameter named time_start whose JSON type is "string" (as expected for date/datetime), but it receives a float value 1704067200.0 instead.
- That float looks like a Unix timestamp. Prefect's API expects ISO 8601 strings for date/datetime parameters (e.g., 2024-01-01T00:00:00Z), not numeric epochs.
- This is why you see: "… Validation failed for field 'time_start' … is not of type 'string'".

Why Cloud works
- Cloud normalizes a few inputs and its UI may send ISO 8601 strings for date/datetime parameters. On OSS, depending on how you trigger runs (UI vs. client), you may be sending a numeric epoch that fails validation.

Most common causes we see
- Triggering from the OSS UI parameter form where a date/datetime control posts a numeric value.
- Passing numpy/pandas date-like objects (e.g., numpy.datetime64, pandas.Timestamp) that serialize to epoch numbers.
- Manually constructing parameters with a timestamp (e.g., dt.timestamp()) instead of a string.

Quick ways to prove and unblock
1) If you're starting runs from the UI:
- Switch to the JSON editor for parameters and send strings:
Copy code
{
  "time_start": "2024-01-01T00:00:00Z"
}
- Or use a plain date string for a date parameter:
Copy code
{
  "time_start": "2024-01-01"
}
2) From Python (recommended for testing):
- run_deployment will handle datetime objects correctly. Ensure you send str or Python datetime (not numpy/pandas types):
Copy code
import asyncio
from datetime import datetime, timezone

from prefect.deployments import run_deployment

async def trigger():
    # Strings are always safe
    params = {
        "time_start": datetime(2024, 1, 1, tzinfo=timezone.utc).isoformat()
    }
    return await run_deployment("your-flow/your-deployment", parameters=params)

run = asyncio.run(trigger())
3) From CLI: - Pass parameters as JSON strings:
Copy code
prefect deployment run "your-flow/your-deployment" \
    -p time_start="\"2024-01-01T00:00:00Z\""
Docs on the endpoint: Create flow run from deployment

If your flow signature expects datetime/date
- Prefect's parameter schema will still define them as JSON "string" types (with date/date-time format). That's normal.
- If you often trigger from the UI, one pragmatic way to avoid serialization pitfalls is to accept strings and parse inside the flow:
Copy code
from datetime import date, datetime
from prefect import flow

@flow
def my_flow(time_start: str):
    # Accept ISO strings from UI, parse inside
    # Choose one based on your expected format
    try:
        dt = datetime.fromisoformat(time_start)
    except ValueError:
        d = date.fromisoformat(time_start)
    ...
A couple of questions to nail this down:
- How are you creating the run when you see the error (UI "Run" button, Python run_deployment, or CLI)?
- If UI: can you try the JSON editor with an ISO 8601 string and confirm it succeeds?
- If Python: are you passing any numpy/pandas date types? If so, convert to Python types or to ISO strings (e.g., pd.Timestamp(...).to_pydatetime().isoformat()); there is a small sketch below.

References
- API endpoint details: Create flow run from deployment
- Python helper signature: run_deployment()

If you can share the exact parameters JSON that the failing request uses (e.g., from your browser's network tab or the code snippet), I can verify and suggest a precise fix.
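If it helps, here is a small sketch (assuming pandas/numpy are in the mix; the specific values are just examples) showing why the failing float reads as an epoch and how to normalize such values before sending them as parameters:
Copy code
from datetime import datetime, timezone

import numpy as np
import pandas as pd

# The failing value is a Unix epoch: 1704067200.0 is 2024-01-01T00:00:00Z
print(datetime.fromtimestamp(1704067200.0, tz=timezone.utc).isoformat())
# -> '2024-01-01T00:00:00+00:00'

# Normalize pandas/numpy date-like values to ISO 8601 strings before
# putting them into the parameters dict
ts = pd.Timestamp("2024-01-01", tz="UTC")              # pandas.Timestamp
print(ts.to_pydatetime().isoformat())                  # -> '2024-01-01T00:00:00+00:00'

dt64 = np.datetime64("2024-01-01T00:00:00")            # numpy.datetime64
print(pd.Timestamp(dt64).to_pydatetime().isoformat())  # -> '2024-01-01T00:00:00'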
p
@Nate Thank you for your response. I upgraded both the server and the worker to 3.4.25, but I still get the error. It looks like the error is actually not related to my setup, because it has appeared since yesterday in code of ours that was working previously. Maybe it is related to a library upgrade.
It was related to this issue and is fixed in Prefect 3.5.0.
🙌 1