We're trying to create a task definition for the `prefect server start` containers on a single ECS service, with Postgres on an Aurora cluster and the Prefect UI on another ECS service. Unfortunately, the hasura container keeps stopping and we aren't getting any logs, so I'd like to compare with someone who has a working setup.
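For comparison, here is a minimal sketch of what the hasura container definition could look like with the `awslogs` log driver enabled, so that a stopping container at least leaves its output in CloudWatch. The image tag, port, log group, region, and connection string are illustrative assumptions, not a known-good configuration:

```json
{
  "name": "hasura",
  "image": "hasura/graphql-engine:v1.3.3",
  "essential": true,
  "portMappings": [{ "containerPort": 3000 }],
  "environment": [
    { "name": "HASURA_GRAPHQL_DATABASE_URL", "value": "postgres://user:pass@aurora-endpoint:5432/prefect" },
    { "name": "HASURA_GRAPHQL_SERVER_PORT", "value": "3000" },
    { "name": "HASURA_GRAPHQL_ENABLE_CONSOLE", "value": "true" }
  ],
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/prefect-server",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "hasura"
    }
  }
}
```

With this in place, the container's stdout/stderr should appear in the CloudWatch log group even when the container exits immediately, which usually reveals why hasura is crashing (often a database connection string issue).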
Is there a way to get the `RunNamespacedJob` pod logs into the TASK RUN window in the UI? So far I'm getting just:

```
12:21:32 INFO RunNamespacedJob Job test-app has been created.
12:21:37 INFO RunNamespacedJob Job test-app has been completed.
12:21:37 INFO RunNamespacedJob Job test-app has been deleted.
```

but I'd also like to see what's going on INSIDE the job itself. Thank you in advance! 🙂👍
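One way to surface the job's own output is to read the pod logs yourself after the Job runs and re-log them so they land in the task run. Below is a minimal sketch using the official `kubernetes` Python client; it assumes the Job's pods carry the standard `job-name=<job>` label (which pods created by a Job controller normally do). The function name and structure are illustrative, not a Prefect API:

```python
def fetch_job_pod_logs(core_v1, job_name, namespace="default"):
    """Collect logs from every pod spawned by a Job, keyed by pod name.

    `core_v1` is a kubernetes.client.CoreV1Api instance, passed in
    explicitly so the function is easy to test. Pods created by a Job
    carry a `job-name=<job>` label, which we use as the selector.
    """
    pods = core_v1.list_namespaced_pod(
        namespace=namespace, label_selector=f"job-name={job_name}"
    )
    return {
        pod.metadata.name: core_v1.read_namespaced_pod_log(
            name=pod.metadata.name, namespace=namespace
        )
        for pod in pods.items
    }
```

At runtime you'd construct the client with `kubernetes.config.load_incluster_config()` (or `load_kube_config()` locally) and pass `kubernetes.client.CoreV1Api()`, then emit the returned logs through your task's logger so they show up on the task run page. Note that the Job must not be deleted before you read the logs.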
We have a question about flow storage for each job. We construct the images ourselves and may have many flows. We want to avoid building a unique image for each specific Flow, and we also want to avoid creating one image that contains every flow. Instead, we want to provide a base image that the Kubernetes Agent can run, where the launched Job fetches the necessary Flow at runtime. Ultimately, we want to avoid serializing the Flow into the built image while still using the Kubernetes Agent. One thought we had was to build an image whose entrypoint queries for the Flow, places the serialized Flow in the expected file location, and starts Prefect. Looking for any thoughts on how best to use the k8s agent with these constraints! Thanks
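The entrypoint idea can be sketched storage-agnostically: a small helper that takes any "fetch the serialized flow" callable (an S3 `GetObject`, an HTTP download, a GraphQL lookup of the flow's storage location) and writes the bytes to the path the run expects before handing off to the normal Prefect entrypoint command. The function and argument names here are hypothetical, just to show the shape:

```python
from pathlib import Path


def materialize_flow(fetch_serialized, dest_path):
    """Write a serialized flow to the path the launched Job expects.

    `fetch_serialized` is any zero-argument callable returning the flow's
    bytes; keeping it injectable means the same base image works with
    whatever backing store you choose.
    """
    dest = Path(dest_path)
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_bytes(fetch_serialized())
    return dest
```

The image's entrypoint script would call this and then exec the usual Prefect run command. It may also be worth noting that Prefect's non-Docker storage options (e.g. S3/GCS storage) already work this way, pulling the pickled flow at runtime, so a shared base image plus one of those storage classes may get you there without a custom entrypoint at all.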
It turned out there was an initcontainer that had failed and stopped the deployment. The error mentioned an invalid character in the connection string, and that led me to the fix.
Our database username itself contains an `@`, so most often you actually need to URL encode the username to avoid having two `@` symbols in your connection string, e.g. by replacing `@` with `%40`. This is what I did, and that encoding is what the container was upset about. I hardcoded the full connection string in the helm template with the two `@` symbols and it worked straight away.
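For reference, the usual encoding step looks like this with Python's standard library (the credentials and host below are hypothetical):

```python
from urllib.parse import quote

user = "svc-prefect@corp.example"         # username that itself contains an '@'
password = quote("s3cret/p@ss", safe="")  # reserved chars in the password need encoding too

# Encode the username so only the literal '@' before the host remains.
dsn = f"postgres://{quote(user, safe='')}:{password}@db.internal:5432/prefect"
print(dsn)
# -> postgres://svc-prefect%40corp.example:s3cret%2Fp%40ss@db.internal:5432/prefect
```

Whether a given consumer (hasura, a driver, a helm template) wants the encoded or the raw form is exactly the kind of thing that varies, as the report above shows, so it's worth checking what each component expects.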
Why does Prefect prefer there to be an extra flow file in storage when submitting a flow to the server, versus the serialized variant of a flow that is used to register with the GraphQL API? To motivate the question: we need to invoke the GraphQL API directly, and are therefore serializing flows ourselves. As we currently understand it, we also need to build the flow file in storage and make sure it is referenced in the serialization, but given that the tasks and edges are captured in the serialized flow anyway, we were curious what the storage file is used for; what other information is captured there? Mainly trying to understand the why, rather than build a workaround of some kind. Thanks in advance!
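A loose analogy for the split, using stdlib `pickle` rather than Prefect's cloudpickle-based storage (all names here are hypothetical): the registration payload is descriptive metadata the API can index and display, while the storage file carries objects you can actually load and run, which is what an agent-launched run needs in order to execute your task code.

```python
import json
import pickle


def greet():  # stands in for a task's run() logic
    return "hello from inside the flow"


# Registration-style serialization: names, structure, and a reference
# to where the runnable artifact lives -- but no executable code.
metadata = json.dumps(
    {"name": "demo-flow", "tasks": ["greet"], "storage": "s3://bucket/flow.pkl"}
)

# Storage-style serialization: something you can load and call.
blob = pickle.dumps(greet)
restored = pickle.loads(blob)
print(restored())
# -> hello from inside the flow
```

If that analogy holds for your setup, registering via GraphQL alone gives the server the metadata, but without the storage file there is nothing for the execution environment to load; worth confirming against the Prefect source for your version, though, since this is my reading rather than a documented guarantee.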