Vaikath Job
09/22/2022, 6:46 PM

Ilya Galperin
09/22/2022, 7:02 PM

Sam Garvis
09/22/2022, 7:15 PM

Amol Shirke
09/22/2022, 7:23 PM

Luis Henrique
09/22/2022, 7:33 PM

kiran
09/22/2022, 8:29 PM

Michał Augoff
09/22/2022, 9:01 PM
We get the warning "Can't connect to Orion API at https://<internal-url>/api. Check that it's accessible from your machine." even though the connection between the API and UI works fine and we can see the data in the UI. Could that just be a false warning? Any way to get rid of it? We set the URL via an env var as part of the k8s deployment.

Tony Popov
09/22/2022, 9:26 PM
How does the KubernetesJob block work? There's almost no info in the docs, and the GitHub examples are quite confusing. Context is setting up the CI deployment of Prefect flows.
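For reference, a minimal sketch of registering a KubernetesJob infrastructure block from a CI script, assuming Prefect 2.x; the image, namespace, and block name below are placeholders rather than anything from this thread:

# Hypothetical CI step: register a KubernetesJob infrastructure block (Prefect 2.x).
# Image, namespace, and block name are placeholders.
from prefect.infrastructure import KubernetesJob

k8s_job = KubernetesJob(
    image="registry.example.com/flows:latest",  # placeholder image with the flow's dependencies
    namespace="prefect",                        # placeholder namespace the agent can create Jobs in
    env={"EXTRA_PIP_PACKAGES": "s3fs"},         # extra packages installed when the container starts
)
k8s_job.save("ci-k8s-job", overwrite=True)      # saved under kubernetes-job/ci-k8s-job

A deployment built with prefect deployment build can then point at the saved block via -ib kubernetes-job/ci-k8s-job (the same flag used further down this thread).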
Tony Popov
09/22/2022, 9:49 PM
1. prefect-cli talks to orion via kubectl port-forward
2. prefect-agent knows about orion via the k8s service (setting the API URL env var)
Hence the question: how does the prefect deploy command interact with a prefect-agent if orion does not know about the agent as well?
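As far as I understand, the CLI and the agent only ever talk to the Orion API, never to each other: the deployment gets registered with the API, and the agent polls its work queue for scheduled runs. A rough Python equivalent of the CLI flow, assuming Prefect 2.x (flow, deployment, and queue names are placeholders):

# Sketch: registering a deployment from Python instead of the CLI (Prefect 2.x).
# Names are placeholders; the agent never needs to be reachable from here.
from prefect import flow
from prefect.deployments import Deployment

@flow
def my_flow():
    ...

deployment = Deployment.build_from_flow(
    flow=my_flow,
    name="ci-deployment",         # placeholder deployment name
    work_queue_name="k8s-queue",  # placeholder queue; the agent polls this queue via the API
)
deployment.apply()                # registers the deployment with the Orion API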
Tony Popov
09/22/2022, 11:47 PM
When running prefect deployment build ./path/to/flow.py …, the cli uploads all files in ./ to Storage.

Amir
09/23/2022, 12:33 AM
RuntimeError: A 'sync_compatible' method was called from a context that was previously async but is now sync. The sync call must be changed to run in a worker thread to support sending the coroutine for 'load' to the main thread.
I've dug into this error, and here are my findings:
• This error only seems to occur when I'm attempting to deploy to Azure while importing functions from another directory (either sibling or child directories).
• I'm able to successfully deploy the code to Blob Storage when I comment out the directory imports (and in association the related functions/tasks).
• I'm able to successfully spin up an agent and have the flows run through the local file system. The problem only shows up when trying to deploy to Azure.
• The error stems from sync_compatible.
• Not many examples of this occurring in the past, but there is one in this slack channel and this posting here. Both are from the last ~3 weeks.
Any ideas? Thanks!
EDIT: Found a solution. Seems like this issue stemmed from my inexperience with the tool. I had broken up the file structure into a separate folder for Flows, Tasks and Functions. Issue was that I had the .py files for the functions written in a way where not all of the code was contained within a function.
i.e.:

engine = create_engine(snowflake...)

def test():
    return

Did NOT work, whereas

def test():
    engine = create_engine(snowflake...)
    return

Did work.
I'll leave this here in case anyone else runs into a similar issue 🙂
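A slightly fuller sketch of the working pattern, assuming SQLAlchemy (with the Snowflake dialect) and Prefect 2.x; the connection URL and names are placeholders:

# Sketch: build the engine when the task runs, not at module import time.
# Connection URL and names are placeholders.
from prefect import flow, task
from sqlalchemy import create_engine, text

@task
def fetch_one():
    # importing this module now has no side effects, which is what the
    # module-level engine creation above was breaking during deployment
    engine = create_engine("snowflake://<user>:<password>@<account>/<database>")
    with engine.connect() as conn:
        return conn.execute(text("select 1")).scalar()

@flow
def my_flow():
    return fetch_one()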
Slackbot
09/23/2022, 1:00 AM

Slackbot
09/23/2022, 3:14 AM

Oscar Björhn
09/23/2022, 8:25 AM

Andreas Nord
09/23/2022, 8:47 AM

Vadym Dytyniak
09/23/2022, 11:58 AM

Arshak Ulubabyan
09/23/2022, 1:16 PM

Tony Piazza
09/23/2022, 1:23 PM
prefect cloud login
Key: <my-generated-api-key>
Unable to authenticate with Prefect Cloud. Please ensure your credentials are correct.

Sebastián Montoya Tapia
09/23/2022, 1:40 PM

Toby Rahloff
09/23/2022, 2:20 PM
(address param removed from the code snippet for readability). It seems like resetting the Prefect database fixes this issue. Could the tracking of tasks overload the DB? Is this edge case known, and are there workarounds for it?

Bertangela Loret de Mola
09/23/2022, 2:36 PM

Brett Naul
09/23/2022, 2:49 PM

Jared Robbins
09/23/2022, 3:14 PM

Guido Stein
09/23/2022, 4:14 PM

Sean Turner
09/23/2022, 4:21 PM
(v2)? My EKS agent picked up a flow off of a queue but the pod never started?
prefect deployment build main.py:foo \
-n sean-k8s-test-deployment \
-q company-name \
-sb s3/company-name-prefect-staging/sean-turner/foo \
-ib kubernetes-job/test \
--apply
EKS agent logs:
16:09:45.420 | INFO | prefect.agent - Submitting flow run 'c8f0bba7-b104-40d2-aa96-8a278c800f1e'
16:09:45.766 | INFO | prefect.agent - Completed submission of flow run 'c8f0bba7-b104-40d2-aa96-8a278c800f1e'
16:10:45.785 | ERROR | prefect.infrastructure.kubernetes-job - Job 'test6qzkh': Pod never started.
My kubernetes-job/test infra block has {"EXTRA_PIP_PACKAGE": "s3fs"} and does not have a kubeconfig set. My S3 infra block s3/company-name-prefect-staging/sean-turner/foo does not have credentials because the agent and orion pods are assuming IAM roles that give permissions to read and write to the bucket.
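One possibly relevant detail (a guess, not a confirmed diagnosis): the minute between "Submitting" and "Pod never started." matches the KubernetesJob block's default pod watch timeout of 60 seconds, which can be too short if the node still has to pull the image. A sketch of recreating the block with a longer timeout, assuming Prefect 2.x; the 300-second value is arbitrary:

# Sketch (Prefect 2.x): recreate the kubernetes-job/test block with a longer pod watch timeout.
from prefect.infrastructure import KubernetesJob

k8s_job = KubernetesJob(
    env={"EXTRA_PIP_PACKAGES": "s3fs"},  # note: the Prefect images look for EXTRA_PIP_PACKAGES (plural)
    pod_watch_timeout_seconds=300,       # default is 60; allow more time for scheduling and image pulls
)
k8s_job.save("test", overwrite=True)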
09/23/2022, 5:02 PM"Checking Flow run state...".
I don't get any errors, but it is stuck. The flow runs successfully locally. Anyone experienced this? I'm not sure where to go from here.Nic
09/23/2022, 5:35 PM

Nic
09/23/2022, 5:36 PM

Eric Coleman
09/23/2022, 6:11 PM

Alix Cook
09/23/2022, 8:13 PM
I get Response: {'detail': 'protected block types cannot be updated.'} when I try to update them like so:
block = DateTime(name=storage_key, value=pendulum.now())
await block.save(storage_key, overwrite=True)
Christopher Boyd
09/23/2022, 8:20 PM