microft08/16/2022, 2:44 PM
Tim Enders08/16/2022, 4:40 PM
defined on a mapped task? I now get this error (
) when trying to map across the subsequent function signature.
TypeError: object of type 'NoneType' has no len()
@task
def get_items_list(client, entity, page, response_attr=None, path=None, params=None):
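As an aside, that TypeError is exactly what plain Python raises when `len()` meets an argument left at its `None` default — a minimal sketch of the failure mode with a hypothetical `get_length` standing in for the mapped task (no Prefect involved):

```python
# Minimal reproduction: len() on a defaulted None raises the same TypeError.
def get_length(items=None):
    return len(items)

try:
    get_length()
except TypeError as exc:
    # message reads: "object of type 'NoneType' has no len()"
    assert "NoneType" in str(exc)

# Guarding the optional argument avoids it:
def get_length_safe(items=None):
    return len(items or [])

assert get_length_safe() == 0
```

If one of the keyword arguments is meant to be a scalar rather than an iterable during mapping, Prefect's `unmapped` annotation may also be relevant here.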
Sam Garvis08/16/2022, 5:45 PM
, I get
whether I run this in an ipynb or as a python file through the terminal. Am I expected to run this with await/async? Or is this not supposed to happen?
AttributeError: 'coroutine' object has no attribute 'get'
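That AttributeError is the plain-Python signature of calling an async function without awaiting it — a sketch with a hypothetical `fetch` coroutine standing in for the async client call:

```python
import asyncio

async def fetch():
    # hypothetical stand-in for an async client call
    return {"status": "ok"}

coro = fetch()                   # calling without await yields a coroutine object
assert not hasattr(coro, "get")  # so coro.get(...) raises AttributeError
result = asyncio.run(coro)       # awaiting it (or asyncio.run) yields the dict
assert result.get("status") == "ok"
```

Note that inside a notebook an event loop is already running, so `await fetch()` in a cell is the equivalent of `asyncio.run(...)` in a script.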
Blake Hamm08/16/2022, 6:16 PM
Alexander Belikov08/16/2022, 6:19 PM
and it worked. The main reason for using
processes = []
for chunk in url_chunks:
    kwargs["mp3_urls"] = chunk
    processes.append(mp.Process(target=foo, kwargs=kwargs))
for p in processes:
    p.start()
for p in processes:
    p.join()
is to have a large model in memory shared between processes. In fact, when I run this exact flow via a python script calling the flow as a function, it works as expected. However, when I run it as
or via a scheduled run, the processes in
prefect deployment run ...
don't run. The flow result is marked as success to my surprise. Any ideas?
chicago-joe08/16/2022, 7:24 PM
Vlad Tudor08/16/2022, 7:42 PM
locally and to bind it to minikube for testing purposes. For now, I see the Agent in the Server UI, but the tasks are not executed. How do I configure this? Thank you for your patience, I am new to Prefect.
😄  minikube v1.26.1 on Ubuntu 20.04
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🏃  Updating the running docker "minikube" container ...
🐳  Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Tim Enders08/16/2022, 8:54 PM
sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection timed out, timeout 30.00 (Background on this error at: https://sqlalche.me/e/14/3o7r)
Hammad Haqqani08/16/2022, 10:03 PM
prefect agent ecs start --cluster arn:aws:ecs:us-west-2:00000000:cluster/prefect-dev --label aws-ecs
Usage: prefect agent [OPTIONS] COMMAND [ARGS]...
Try 'prefect agent --help' for help.
╭─ Error ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ No such command 'ecs'. │
Ben Muller08/16/2022, 10:38 PM
Mars08/16/2022, 11:57 PM
. This is without a running Orion server. The code in my flow that creates the Secret block appears to work. However, when I try to use my Secret block in my 2.0 flow, the prefect client raises a PrefectHTTPStatusError saying ‘Block document not found’. In Prefect 1.0 I could use
if __name__ == '__main__':
    myflow()
to set the values I need, but I don’t see any mention of similar functionality for 2.0.
Jaime Raldua Veuthey08/17/2022, 7:52 AM
Tim Helfensdörfer08/17/2022, 9:37 AM
Tim Helfensdörfer08/17/2022, 10:15 AM
Mohamed Ayoub Chettouh08/17/2022, 10:29 AM
does give the block types. UPDATE2: Something is off: when hosting on localhost and doing an SSH tunnel to it, it works like a charm. When hosting on
prefect block type ls
the page is empty. UPDATE3: Now it works on the public IP as well?!? Maybe it needs to start without
the first time
Andreas Nigg08/17/2022, 11:27 AM
However, as soon as I add infrastructure overrides, the flow stays in the "Pending" state forever
prefect deployment build r_script_automation.py:r_script_automation --name retention_cohort_analysis_deployment -t k8s -sb gcs/gcs-prefect-stprage -ib kubernetes-job/generic-k8s-job -o cohort_deployment.yaml
If I manually delete the infra_overrides block from the deployment YAML, the flow works again. So somehow these env.xyz overrides are the problem. Can you point me to my mistake? Edit: I tested overriding the "image" - this works. It really only fails when overriding the environment variables...
prefect deployment build r_script_automation.py:r_script_automation --name retention_cohort_analysis_deployment -t k8s -sb gcs/gcs-prefect-stprage -ib kubernetes-job/generic-k8s-job -o cohort_deployment.yaml --override image=europe-docker.pkg.dev/vol-at/rm-datateam-repository/r-script-automation:beta-23 --override env.GIT_PYTHON_REFRESH="quiet" --override env.GOOGLE_APPLICATION_CREDENTIALS="/google/.google-secret-key.json" --override env.CHROMIUM_FLAGS="--no-sandbox"
Sachin Ranveer08/17/2022, 11:53 AM
Jason Thomas08/17/2022, 12:33 PM
Hi all, I just upgraded from
errors within flows/tasks were caught by Prefect and the flow would continue, now in
they are being raised, crashing my run. How can I get back to the previous behavior? I’ll post my code in the thread
Pedro Machado08/17/2022, 12:39 PM
Hanno Küpers08/17/2022, 12:59 PM
. I tried different return value types for the flow (str, dict). What am I missing here? How can I easily access the return value of a successful flow run? Thank you
Mars08/17/2022, 2:03 PM
Hamza Naanani08/17/2022, 2:05 PM
command? I tried setting
prefect config set PREFECT_API_URL
as a setting but it doesn't seem to work.
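For what it's worth, `prefect config set` expects a `KEY=VALUE` pair rather than the key alone — a sketch assuming a local Orion server on the default port (adjust the host/port to your own setup):

```shell
# Hypothetical local API URL - point this at your own server
prefect config set PREFECT_API_URL="http://127.0.0.1:4200/api"
# Verify the value actually took effect
prefect config view
```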
Alberto Fernández08/17/2022, 2:12 PM
Tom Klein08/17/2022, 3:04 PM
- with no mapped tasks being created, and then the flow itself just immediately moving to the
state - even though nothing failed?
Darren08/17/2022, 3:08 PM
my credentials look like this
ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:997)
email_server_credentials = EmailServerCredentials(
    username="email@example.com",
    password="mypasswordhere",
    smtp_server="smtp.office365.com",
    smtp_type="STARTTLS",
    smtp_port=587,
)
Neil Natarajan08/17/2022, 3:26 PM
. The issue is that two of the final tasks of my flow accept futures of previous tasks as input and are listed in the
section as well. These final two tasks instead are stuck in a
state, and this causes the flow to hang instead of running to completion. Any advice on how to properly implement the ideal behavior (tasks that depend on futures from failed or skipped upstream tasks should also fail/not run, and the flow should run to completion) would be greatly appreciated, thank you! In the logs, I see the flow encountered an
Ben Strange08/17/2022, 3:29 PM
Sam Garvis08/17/2022, 4:20 PM
Lucien Fregosi08/17/2022, 4:38 PM
is it possible to specify a
to be able to assign the worker pod to a specific node pool? I can’t see it in the docs. Thanks for your help
Jeffrey Lam08/17/2022, 4:48 PM
Jeffrey Lam08/17/2022, 4:48 PM
Oscar Björhn08/17/2022, 5:18 PM
Jeffrey Lam08/17/2022, 5:51 PM
Oscar Björhn08/17/2022, 5:57 PM