Michael Ulin
12/17/2021, 12:56 AM
400 List of found errors:
1. Field: job_spec.worker_pool_specs[0].container_spec.env[6].value; Message: Required field is not set.
2. Field: job_spec.worker_pool_specs[0].container_spec.env[5].value; Message: Required field is not set.
[field_violations {
field: "job_spec.worker_pool_specs[0].container_spec.env[6].value"
description: "Required field is not set."
}
field_violations {
field: "job_spec.worker_pool_specs[0].container_spec.env[5].value"
description: "Required field is not set."
}
]
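Both violations point at env entries whose value field is empty. A minimal, hedged sketch of the shape Vertex AI expects for each env entry in a worker pool spec; the variable names, machine type, and image below are placeholders, not values from the failing job:

# Hedged sketch: every container_spec.env entry needs both "name" and "value".
# An entry whose value resolves to None or empty (e.g. a missing os.environ lookup)
# triggers exactly this "Required field is not set" error.
worker_pool_spec = {
    "machine_spec": {"machine_type": "n1-standard-4"},      # placeholder
    "replica_count": 1,
    "container_spec": {
        "image_uri": "gcr.io/my-project/my-image:latest",   # placeholder
        "env": [
            {"name": "SOME_VAR", "value": "some-value"},
            {"name": "NUMERIC_VAR", "value": str(42)},      # values must be strings; cast explicitly
        ],
    },
}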
Nikita Samoylov
12/17/2021, 11:19 AM
Daniel Komisar
12/17/2021, 1:27 PM
Michael Ulin
12/18/2021, 10:43 PM
Thomas Opsomer
12/20/2021, 2:45 PM
Lyla
12/20/2021, 3:15 PM
Madison Schott
12/20/2021, 4:11 PM
Flow run is no longer in a running state; the current state is: <Failed: "HTTPSConnectionPool(host='api.prefect.io', port=443): Max retries exceeded with url: / (Caused by ReadTimeoutError("HTTPSConnectionPool(host='api.prefect.io', port=443): Read timed out. (read timeout=15)"))">
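The 15-second read timeout in that error matches what appears to be the default Cloud client timeout in Prefect 1.x (cloud.request_timeout). A hedged sketch of raising it through Prefect's double-underscore environment override, set wherever the agent or flow process starts; treat the exact key as an assumption to verify against your installed config:

# Hedged sketch: bump the Cloud API client timeout (assumed key, default appears to be 15 s)
import os
os.environ["PREFECT__CLOUD__REQUEST_TIMEOUT"] = "60"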
Sam Werbalowsky
12/20/2021, 4:56 PM
0.15.4 to 0.15.10, regarding Git storage - running with a kubernetes agent and deployed via helm. The storage is set using environment variables as part of our CI.
Failed to load and execute Flow's environment: ValueError('Either `repo` or `git_clone_url_secret_name` must be provided')
The thing is, in the UI I can see the repo value, as it is created during registration. I am assuming it isn’t getting passed to the PrefectJob pod that gets spun up, but I’m not sure why that is. Any ideas?
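One hedged thing to check, not a confirmed diagnosis: if repo is populated from an environment variable that only exists in CI, the Git storage rebuilt inside the job pod may see it as empty. A minimal sketch that pins the repo explicitly at registration time; the repo, path, and branch below are placeholders:

from prefect.storage import Git

flow.storage = Git(
    repo="my-org/my-flows",        # placeholder; previously read from a CI-only env var
    flow_path="flows/my_flow.py",  # placeholder path inside the repo
    branch_name="main",
)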
Prasanth Kothuri
12/20/2021, 7:35 PM
http://localhost:8080/default/task-run/bcb01c03-6507-40c3-8c31-c2a0a37a9868
Martin
12/21/2021, 12:07 AM
Ahmed Ezzat
12/21/2021, 10:06 AM
import os

import prefect
from dask_kubernetes import KubeCluster, make_pod_spec

flow.executor = prefect.executors.DaskExecutor(
    cluster_class=lambda: KubeCluster(
        pod_template=make_pod_spec(
            memory_request="64M",
            memory_limit="4G",
            cpu_request="0.5",
            cpu_limit="8",
            threads_per_worker=24,
            image=prefect.context.image,
        ),
        deploy_mode="remote",
        idle_timeout="0",
        scheduler_service_wait_timeout="0",
        # the dict-union operator below requires Python 3.9+
        env=dict(os.environ)
        | {
            "DASK_DISTRIBUTED__WORKER__MULTIPROCESSING_METHOD": "fork",
            "DASK_DISTRIBUTED__SCHEDULER__ALLOWED_FAILURES": "100",
        },
    ),
    # min_workers / max_workers are defined elsewhere in the flow script
    adapt_kwargs={"minimum": min_workers, "maximum": max_workers},
)
Raúl Mansilla
12/21/2021, 10:23 AM
ImportError('Unable to import dulwich, please ensure you have installed the git extra')
but I have installed prefect[gitlab] in prefect server and also in the dask cluster nodes.
Sylvain Hazard
12/21/2021, 10:59 AM
Côme Arvis
12/21/2021, 6:44 PM
scheduled state, while these runs have no labels associated with a concurrency limit.
In addition, we indeed have an agent with the matching label, but nothing is happening.
Note that some of the first runs were able to be executed (less than 10).
Any idea maybe? 😕
Nikita Samoylov
12/22/2021, 8:50 AM
Cancel and Set state options in UI for each running flow.
• If I press Cancel - the flow is stuck in Cancelling status forever and, what is more dangerous for us, the child process on the agent machine which actually executed this flow is stuck too and is never killed. It means it does not release resources. I can see 2 processes stuck (as in the picture) - 1 for flow execution and 1 for its heartbeat.
• Setting Failed state seems to work well, but not if I set the state after the flow is cancelled.
Could you tell me something about this behaviour?
PS: I'm talking about Cloud backend + Local Agent
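For runs stuck in Cancelling, one hedged fallback is to force a terminal state from a script instead of the UI; the flow run id below is a placeholder:

from prefect import Client
from prefect.engine.state import Cancelled

client = Client()
# force the stuck run into a terminal state; this does not kill the local processes,
# which still need to be cleaned up on the agent machine
client.set_flow_run_state(
    flow_run_id="<flow-run-id>",
    state=Cancelled("Cancelled manually after the UI cancel hung"),
)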
Sylvain Hazard
12/22/2021, 11:05 AM
from typing import Dict, List
from prefect import Flow, Parameter, task
import prefect
from prefect.tasks.kubernetes.job import ListNamespacedJob, RunNamespacedJob
from kubernetes.client import V1JobList, V1Job


@task
def get_job(jobs_list: V1JobList) -> Dict:
    candidates = jobs_list.items
    job = candidates[0]
    if len(candidates) > 1:
        prefect.context.get("logger").warning(
            f"Multiple candidates retrieved. Chose {job.metadata.name}."
        )
    return job.to_dict()


with Flow("Test") as flow:
    jobs = ListNamespacedJob(
        kube_kwargs={"field_selector": "metadata.name=JOB_NAME"},
        kubernetes_api_key_secret=None,
    )()
    job = get_job(jobs)
    job_result = RunNamespacedJob(
        kubernetes_api_key_secret=None,
        delete_job_after_completion=False,
    )(body=job)
Right now, this gets a 422 error starting with "Job.batch JOB_NAME is invalid..." from the k8s API when trying to run the job.
Am I just doing it wrong?
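A hedged guess at the 422, not a confirmed diagnosis: the dict returned by to_dict() still carries server-populated fields (status, metadata such as uid/resource_version, plus the controller-added spec.selector and matching template labels), which the API rejects when the job is created again. A minimal sketch of stripping them, e.g. inside get_job before returning; the new job name is a placeholder:

import copy

def clean_job_body(job_dict: dict) -> dict:
    """Drop server-populated fields so the body can be submitted as a new Job."""
    body = copy.deepcopy(job_dict)
    body.pop("status", None)
    body["metadata"] = {"name": "job-name-rerun"}  # placeholder; old uid/resource_version must not be reused
    spec = body.get("spec", {})
    spec.pop("selector", None)  # added by the job controller on creation
    spec.get("template", {}).get("metadata", {}).pop("labels", None)
    return body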
Prasanth Kothuri
12/22/2021, 11:50 AM
Jawaad Mahmood
12/22/2021, 7:08 PM
from prefect import task, Flow
from prefect.executors import LocalExecutor
from prefect.run_configs import DockerRun
from prefect.storage import Docker
import docker

with Flow("some_flow") as flow:
    do_something  # placeholder for the actual tasks

# log in to the registry before registering so the storage image can be pushed
docker_client = docker.DockerClient()
docker_client.login(username=<env_user>, password=<env_pass>)

flow.storage = Docker(
    registry_url='registry.hub.docker.com/repository/docker/<user>/<repo>',
    image_name='<some_flow>',
    files={
        <origin path>: <dest path>
    },
    python_dependencies=['pandas', 'numpy', 'prefect'],
    env_vars={
        "PYTHONPATH": "$PYTHONPATH:assets/:root/:data/:image"
    },
    base_image='python:3.7.3',
)
flow.run_config = DockerRun(labels=['my-label'])
flow.executor = LocalExecutor()
flow.register(project_name="some_project")
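A hedged side note on the storage block above: the /repository/docker/ segment is part of the Docker Hub website URL rather than a registry path, so pushes built from that registry_url may fail. A sketch of the more common form, with placeholder names:

flow.storage = Docker(
    registry_url="<user>",            # or "docker.io/<user>" for Docker Hub
    image_name="some_flow",
    base_image="python:3.7.3",
    python_dependencies=["pandas", "numpy", "prefect"],
)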
Kyrylo Zaitsev
12/23/2021, 10:24 AM
requests.exceptions.HTTPError: 413 Client Error: Payload Too Large for url: http://0.0.0.0:4200/
I obtained this markdown by converting an HTML page to .md format. I deployed Prefect using docker-compose; is there any way to increase the GraphQL payload size limit?
Suresh R
12/23/2021, 10:44 AM
Prasanth Kothuri
12/23/2021, 2:12 PM
Raúl Mansilla
12/23/2021, 2:27 PM
Alexis Lucido
12/23/2021, 5:19 PM
dev
12/24/2021, 10:48 AM
ERROR - agent | Failed to query for ready flow runs, even though I am able to run the flows.
Alexis Lucido
01/03/2022, 4:41 PM
Yash
01/04/2022, 3:44 AM
Elliot Oram
01/04/2022, 4:30 PM
prefect agent local start --api http://prefect-server-url-goes-here:4200
(replacing the prefect-server-url-goes-here with the actual server address) I get a connection refused. There is a Prefect server running on that machine and I have whitelisted my local IP for port 4200 on that machine. Not too sure how to go about debugging this one, so any pointers would be greatly appreciated.
[2022-01-04 16:27:48,100] INFO - agent | Registering agent...
Traceback (most recent call last):
  File "/Users/elliotoram/dev/pipeline/venv/lib/python3.9/site-packages/urllib3/connection.py", line 169, in _new_conn
    conn = connection.create_connection(
  File "/Users/elliotoram/dev/pipeline/venv/lib/python3.9/site-packages/urllib3/util/connection.py", line 96, in create_connection
    raise err
  File "/Users/elliotoram/dev/pipeline/venv/lib/python3.9/site-packages/urllib3/util/connection.py", line 86, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 61] Connection refused
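Connection refused means the TCP connection itself is being rejected, so a hedged first check from the agent machine is whether port 4200 on the server host is reachable at all (firewall, Docker port mapping, the service bound to localhost only, etc.); the hostname below is the same placeholder as above:

import socket

# raises ConnectionRefusedError / timeout if nothing is listening or a firewall blocks it
with socket.create_connection(("prefect-server-url-goes-here", 4200), timeout=5):
    print("port 4200 is reachable")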
Sam Werbalowsky
01/04/2022, 9:02 PM
Prasanth Kothuri
01/05/2022, 8:58 AM
[3 January 2022 8:26pm]: [Errno 24] Too many open files error, what is the recommended value for open files?
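For reference, a hedged sketch of checking and raising the per-process open-file limit from Python; there is no single recommended number, and 65536 below is only an illustration (the hard limit set by ulimit still caps it):

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"current open-file limits: soft={soft}, hard={hard}")
# raise the soft limit up to the hard limit (or an illustrative 65536, whichever is lower)
resource.setrlimit(resource.RLIMIT_NOFILE, (min(65536, hard), hard))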