Mac Gréco Péralte Chéry
11/24/2020, 8:37 PM
build:
  commands:
    - python --version
    - pyenv global 3.7.7
    - python --version
    - pip install $(grep -ivE "pywin32" requirements.txt)
    - prefect backend cloud
    - prefect auth login -t $PREFECT_ACCESS_TOKEN
    - aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin xxxxxxxx.dkr.ecr.us-east-2.amazonaws.com
    - python3.7 flows/my_flow.py
Since November 20, Docker has introduced pull rate limits for images, which causes this error in CodeBuild:
File "/root/.pyenv/versions/3.7.7/lib/python3.7/site-packages/prefect/environments/storage/docker.py", line 619, in pull_image
output = client.pull(self.base_image, stream=True, decode=True)
File "/root/.pyenv/versions/3.7.7/lib/python3.7/site-packages/docker/api/image.py", line 415, in pull
self._raise_for_status(response)
File "/root/.pyenv/versions/3.7.7/lib/python3.7/site-packages/docker/api/client.py", line 261, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/root/.pyenv/versions/3.7.7/lib/python3.7/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit")
When I go to that site I can read: "Anonymous free users will be limited to 100 pulls per six hours, and authenticated free users will be limited to 200 pulls per six hours."
So I created an account on Docker to raise my limit to 200 pulls per 6 hours.
I understand the problem occurs when Prefect pulls the base Prefect Docker image to build my flow image, because I have reached my anonymous limit of 100 pulls per 6 hours.
My question is: where should I put my Docker account token so that the Prefect base image is pulled via my newly created Docker account?
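One option (a sketch rather than an official recipe; the DOCKERHUB_USERNAME / DOCKERHUB_TOKEN variable names are hypothetical and would need to be set in the CodeBuild environment) is to log in to Docker Hub in the build phase before the flow script runs, so the base image is pulled as an authenticated user:
build:
  commands:
    # Log in to Docker Hub first; docker login with no registry argument targets Docker Hub,
    # so the base-image pull counts against the authenticated account's limit.
    - echo "$DOCKERHUB_TOKEN" | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
    - aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin xxxxxxxx.dkr.ecr.us-east-2.amazonaws.com
    - python3.7 flows/my_flow.py
Since docker login writes credentials to ~/.docker/config.json, the docker-py client that Prefect's Docker storage uses should pick them up when pulling the base image.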
Hui Zheng
11/24/2020, 11:29 PM
My task init_run started its run:
Task 'init_run': Starting task run...
then it got this error message (see thread), and the log said:
Task 'init_run': Finished task run for task with final state: 'ClientFailed'
After that, all remaining tasks were stuck in the Pending state for hours. Any ideas what happened? Maybe my flow needs to know how to handle this ClientFailed state?
Kilian
11/24/2020, 11:50 PM
Lior
11/25/2020, 11:26 AM
Adam
11/25/2020, 11:57 AM
Access to fetch at 'https://api.prefect.io/graphql' from origin 'https://cloud.prefect.io' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled
Enrique
11/25/2020, 1:05 PM
flow.register(project_name=env_label, add_default_labels=False)
We also use the flow.storage = Docker(...) command.
Thanks for your time, and sorry if it is an obvious question 😅
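For context, a minimal sketch of how those two calls typically fit together in Prefect 0.x; the registry URL, image name, and project name below are placeholders rather than anything from this setup:
from prefect import Flow, task
from prefect.environments.storage import Docker

@task
def say_hello():
    print("hello")

with Flow("example-flow") as flow:
    say_hello()

# The image is built (and pushed to registry_url) when the flow is registered
flow.storage = Docker(
    registry_url="registry.example.com/my-team",  # placeholder registry
    image_name="example-flow",
)
flow.register(project_name="dev", add_default_labels=False)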
Saulius Beinorius
11/25/2020, 1:45 PM
Peter Peter
11/25/2020, 3:55 PM
Peter Peter
11/25/2020, 4:10 PM
Riley Hun
11/25/2020, 6:10 PM
Vinod Sugur
11/25/2020, 7:26 PM
jack
11/26/2020, 5:09 AM
Hamed Sheykhlou
11/26/2020, 7:18 AM
Lior
11/26/2020, 9:54 AM
Jonas Hanfland
11/26/2020, 1:13 PM
Iain Dillingham
11/26/2020, 1:22 PM
Analytics BSTD
11/26/2020, 3:38 PM
Sque
11/26/2020, 3:55 PM
I can add flows with flow.register(), but what about flows that you want to be removed? How do you remove deprecated flows?
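As far as I know there is no unregister counterpart on the Flow object; registering a new version archives the previous one, and a flow can be deleted from the UI or through the GraphQL API. A hedged sketch using the delete_flow mutation (mutation and field names as recalled from the 0.x API, so worth double-checking in the interactive API docs):
import prefect

client = prefect.Client()
# flow_id is the UUID shown in the UI / returned by flow.register(); placeholder below
client.graphql(
    '''
    mutation {
      delete_flow(input: { flow_id: "YOUR-FLOW-ID" }) {
        success
      }
    }
    '''
)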
Robin
11/26/2020, 5:06 PM
tasks_to_be_executed = Parameter("tasks_to_be_executed", default=["task_a", "task_c"])
The following error is returned when checking `if "get_number_of_all_systems" in tasks_to_be_executed`:
Traceback (most recent call last):
File ".\awesome_flow.py", line 112, in <module>
if "get_number_of_all_systems" in tasks_to_be_executed:
TypeError: argument of type 'Parameter' is not iterable
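For what it's worth, this error is expected at flow-definition time: a Parameter is only resolved when the flow runs, so the in check sees the Parameter object itself rather than the list it will hold. A minimal sketch against the Prefect 0.x API that moves the membership test into a task and gates downstream work with case (the should_run helper is made up for illustration):
from prefect import Flow, Parameter, case, task

@task
def should_run(task_names):
    # The Parameter resolves to a plain list only at runtime,
    # so the membership test has to live inside a task.
    return "get_number_of_all_systems" in task_names

@task
def get_number_of_all_systems():
    return 42  # placeholder for the real work

with Flow("awesome_flow") as flow:
    tasks_to_be_executed = Parameter(
        "tasks_to_be_executed", default=["task_a", "task_c"]
    )
    with case(should_run(tasks_to_be_executed), True):
        get_number_of_all_systems()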
Joseph Finlayson
11/27/2020, 1:08 AM
Type     Reason     Age        From                   Message
----     ------     ----       ----                   -------
Normal   Scheduled  <unknown>  default-scheduler      Successfully assigned prefect/prefect-job-3afb5e32-qx8wd to first-3kwtz
Normal   Pulling    13s        kubelet, first-3kwtz   Pulling image "registry.digitalocean.com/unmanaged-registry/prefect-jobs:ac673b8facdc4d18a75da5e8fb98b63b"
Warning  Failed     13s        kubelet, first-3kwtz   Failed to pull image "registry.digitalocean.com/unmanaged-registry/prefect-jobs:ac673b8facdc4d18a75da5e8fb98b63b": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.digitalocean.com/v2/unmanaged-registry/prefect-jobs/manifests/ac673b8facdc4d18a75da5e8fb98b63b: unauthorized: authentication required
Warning  Failed     13s        kubelet, first-3kwtz   Error: ErrImagePull
Agent YAML looks like this
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: prefect
  name: prefect-agent
  namespace: prefect
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prefect
  template:
    metadata:
      labels:
        app: prefect
    spec:
      containers:
        - args:
            - prefect agent start kubernetes
          command:
            - /bin/bash
            - -c
          env:
            - name: NAMESPACE
              value: prefect
            - name: PREFECT__CLOUD__AGENT__AUTH_TOKEN
              value: XXXXX
            - name: PREFECT__CLOUD__API
              value: https://api.prefect.io
            - name: IMAGE_PULL_SECRETS
              value: unmanaged-registry
The generated job itself doesn't seem to have anything set regarding an imagePullSecrets entry.
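For reference, this is the stanza one would expect in the generated prefect-job manifest when an image pull secret is actually applied (plain Kubernetes, using the secret name from the agent env above); if it is missing, the IMAGE_PULL_SECRETS value is presumably not reaching the agent process:
# expected in the generated job spec
spec:
  template:
    spec:
      imagePullSecrets:
        - name: unmanaged-registry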
Jessica Stewart
11/27/2020, 4:37 AM
Joseph Finlayson
11/27/2020, 12:51 PM
Calling flow.register, I'm seeing the following strange stack:
flow.register(project_name="default")
File "/home/joe/.local/share/virtualenvs/flows/lib/python3.8/site-packages/prefect/core/flow.py", line 1655, in register
registered_flow = client.register(
File "/home/joe/.local/share/virtualenvs/flows/lib/python3.8/site-packages/prefect/client/client.py", line 771, in register
serialized_flow = flow.serialize(build=build) # type: Any
File "/home/joe/.local/share/virtualenvs/flows/lib/python3.8/site-packages/prefect/core/flow.py", line 1462, in serialize
storage = self.storage.build() # type: Optional[Storage]
File "/home/joe/.local/share/virtualenvs/flows/lib/python3.8/site-packages/prefect/environments/storage/docker.py", line 359, in build
self._build_image(push=push)
File "/home/joe/.local/share/virtualenvs/flows/lib/python3.8/site-packages/prefect/environments/storage/docker.py", line 423, in _build_image
self._parse_generator_output(output)
File "/home/joe/.local/share/virtualenvs/flows/lib/python3.8/site-packages/prefect/environments/storage/docker.py", line 658, in _parse_generator_output
output = (
AttributeError: 'NoneType' object has no attribute 'strip'
Scott Moreland
11/27/2020, 1:13 PM
Joseph Finlayson
11/27/2020, 1:52 PM
Gabriel Montañola
11/27/2020, 4:57 PM
StartFlowRun tasks are resulting in failed tasks for me since 0.13.17 in Prefect Cloud:
Unexpected error: ClientError([{'path': ['user'], 'message': 'field "user" not found in type: \'query_root\'', 'extensions': {'path': '$.selectionSet.user', 'code': 'validation-failed', 'exception': {'message': 'field "user" not found in type: \'query_root\''}}}])
Is this somehow related to https://github.com/PrefectHQ/prefect/pull/3692?
Andrey Tatarinov
11/28/2020, 6:40 PM
A question about map: I have a collection that might vary in size from 500 to 50K; is it reasonable to process it with map?
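For reference, a minimal Prefect 0.x mapping sketch; each element becomes its own mapped task run, so a 50K-element input means 50K task runs (the collection and task names below are only illustrative):
from prefect import Flow, task

@task
def get_items():
    # stand-in for the real collection (anywhere from 500 to 50K elements)
    return list(range(5000))

@task
def process(item):
    return item * 2

with Flow("mapped-flow") as flow:
    results = process.map(get_items())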
Lior
11/29/2020, 6:49 AM
Michelle Wu
11/30/2020, 2:08 AM
I am using dask-resource in tags as an argument for @task:
running_server = "name_of_remote_server"
resource_tag = "dask-resource:{}=1".format(running_server)
@task(log_stdout=True, state_handlers=[email_on_failure], tags=[resource_tag])
def test_task():
    print(1/0)
When I started the remote dask-worker, I used a command like this:
dask-worker tcp://<address of local dask-scheduler>:8786 --nprocs 4 --nthreads 1 --worker-port xxx --resources "name_of_remote_server=1"
This connected the local scheduler with the remote worker perfectly. However, when I actually started the flow on the local machine, it first failed on the remote worker because of ModuleNotFoundError: No module named 'prefect'. After I installed prefect on the remote worker, another error occurred there:
[2020-11-24 17:02:05+0800] ERROR - prefect.CloudTaskRunner | Failed to set task state with error: ConnectionError(MaxRetryError("HTTPConnectionPool(host='localhost', port=4200): Max retries exceeded with url: /graphql (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff6099fcf10>: Failed to establish a new connection: [Errno 111] Connection refused'))"))
Traceback (most recent call last):
File "/anaconda3_a/envs/py37/lib/python3.7/site-packages/urllib3/connection.py", line 160, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/anaconda3_a/envs/py37/lib/python3.7/site-packages/urllib3/util/connection.py", line 84, in create_connection
raise err
File "/anaconda3_a/envs/py37/lib/python3.7/site-packages/urllib3/util/connection.py", line 74, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
...
[2020-11-24 17:02:05+0800] INFO - prefect.CloudTaskRunner | Task 'test_task': Finished task run for task with final state: 'ClientFailed'
[2020-11-24 09:02:10+0000] CRITICAL - CloudHandler | Failed to write log with error: HTTPConnectionPool(host='localhost', port=4200): Max retries exceeded with url: /graphql (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff6097590d0>: Failed to establish a new connection: [Errno 111] Connection refused'))
[2020-11-24 09:02:10+0000] CRITICAL - CloudHandler | Failed to write log with error: HTTPConnectionPool(host='localhost', port=4200): Max retries exceeded with url: /graphql (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff6097590d0>: Failed to establish a new connection: [Errno 111] Connection refused'))
[2020-11-24 09:02:10+0000] CRITICAL - CloudHandler | Failed to write log with error: HTTPConnectionPool(host='localhost', port=4200): Max retries exceeded with url: /graphql (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff6097590d0>: Failed to establish a new connection: [Errno 111] Connection refused'))
[2020-11-24 09:02:10+0000] CRITICAL - CloudHandler | Unable to write logs to Prefect Cloud
[2020-11-24 09:02:10+0000] CRITICAL - CloudHandler | Unable to write logs to Prefect Cloud
[2020-11-24 09:02:10+0000] CRITICAL - CloudHandler | Unable to write logs to Prefect Cloud
I tried opening port 4200 of the local scheduler for the remote worker, but the same error occurred anyway. Any idea what I've been doing wrong? 😶
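One detail that stands out in the logs above: the remote worker is trying to reach the Prefect API at localhost:4200, which is the default endpoint for a local Prefect Server. If the backend is a Prefect Server running on the local machine, the remote dask-worker's environment needs to point at that machine rather than at its own localhost. A hedged sketch (the address is a placeholder, and the exact config variable is worth verifying against the Prefect configuration docs):
# On the remote dask-worker host, before starting the worker, point the
# Prefect client at the machine actually running Prefect Server (assumption:
# PREFECT__CLOUD__API overrides the default localhost endpoint).
export PREFECT__CLOUD__API=http://<ip-of-prefect-server-machine>:4200
dask-worker tcp://<address of local dask-scheduler>:8786 --nprocs 4 --nthreads 1 --resources "name_of_remote_server=1"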
Klemen Strojan
11/30/2020, 9:31 AM
Eric
11/30/2020, 9:53 AM
prefect.utilities.exceptions.ClientError: 400 Client Error: Bad Request for url: http://<Prefect Cloud IP>:4200
This is likely caused by a poorly formatted GraphQL query or mutation. GraphQL sent:
query {
query { auth_info { api_token_scope } }
}
variables {
null
}
Is there any step I am missing when I set up the Prefect agent? Thank you very much!