Charles Leung
10/26/2020, 5:14 PMCharles Leung
10/26/2020, 9:08 PMHagai Arad
10/27/2020, 8:16 PMimport prefect
c = prefect.Client()
Traceback attached in a comment below. Any ideas what went wrong? Thanks!Ian
10/27/2020, 8:17 PMprefect server start
2. Is it possible to register an already-built (Docker storage) flow? Our product involves deploying to many dynamic customer environments, so we would like to build a flow image in CI and deploy it to our customer installations. The question, then, is how to register a flow in an arbitrary number of environments when the build step happens before the register step. Does anyone know good patterns to follow here?
Thanks!Jins Kadwood
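For reference, one pattern that fits this (a sketch, not an official recipe; the registry, image name, tag, and project name are placeholders, and behaviour varies a bit across 0.13.x releases): build and push the flow image once in CI, then in each customer environment point a Docker storage object at the already-pushed image and register with build=False so only the flow metadata is sent to that environment's API.
from prefect import Flow, task
from prefect.environments.storage import Docker

@task
def say_hi():
    print("hi")

# Storage referencing an image that CI has already built and pushed.
storage = Docker(
    registry_url="registry.example.com/myteam",
    image_name="my-flow",
    image_tag="1.2.3",
)

with Flow("my-flow", storage=storage) as flow:
    say_hi()

# Record the flow's location inside the image in the serialized storage,
# then skip the build step at registration time.
storage.add_flow(flow)
flow.register(project_name="customer-project", build=False)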
10/28/2020, 3:30 AMale
10/28/2020, 8:46 AMJames Cole
10/28/2020, 9:05 AMJosef Trefil
10/28/2020, 11:25 PMDeploying flow run ...
and that's it.
When I then check kubectl get all, the prefect-agent pod and deployment both show READY 0/1.
When I check kubectl logs deployment/prefect-agent, the very last line of the traceback says: prefect.utilities.exceptions.ClientError: Malformed response received from API.
Any idea what I'm doing wrong? 🤷♂️
Thank you so much for any clues! 🙂Faris Elghlan
10/29/2020, 11:05 AMHenry
10/29/2020, 4:10 PMHenry
10/29/2020, 4:11 PMHenry
10/29/2020, 4:11 PMCharles Leung
10/29/2020, 9:10 PMMax
10/30/2020, 6:17 PMI'm running Prefect Server with its services (postgres, towel, hasura, graphql, apollo, ui), but from the logs it appears that something is wrong with either hasura or graphql (the logs are in the thread).
What could be the issue? How does one even debug this kind of error?Henry
11/03/2020, 10:45 PMI'm getting E0611: No name 'task' in module 'prefect' (no-name-in-module) with pylint - has anyone run into this before?takahashi
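For anyone who hits the same thing: pylint sometimes cannot statically resolve names that a package re-exports from its __init__, even though the import works at runtime. A minimal workaround (a sketch; you can also suppress the check project-wide in .pylintrc) is an inline disable:
# The import works at runtime; the disable only silences pylint's static check.
from prefect import task, Flow  # pylint: disable=no-name-in-module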
11/04/2020, 6:15 AMLukas N.
11/05/2020, 2:25 PMDave
11/07/2020, 3:03 PMsimone
11/09/2020, 11:23 AMexecutor = DaskExecutor(address=cluster.scheduler_address)
...
flow_state = flow.run(executor=executor)
everything runs fine and the mapped functions run in parallel.
I would like to use the UI to monitor the processing and make use of the great logging functions. If I run
flow.register(project_name="test")
flow.run_agent()
flow_state = flow.run(executor=executor)
The process can only be started from the UI (I guess because the run step is not executed). The process runs, but runs in serial, I guess because flow.environment is not set and the default executor is used.
If I run
flow.environment.executor = executor
flow.register(project_name="test")
flow.run_agent()
and start the flow from the UI, the flow starts but crashes with the following error:
Unexpected error: ConnectionError(MaxRetryError('None: Max retries exceeded with url: /graphql (Caused by None)'))
I guess that prefect cannot connect to the scheduler of the dask cluster.
Can you please let me know if what I am trying is possible or is not implemented? If it is possible can you let me know which approach I should use? Thanks a lot!
I also tested the code below as suggested in a thread but I got the same error reported above
flow.environment = RemoteDaskEnvironment(cluster.scheduler_address)
flow.register(project_name="test")
flow.run_agent()
SOLUTION
I solved the issue by starting the agent outside the script.
prefect agent local start --api http://Apollo_server_IP:4200
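For completeness, a sketch of the registration script that matches the solution above (imports follow 0.13-era Prefect; the scheduler address is a placeholder for cluster.scheduler_address, and the agent is started separately with the CLI command shown):
from prefect import Flow, task
from prefect.engine.executors import DaskExecutor
from prefect.environments import LocalEnvironment

@task
def add_one(x):
    return x + 1

with Flow("test") as flow:
    add_one.map([1, 2, 3])

# Attach the external Dask scheduler to the flow's environment so that
# runs triggered from the UI are parallelized on the cluster instead of
# falling back to the default executor.
flow.environment = LocalEnvironment(
    executor=DaskExecutor(address="tcp://dask-scheduler:8786")  # cluster.scheduler_address
)

flow.register(project_name="test")
# The agent runs outside this script:
#   prefect agent local start --api http://Apollo_server_IP:4200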
Roey Brecher
11/09/2020, 1:29 PMprefect.utilities.exceptions.ClientError: 400 Client Error: Bad Request for url: http://10.0.4.45:4200/graphql
This is likely caused by a poorly formatted GraphQL query or mutation. GraphQL sent:
query {
mutation($input: create_flow_from_compressed_string_input!) {
create_flow_from_compressed_string(input: $input) {
id
}
}
}
Is it expected that every minor change will require us to update our Server if we update the clients?M Taufik
11/09/2020, 5:49 PMDave
11/10/2020, 10:08 AMRoey Brecher
11/10/2020, 12:51 PMenv_vars in the Docker Storage are variables that are added to the end of my Dockerfile; this should probably be mentioned in the documentation.
Since the ENV command is added to the end of the file, you cannot actually pass any variables that are used in the build process itself.brett
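To make that distinction concrete, a sketch (values are placeholders; availability of the dockerfile argument depends on the Prefect version): env_vars are rendered as ENV instructions near the end of the generated Dockerfile, so they are visible to the flow at runtime but not to the build steps; anything the build itself needs belongs in the base image or a custom Dockerfile.
from prefect.environments.storage import Docker

storage = Docker(
    image_name="my-flow",
    image_tag="latest",
    # Appended as ENV instructions, so available at container runtime only.
    env_vars={"SOME_RUNTIME_SETTING": "value"},
    # Build-time needs (private pip index, build args, ...) go into your own
    # Dockerfile instead of env_vars.
    dockerfile="path/to/Dockerfile",
)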
11/10/2020, 5:56 PMJoseph Haaga
11/10/2020, 8:48 PMprefect server create-tenant --name default --slug default
but I already have a default
tenant, will it overwrite the existing db tables?Josef Trefil
11/11/2020, 10:10 AM@task
def hello_task():
logger = prefect.context.get("logger")
<http://logger.info|logger.info>("Hello, Cloud!")
flow = Flow("hello-flow", tasks=[hello_task])
flow.storage = Docker()
flow.register(project_name="myproject")
into Prefect running on server backend + desktop Docker with Kubernetes v 1.19.3
Running the flow works flawlessly, but if I call flow.register() I get this:Josef Trefil
11/11/2020, 10:10 AMJosef Trefil
11/11/2020, 10:10 AMC:\Documentos\Coding\_trials\kubernetes\venv\lib\site-packages\prefect\environments\storage\docker.py:351: UserWarning: This Docker storage object has no `registry_url`, and will not be pushed.
self._build_image(push=push)
[2020-11-11 11:09:22+0100] INFO - prefect.Docker | Building the flow's Docker storage...
Step 1/9 : FROM prefecthq/prefect:0.13.14-python3.7
---> 93545f019d66
Step 2/9 : RUN pip install pip --upgrade
---> Using cache
---> 04f7a3ff4972
Step 3/9 : RUN pip show prefect || pip install git+https://github.com/PrefectHQ/prefect.git@0.13.14#egg=prefect[kubernetes]
---> Using cache
---> 19e43c078e99
Step 4/9 : RUN pip install wheel
---> Using cache
---> e42095cd0936
Step 5/9 : RUN mkdir -p /opt/prefect/
---> Using cache
---> 926010edf84c
Step 6/9 : COPY hello-flow.flow /opt/prefect/flows/hello-flow.prefect
---> 70517ea739e4
Step 7/9 : COPY healthcheck.py /opt/prefect/healthcheck.py
---> ce25a709678c
Step 8/9 : ENV PREFECT__USER_CONFIG_PATH=/opt/prefect/config.toml
---> Running in 8e564fe71477
Removing intermediate container 8e564fe71477
---> cae7074f0b02
Step 9/9 : RUN python /opt/prefect/healthcheck.py '["/opt/prefect/flows/hello-flow.prefect"]' '(3, 7)'
---> Running in e0df9c491e35
Beginning health checks...
System Version check: OK
Cloudpickle serialization check: OK
Result check: OK
Environment dependency check: OK
All health checks passed.
Removing intermediate container e0df9c491e35
---> e73382d6b699
Successfully built e73382d6b699
Successfully tagged hello-flow:2020-11-11t10-09-20-404746-00-00
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2020.2\plugins\python\helpers\pydev\pydevd.py", line 1448, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2020.2\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Documentos/Coding/_trials/kubernetes/prefect/deployment_tutorial_flow.py", line 13, in <module>
flow.register(project_name="myproject")
File "C:\Documentos\Coding\_trials\kubernetes\venv\lib\site-packages\prefect\core\flow.py", line 1651, in register
idempotency_key=idempotency_key,
File "C:\Documentos\Coding\_trials\kubernetes\venv\lib\site-packages\prefect\client\client.py", line 788, in register
retry_on_api_error=False,
File "C:\Documentos\Coding\_trials\kubernetes\venv\lib\site-packages\prefect\client\client.py", line 281, in graphql
retry_on_api_error=retry_on_api_error,
File "C:\Documentos\Coding\_trials\kubernetes\venv\lib\site-packages\prefect\client\client.py", line 237, in post
retry_on_api_error=retry_on_api_error,
File "C:\Documentos\Coding\_trials\kubernetes\venv\lib\site-packages\prefect\client\client.py", line 413, in _request
session=session, method=method, url=url, params=params, headers=headers
File "C:\Documentos\Coding\_trials\kubernetes\venv\lib\site-packages\prefect\client\client.py", line 344, in _send_request
raise ClientError(msg)
prefect.utilities.exceptions.ClientError: 400 Client Error: Bad Request for url: http://localhost:4200/graphql
This is likely caused by a poorly formatted GraphQL query or mutation. GraphQL sent:
query {
mutation($input: create_flow_from_compressed_string_input!) {
create_flow_from_compressed_string(input: $input) {
id
}
}
}
variables {
{"input": {"project_id": "f89dd540-50d6-4abf-86f5-0119fa63bc73", "serialized_flow": "H4sIAFO4q18C/41TTW/bMAz9K4XPs2K3wYb1NmDracfehkHgZFoRIkuGPtIFgf/7SMW10yDFCvgg8VGPfI/0qXIwYPV4V+3QWl/31r9Un+6qdBxLdAzYo0pC+YCCQfE0Z0S1wy5bznLZWoqMEIgrYYgU+/WbWSDuy+VURZv1UkYyULe3CpUnc7nsVDLeiaf58EwYv/E5jTkxMT83Totv7siAcXP8NNENcvJSo8MACTuK9mAjEqCAepe9D2vv59Aej2sogV6ExL0ZpXcyjzEFhEFygMAUMhOmYLRGpjtVvXsj54xEAdbKmJXCGPtsudn9CwR9bnZaOjiANR0kf4sLnTYOxVViFA4PGGSOeIM1YApH2aGFC2UD/JUMGOTEhgWYAcnVNeXNVpRxMbmUVCnSJKRksBHtg2i31cQWYadx8Ys7DugUymUFSo47mODdgC4VfRb+oF1Q2h0gSVAgM4C+2svH++a+qduWvtQ2dfO1puu22X7Zfq4bujfV9H6PF6v20yuwPy5aKUZlJ5V3vdGrCe9TRTL+3N+pnLGTEGVUwYzpYtGKCHn7D4uoaAaL+HMq7Rxn/ldosVibyMPNwa4tM3uZ/mU1Ytz4MW3mRdqUpM2aIGagGDGfX6VfCR8h7T5i0KvX373a048xTf8AKiEd/mkEAAA=", "set_schedule_active": true, "version_group_id": null, "idempotency_key": null}}
}
python-BaseException
Carlo
11/11/2020, 5:49 PMCould not upgrade the database
I was on 0.13.3, killed the processes, pip installed 0.13.5, then restarted
graphql_1 | Could not upgrade the database!
graphql_1 | Error: Can't locate revision identified by '24f10aeee83e'
JC Garcia
11/11/2020, 6:46 PMenvironment = KubernetesJobEnvironment(job_spec_file=f"{local_dir_path}/job_spec.yaml")
...
with Flow("k8s-example-flow", storage=storage, environment=environment) as flow:
...
flow.register(project_name="Test Project")
However, when the job is scheduled via the UI, the k8s job spec does not match the one I generated and filled in via the job_spec_file. I understand that some properties will be overwritten, but no env vars or resource requests/limits are coming through. Any pointers?josh
11/11/2020, 7:34 PMThere is an initial prefect-job that is responsible for pulling your flow's environment and then creating the Kubernetes job that you set as the flow's environment. Are you running into an issue with the initial prefect-job?
This is actually something we are phasing out with the addition of the new RunConfig pattern (docs aren’t fully written yet and it’s still experimental). referenceJC Garcia
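A minimal sketch of the RunConfig pattern josh refers to (experimental at the time, so exact keyword arguments may differ between releases; the template path, image, and env values are placeholders): instead of a KubernetesJobEnvironment, the flow carries a run config that the Kubernetes agent applies when it creates the job.
from prefect import Flow, task
from prefect.run_configs import KubernetesRun

@task
def do_work():
    pass

with Flow("k8s-example-flow") as flow:
    do_work()

# The custom job template supplies env vars and resource requests/limits;
# a few common fields can also be set directly on the run config.
flow.run_config = KubernetesRun(
    job_template_path="path/to/job_spec.yaml",
    image="registry.example.com/my-flow:latest",
    env={"EXTRA_SETTING": "value"},
)

flow.register(project_name="Test Project")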
11/11/2020, 8:10 PMjob_spec_file
but it does not use itjosh
11/11/2020, 8:11 PMJC Garcia
11/11/2020, 8:12 PMjosh
11/11/2020, 8:15 PMJC Garcia
11/11/2020, 8:16 PMjob_spec_file
josh
11/11/2020, 8:17 PMJC Garcia
11/11/2020, 8:17 PM