Christopher Chong Tau Teng
12/07/2021, 9:56 AM
500 Server Error for <http+docker://localhost/v1.41/images/create?tag=v3&fromImage=gcr.io%2Fchristopherchong-mysdev00-id%2Fprefect-flows>: Internal Server Error ("unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: <https://cloud.google.com/container-registry/docs/advanced-authentication>")
Is there any way to pass Docker credentials to the Docker agent or DockerRun? Or is there some other way to authenticate a Docker agent running inside a container so it can pull images from GCR?

Romain P
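Not an official Prefect answer, but one common pattern for this GCR error is to authenticate the Docker daemon the agent talks to before the agent starts: GCR accepts the literal username `_json_key` with a service-account key file as the password (per the advanced-authentication page linked in the error), e.g. `cat key.json | docker login -u _json_key --password-stdin https://gcr.io`. A sketch of what that login stores, using a made-up key:

```python
import base64
import json

# Hypothetical service-account key -- in practice this is the JSON key file
# downloaded from GCP IAM and mounted into the agent's container.
service_account_key = {"type": "service_account", "project_id": "my-project"}

# GCR accepts the literal username "_json_key" with the full key JSON as
# the password.
user = "_json_key"
password = json.dumps(service_account_key)

# Shape of the entry `docker login` writes to ~/.docker/config.json under
# "auths" -> "gcr.io": a base64-encoded "user:password" pair.
auth_entry = {"auth": base64.b64encode(f"{user}:{password}".encode()).decode()}
print(auth_entry["auth"][:24], "...")
```

Once the daemon (or the mounted `~/.docker/config.json`) carries that auth entry, pulls of `gcr.io/...` images should succeed without Prefect itself needing the credentials.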
12/07/2021, 10:42 AM
prefect server create-tenant
I've tried with Python 3.8 and 3.9.9.
It worked a few days ago, but I had to drop the containers and recreate them. I think it is related to my Docker deployment, as I already have a previous Postgres install on 5432.
I'm willing to restart from the docker-compose file. Any help?

Guilherme Petris
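If the clash is the pre-existing Postgres already bound to host port 5432, one workaround sketch (assuming you are editing the compose file Prefect Server generates; service names may differ) is to remap only the host side of the port mapping:

```yaml
services:
  postgres:
    ports:
      - "5433:5432"   # host 5433 -> container 5432, avoiding the existing local Postgres
```

It may also be worth checking `prefect server start --help` for port override options before hand-editing the file.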
12/07/2021, 11:18 AM
from prefect import task, Flow
from prefect.executors import LocalDaskExecutor
import time

@task
def extract_reference_data():
    time.sleep(10)
    return 'hej'

@task
def extract_live_data(input):
    time.sleep(10)
    return f'{input}hejdå'

@task
def separate_task():
    time.sleep(10)
    return 'hoppsan'

with Flow("Aircraft-ETL", executor=LocalDaskExecutor()) as flow:
    reference_data = extract_reference_data()
    live_data = extract_live_data(reference_data)
    separate_task()

flow.run()
# flow.visualize()
Saurabh Indoria
12/07/2021, 11:38 AM
Pod prefect-job-94453cb1-2sw9q failed.
Container 'flow' state: terminated
Exit Code: 139
Reason: Error
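For context on that exit code: 139 follows the shell convention of 128 + signal number, i.e. signal 11 (SIGSEGV), so the container crashed on a segfault rather than being OOM-killed (the OOM killer usually shows as 137, SIGKILL). A quick stdlib decode:

```python
import signal

exit_code = 139
if exit_code > 128:
    # Exit codes above 128 mean the process was terminated by signal N = code - 128.
    sig = signal.Signals(exit_code - 128)
    print(f"terminated by signal {sig.value} ({sig.name})")
```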
Gagan Singh Saluja
12/07/2021, 2:13 PM

Guilherme Petris
12/08/2021, 9:52 AM

Christopher Chong Tau Teng
12/08/2021, 10:18 AM

Sylvain Hazard
12/09/2021, 8:41 AM
I have KubernetesRun-based flows that remain pending for some reason. For example, what happens if I try to submit a flow but there aren't enough resources available on my cluster at that moment? From my experience, those flows get re-submitted and another pod is created after some time, but what happens then? Will both pods run if given the resources? Is there a limit after which the engine kills the flow run because it is unable to run it properly?

Adam Everington
12/09/2021, 10:52 AM

William Clark
12/09/2021, 6:24 PM

Saurabh Indoria
12/10/2021, 4:36 AM
Can I just take the job_spec.yaml and modify the CPU section, leaving the rest as it is?

Christopher Chong Tau Teng
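On modifying only the CPU section: in a Kubernetes job spec those values live under the flow container's resources block, so a fragment like the following (structure assumed here, not copied from Prefect's actual template) is the only part that needs editing while everything else stays as generated:

```yaml
spec:
  template:
    spec:
      containers:
        - name: flow
          resources:
            requests:
              cpu: "500m"   # scheduler guarantee
            limits:
              cpu: "2"      # hard cap
```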
12/10/2021, 7:54 AM
get_available_tenant in the Python lib

jack
12/10/2021, 7:21 PM

Pierre Monico
12/11/2021, 2:36 PM
How can I escape an @ (as in user@host) when used in a connection string? If I replace it with %40, the graphql service complains about an invalid interpolation; if I leave it in, the part after the @ is parsed as the host (including the password etc.) by hasura… Is there any other format I can pass to --postgres-url?

Scarlett King
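One stdlib option for the @ problem: percent-encode the whole password before building the URL, using urllib.parse.quote with safe="" so every reserved character is escaped (the credentials below are made up):

```python
from urllib.parse import quote

user = "prefect"
password = "p@ss:word"  # hypothetical password containing reserved characters

# safe="" escapes all reserved characters: '@' -> '%40', ':' -> '%3A'
encoded = quote(password, safe="")
url = f"postgresql://{user}:{encoded}@localhost:5432/prefect"
print(url)
```

%40 is the correct URL-level escape; if a docker-compose file separately complains about interpolation, that usually concerns literal $ characters, which compose escapes as $$.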
12/13/2021, 1:22 PM

Payam Vaezi
12/13/2021, 3:09 PM
I see around 3GiB memory usage, while running in the cloud with Prefect Server I get above 16GiB, and the job is killed as a result. Any idea what may have caused this discrepancy in memory usage?

Aleksandr Liadov
12/13/2021, 5:11 PM
When I use server as the backend, all logs are displayed correctly (both Prefect logs and my custom logs).

William Clark
12/13/2021, 8:15 PM

William Clark
12/13/2021, 8:16 PM
import json
from typing import List

import s3fs
from prefect import task, Flow
from prefect.run_configs import ECSRun

@task(name="Create Task Definition")
def create_task(task_info: List):
    """Build and upload a task definition file to S3 from docker image and tag job parameters.

    Args:
        task_info (List): a list that contains the repository and tag strings

    Returns:
        dict: the modified task definition
    """
    fs = s3fs.S3FileSystem(use_ssl=False)
    bucket_path = 'prefect/task_definitions'
    task_definition = json.load(fs.open(f'{bucket_path}/model_template.json', 'rb'))
    # Update only the nested key; a chained assignment here would rebind
    # task_definition itself to the image string.
    task_definition['containerDefinitions'][0]['image_name'] = task_info[0]
    json.dump(task_definition, fs.open(f'{bucket_path}/{task_info[0]}_model_scoring.json', 'w'))
    return task_definition

with Flow(name="ECS Task Definition to Run Config") as flow:
    task_definition = create_task(task_info)
    run_config = ECSRun(task_definition=task_definition,
                        run_task_kwargs=dict(cluster="Innovation-Garage-AI-Cluster"))
    flow.run_config = run_config
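A Python pitfall worth flagging in snippets like the one above: chained assignment `a = b = value` assigns to each target left to right, so in `d = d['image'] = value` the name d is rebound to value before the subscript target is evaluated. A minimal illustration with a toy dict:

```python
d = {"image": "old"}
try:
    # d is rebound to the string "new" first, so the subscript assignment
    # then runs against that string and raises TypeError.
    d = d["image"] = "new"
except TypeError as exc:
    print("chained assignment failed:", exc)

# The intended update touches only the nested key:
d = {"image": "old"}
d["image"] = "new"
print(d)
```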
Liam England
12/14/2021, 7:23 PM

Daniel Komisar
12/14/2021, 7:51 PM

Michael Ulin
12/14/2021, 9:45 PM

Michael Ulin
12/14/2021, 9:45 PM
Unexpected error: ModuleNotFoundError("No module named 'google'")
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.8/site-packages/prefect/engine/runner.py", line 48, in inner
    new_state = method(self, state, *args, **kwargs)
  File "/opt/conda/envs/coiled/lib/python3.8/site-packages/prefect/engine/task_runner.py", line 926, in get_task_run_state
    result = self.result.write(value, **formatting_kwargs)
  File "/opt/conda/envs/coiled/lib/python3.8/site-packages/prefect/engine/results/gcs_result.py", line 77, in write
    self.gcs_bucket.blob(new.location).upload_from_string(binary_data)
  File "/opt/conda/envs/coiled/lib/python3.8/site-packages/prefect/engine/results/gcs_result.py", line 39, in gcs_bucket
    from prefect.utilities.gcp import get_storage_client
  File "/opt/conda/envs/coiled/lib/python3.8/site-packages/prefect/utilities/gcp.py", line 6, in <module>
    from google.oauth2.service_account import Credentials
ModuleNotFoundError: No module named 'google'
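This traceback means the environment running the flow lacks the Google client libraries that the GCS result imports lazily; the fix is adding them to that image (e.g. `pip install google-auth google-cloud-storage`, or Prefect 1.x's GCP extra, `pip install "prefect[gcp]"`, if I recall the extra's name right). A small preflight check you could run inside the image, using only the stdlib:

```python
import importlib.util

def gcp_deps_present() -> bool:
    """Return True if the google.oauth2 module needed by prefect.utilities.gcp is importable."""
    try:
        return importlib.util.find_spec("google.oauth2") is not None
    except ModuleNotFoundError:
        # The top-level "google" namespace package itself is missing.
        return False

print("gcp deps present:", gcp_deps_present())
```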
Raúl Mansilla
12/14/2021, 10:28 PM

Raúl Mansilla
12/15/2021, 11:15 AM

Will Skelton
12/15/2021, 5:04 PM

Connor Martin
12/15/2021, 5:11 PM
I modified merge() to allow me to pass kwargs to the Merge constructor so I can turn off that task's checkpointing, but the restart still picks up in the middle of my flow at the task that failed. I can't have this happen, because it requires previously downloaded data referenced in other tasks, and that data gets cleaned up on failure/success.
My question is: how can I restart a flow in the UI from the beginning, with the exact same parameters, without any task result caching?

Stéphan Taljaard
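On turning off result caching generally: in Prefect 1.x, checkpointing can be disabled per task with `@task(checkpoint=False)`, or globally through the configuration environment variable below (key name from memory; worth verifying against the docs). With checkpointing off, a restarted run has no persisted results to resume from:

```shell
# Globally disable task-result checkpointing for flows run from this
# environment (Prefect 1.x configuration key; verify before relying on it).
export PREFECT__FLOWS__CHECKPOINTING=false
```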
12/15/2021, 8:27 PM

Bogdan Bliznyuk
12/16/2021, 7:13 AM

Payam Vaezi
12/16/2021, 1:38 PM

Payam Vaezi
12/16/2021, 1:38 PM

Anna Geller
12/16/2021, 1:47 PM

Payam Vaezi
12/16/2021, 1:55 PM