Hui Zheng
01/15/2021, 5:44 PM
Prefect cli?

Riley Hun
01/15/2021, 6:13 PM

Joseph
01/15/2021, 8:38 PM

Lucas Kjaero-Zhang
01/15/2021, 8:40 PM
agent | Service token file does not exists. Using out of cluster configuration option.
I’ve confirmed that the service account, role, and rolebinding all exist on the server. Here’s a screenshot of the pod settings; attaching the rest in a thread.

Billy McMonagle
01/15/2021, 8:42 PM

jeff n
01/15/2021, 11:16 PM
projects
with one set of runs, and accounts
with another set of runs, but both of the runs are the same flow code and need to run separately.

BK Lau
01/16/2021, 12:31 AM

Riley Hun
01/16/2021, 12:40 AM
PermissionError
(see reply for full stack trace of the error). I tried to look through the thread to see if others have encountered a similar error, but no luck. Any insight on this?

Sonny
01/16/2021, 1:44 AM

Hui Zheng
01/16/2021, 3:35 AM

Amanda Wee
01/16/2021, 11:56 AM
idempotency_key
matches the one from the previous flow registration, the flow version is not bumped. However, does the serialised flow get uploaded to storage anyway? I'm too new to the codebase (and GraphQL in particular) to make sense of the details of the flow registration code.

Tadas
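Regarding the idempotency_key question above: a minimal sketch of the semantics (not Prefect's actual implementation — the `Registry` class and `serialized_hash` helper here are hypothetical stand-ins) showing how a registry can compare the supplied key against the last stored one and only bump the version when they differ.

```python
import hashlib
import json


def serialized_hash(flow_dict):
    # Stable digest of the serialized flow: identical flows yield identical keys.
    return hashlib.sha256(
        json.dumps(flow_dict, sort_keys=True).encode()
    ).hexdigest()


class Registry:
    """Hypothetical registry illustrating idempotency-key semantics."""

    def __init__(self):
        self.version = 0
        self.last_key = None

    def register(self, flow_dict, idempotency_key=None):
        key = idempotency_key or serialized_hash(flow_dict)
        if key != self.last_key:
            # Only bump the version when the key (i.e. the flow) changed.
            self.version += 1
            self.last_key = key
        return self.version
```

Registering the same flow twice returns the same version; whether the storage upload is skipped as well is a separate question that this sketch does not answer.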
01/16/2021, 3:51 PM

Marwan Sarieddine
01/16/2021, 4:55 PM

jack
01/18/2021, 2:30 AM

Aiden Price
01/18/2021, 7:16 AM
[2021-01-17 00:00:11,063] ERROR - Prefect-Kubed | Error while managing existing k8s jobs
Traceback (most recent call last):
  File "/usr/local/.venv/lib/python3.8/site-packages/prefect/agent/kubernetes/agent.py", line 362, in heartbeat
    self.manage_jobs()
  File "/usr/local/.venv/lib/python3.8/site-packages/prefect/agent/kubernetes/agent.py", line 219, in manage_jobs
    event.last_timestamp
TypeError: '<' not supported between instances of 'NoneType' and 'datetime.datetime'
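The traceback above comes from comparing event.last_timestamp values when one of them is None. A hedged sketch of the kind of guard that avoids it (the KubeEvent type below is a stand-in for the Kubernetes client's event objects, not Prefect's actual fix):

```python
from collections import namedtuple
from datetime import datetime, timezone

# Stand-in for kubernetes.client's event objects.
KubeEvent = namedtuple("KubeEvent", ["reason", "last_timestamp"])


def sort_events(events):
    # Some events arrive with last_timestamp=None; comparing None against a
    # datetime raises TypeError, so fall back to an epoch sentinel for sorting.
    sentinel = datetime.min.replace(tzinfo=timezone.utc)
    return sorted(events, key=lambda e: e.last_timestamp or sentinel)
```

Events with a missing timestamp then sort first instead of crashing the agent's heartbeat.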
Sven Teresniak
01/18/2021, 9:21 AM
ValueError: Multiple flows cannot be used with the same resource block
In which direction do I have to search for a solution? We heavily rely on a ResourceManager available for multiple flows…

Adam Roderick
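On the ValueError above: one plausible direction (an illustrative sketch, not Prefect's documented fix) is to create a fresh resource block per flow via a factory instead of sharing one instance, since a resource block instance binds to the flow it is first used in. The `ResourceBlock` class below is a toy stand-in.

```python
class ResourceBlock:
    """Toy stand-in for a resource-manager block that binds to one flow."""

    def __init__(self):
        self.owner = None

    def bind(self, flow_name):
        # A block already owned by one flow cannot be reused by another.
        if self.owner is not None and self.owner != flow_name:
            raise ValueError(
                "Multiple flows cannot be used with the same resource block"
            )
        self.owner = flow_name


def make_block():
    # Factory: each flow gets its own block, so no cross-flow sharing occurs.
    return ResourceBlock()
```

Sharing one block across flows reproduces the error; calling the factory once per flow avoids it.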
01/18/2021, 12:42 PM

Matic Lubej
01/18/2021, 1:46 PM

Matic Lubej
01/18/2021, 2:05 PM

Matic Lubej
01/18/2021, 2:06 PM

Kieran
01/18/2021, 2:40 PM
import docker
from prefect import Flow
from prefect.run_configs import ECSRun
from prefect.schedules import CronSchedule
from prefect.storage import Docker
from prefect.utilities.debug import is_serializable

default_client = docker.from_env()

FLOW_NAME = "hello-flow"
flow_schedule = CronSchedule("0 8 * * *")
flow_storage = Docker(
    base_url=default_client.api.base_url,
    tls_config=docker.TLSConfig(default_client.api.cert),
    registry_url="_________.dkr.ecr.eu-west-2.amazonaws.com/xxxxx/prefect"
)
flow_run_config = ECSRun(
    cpu="512",
    memory="512",
    run_task_kwargs={"requiresCompatibilities": ["FARGATE"], "compatibilities": ["FARGATE"]}
)

with Flow(
    name=FLOW_NAME,
    schedule=flow_schedule,
    storage=flow_storage,
    run_config=flow_run_config
) as flow:
    say_hello()

if is_serializable(flow):
    flow.register(project_name="Test", registry_url=flow_storage)
else:
    raise TypeError("Flow did not serialise.")
We are getting the following error from our task logs:
An error occurred (InvalidParameterException) when calling the RunTask operation: Task definition does not support launch_type FARGATE.
In an attempt to resolve this issue I tried adding run_task_kwargs as above, but with no luck.
Does anyone have any pointers?
(I can see from the ECS task definition panel that Compatibilities is set to EC2 and Requires compatibilities is blank, and from this thread that could be the cause...)

Jeff Williams
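On the FARGATE error above: the message usually means the task definition itself was registered without FARGATE in requiresCompatibilities — run-time kwargs can't add it after the fact. A hedged sketch of what a Fargate-capable task definition looks like when registered with boto3; the family name, image, and sizes here are made up.

```python
def fargate_task_definition(family, image, cpu="512", memory="1024"):
    # Kwargs for boto3's ecs.register_task_definition(**...). Fargate requires
    # awsvpc networking, task-level cpu/memory strings, and the
    # requiresCompatibilities flag set at registration time.
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",
        "cpu": cpu,
        "memory": memory,
        "containerDefinitions": [
            {"name": family, "image": image, "essential": True}
        ],
    }
```

Registering would then be `boto3.client("ecs").register_task_definition(**fargate_task_definition(...))`, after which RunTask with launch type FARGATE should stop rejecting the definition.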
01/18/2021, 9:51 PM

Sai Srikanth
01/18/2021, 11:04 PM

Felix Vemmer
01/18/2021, 11:57 PM
Unexpected error: TypeError("__init__() got an unexpected keyword argument 'client_options'")
Traceback (most recent call last):
  File "/Users/felixvemmer/.pyenv/versions/3.8.6/envs/automation_beast/lib/python3.8/site-packages/prefect/engine/runner.py", line 48, in inner
    new_state = method(self, state, *args, **kwargs)
  File "/Users/felixvemmer/.pyenv/versions/3.8.6/envs/automation_beast/lib/python3.8/site-packages/prefect/engine/task_runner.py", line 891, in get_task_run_state
    result = self.result.write(value, **formatting_kwargs)
  File "/Users/felixvemmer/.pyenv/versions/3.8.6/envs/automation_beast/lib/python3.8/site-packages/prefect/engine/results/gcs_result.py", line 77, in write
    self.gcs_bucket.blob(new.location).upload_from_string(binary_data)
  File "/Users/felixvemmer/.pyenv/versions/3.8.6/envs/automation_beast/lib/python3.8/site-packages/prefect/engine/results/gcs_result.py", line 41, in gcs_bucket
    client = get_storage_client()
  File "/Users/felixvemmer/.pyenv/versions/3.8.6/envs/automation_beast/lib/python3.8/site-packages/prefect/utilities/gcp.py", line 53, in get_storage_client
    return get_google_client(storage, credentials=credentials, project=project)
  File "/Users/felixvemmer/.pyenv/versions/3.8.6/envs/automation_beast/lib/python3.8/site-packages/prefect/utilities/gcp.py", line 31, in get_google_client
    client = Client(project=project, credentials=credentials)
  File "/Users/felixvemmer/.pyenv/versions/3.8.6/envs/automation_beast/lib/python3.8/site-packages/google/cloud/storage/client.py", line 122, in __init__
    super(Client, self).__init__(
TypeError: __init__() got an unexpected keyword argument 'client_options'
I am running a task that returns a pd.DataFrame
which I am trying to store into Google Cloud Storage:
pandas_serializer = PandasSerializer(
    file_type='csv'
)
gcs_result = GCSResult(
    bucket='tripliq-data-lake',
    serializer=pandas_serializer,
    location=f'linkedin_top_posts/{datetime.datetime.now().strftime("%Y%m%d-%H%M%S")}_linkedin_post_likes.csv'
)
like_linkedin_feed = LikeLinkedInFeed(
    result=gcs_result
)
I don’t understand the source code too well, but I think it’s referring to this definition in
site-packages/google/cloud/storage/client.py:
def __init__(
    self,
    project=_marker,
    credentials=None,
    _http=None,
    client_info=None,
    client_options=None,
):
Any help is very much appreciated!

Alex Rud
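The client_options TypeError above typically points to mismatched versions of the google-cloud-* packages: the installed google-cloud-storage passes client_options to a base class from an older google-cloud-core that doesn't accept it. Upgrading both together usually resolves it. A small sketch for surfacing the installed versions before upgrading (the helper name is hypothetical):

```python
from importlib.metadata import PackageNotFoundError, version


def report_versions(packages):
    # Collect installed versions so mismatched google-cloud-* pins are visible.
    found = {}
    for name in packages:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            found[name] = None
    return found
```

For example, `report_versions(["google-cloud-storage", "google-cloud-core"])` followed by `pip install -U google-cloud-storage google-cloud-core` if the two are out of step.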
01/19/2021, 3:42 AM

Greg Roche
01/19/2021, 2:45 PM
LocalDaskExecutor
flow, using a LocalAgent running inside a Docker image? TypeError: start() missing 1 required positional argument: 'self'
Edit: solved, I wasn't initialising the LocalDaskExecutor.
flow.executor = LocalDaskExecutor    # wrong
flow.executor = LocalDaskExecutor()  # this works
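The TypeError Greg hit is a general Python gotcha, reproduced in miniature below: assigning the class itself instead of an instance means start() is later called unbound, so self goes missing. The Executor class here is a stand-in for LocalDaskExecutor.

```python
class Executor:
    """Stand-in for LocalDaskExecutor to show class vs. instance assignment."""

    def start(self):
        return "started"


def run(executor):
    # Prefect eventually calls executor.start(); if `executor` is the class
    # itself, start() is called unbound and raises the 'missing self' TypeError.
    return executor.start()


try:
    run(Executor)            # wrong: passes the class
except TypeError as exc:
    error_message = str(exc)

result = run(Executor())     # right: passes an instance
```

The same rule applies to any Prefect executor, agent, or storage object: always assign an instance, not the class.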
Joël Luijmes
01/19/2021, 3:29 PM

Pedro Machado
01/19/2021, 4:07 PM
supervisord
a while ago and was getting a permission error when trying to access the Docker engine. I never got it to work.
I'd like to run the Docker Agent inside of a container managed by Docker Compose.
I have a couple of questions about this setup:
1. I am setting the restart policy to always. Would this be enough to restart the agent if it failed, or in case of a restart of the host?
2. What is the best way to give the agent access to the Docker engine running on the host?
Thank you!

SK
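For Pedro's two questions, a common pattern is restart: always (which does restart the container on failure and on host reboot, provided the Docker daemon itself comes back up) plus bind-mounting the host's Docker socket so the containerized agent can start flow containers. Sketched here as a plain Python mapping mirroring a docker-compose service; the service name, image tag, and CLI invocation are assumptions, not verified values.

```python
# Sketch of a docker-compose service for a Prefect Docker agent, expressed as
# the equivalent Python mapping. Mounting /var/run/docker.sock gives the
# containerized agent access to the host's Docker engine.
agent_service = {
    "prefect-agent": {
        "image": "prefecthq/prefect:latest",      # assumed image tag
        "command": "prefect agent docker start",  # assumed CLI invocation
        "restart": "always",                      # restart on failure / host reboot
        "volumes": ["/var/run/docker.sock:/var/run/docker.sock"],
    }
}
```

Note that exposing the Docker socket grants the container root-equivalent control of the host's Docker engine, so it should only be done for trusted workloads.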
01/19/2021, 5:28 PM

SK
01/19/2021, 5:28 PM

SK
01/19/2021, 5:28 PM

Michael Adkins
01/19/2021, 5:35 PM
pip --version, python --version, and which prefect?

SK
01/19/2021, 5:41 PM

Michael Adkins
01/19/2021, 5:47 PM
which prefect? pip show prefect
SK
01/19/2021, 5:50 PM

Michael Adkins
01/19/2021, 5:53 PM
You installed with pip3 and python3, but in the command you just used python to run your script.

SK
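The pip3/python interpreter mismatch above is common: pip3 may install into one interpreter while plain python runs another. One way to avoid it is to always install through the interpreter you run with. The snippet below just shows how to see which interpreter and search path are actually in use.

```python
import sys

# The interpreter actually executing this script; installing with
# `<this path> -m pip install prefect` guarantees pip and python agree.
interpreter = sys.executable

# Where this interpreter looks for installed packages.
search_path = sys.path

print(interpreter)
```

If the path printed here differs from what `pip3 --version` reports, the two tools are operating on different Python installations.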
01/19/2021, 5:53 PM

Greg Roche
01/19/2021, 8:54 PM
docker-compose couldn't be found on the system, so it needs to be installed, or at least be on the system path, so that Prefect can interact with it. FWIW, the "installation" section of the docs does mention that both Docker and Docker Compose must be installed in order to run Prefect Server: https://docs.prefect.io/core/getting_started/installation.html#running-the-local-server-and-ui

Michael Adkins
01/19/2021, 10:03 PM