Nathan Atkins
07/14/2021, 5:35 PM
Jonathan Wright
07/14/2021, 5:39 PM
Is it possible to combine target and cache_for? So far I've not got this to work*; is this supported?
*the target file is written and used by subsequent flow runs, but it is never invalidated
import datetime

from prefect import task
from prefect.engine.cache_validators import all_parameters
from prefect.engine.results import S3Result

result_location_and_target = "cache/{project_name}/{flow_name}/{task_name}.prefect_result"
s3result = S3Result(bucket="bucket-name", location=result_location_and_target)

@task(
    cache_for=datetime.timedelta(minutes=10),
    cache_validator=all_parameters,
    checkpoint=True,
    target=result_location_and_target,
    result=s3result,
)
def my_task():
    ...
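Worth noting: when the target file already exists, the task is marked Cached from the target alone, and cache_for/cache_validator are never consulted, which matches the behavior described above. A hedged workaround sketch is to template time into the target so it rolls over on its own; the location string below is illustrative, and {today} is rendered from prefect.context at runtime:

from prefect import task
from prefect.engine.results import S3Result

# Illustrative: a fresh target file per day stands in for time-based invalidation;
# {today} comes from prefect.context when the target is rendered.
location = "cache/{flow_name}/{task_name}-{today}.prefect_result"

@task(
    checkpoint=True,
    target=location,
    result=S3Result(bucket="bucket-name", location=location),
)
def cached_daily():
    ...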
Blake List
07/15/2021, 4:01 AM
Talha
07/15/2021, 3:57 PM
Kathryn Klarich
07/15/2021, 3:59 PM
My flow runs get stuck in "Submitted for execution" and eventually, after three retries, the flow is marked as failed with "A Lazarus process attempted to reschedule this run 3 times without success. Marking as failed." On the agent side, the only log I see is "Deploying flow run". It appears that the agent is creating the task definition, running the task, and then immediately de-registering it (as seen here), which could be the problem: I don't see how the task can run if the definition is de-registered before the task run is complete. I have looked through this issue, but I don't think this is the same problem because I am using Prefect Cloud. Any help is much appreciated.
Nishtha Varshney
07/15/2021, 5:19 PM
Charles Liu
07/15/2021, 5:43 PM
Robert Bastian
07/15/2021, 5:49 PM
Mark McDonald
07/15/2021, 7:39 PM
Soren Daugaard
07/15/2021, 8:27 PM
The GraphQL schema defines three UUID types: uuid, _uuid, and UUID. Their definitions look identical; all are SCALAR values. What is the purpose of having three different UUID types defined in the GraphQL schema?
Pedro Machado
07/15/2021, 9:05 PM
I'm moving my flows to Kubernetes with the KubernetesRun run config. I wrote a task that reads the secrets from the file system. However, locally, I was using environment variables for secrets. I'll probably have to find a way to mount the secrets locally on the Docker container, but before I go too far down this path, I'd like to get some input from the community. I suppose I could try to set up Kubernetes locally, but I know very little about Kubernetes; this is my first experience with it.
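A minimal sketch of one way to bridge the two environments (mount path, helper name, and variable names are hypothetical): prefer a file-mounted secret when it exists and fall back to an environment variable for local runs:

import os
from pathlib import Path

from prefect import task

@task
def read_secret(name: str, mount_dir: str = "/var/secrets") -> str:
    # Hypothetical helper: use the Kubernetes-style file mount when present,
    # otherwise fall back to an environment variable for local runs.
    path = Path(mount_dir) / name
    if path.exists():
        return path.read_text().strip()
    return os.environ[name]
Ayla Khan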
07/15/2021, 9:23 PM
When I read a task result back via Client.get_flow_run_info(..), the read function uses the default pickle serializer even though the result was written with a JSONSerializer. I used prefect 0.15.1 both in the flow environment and to run the read code. Am I missing something? Thank you in advance!
import prefect
from prefect import task
from prefect.engine.results import GCSResult
from prefect.engine.serializers import JSONSerializer

result = GCSResult(
    bucket="prefect-cloud-results",
    location="{flow_name}/{flow_run_id}/provenance.json",
    serializer=JSONSerializer(),
)

@task(result=result)
def set_provenance_data(flow_run_id: str, prefect_cloud_client: prefect.client.client.Client = None):
    ...
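For reference, a minimal sketch of reading that result back with the matching serializer instead of the default pickle one (the flow-run id in the path is a placeholder); Result.read returns a new Result whose .value holds the deserialized data:

from prefect.engine.results import GCSResult
from prefect.engine.serializers import JSONSerializer

# Re-create the result with the same serializer that wrote the data.
reader = GCSResult(bucket="prefect-cloud-results", serializer=JSONSerializer())
loaded = reader.read("my_flow/<flow-run-id>/provenance.json")  # placeholder path
print(loaded.value)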
Erik Amundson
07/15/2021, 10:06 PM
I need to build my flow's image for a specific platform with docker buildx build --platform linux/amd64. Is there any way to do this in the prefect.storage.docker.Docker class, or can I build the image locally first and then add flows to a pre-built image?
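Docker storage builds through the Docker SDK rather than buildx, so a hedged sketch of the pre-built-image route (repo, path, and image names are placeholders) is to store the flow outside the image and point the run config at the image built with buildx:

from prefect import Flow
from prefect.run_configs import DockerRun
from prefect.storage import GitHub

with Flow("buildx-flow") as flow:
    ...

# Image built separately, e.g.:
#   docker buildx build --platform linux/amd64 -t my-registry/flow-image:latest .
flow.storage = GitHub(repo="my-org/my-repo", path="flows/buildx_flow.py")  # placeholders
flow.run_config = DockerRun(image="my-registry/flow-image:latest")  # placeholder image
Jason Prado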
07/16/2021, 12:01 AM
Failed to retrieve task state with error: ClientError([{'path': ['get_or_create_task_run_info'], 'message': 'Expected type UUID!, found ""; Could not parse UUID:
Sebastián Tabares
07/16/2021, 1:05 AM
Is there a way to use AWSSecretsManager with the prefect task library tasks? I can't figure out how to force the slack_notifier task to use that secret provider. For example, this code throws an error because Slack tries to load PrefectSecret before the Flow is initialized:
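As a hedged workaround sketch (not the missing snippet above; the secret id and payload shape are assumptions), a custom state handler can fetch the webhook from AWS Secrets Manager at runtime with boto3, sidestepping the PrefectSecret lookup that slack_notifier performs:

import json

import boto3
import requests

def slack_on_failure(tracked_obj, old_state, new_state):
    # Fetch the webhook URL from AWS Secrets Manager at runtime instead of
    # relying on a PrefectSecret resolved by slack_notifier.
    if new_state.is_failed():
        resp = boto3.client("secretsmanager").get_secret_value(
            SecretId="prod/slack-webhook"  # hypothetical secret id
        )
        url = json.loads(resp["SecretString"])["url"]  # assumed payload shape
        requests.post(url, json={"text": f"{tracked_obj.name} failed: {new_state.message}"})
    return new_state
mark.smith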
07/16/2021, 1:14 AM
Jacob Blanco
07/16/2021, 2:40 AM
Goh Rui Zhi
07/16/2021, 3:14 AM
from datetime import datetime, timedelta

from prefect import Flow, Parameter

# schedule and daily_job are defined elsewhere in the original snippet
with Flow("my_daily_flow", schedule) as flow:
    start_date = Parameter("start_date", default=datetime.utcnow().date() - timedelta(days=5))
    end_date = Parameter("end_date", default=datetime.utcnow().date())
    daily_job.set_dependencies(keyword_tasks={"start_date": start_date, "end_date": end_date})

flow.register("test")
Will these defaults be re-evaluated on each scheduled run? This question might have been asked before, but I would like some confirmation. Thank you!
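If the intent is for the dates to track each run: Parameter defaults are evaluated once, when the flow is built and registered, not on every scheduled run. A hedged sketch of computing the window at runtime instead (the daily_job here is a stand-in for the task in the snippet above):

from datetime import timedelta

import prefect
from prefect import Flow, task

@task
def date_window(days_back=5):
    # prefect.context is populated per run, so this is evaluated at runtime.
    end = prefect.context.scheduled_start_time.date()
    return {"start_date": end - timedelta(days=days_back), "end_date": end}

@task
def daily_job(start_date, end_date):
    ...

with Flow("my_daily_flow") as flow:
    window = date_window()
    daily_job(start_date=window["start_date"], end_date=window["end_date"])
Simone Cittadini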
07/16/2021, 7:00 AM
Rob Fowler
07/16/2021, 11:30 AM
Michael Terry
07/16/2021, 1:41 PM
I'm getting the error field "key_value" not found in type: "query_root", but my query works in the interactive API console. (I'm using Prefect Cloud.) Have folks seen this error before?
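For reference, a minimal sketch of issuing the same query through the Python client (the selection set is a guess at the original query); if this fails while the interactive console succeeds, it is worth confirming both are authenticated against the same backend and tenant:

from prefect import Client

client = Client()  # uses the API key/tenant from the local Prefect config
print(client.graphql(
    """
    query {
      key_value {
        key
        value
      }
    }
    """
))
haven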
07/16/2021, 6:24 PM
Jeremy Phelps
07/16/2021, 6:45 PM
One of my task runs is stuck with this message: "Queued due to concurrency limits. The local process will attempt to run the task for the next 10 minutes, after which time it will be made available to other agents." That string does not appear in the open-source part of Prefect, so it must be part of Prefect Cloud. The concurrency limit on that task is 10, and things were working until I changed some of Dask's configuration parameters to try to resolve an issue with it. The most likely cause of the above message is that some error happened and didn't get handled correctly. https://cloud.prefect.io/stockwell/flow-run/4460703d-3c91-4573-b85c-a4b001048999
Ben Muller
07/16/2021, 7:18 PM
Harry Baker
07/16/2021, 8:11 PM
Instead of adding AWS credentials to the secrets context with export PREFECT__CONTEXT__SECRETS__AWS_CREDENTIALS='{"ACCESS_KEY": "abcdef", "SECRET_ACCESS_KEY": "ghijklmn"}', could I do this in the prefect config toml file? That's where I was storing my other secret keys, using prefect.config.api.XXX. This probably isn't the 'best' way to do this in production, but I'm used to using dotenv and os to pass in secret values from a .env file.
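In principle yes: local secrets can live under [context.secrets] in config.toml, the same namespace the environment variable above targets. A hedged sketch; the nested-table form for the dict-valued credential is an assumption:

# ~/.prefect/config.toml
[context.secrets]
MY_API_KEY = "abcdef"  # simple string secret

[context.secrets.AWS_CREDENTIALS]  # dict-valued secret as a nested table (assumed)
ACCESS_KEY = "abcdef"
SECRET_ACCESS_KEY = "ghijklmn"
Ben Muller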
07/17/2021, 11:24 AM
Michael Warnock
07/17/2021, 4:12 PM
I have a repo, feature-generator, which contains both worker/orchestration logic and the code for doing the work. I added task and flow definitions to it, but with GitHub storage the flow can't find the other modules in that repo (I've seen https://github.com/PrefectHQ/prefect/discussions/4776 and understand this is intentional). My question is how best to structure things so that my flow can use that repo's code but can also execute a parameterized run from feature-generator, on commit, through CI (because that's how we start the job right now). Obviously, I could make feature-generator a package and depend on it from a new flows repo, but having feature-generator start the run would create a circular dependency. Would you split it into three repos, with one of them just being responsible for executing the flow? I don't love that idea, but maybe that's best practice?
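One hedged sketch that avoids a third repo (registry URL and package source are placeholders): keep the flow definitions in feature-generator but pip-install the repo's own package into the storage image, so the flow's imports resolve at runtime while CI in the same repo still registers and triggers the run:

from prefect import Flow
from prefect.storage import Docker

with Flow("feature-flow") as flow:
    ...

# Bake the package into the image so flow code can import its modules.
flow.storage = Docker(
    registry_url="my-registry.example.com",  # placeholder
    python_dependencies=["git+https://github.com/my-org/feature-generator.git"],  # placeholder
)
Michael Warnock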
07/17/2021, 4:21 PM
haven
07/18/2021, 6:00 AM
It seems the assumption is that KubernetesSecret should not rely on any upstream tasks/parameters. However, I think that might not be a great assumption. I'm using an env parameter that denotes "DEV" or "PROD", which then decides which secret I want to retrieve from k8s later (usually a database secret). Any plan to allow KubernetesSecret to be a dynamic task, interpreted naturally in the flow, without having to manually use flow.set_dependencies?
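In the meantime, a hedged sketch of an env-dependent secret using the control-flow building blocks that do exist, case and merge; the KubernetesSecret import path and call signature are assumptions:

from prefect import Flow, Parameter, case
from prefect.tasks.control_flow import merge
from prefect.tasks.kubernetes import KubernetesSecret  # import path assumed

get_secret = KubernetesSecret(secret_key="connection-string")  # signature assumed

with Flow("env-aware-secret") as flow:
    env = Parameter("env", default="DEV")
    with case(env, "DEV"):
        dev_secret = get_secret(secret_name="dev-db-secret")
    with case(env, "PROD"):
        prod_secret = get_secret(secret_name="prod-db-secret")
    # merge resolves to whichever branch actually ran
    db_secret = merge(dev_secret, prod_secret)
Austen Bouza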
07/18/2021, 5:59 PM
Is it possible to use a DictCursor with the existing SnowflakeQuery task? The source shows:
try:
    with conn:
        with conn.cursor() as cursor:
            executed = cursor.execute(query, params=data).fetchall()
    conn.close()
    return executed
while what I would want to use is:
try:
    with conn:
        with conn.cursor(DictCursor) as cursor:
            executed = cursor.execute(query, params=data).fetchall()
    conn.close()
    return executed
Has anyone else dealt with this before? I'd like to simply inject the DictCursor class if possible, but from the way this block of code is written, it looks like the only way to do it is to subclass SnowflakeQuery and overwrite the entire run method.
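Short of subclassing, a hedged alternative sketch is a small custom task that talks to the Snowflake connector directly and injects DictCursor (the connection kwargs are placeholders):

import snowflake.connector
from snowflake.connector import DictCursor

from prefect import task

@task
def snowflake_dict_query(query, connect_kwargs, data=None):
    # Same shape as SnowflakeQuery.run, but with DictCursor injected so rows
    # come back as dicts keyed by column name instead of tuples.
    conn = snowflake.connector.connect(**connect_kwargs)
    try:
        with conn:
            with conn.cursor(DictCursor) as cursor:
                return cursor.execute(query, params=data).fetchall()
    finally:
        conn.close()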