Mohamed Ayoub Chettouh
08/18/2022, 4:41 PM

Vishy ganesh
08/18/2022, 5:40 PM

Jimmy Le
08/18/2022, 5:42 PM
I used the --work-queue flag, but the CLI says no such option exists. Removing the --work-queue flag resolved the issue.

Shanhui Bono
08/18/2022, 6:21 PM

Jared Robbins
08/18/2022, 8:19 PM

Tim Enders
08/18/2022, 8:20 PM
Is there a flatten call in Prefect 2.0? I have a list of lists after mapping tasks, and I used to be able to use flatten to coalesce that into a single large list.

Mars
08/18/2022, 8:46 PM
requests-oauthlib connection, and now all I need to do is:
1. Copy logging.yaml
2. Set my logger:
   requests_oauthlib:
     level: DEBUG
     handlers: [console]
     propagate: no
3. Set PREFECT_LOGGING_SETTINGS_PATH=logging.yaml in my .env file.
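Putting steps 1 and 2 together, the added entry would sit under the loggers section of the copied file. A sketch (hypothetical excerpt; the surrounding handlers and formatters come from Prefect's default logging.yaml):

```yaml
# logging.yaml (excerpt, assumed layout): only the added logger is shown.
loggers:
  requests_oauthlib:
    level: DEBUG
    handlers: [console]
    propagate: no
```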
And it works!

Mars
08/18/2022, 9:16 PM
Is there a create_markdown_artifact in Prefect 2.0? I don’t see markdown or artifacts mentioned in the migration guide.

Josh
08/18/2022, 9:56 PM
my_task.map(databases), but I don’t want to overload my databases, so I want to set concurrency limits at the task level to make sure there are only ever N < limit tasks running against a given database at one time. I know you can set task tags when creating the task object, but can I do so as the flow is executing against mapped tasks?

Blake Hamm
08/18/2022, 10:48 PM
Regarding the Deployment class: previously, I defined a KubernetesJob infrastructure block in a different Python script and used the .save() method to instantiate a block in the UI. It looks like this was removed in 2.1... How can I do this now?
Moreover, I'm trying to understand what to pass to the infrastructure parameter of the Deployment class. With the storage parameter, I just load the storage block that I previously defined. Can I do the same with the infrastructure block? If so, how do I load an existing KubernetesJob block? Or is the new workflow to pass the KubernetesJob dictionary directly into the Deployment class? If that's the case, how do I tie it to a work queue?
In general, I'm a big fan of the KubernetesJob block. I don't care whether it's an actual block or just a Python dictionary I pass into the deployment; either way, it's been extremely helpful for managing compute per deployment. I would love to streamline it with CI/CD. I know that's in the works, and I'm eager to see best practices for it and implement them.

yusuf yel
08/19/2022, 1:13 AM

kojisuganuma.dog
08/19/2022, 5:15 AM

Thomas Pedersen
08/19/2022, 6:02 AM
+--------------+-------------+------------------+-------------------+
|table_name    |relation_size|indexes_size      |total_relation_size|
+--------------+-------------+------------------+-------------------+
|log |15 GB |9060 MB |24 GB |
|task_run |5765 MB |17 GB |23 GB |
|task_run_state|13 GB |6308 MB |19 GB |
|flow_run |241 MB |472 MB |713 MB |
|flow_run_state|438 MB |118 MB |557 MB |
+--------------+-------------+------------------+-------------------+
Cyber Pyro
08/19/2022, 6:52 AM

Byron Kilian
08/19/2022, 7:10 AM

Yaron Levi
08/19/2022, 7:43 AM

Yaron Levi
08/19/2022, 7:44 AM

Yaron Levi
08/19/2022, 7:46 AM

Yaron Levi
08/19/2022, 7:48 AM

Alexander Belikov
08/19/2022, 8:39 AM
"A deployment additionally enables you to:
- Assign a work queue name to delegate deployment flow runs to work queues."
But in the deployment section there is no info on how to declare a queue to be associated with a deployment. Could you please clarify?

Erik Tiengo
08/19/2022, 8:50 AM
Can I pass parameters to the prefect deployment run command as plain text, or read them from a file? Thanks in advance.

Oscar Björhn
08/19/2022, 8:52 AM

max
08/19/2022, 9:38 AM
I have a flow that should execute some tasks on the same cluster. That means that if the flow gets a new message, it should use the same workers (cluster) as for previous messages instead of creating new ones. Do you have any idea how to solve my case (coz I don't)?

José Duarte
08/19/2022, 10:00 AM

David Hlavaty
08/19/2022, 10:05 AM

David Hlavaty
08/19/2022, 10:05 AM

Anna Geller
08/19/2022, 11:34 AM
prefect deployment build
"what about having two deployments of the same flow, but with different schedules" - this works too: you create 2 deployments of one flow.
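For reference, a sketch of what Anna describes: two deployment manifests for the same flow that differ only in name and schedule. Field names follow a typical prefect deployment build-generated deployment.yaml, but the exact schema may differ by Prefect version, and the names/values here are made up:

```yaml
# deployment-daily.yaml (hypothetical): same flow, daily cron schedule
name: my-flow-daily
schedule:
  cron: "0 6 * * *"
---
# deployment-hourly.yaml (hypothetical): same flow, hourly interval (seconds)
name: my-flow-hourly
schedule:
  interval: 3600
```

Applying each manifest yields two independent deployments of one flow, each running on its own schedule.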
David Hlavaty
08/19/2022, 1:12 PM
The prefect deployment build command only adds metadata to the deployment, unless I am missing something. It has no effect on where the files are stored in the storage, e.g. S3 in my case.
So if my storage points to s3://example/flow, then:
1. prefect deployment build --version 1 ... will upload the files to s3://example/flow
2. prefect deployment build --version 2 ... will overwrite the files in s3://example/flow
What I would expect (or want to be able to do) is for the version to be appended to the storage path:
1. prefect deployment build --version 1 ... will upload the files to s3://example/flow/1
2. prefect deployment build --version 2 ... will upload the files to s3://example/flow/2
That way, building the deployment again cannot interfere with any flow runs that are currently being scheduled. E.g. what if version 2 added extra dependencies to a base image, but another flow run that already fetched the image is starting and has not yet downloaded the source code? I appreciate that the odds of this are very low, but it is a consequence of the build artefacts not being immutable. Immutability is often a desirable property (just as it is often desirable to make Docker images immutable once uploaded to the repository).
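Until something like this is supported natively, one workaround in the spirit of what David describes is to bake the version into the storage path yourself before each build. A minimal sketch (versioned_prefix is a hypothetical helper, not a Prefect API):

```python
def versioned_prefix(base: str, version: int) -> str:
    """Return an immutable, per-version upload location by appending
    the deployment version to the storage base path."""
    return f"{base.rstrip('/')}/{version}"

# Each build targets its own prefix instead of overwriting the last one:
print(versioned_prefix("s3://example/flow", 1))  # s3://example/flow/1
print(versioned_prefix("s3://example/flow", 2))  # s3://example/flow/2
```

Because each version lands in a fresh prefix, a flow run that is mid-start against version 1 can never accidentally pick up version 2's files.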