Tom Shaffner
12/14/2021, 3:24 PM

Pedro Machado
12/14/2021, 3:42 PM
`loguru`
https://github.com/Delgan/loguru
It's used inside of a task. When I run the flow locally, I see the loguru output, but when it runs remotely (ECS), I don't see the logs in Prefect Cloud. How could I get it to write to the Prefect logs?
Also, how should I set up my ECS flow to get it to write the container logs to CloudWatch?
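[A sketch for future readers, not from the original thread: a common pattern is to add a loguru sink that re-emits records into the standard `logging` tree, which Prefect 1.x can then capture via its extra-loggers setting. The handler below is plain stdlib so it runs without loguru installed; the loguru and Prefect wiring is shown in comments and should be treated as an assumption to verify against the docs.]

```python
import logging

# Hedged sketch: forward loguru records into the stdlib `logging` tree so Prefect
# can attach its own handlers to them. With loguru installed, you would register
# this handler as a sink (loguru accepts logging.Handler objects as sinks):
#     from loguru import logger
#     logger.add(PropagateHandler(), format="{message}")
# and tell Prefect 1.x to capture that logger (assumption -- check your version):
#     export PREFECT__LOGGING__EXTRA_LOGGERS="['loguru']"

class PropagateHandler(logging.Handler):
    """Re-emit each record through the stdlib logger of the same name."""

    def emit(self, record: logging.LogRecord) -> None:
        logging.getLogger(record.name).handle(record)
```

For the CloudWatch half of the question, that part is usually handled outside Prefect entirely, via the `awslogs` log driver in the ECS task definition.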
Thanks!

Leanna Morinishi
12/14/2021, 5:39 PM
`query_list` is a list of `str`, each of which is a valid query. Let's say there are 3 queries and each query gives me back 10 rows. This task works fine.
data = execute_test_query.map(query=query_list)
Now I want to transform a concatenated dataframe in its entirety.
@task
def n_rows(df):
    rows = df.shape[0]
    print(f"There are {rows} rows!")
    return rows
I was expecting data_2 = n_rows.map(flatten(data)) to give "There are 30 rows!"
, but I get key errors. Any idea what I need to do to flatten `data`?
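[A sketch of what is likely going on, with pure-Python stand-ins so it runs without Prefect or pandas; `concat_all` is an illustrative name, not from the thread. `flatten(data)` yields one element per row, so the mapped task runs 30 times on single rows; a single "There are 30 rows!" needs an unmapped reduce step that receives the whole list, e.g. a task wrapping `pd.concat`.]

```python
# Pure-Python stand-in for the mapped pipeline (lists play the role of
# DataFrames), showing why n_rows.map(flatten(data)) runs once per row
# instead of once over the concatenated whole.

def execute_test_query(query):
    # pretend each query returns 10 rows
    return [f"{query}-row-{i}" for i in range(10)]

def n_rows(rows):
    return len(rows)

# like: data = execute_test_query.map(query=query_list)
query_list = ["q1", "q2", "q3"]
data = [execute_test_query(q) for q in query_list]

# flatten(data) yields 30 row-level elements, so a mapped task sees rows,
# not dataframes -- hence 30 calls (and key errors on row objects)
flattened = [row for sub in data for row in sub]

# The usual fix: an unmapped reduce step that concatenates first, then counts.
# In real Prefect 1.x this would be a @task wrapping pd.concat, called as
#   total = n_rows(concat_dfs(data))   # note: no .map
def concat_all(frames):
    combined = []
    for frame in frames:
        combined.extend(frame)
    return combined

total = n_rows(concat_all(data))
print(f"There are {total} rows!")  # There are 30 rows!
```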

Jason Motley
12/14/2021, 6:27 PM

Royzac
12/14/2021, 7:16 PM

David Yang
12/14/2021, 8:23 PM

Amanda Wee
12/14/2021, 8:51 PM

Pedro Machado
12/14/2021, 10:22 PM
An `S3Result` with a `target` is configured for this task, so the output of the API lands in S3 in JSON format.
• A pipe in Snowflake will detect and load these files.
My question: is relying on the Prefect results an acceptable way to save data to S3, or should I have an explicit task that handles writing to S3?
In this approach (`S3Result`), will the data stay in memory until the end of the process, or will the memory be released when the data is written to S3?

Matt Denno
12/14/2021, 11:40 PM
PREFECT__CLOUD__HEARTBEAT_MODE
Or is there an equivalent env var for Server, like this:
PREFECT__SERVER__HEARTBEAT_MODE
Thanks,
Matt

Jacob Blanco
12/15/2021, 5:48 AM

Chun Shen Wong
12/15/2021, 7:48 AM

Chun Shen Wong
12/15/2021, 7:49 AM

haf
12/15/2021, 8:59 AM

Thomas Furmston
12/15/2021, 10:39 AM

Thomas Furmston
12/15/2021, 10:39 AM
`MPIJob` (from the https://github.com/kubeflow/mpi-operator library) given in the body of the task. Is that roughly correct?

Thomas Furmston
12/15/2021, 10:39 AM

Jelle Vegter
12/15/2021, 1:28 PM

Tom Shaffner
12/15/2021, 1:56 PM

Jason Motley
12/15/2021, 3:25 PM

Tilak Maddy
12/15/2021, 3:29 PM

Enda Peng
12/15/2021, 4:13 PM

Adam Roderick
12/15/2021, 4:45 PM

Tao Bian
12/15/2021, 4:56 PM
from prefect.client.client import Client

client = Client()
client.graphql(query)  # `query` is a GraphQL query string

Kevin Kho
Daniil Ponizov
12/15/2021, 6:21 PM

Frank Oplinger
12/15/2021, 8:56 PM

Kevin Mullins
12/15/2021, 10:02 PM
The `Standard` card shows the feature `Role-based permissioning`; however, below in the features list, when `Standard` is selected, it has `RBAC` and `Custom Permissions` greyed out, which would make me think it doesn't have role-based permissioning.
a. Can I get clarification on what `Role-based permissioning` actually is compared to the features list, and which plan would be needed for it?
2. From what I see, each feature has either a checkmark, an open circle, a dotted circle, a greyed-out filled circle, or a black filled circle, without an explanation of the differences. This makes it hard to understand some of the differences without hovering over every one.
a. This is just feedback from a user perspective that it might be hard to quickly figure out the features.
Thanks!

Kirk Quinbar
12/15/2021, 10:10 PM

Tom Klein
12/15/2021, 10:39 PM
rootless mode for Docker (to increase security), or will it hinder Prefect in any way?
https://docs.docker.com/engine/security/rootless/

KhTan
12/15/2021, 11:10 PM
def pipe(dt):
    # get intermediary output
    output1, output2, output3 = step1(dt)
    # write to database
    step2(output1, output2, output3)

Is it like this, or is there a better way?

with Flow('multi-output func') as flow:
    dt = Parameter("urls", default=['dt1', 'dt2', 'dt3'])
    for dt in dt_range:
        pipe(dt)
thank you
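[A sketch of one common Prefect 1.x shape for this, for future readers: a Python `for` loop inside the `with Flow(...)` block runs at build time, so it can't iterate over a `Parameter`'s runtime value (here the loop also reads `dt_range`, which is never defined). Making `pipe` itself a task and mapping it over the parameter handles this. The Prefect lines are kept in comments since they need a Prefect install; `step1`/`step2` below are toy stand-ins.]

```python
# Toy stand-ins so the sketch runs without Prefect installed.
written = []

def step1(dt):
    # pretend this produces three intermediary outputs
    return f"{dt}-a", f"{dt}-b", f"{dt}-c"

def step2(output1, output2, output3):
    # pretend this writes the three outputs to the database
    written.append((output1, output2, output3))

def pipe(dt):
    output1, output2, output3 = step1(dt)
    step2(output1, output2, output3)

# With Prefect 1.x, `pipe` would be a @task and the loop becomes a map:
#   @task
#   def pipe(dt): ...
#   with Flow("multi-output func") as flow:
#       dts = Parameter("urls", default=["dt1", "dt2", "dt3"])
#       pipe.map(dts)
# The plain loop below simulates what pipe.map(dts) does: one call per element.
for dt in ["dt1", "dt2", "dt3"]:
    pipe(dt)

print(len(written))  # 3
```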