# prefect-community
👋 Hi, I’m new here, trialing Prefect 2.0, and have a couple of questions. I’m writing a flow to copy data into a bunch of Snowflake tables. I wanted to use the Snowflake connection instead of managing a connection myself. It seems like the Snowflake operations (e.g. `snowflake_multiquery`) are their own tasks, so I can’t use them as part of a different task? Which means that my flow looks like this:
```python
for table_name in tables:
    queries = build_queries(table_name)
    snowflake_multiquery(queries, creds)
```
These tasks are all running sequentially, but there are a lot of tables so I’d like the
tasks for each table to run concurrently. How can I make that happen?
I think you can use a list comprehension instead:
```python
[snowflake_multiquery(x) for x in queries]
```
and that might be parallel. You can use a task in another task by calling `.fn`, but then it’s just the Python under the hood, not a task with retries/caching/checkpointing
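To illustrate what `.fn` means here, a minimal stand-in decorator (not Prefect’s actual implementation; `build_queries` is hypothetical):

```python
import functools

def task(func):
    # Toy stand-in for a task decorator: the wrapper is where orchestration
    # (state tracking, retries, caching) would live; .fn exposes the raw function.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper.fn = func
    return wrapper

@task
def build_queries(table):
    return [f"create table {table}", f"copy into {table}"]

# Calling .fn runs the plain Python function, bypassing the wrapper entirely:
# no retries, no caching, no state tracking.
queries = build_queries.fn("x")
```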
To clarify: each of the queries passed to `snowflake_multiquery` needs to be run in order (they’re, like, “create table x”, “copy into table x”, etc.), but the per-table loads are independent: we can load data into table X concurrently with loading into tables Y and Z. So maybe a list comprehension like this would work?
```python
[snowflake_multiquery(build_queries(t)) for t in tables]
```
Though I don’t understand how that’s functionally different from the for loop I posted above. Can you explain if/why that list comprehension would have tasks running concurrently but not the for loop?
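Prefect aside, the concurrency shape being described (queries in order within a table, tables loaded concurrently) can be sketched with plain asyncio; all names here are illustrative stand-ins:

```python
import asyncio

async def run_query(q):
    await asyncio.sleep(0)  # stand-in for a Snowflake round trip
    return q

async def copy_table(table):
    # Queries for one table must run in order, so await them one by one
    for q in (f"create table {table}", f"copy into {table}"):
        await run_query(q)
    return table

async def load_all(tables):
    # Tables are independent, so gather runs their coroutines concurrently
    return await asyncio.gather(*(copy_table(t) for t in tables))

loaded = asyncio.run(load_all(["x", "y", "z"]))
```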
I think the issue might be that you’re overwriting `queries` each time. Could you try:
```python
for table_name in tables:
    snowflake_multiquery(build_queries(table_name), creds)
```
Also, can you make sure you are using the ConcurrentTaskRunner? I think the default may have changed to SequentialTaskRunner
logs indicate it is using the ConcurrentTaskRunner. Trying your suggestion, one sec
hm, no dice, they’re still running sequentially
Are you using a subflow somewhere?
Nope — it’s a pretty simple flow I think
Let me ask a teammate
If the solution is to update my `copy_table` task to build the queries and then run them with `snowflake_multiquery.fn`, that should be fine. I mostly wanted to avoid having to work with the Snowflake Python connector directly
quick update: I tried that approach, like so:
```python
@task
def copy_table(table):
    queries = ...
    creds = SnowflakeCredentials(...)
    snowflake_multiquery.fn(queries, creds)
```
This seems like it’s running tasks concurrently: they all get kicked off and run, and report that they `Finished in state Completed()`, but they’re not actually doing anything, i.e., the Snowflake queries aren’t being run. I’m also seeing this warning in the logs:
```
RuntimeWarning: coroutine 'snowflake_multiquery' was never awaited
```
When I update the task to `return snowflake_multiquery.fn(queries, creds)`, it fails with `TypeError: cannot pickle 'coroutine' object`.
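Both symptoms follow from `snowflake_multiquery` being an async function: calling it (or its `.fn`) without awaiting returns a coroutine object rather than running anything, and coroutine objects can’t be pickled. A self-contained illustration, where the async function is a stand-in rather than the real task:

```python
import asyncio
import pickle

async def snowflake_multiquery(queries):
    # Stand-in async function: pretend to run each query
    return [f"ran {q}" for q in queries]

coro = snowflake_multiquery(["select 1"])  # nothing runs yet; this is a coroutine
assert asyncio.iscoroutine(coro)

try:
    pickle.dumps(coro)
except TypeError as exc:
    pickle_error = str(exc)  # coroutine objects are not picklable

coro.close()  # silence the "was never awaited" RuntimeWarning

# Awaiting the coroutine (e.g. via asyncio.run) is what actually executes it
results = asyncio.run(snowflake_multiquery(["select 1"]))
```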
Am I doing something wrong here, or does this approach not actually work?
We now require you to call `.submit()` on your tasks to send them to the task runner.
Concurrency is opt-in rather than default behavior
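That submit-then-resolve pattern mirrors Python’s own `concurrent.futures`; a minimal sketch of the idea with a thread pool (function body and names hypothetical, not the Prefect API):

```python
from concurrent.futures import ThreadPoolExecutor

def copy_table(table):
    # Build and run this table's queries in order (stand-in body)
    return f"loaded {table}"

tables = ["x", "y", "z"]
with ThreadPoolExecutor() as pool:
    # submit() hands each call to the runner and returns a future immediately,
    # so the tables load concurrently; .result() blocks until each one finishes
    futures = [pool.submit(copy_table, t) for t in tables]
    results = [f.result() for f in futures]
```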