Brian Newman

about 2 years ago
Hello, I've been deploying using YAML files via the CLI to Prefect Cloud. I noticed that when I remove a deployment from the YAML and redeploy, the deployment still exists in Prefect Cloud. Does anyone know if there's a way to automatically delete deployments that are no longer in the YAML file during re-deployment? Thanks!
b
j
  • 2
  • 6
  • 122
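Not a built-in feature as far as I know. A sketch of a pruning script (the helper names are illustrative) that diffs the deployments declared in the YAML against those that exist server-side and deletes the leftovers via the CLI:

```python
# Sketch, not a built-in Prefect feature: diff the deployments declared in
# the YAML against those that exist server-side, then delete the leftovers
# with the CLI. Function names here are illustrative.
import subprocess

def stale_deployments(yaml_names, cloud_names):
    """Deployments that exist server-side but are no longer in the YAML."""
    return sorted(set(cloud_names) - set(yaml_names))

def prune(flow_name, yaml_names, cloud_names):
    for name in stale_deployments(yaml_names, cloud_names):
        # The delete CLI addresses deployments as "<flow-name>/<deployment-name>".
        subprocess.run(
            ["prefect", "deployment", "delete", f"{flow_name}/{name}"],
            check=True,
        )
```

Running this after each deploy keeps Cloud in sync with the YAML, at the cost of an extra CLI call per stale deployment.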
How to disable Prefect logger for tests?
davzucky

almost 3 years ago
In Prefect 2, how can I test a flow or task which uses get_run_logger(), which is set from the context? You can find sample test code in this thread. The test keeps failing with the error prefect.exceptions.MissingContextError: There is no active flow or task run context.
Rio McMahon

about 3 years ago
I am attempting to create a custom storage block (Digital Ocean uses an https:// prefix but s3:// is hardcoded in the S3 filesystem object). I did this by copying the S3 storage object code and removing the hardcoded prefix but leaving everything else the same (except the name and credential arguments). The block successfully registers to Prefect Cloud. However, when I try to attach it as the storage block I get the error in the comments. From what I can tell, this happens during the validation step. At some point Prefect looks up the Block type in the registry via:
from prefect.utilities.dispatch import lookup_type, get_registry_for_type
from prefect.blocks.core import Block
registry = get_registry_for_type(Block)
print(registry)
which returns all the Prefect-defined blocks. If I try to build a deployment it fails because the custom filesystem class I created does not show up in that registry. Currently the remote file system object stores all 'settings' as plain text, which isn't ideal because I'm storing access credentials. Thoughts on the best way to proceed?
Oscar Björhn

about 3 years ago
Does anyone know of a workaround to run a task from a task in Prefect 2? I've found some examples of people using .fn but it doesn't appear to work in my case. Specifically, I'd prefer to run trigger_dbt_cli_command from the prefect-dbt package from one of my tasks rather than having to create a flow to run it. I can't change the definition of trigger_dbt_cli_command, since the package itself defines it as a task. I think my problem might be related to the task being async, it seems to finish the execution instantly and the returned type is "None". As long as I'm not using fn(), Prefect will patiently wait for a result, but then I'm forced to call the task from a flow. Edit: If anyone thinks I'm going about this the wrong way, I'd be happy to hear that too. Maybe I just need to settle on always calling trigger_dbt_cli_command from a flow. 🙂
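A stdlib-only illustration of the likely cause: trigger_dbt_cli_command is async, so its .fn returns a coroutine that must be awaited; called bare, it "finishes instantly" with no usable result. The names below are stand-ins, not the real prefect-dbt API.

```python
# Stand-in for an async task's underlying .fn; not the real prefect-dbt API.
import asyncio

async def trigger(command: str) -> str:
    await asyncio.sleep(0)  # pretend to do async work
    return f"ran: {command}"

def run_sync(command: str) -> str:
    # Without an await, trigger(...) just returns a coroutine object, which
    # looks like instant completion with no result. Driving it with
    # asyncio.run yields the real return value.
    return asyncio.run(trigger(command))
```

So calling the async task's .fn from a synchronous task should work if the coroutine is driven to completion (e.g. with asyncio.run, provided no event loop is already running).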
komal azram

about 3 years ago
Hello, I have started using Prefect for an Airbyte orchestration task. Below I have attached my code:
import prefect
from prefect import task, Flow

@task
def claims_func():
    airbyte_server_host = "localhost",
    airbyte_server_port = 8000,
    airbyte_api_version = "v1",
    connection_id = conn_id,

with Flow("fhir-flow") as flow:
    claims_func()

flow.run()
As per my understanding, when I run this flow it should trigger the connection and automatically sync data from GCP to Snowflake. I don't get any error, but no data is synced.
Alex F

over 3 years ago
Hi all, I am having an issue with Prefect Server. The server is starting on the server IP and config.toml is set to apollo_url="http://10.1.x.x:4200/graphql". Everything starts normally with no errors during startup, and the UI shows a green connection to the API; however, 1-2 minutes later it turns yellow and stops working. I turned on debug=true but I am not able to find where logs are written or how to troubleshoot this problem. Any help would be appreciated (this is a local install on Ubuntu).
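For reference, a minimal illustrative config.toml for a single-host Prefect Server 1 install (addresses are placeholders). Since Server 1 runs its services under Docker Compose, per-service logs are usually reachable with docker logs on the apollo/ui/graphql containers rather than in a file.

```toml
# Illustrative ~/.prefect/config.toml; the GraphQL endpoint the UI polls
# is configured under [server.ui]. The address is a placeholder.
[server]

  [server.ui]
  apollo_url = "http://10.1.x.x:4200/graphql"

[logging]
level = "DEBUG"
```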
Charlotte

7 months ago
@Marvin in Prefect 3 how do I run multiple subflows concurrently?
juana

12 months ago
@Marvin how to clean up docker containers after a flow run if using the docker worker
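A hedged configuration sketch (pool and deployment names are placeholders): the Docker worker's job template exposes an auto_remove variable, which removes each container once its flow run finishes.

```yaml
# Illustrative prefect.yaml fragment; auto_remove is a Docker worker
# job variable.
deployments:
  - name: my-deployment
    work_pool:
      name: my-docker-pool
      job_variables:
        auto_remove: true
```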
Dijar Vrella

about 1 year ago
@Marvin I need help with this worker pool job template config: 1 validation error for DockerWorkerJobConfiguration volumes value is not a valid list (type=type_error.list)
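That validation error usually means volumes was given as a single string or a mapping; it must be a list of "host:container" strings. A hedged fragment with placeholder paths:

```yaml
# volumes must be a YAML list of "host:container" bind strings.
work_pool:
  name: my-docker-pool
  job_variables:
    volumes:
      - "/host/data:/opt/data"
```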
Ofir

about 2 years ago
Is there a way to have a Prefect GPU agent/worker on (temporary) sleep when it's not used in Kubernetes? Motivation: let's assume that the cost of a heavy GPU machine is $2k per day. Let's also assume that our Prefect deployment runs in AKS (managed Azure Kubernetes) and we have separate pods for prefect-server and prefect-agent. What if 90% of the day the prefect-agent (which is running on a GPU node in the cluster) is idle? This means it's underutilized and we waste money for no good reason. Reference: Airflow provides the Kubernetes Executor - on-demand/ad-hoc worker pods. Since Prefect thought of everything - I'm sure there is either a built-in capability for that or a design pattern for achieving it. Thanks!

Prefect Community

Bring your towel and join one of the fastest growing data communities. Welcome to our second-generation open source orchestration platform, a completely rethought approach to dataflow automation.
