Jonathan Langlois

over 2 years ago
Hello 👋 Is there a way to retry `CRASHED` tasks, or to avoid setting them as `CRASHED`? This issue was resolved and we can have Dask retry `RUNNING` tasks now. But since Prefect 2.8.7, Prefect detects that the Dask worker will be shut down and sets the task as `CRASHED`. When Dask resends the failing task, we end up with `Task run '...' already finished`.
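A hedged sketch (not from the thread): Prefect's task-level retry API is shown below. Retries fire when a task ends in a `FAILED` state; whether a `CRASHED` state is also retried depends on the Prefect version, so this only illustrates the retry knobs themselves.

from prefect import flow, task

# Retries re-run a task that ends in a FAILED state. Whether a CRASHED
# state is also retried depends on the Prefect version, so this only
# illustrates the retry API itself.
@task(retries=3, retry_delay_seconds=10)
def fragile_work(x: int) -> int:
    return x * 2

@flow
def my_flow() -> int:
    return fragile_work(21)

if __name__ == "__main__":
    my_flow()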
Adeel Shakir

about 1 year ago
@Marvin I have Prefect 3 and pydantic 2.8, and I am getting this error: File "/home/ubuntu/cleargrid-dwt-prefect/dwhenv/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py", line 2227, in _extract_get_pydantic_json_schema raise PydanticUserError( pydantic.errors.PydanticUserError: The `__modify_schema__` method is not supported in Pydantic v2. Use `__get_pydantic_json_schema__` instead in class `SecretStr`.
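A hedged sketch of a common cause (an assumption, not confirmed in the thread): a Pydantic v1-style `SecretStr`, which still defines the v1 `__modify_schema__` hook, ends up inside a v2 model. Importing the v2-native type avoids that code path.

from pydantic import BaseModel, SecretStr  # v2-native SecretStr

# The v1 compatibility import defines __modify_schema__ (the v1 API) and
# triggers this error when used inside a v2 model:
#   from pydantic.v1 import SecretStr

class Settings(BaseModel):
    api_key: SecretStr

print(Settings(api_key="s3cret").api_key)  # masked as '**********'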
KG

over 1 year ago
@Marvin I have a long-running task that loops through a long list. Occasionally this task fails during the loop. How would you recommend I save the last entry processed, so that on a retry the task does not iterate over the records already processed in the previous task run?
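A minimal sketch of one approach (all names below are hypothetical; a local file only works if retries land on the same machine, so a Prefect JSON block or external store would be needed for distributed retries):

import json
from pathlib import Path
from prefect import task

CHECKPOINT = Path("checkpoint.json")  # hypothetical checkpoint location

def handle(record: str) -> None:  # hypothetical per-record work
    print("processing", record)

@task(retries=3)
def process_all(records: list[str]) -> None:
    # Resume from the last saved index; start at 0 on the first attempt.
    start = json.loads(CHECKPOINT.read_text())["next"] if CHECKPOINT.exists() else 0
    for i in range(start, len(records)):
        handle(records[i])
        CHECKPOINT.write_text(json.dumps({"next": i + 1}))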
Sharath Chandra

over 3 years ago
Hi, I am using Prefect to run multiple Spark jobs on k8s. Each of these jobs can be long-running, with some of them having execution times of more than an hour. The jobs are mapped and executed. However, in some instances I can see that only a few of the jobs in the map are executing. I am using `LocalExecutor`, with which these mapped jobs run sequentially. Are there instances where the `LocalExecutor` is not able to track the Spark job running on k8s, and thus not able to trigger the subsequent tasks in the map?
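A hedged sketch (Prefect 1.x era, to match the `LocalExecutor` in the question): switching to `LocalDaskExecutor` runs mapped tasks in parallel threads instead of sequentially, so one long Spark job no longer blocks the rest of the map. The task and flow names here are hypothetical.

from prefect import Flow, task
from prefect.executors import LocalDaskExecutor

@task
def submit_spark_job(job_name: str) -> None:  # hypothetical stand-in for the real submit task
    print(f"submitting {job_name}")

with Flow("spark-jobs") as flow:
    submit_spark_job.map(["job-a", "job-b", "job-c"])

# Run mapped tasks concurrently in threads rather than one at a time.
flow.executor = LocalDaskExecutor(scheduler="threads", num_workers=4)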
Steven

over 1 year ago
Hello, I'm trying to do a Docker deploy. It recommends serving from the root directory (but I already have a Docker image there), so I am getting the error `RuntimeError: Failed to generate Dockerfile. Dockerfile already exists in the current directory.` I notice that `def generate_default_dockerfile(context: Optional[Path] = None):` optionally can take a path context, but I don't see a way to specify that from my simple flow:
if __name__ == "__main__":
    test_processing_flow.deploy(
        name="test-processor-flow",
        work_pool_name="test-docker-pool",
        image="test-processor-flow:latest",
        push=False
    )
Am I missing something?
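One possible workaround, sketched under the assumption that this Prefect version ships the `DockerImage` helper (which accepts a `dockerfile` argument): point the deployment at the existing Dockerfile instead of letting Prefect generate one.

from prefect.docker import DockerImage

if __name__ == "__main__":
    test_processing_flow.deploy(  # the flow from the snippet above
        name="test-processor-flow",
        work_pool_name="test-docker-pool",
        image=DockerImage(
            name="test-processor-flow",
            tag="latest",
            dockerfile="Dockerfile",  # reuse the existing Dockerfile instead of generating one
        ),
        push=False,
    )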
Casey M

about 2 years ago
Hi all. I just started using Prefect and am trying to deploy my first flow. I can successfully deploy a Python file, but the error I get is: `Failed to start process for flow run 'd7c862c3-f1f2-4cbd-b00d-b1248a9add0f'. FileNotFoundError: [Errno 2] No such file or directory: '/Users/caseym/PycharmProjects/virtual'` Is this because I deployed while in a virtual environment? I'm using PyCharm. Any tips?
๐Ÿž 2
โœ… 1
c
i
  • 2
  • 10
  • 206
Dexter Antonio

over 3 years ago
I currently have a Prefect flow that operates on a single row of a pandas dataframe. Is there a straightforward way to map this flow to all of the rows in a pandas dataframe? In other words, can I create a flow and then map it? If I cannot map each row of a dataframe to a flow, is there a straightforward way of nesting different tasks into each other and then mapping that “super” task to a series of inputs?
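A sketch of the usual pattern (Prefect 1.x era, to match the question's age): a flow itself can't be mapped, but the per-row logic can be wrapped in a single task and that task mapped over the rows. All names here are hypothetical.

import pandas as pd
from prefect import Flow, task

@task
def process_row(row: dict) -> dict:  # hypothetical per-row logic
    return {**row, "total": row["a"] + row["b"]}

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

with Flow("row-mapper") as flow:
    # Map the task over one dict per dataframe row.
    results = process_row.map(df.to_dict("records"))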
David

about 1 year ago
@Marvin using Prefect 3, how do I set the working directory in a `prefect deploy` call using `--job-variable`?
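A hedged example, assuming a process work pool, whose base job template exposes a `working_dir` variable (the variable names differ for other pool types). On the CLI this would look like `prefect deploy ... --job-variable working_dir=/opt/jobs/etl`; the Python equivalent is:

from prefect import flow

if __name__ == "__main__":
    flow.from_source(
        source="https://github.com/org/repo",           # hypothetical source repo
        entrypoint="flows/etl.py:etl",                  # hypothetical entrypoint
    ).deploy(
        name="etl-deployment",
        work_pool_name="my-process-pool",               # hypothetical process pool
        job_variables={"working_dir": "/opt/jobs/etl"}, # the worker's working directory
    )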
Alec Thomson

about 1 year ago
@Marvin how should I configure my Postgres backend database for best performance with Prefect?
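A minimal sketch of pointing a self-hosted Prefect server at Postgres (the connection-URL setting is real; the host, credentials, and database name are placeholders, and the performance tuning itself happens on the Postgres side):

import os

# Prefect's server uses the asyncpg driver for Postgres.
os.environ["PREFECT_API_DATABASE_CONNECTION_URL"] = (
    "postgresql+asyncpg://prefect:s3cret@db.internal:5432/prefect"  # placeholder credentials
)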
Christopher

about 3 years ago
Sorry, I'm still confused about blocks! I understand that I might want to add a block in order to centrally coordinate access to configuration inside my flow. At this point, I'm just trying to get my flow running though, using an ECS-Task block.
1. I've defined a block in Python with `ECSTask(...).save("dev-trial", overwrite=True)` on my dev machine. It shows up in Prefect Cloud.
2. I have created a deployment with `prefect deployment build -n dev-trial -q dev -ib ecs-task/dev-trial -a flows/healthcheck.py:healcheck`, again from my dev machine.
3. I have started a local agent with `prefect agent start -q dev` and triggered a job. All works.
4. I have started an agent inside a container and triggered the job, but now it fails with the error `KeyError: "No class found for dispatch key 'ecs-task' in registry for type 'Block'."`
It seems like in the container, it's not able to resolve the block reference. But isn't that embedded inside the yaml downloaded from Prefect Cloud?
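A sketch of the likely cause (an assumption, not confirmed in the thread): Prefect Cloud stores a block's data, not its code, so the class behind the `ecs-task` dispatch key must be importable wherever the block is loaded. Installing `prefect-aws` in the agent container's image makes the key resolvable.

from prefect.blocks.core import Block

# Importing the class registers the 'ecs-task' dispatch key; this import
# fails (and Block.load raises the KeyError) if prefect-aws is not
# installed in the container image.
from prefect_aws.ecs import ECSTask

ecs_block = Block.load("ecs-task/dev-trial")  # resolves once the class is registered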
