• Benny Warlick

    4 months ago
    Hey all, I've been playing with running Prefect 2 in a Docker container on Google Compute Engine. I created a basic "hello world" example that uses GitHub Actions to build the container and deploy it to Compute Engine. Let me know if you have any suggestions and whether this is helpful: https://github.com/BennyJW/prefect2-docker-gce
    3 replies
  • Jake

    4 months ago
    When we delete flows using the GraphQL mutation, how do we ensure that all versions of a flow are deleted, and not just the most recent/active version?
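    An untested sketch of one approach, assuming Prefect Server's Hasura-generated GraphQL API: query every version of the flow by name (with no filter on `version` or `archived`, so older versions are returned too), then issue one `delete_flow` mutation per id. The server URL, helper names, and exact payload shapes below are illustrative — verify them against your server's GraphQL schema.

```python
import json
import urllib.request

# Hypothetical address -- point this at your Prefect Server's GraphQL endpoint.
GRAPHQL_URL = "http://localhost:4200/graphql"

def versions_query(flow_name: str) -> dict:
    # No filter on version or archived, so archived (older) versions come back too.
    return {
        "query": """
            query($name: String!) {
              flow(where: {name: {_eq: $name}}) { id version archived }
            }
        """,
        "variables": {"name": flow_name},
    }

def delete_mutation(flow_id: str) -> dict:
    # delete_flow removes a single flow version, so it is issued once per id.
    return {
        "query": """
            mutation($flowId: UUID!) {
              delete_flow(input: {flow_id: $flowId}) { success }
            }
        """,
        "variables": {"flowId": flow_id},
    }

def post(payload: dict) -> dict:
    req = urllib.request.Request(
        GRAPHQL_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def delete_all_versions(flow_name: str) -> None:
    # Delete every returned version, not just the unarchived one.
    for f in post(versions_query(flow_name))["data"]["flow"]:
        post(delete_mutation(f["id"]))
```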
    4 replies
  • John Kang

    4 months ago
    Hi, I'm looking to integrate with a database. We are exploring putting some data into a CockroachDB database, but I did not see it in the Prefect integrations. Do you recommend using a PostgreSQL database instead?
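    Worth noting: CockroachDB speaks the PostgreSQL wire protocol, so a dedicated integration isn't strictly needed — any Postgres driver (psycopg2, SQLAlchemy's `postgresql` dialect) used inside an ordinary task can talk to it. A minimal sketch; all connection details are placeholders:

```python
def cockroach_dsn(user: str, password: str, host: str, database: str,
                  port: int = 26257) -> str:
    # 26257 is CockroachDB's default SQL port. Pick sslmode to match your
    # cluster (verify-full for a secure/Cloud cluster, disable for a local
    # insecure one).
    return (f"postgresql://{user}:{password}@{host}:{port}/{database}"
            "?sslmode=verify-full")

# Inside a task you would then use any Postgres driver, e.g.:
#   conn = psycopg2.connect(cockroach_dsn("me", "secret", "db.example.com", "app"))
```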
    2 replies
  • Malthe Karbo

    4 months ago
    Hi, posting in case anyone else is having issues in their pytest CI/CD pipelines after updating to the awesome beta4 release of 2.0: it seems that `prefect_test_harness` was moved into a new module (`prefect.testing`) that is not available in 2.0b4, even though it is available in the orion branch of the repo. I created an issue on GitHub as well: https://github.com/PrefectHQ/prefect/issues/5787
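    Until the fix lands, one workaround is a guarded import that tries the new location first and falls back to the older one. Both module paths below are assumptions — verify them against your installed version (the final fallback only keeps the snippet importable in environments where prefect isn't installed at all):

```python
try:
    # post-move location on the orion branch (assumption, per the issue above)
    from prefect.testing.utilities import prefect_test_harness
except ImportError:
    try:
        # pre-move location in earlier 2.0 betas (assumption -- verify locally)
        from prefect.utilities.testing import prefect_test_harness
    except ImportError:
        # prefect is not importable in this environment
        prefect_test_harness = None
```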
    1 reply
  • Arnas

    4 months ago
    Hello! Hopefully a simple question: I am trying to create a flow with two schedules (let's say A and B). Is there any way to specify an additional flow label for each schedule (e.g., 'a' for schedule A and 'b' for schedule B)? What I ultimately want is for the same flow to run on different agents at different times; maybe there is a better way? P.S. I'm using Prefect 1.0
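    In Prefect 1, labels can be attached per clock rather than per flow, which lets each schedule target a different agent. A sketch — cron strings and label names are made up, and the import is guarded only so the snippet also loads where Prefect 1 isn't installed:

```python
try:
    from prefect.schedules import Schedule
    from prefect.schedules.clocks import CronClock
except ImportError:  # Prefect 1 not installed in this environment
    Schedule = CronClock = None

if CronClock is not None:
    # Runs generated by a clock inherit that clock's labels, so an agent
    # labeled "a" picks up the morning runs and one labeled "b" the evening runs.
    schedule = Schedule(clocks=[
        CronClock("0 8 * * *", labels=["a"]),    # schedule A
        CronClock("0 20 * * *", labels=["b"]),   # schedule B
    ])
else:
    schedule = None
```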
    2 replies
  • Andrew Lawlor

    4 months ago
    Seeing errors like `Error during execution of task: KeyError(<Thread(Dask-Default-Threads-12-578, started daemon 140412823688960)>)` when retrying tasks run on Dask. Is there any special configuration I need for retries with Dask?
    3 replies
  • Frederick Thomas

    4 months ago
    Hi all, we've just upgraded the Python version to 3.10 and re-registered all the flows, but we are getting the error below. Could someone assist?
    Exception raised while calling state handlers: SystemError('unknown opcode')
    Traceback (most recent call last):
      File "/mnt/data/prefect/venv/lib/python3.8/site-packages/prefect/engine/cloud/flow_runner.py", line 119, in call_runner_target_handlers
        new_state = super().call_runner_target_handlers(
      File "/mnt/data/prefect/venv/lib/python3.8/site-packages/prefect/engine/flow_runner.py", line 116, in call_runner_target_handlers
        new_state = handler(self.flow, old_state, new_state) or new_state
      File "/mnt/data/prefect/venv3.10/lib/python3.10/site-packages/prefect/utilities/notifications/notifications.py", line 65, in state_handler
        def state_handler(
    SystemError: unknown opcode
    10 replies
  • Steve s

    4 months ago
    Hi all, I'm seeing a new error crop up in a flow I've been running stably for a few months. The flow is a top-level pipeline that runs a series of `create_flow_run` (and `wait_for_flow_run`) tasks. One of these steps is followed up with a `get_task_run_result`, which has always worked without issue until today. Now it's throwing this error: `ValueError: The task result cannot be loaded if it is not finished`. I'm not seeing how this could be, since I can see in the logs that the upstream task did in fact finish successfully. I tried explicitly setting the result of `wait_for_flow_run` as an upstream dependency of `get_task_run_result` (which I think shouldn't be needed), and I also tried setting the `poll_time` to `30`, but still no luck. Does anyone have any ideas?
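    For reference, the explicit-upstream wiring described above looks roughly like this in Prefect 1 — flow, project, and task-slug names here are placeholders, and the import is guarded only so the sketch loads where prefect isn't installed:

```python
try:
    from prefect import Flow
    from prefect.tasks.prefect import (
        create_flow_run,
        get_task_run_result,
        wait_for_flow_run,
    )
except ImportError:  # Prefect 1 not installed in this environment
    Flow = None

if Flow is not None:
    with Flow("parent-pipeline") as flow:
        child_id = create_flow_run(flow_name="child", project_name="demo")
        # raise_final_state surfaces a failed child instead of silently passing.
        done = wait_for_flow_run(child_id, raise_final_state=True)
        # upstream_tasks forces the result fetch to wait for the child run.
        result = get_task_run_result(
            child_id, task_slug="my_task-1", upstream_tasks=[done])
else:
    flow = None
```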
    24 replies
  • Ramzi

    4 months ago
    I am in the process of building a CI/CD pipeline using Prefect 2.0 for Cloud. I am running into an issue where I get the error: `You have not configured default storage on the server or set a storage to use for this deployment but this deployment is using a Kubernetes flow runner which requires remote storage.` I have already defined the S3 bucket as the storage in prior steps, and I even made sure to reset it as the default beforehand. I have no problem creating the deployment locally; it is only an issue when running it on GitHub Actions.
    5 replies
  • Mikkel Duif

    4 months ago
    Got a question regarding handling DST correctly. If I specify the anchor_date in winter time, it is offset by 1 hour. Is there a way to handle this correctly?
    import asyncio
    import pendulum
    from datetime import timedelta
    from prefect.orion.schemas.schedules import IntervalSchedule
    
    winter_schedule = IntervalSchedule(
       interval=timedelta(hours=24),
       anchor_date=pendulum.datetime(2022, 1, 1, 0, 30, 0, tz="Europe/Copenhagen")
    )
    
    summer_schedule = IntervalSchedule(
       interval=timedelta(hours=24),
       anchor_date=pendulum.datetime(2022, 4, 1, 0, 30, 0, tz="Europe/Copenhagen")
    )
    
    
    print(asyncio.run(winter_schedule.get_dates(1))[0])
    print(asyncio.run(summer_schedule.get_dates(1))[0])
    
    >>> "2022-05-16T01:30:00+02:00"
    >>> "2022-05-16T00:30:00+02:00"
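    The one-hour offset is what you get when interval arithmetic is done on absolute durations (UTC instants): adding a whole number of 24-hour periods to a winter anchor crosses the DST change, so the local wall-clock time shifts by an hour. A stdlib-only illustration of both behaviours (using `zoneinfo` instead of pendulum; the 135-day interval is chosen to land on the 2022-05-16 dates from the output above):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

CPH = ZoneInfo("Europe/Copenhagen")
anchor = datetime(2022, 1, 1, 0, 30, tzinfo=CPH)   # winter anchor, UTC+1

# Absolute-duration arithmetic (what a fixed 24h interval does): convert to
# UTC, add the interval, convert back -- the wall-clock time shifts by 1h.
absolute = (anchor.astimezone(ZoneInfo("UTC"))
            + timedelta(days=135)).astimezone(CPH)
print(absolute.isoformat())    # 2022-05-16T01:30:00+02:00

# Wall-clock arithmetic: plain aware-datetime addition keeps the local time
# and re-resolves the UTC offset, which is usually what "daily at 00:30" means.
wall_clock = anchor + timedelta(days=135)
print(wall_clock.isoformat())  # 2022-05-16T00:30:00+02:00
```

    If you want wall-clock semantics ("every day at 00:30 local time"), a cron-style schedule with a timezone is typically the better fit than a fixed interval, assuming it follows cron's usual wall-clock behaviour.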
    10 replies