• Aiden Price

    2 years ago
    We’d also want to take advantage of as many Azure managed services as we could, so I’d be hoping to use an existing Azure managed PostgreSQL instance as the backing database. What are the options for that, given the current server deployment requires Docker Compose?
• David Ojeda

    2 years ago
    A question for the Prefect team (a bit related to the most recent question by @Aiden Price): I have a Helm chart for a Prefect open-source server (UI, Apollo, and the other services), basically the Helm/k8s translation of the current docker-compose approach. I can publish this for all to use, but I feel like I am stepping into Cloud territory and I don’t want to hinder your business model. Would you be interested in that? There are a couple of caveats: I did not create this with generalization in mind; it works for me, so two things are specific to my setup: a Kubernetes agent and Google Cloud provisioning for the database disk.
    8 replies
• Bartek

    2 years ago
    Hi, I have noticed that after restarting Prefect Server I lose my history and flows and have to register all my flows again. Is it possible to preserve history and flows after stopping the server?
    5 replies
• Sandeep Aggarwal

    2 years ago
    Hi all, Prefect newbie here. I am currently evaluating Prefect for a switch over from Airflow. A particular use case I am struggling with is accessing the results of upstream tasks in the state handler of the current task. In Airflow, I could achieve the same by querying XCom for the upstream task instance. Also, is there a way to access a task by its ID or name inside a state handler? Any help would be really appreciated. Thanks. (See the sketch below.)
    9 replies
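    A minimal sketch of the Prefect 0.x state-handler signature and one common workaround, assuming the upstream value can be passed to the downstream task as an input instead of being fetched from inside the handler (the flow, task names, and handler here are hypothetical, not an official recipe):

    ```python
    from prefect import task, Flow

    def on_failure(task, old_state, new_state):
        # A task state handler receives only (task, old_state, new_state);
        # new_state.result holds this task's own result or exception, but
        # upstream results are not passed in directly.
        if new_state.is_failed():
            print(f"{task.name} failed with: {new_state.result}")
        return new_state

    @task
    def extract():
        return 42

    # Workaround: accept the upstream result as an input, so the value the
    # handler needs travels with the task itself.
    @task(state_handlers=[on_failure])
    def load(value):
        print(f"loaded {value}")

    with Flow("upstream-result-demo") as flow:
        load(extract())

    flow.run()
    ```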
• Bartek

    2 years ago
    Hi, I am experiencing an issue with the UI, as it is not showing any flows or runs. I register the flow successfully and can see in the agent and server logs that the flow runs when scheduled, but I have no information about flows or runs in the UI.
    26 replies
• Troy Sankey

    2 years ago
    My flows use KubernetesJobEnvironment and I specify a custom job_spec, but I'm noticing that I need to manually delete the k8s job between flow runs, or else subsequent runs fail to create the k8s job because it already exists. (See the sketch below.)
    16 replies
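    A minimal sketch of one workaround, assuming the job name is pinned by the custom job_spec: delete the finished Job before the next run using the official kubernetes Python client (the name and namespace here are hypothetical). Where the cluster supports it, setting ttlSecondsAfterFinished in the job_spec is an alternative that lets Kubernetes clean up finished jobs on its own.

    ```python
    from kubernetes import client, config

    # Load credentials from the local kubeconfig (use load_incluster_config()
    # when running inside the cluster).
    config.load_kube_config()
    batch = client.BatchV1Api()

    # Hypothetical name/namespace matching the custom job_spec; deleting the
    # completed Job lets the next flow run recreate it under the same name.
    batch.delete_namespaced_job(
        name="prefect-flow-job",
        namespace="default",
        body=client.V1DeleteOptions(propagation_policy="Foreground"),
    )
    ```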
• Kostas Chalikias

    2 years ago
    Hi there, I'm trying to understand how the zombie killer decides to mark a task as failed, and by extension how the heartbeats are actually sent. We use the local daemon with Cloud, which I believe forks a process per flow; which process does the heartbeating there?
    7 replies
• Matthias

    2 years ago
    Hi, the following code crashes on me in Dask, due to "Large object of size 5.49 MB detected in task graph". Am I doing something wrong? This is the simplest example I could come up with that shows this behaviour. (See the sketch below.)
    26 replies
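    A generic illustration of what usually produces Dask's large-object warning, together with the fix the warning text itself suggests (scattering the data to the workers and passing the lightweight future instead); the array and computation here are illustrative only, not Matthias's original code:

    ```python
    import numpy as np
    from dask.distributed import Client

    client = Client()  # local cluster for illustration

    big = np.random.random((1_000, 700))  # roughly 5.6 MB

    # Passing `big` directly embeds it in the task graph and triggers
    # "Large object of size ... detected in task graph":
    # future = client.submit(np.sum, big)

    # Dask's suggested fix: scatter the data to the workers once, then
    # pass the resulting future rather than the object itself.
    [big_future] = client.scatter([big])
    result = client.submit(np.sum, big_future).result()
    print(result)
    ```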
• Scott Zelenka

    2 years ago
    Looking for best practices around continuous delivery for Flows. Specifically, we have a Flow that's triggered from another system over GraphQL. We're currently triggering the Flow by its version_group_id. The execution of the Flow includes a step that writes data back to a Prod API of an external system. The challenge comes when making iterations to this Flow in development. The external system's Dev instance still triggers our Flow through GraphQL, but expects it to write data back to a Dev API. We could update and deploy the Flow to Cloud, but then the version_group_id picks up our Dev Flow rather than our Prod Flow. The only thing I can think of is to have two different Flows deployed on Cloud (one for Prod and another for Dev), but in that case promoting from Dev to Prod involves a bunch of manual steps prone to human error. Interested in the community's thoughts on how you handle deploying multiple versions of the same Flow between Dev and Prod environments, where the configuration between Dev and Prod differs. (See the sketch below.)
    6 replies
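    One pattern for keeping a single registered Flow (and therefore one version_group_id) across environments is to lift the environment-specific configuration into a Prefect Parameter that the GraphQL trigger supplies at run time; a minimal sketch, with hypothetical flow name, URLs, and tasks:

    ```python
    from prefect import task, Flow, Parameter

    @task
    def compute():
        return {"rows": 10}

    @task
    def write_back(data, api_url):
        # Hypothetical step that posts results to the external system.
        print(f"writing to {api_url}: {data}")

    with Flow("external-writeback") as flow:
        # Defaults to Prod; the Dev trigger overrides this when it creates
        # the flow run, so one Flow serves both environments.
        api_url = Parameter("api_url", default="https://prod.example.com/api")
        write_back(compute(), api_url)
    ```

    The Dev instance would then pass its own api_url in the parameters of its GraphQL create-flow-run call, while the Prod trigger relies on the default.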