• Zach Munro

    5 months ago
    e.g. Is there a Fargate executor?
1 reply
  • Zach Munro

    5 months ago
One more question: is it possible to have different tasks in a pipeline run in separate containers? For instance, if one task needed to run on a Windows machine and the next task in the flow needed to run on a Linux machine.
1 reply
  • Rajan Subramanian

    5 months ago
Hello, I have a Fargate/EKS question. I'm on the verge of deploying my Prefect flow to Fargate. I'm using Redis as a temporary store. I created a Redis ElastiCache cluster on AWS and I'm having trouble connecting to it from Kubernetes, mainly getting a
temporary failure in name resolution
error. I'm not even sure whom to ask this, but since I was using Prefect I figured someone here knows about this. I raised it here, https://stackoverflow.com/questions/71864208/unable-to-connect-to-redis-elasticcache-from-fargate, curious if someone has any suggestions? My Fargate profile has the same 4 subnets as my ElastiCache cluster, and they also have the same security group.
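[Editor's note] A "temporary failure in name resolution" error means the DNS lookup of the ElastiCache endpoint is failing inside the pod, before any Redis handshake happens. A minimal stdlib sketch to isolate DNS from everything else (the endpoint hostname below is a placeholder, not the asker's real endpoint):

```python
import socket

def can_resolve(host: str) -> bool:
    """Return True if DNS resolution succeeds for `host`."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# Hypothetical ElastiCache endpoint -- substitute your cluster's primary endpoint.
ENDPOINT = "my-redis.abc123.ng.0001.apne1.cache.amazonaws.com"
print(can_resolve(ENDPOINT))
```

If this prints False from inside the pod, the problem is VPC/cluster DNS resolution (e.g., the Fargate pod cannot reach the VPC resolver for the ElastiCache zone) rather than subnets or security groups.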
7 replies
  • Apoorva Desai

    5 months ago
Hello! I have a custom Docker container for my Prefect flows, and I have copied a Python file with some custom functions onto this container. However, my Prefect flow is unable to import these functions; I am getting a module-not-found error. How do you access functions from Python files in custom Docker containers in a Prefect flow?
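[Editor's note] A module-not-found error in a custom image usually means the directory holding the copied .py file is not on Python's import path inside the container. A runtime check/workaround sketch (the /opt/prefect/custom path and my_helpers name are hypothetical -- use wherever your Dockerfile COPYs the file):

```python
import sys

# Hypothetical directory where the Dockerfile copied the custom .py file.
CUSTOM_DIR = "/opt/prefect/custom"

# Make the directory importable if it isn't already.
if CUSTOM_DIR not in sys.path:
    sys.path.insert(0, CUSTOM_DIR)

# After this, `import my_helpers` would find /opt/prefect/custom/my_helpers.py.
```

The more durable fixes are `ENV PYTHONPATH=/opt/prefect/custom` in the Dockerfile, or packaging the helpers and `pip install`-ing them into the image.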
35 replies
  • Alexander Butler

    5 months ago
I am working on standing up Prefect 2.0 in a production environment, for internal data pipeline and reverse-ETL uses, so no fire hazards on my end in using 2.0 early. Is there a general preference for YAML vs. code for the deployment specification? I noticed you can configure a flow deployment with YAML, but I can't find any information on the schema of that document. For example:
    - name: elt-salesforce
      flow_location: ./salesforce_flows.py
      flow_name: elt-salesforce
      tags:
        - salesforce
        - core
      parameters:
        destination: "gcp"
      schedule:
        interval: 3600
Assuming interval is in seconds? Can I specify another grain? Can schedule take a dict? If it takes cron, does that take a dict? Honestly, schedule is the primary question point; everything else is straightforward enough.
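[Editor's note] For reference, a hypothetical cron variant of the same deployment. The schedule schema here is unverified guesswork -- which is precisely the open question being asked:

```yaml
- name: elt-salesforce
  flow_location: ./salesforce_flows.py
  flow_name: elt-salesforce
  schedule:
    cron: "0 * * * *"   # hourly; whether a nested cron key is accepted is the open question
```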
13 replies
  • Salim Doost

    5 months ago
    We’re having issues with deployments on one of our Prefect workspaces all of a sudden (without changing anything on our setup to the best of our knowledge). Our flows are stored as Docker images in AWS ECR. Running a newly created flow leads to the following error:
404 Client Error for http+docker://localhost/v1.41/containers/create?name=quantum-squid: Not Found ("No such image: <account-id>.dkr.ecr.ap-northeast-1.amazonaws.com/datascience-prefect:<image-tag-name>")
However, we’re able to confirm that the image with this tag exists in ECR. Updating an existing flow by overriding an existing image tag leads to the following error:
    KeyError: 'Task slug <task-name> is not found in the current Flow. This is usually caused by a mismatch between the flow version stored in the Prefect backend and the flow that was loaded from storage.
    - Did you change the flow without re-registering it?
    - Did you register the flow without updating it in your storage location (if applicable)?'
Again, we’re able to confirm in AWS ECR that the image was pushed and updated successfully, and our deployment job didn’t throw any error messages either. Any idea what we can do to resolve this issue?
10 replies
  • Carlos Cueto

    5 months ago
Hi. I'm having an issue using a LocalRun flow's working_dir parameter. Whenever I specify the following:
flow.run_config = LocalRun(working_dir='C:/scripts/GetADUsers', labels=["SVRBIPTH01"])
Whenever I register the flow (I'm using Prefect 1.2.0 on macOS, Python 3.10), I get the following working_dir in the Prefect Cloud UI:
    /Users/carloscueto/Documents/Python_Scripts/Prefect-Flows/PowerShell/GetADUsers/C:/scripts/GetADUsers
It seems to be prepending the path I register the script from (on the local machine) to the working_dir string I specified in the run_config. Has anybody encountered this before? Everything works fine when I register the flow from a Windows computer.
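[Editor's note] A likely explanation (a sketch, not confirmed from Prefect's source): when paths are joined with POSIX semantics on the registering machine, a Windows path like C:/scripts/GetADUsers has no leading slash, so it is not recognized as absolute and gets appended to the current directory. The stdlib reproduces the symptom (the base path is a made-up example):

```python
import ntpath
import posixpath

base = "/Users/me/Prefect-Flows/GetADUsers"  # hypothetical registration directory
win_path = "C:/scripts/GetADUsers"

# POSIX join: "C:/..." does not start with "/", so it is treated as relative.
print(posixpath.join(base, win_path))  # /Users/me/Prefect-Flows/GetADUsers/C:/scripts/GetADUsers

# Windows join: the drive letter makes the path absolute, so it replaces the base.
print(ntpath.join(base, win_path))     # C:/scripts/GetADUsers
```

That would also explain why registering from a Windows computer works: there the join uses Windows semantics, which honor the drive letter.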
13 replies
  • Alexander Butler

    5 months ago
One more question that's far more important than the YAML flow deployment config... If I want to bundle a self-contained application as a Docker image which requires no UI interaction to stand up Prefect 2.0, so:
    prefect orion start
    prefect deployment create ...
    prefect work-queue create -t etl -- etl-queue
HERE IS THE GAP -- the response to the above command is something like UUID('...'), which is useless when setting something up from the CLI without sed/awk.
prefect agent start 'no simple headless way to derive the id...'
The less appealing part afterwards is that
prefect work-queue ls
renders a table which is pretty in a CLI but, again, useless for simply getting an ID. Has anyone set up Prefect 2.0 to self-deploy in an image along with all their code? The ephemeral nature makes this very advantageous, with what seems to be a tiny unconsidered gap. I'm pretty sure a more reliable, consistent way to get the work-queue ID is all that's needed, but if I'm totally missing it, just let me know. I'm a big fan of the package, for the record, but now it's crunch time for production use attempts 🙂
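[Editor's note] Until there is a machine-readable output flag, one workaround is to capture the CLI output and extract the ID with a regex. A hedged sketch, assuming the command really prints a literal UUID('...') as quoted above (the queue ID below is made up):

```python
import re

def extract_uuid(cli_output: str) -> str:
    """Pull the hex UUID out of a line like "UUID('...')" printed by the CLI."""
    match = re.search(r"UUID\('([0-9a-fA-F-]{36})'\)", cli_output)
    if match is None:
        raise ValueError(f"no UUID found in: {cli_output!r}")
    return match.group(1)

# Example with a made-up work-queue ID:
output = "UUID('0f8a2c4e-1b3d-4e5f-8a9b-0c1d2e3f4a5b')"
print(extract_uuid(output))  # 0f8a2c4e-1b3d-4e5f-8a9b-0c1d2e3f4a5b
```

From a container entrypoint that could then feed the agent headlessly, e.g. `prefect agent start "$(python get_queue_id.py)"` (hypothetical wrapper script name).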
3 replies
  • Jacob Blanco

    5 months ago
    Anyone else seeing timeouts against the Prefect API? The Cloud view is not updating consistently and many of our Cloud-managed flows are not starting up.