• CA Lee

    1 year ago
    Hello all, trying out the latest Prefect version
    0.14.12
    Running into this error when attempting to run a flow using ECS agent and ECSRun:
    botocore.errorfactory.InvalidParameterException: An error occurred (InvalidParameterException) when calling the RunTask operation: Task definition does not support launch_type FARGATE.
    I have a working config for prefect agent that executes the flow without errors. However, this involves creating a task-definitions.yaml:
    prefect agent ecs start -t token \
        -n aws-ecs-agent \
        -l label \
        --task-definition /path/to/task-definition.yaml \
        --cluster cluster_arn
    task-definitions.yaml
    networkMode: awsvpc
    cpu: 1024
    memory: 2048
    taskRoleArn: task_role_arn
    executionRoleArn: execution_role_arn
    The flow runs without errors, so the error is not due to IAM permissions. However, when running the ECS Agent using the
    --task-role-arn
    and
    --execution-role-arn
    CLI args, I run into the above-mentioned error. I have also tried running the Prefect agent using
    --launch-type FARGATE
    , which I believe is the default and does not need to be specified, but this does not work either.
    prefect agent ecs start -t token \
        -n aws-ecs-agent \
        -l ecs \
        --task-role-arn task_role_arn \
        --execution-role-arn execution_role_arn \
        --cluster cluster_arn
    I have also tried to pass in
    task_role_arn
    and
    execution_role_arn
    into the ECSRun() function within my flow, and ran into the same error. Is there any way to run ECS Agent using CLI args without using the task-definition file?
    12 replies · CA Lee, Michael Adkins, +2
• Chris Smith

    1 year ago
    Hi all, does anyone know if it’s possible to have multiple teams and/or restrict user access to particular projects?
    6 replies · Greg Roche
• Matthew Blau

    1 year ago
    Hello all, I have a flow that creates a container, starts the container, and grabs the container logs. Unfortunately I cannot seem to output logging infomation to where Prefect Server UI can see them. What am I doing wrong? Code and visualization will be in the thread
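    A guess at the cause: the Server UI only shows records emitted through the logger that Prefect provides, so logs fetched back from a container arrive as a plain return value and have to be re-emitted line by line. A minimal stdlib sketch (`relay_container_logs` is a hypothetical helper; inside a real task you would use `prefect.context.get("logger")` instead of `logging.getLogger`):

```python
import logging

# In a real Prefect task, replace this with: prefect.context.get("logger")
logger = logging.getLogger("flow")

def relay_container_logs(raw_logs: bytes):
    """Re-emit container log output line by line through the logger, so each
    line becomes a log record the orchestration UI can display."""
    lines = [line for line in raw_logs.decode("utf-8").splitlines() if line.strip()]
    for line in lines:
        logger.info(line)
    return lines
```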
    4 replies
• Justin

    1 year ago
    Hey guys, one question... I experienced this on multiple systems, usually all Ubuntu Server LTS 20.04, installing with pip. On every instance it will just not work out of the box: it fails to connect to the GraphQL endpoint on localhost:4200. Sometimes it worked to set the Docker-internal IP of the container, sometimes only the public IP worked (very insecure!), but it just never works out of the box, which is very annoying. Am I missing something? Steps I usually do:
    • new Ubuntu server
    • install Docker (official repo) & docker-compose
    • apt install python3 python3-dev python3-pip
    • pip install prefect
    • prefect backend server
    • prefect server start
    • prefect agent local start
    Then I won't get a connection, but it redirects me to the "Welcome to your Prefect UI" screen, where I try one of the above-mentioned IPs and, if I'm lucky, it works; if not, it won't work at all on that machine, even after pruning Docker and reinstalling everything. curl on localhost:4200, 127.0.0.1:4200, and dockerinternalip:4200 all work fine.
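    One likely explanation (an assumption, not confirmed in this thread): the Apollo URL is fetched by the *browser*, not by the server, so when the UI is opened from another machine, "localhost:4200" points at the client, which is why curl on the server works while the UI does not. A hedged sketch of a `~/.prefect/config.toml` on the server (the key name is from that era of Prefect Server and the hostname is a placeholder):

```toml
[server]
  [server.ui]
  # The browser fetches this URL, so it must be reachable from the
  # client machine, not just from the server itself.
  apollo_url = "http://your-server-hostname:4200/graphql"
```

    This would also explain why the Docker-internal or public IP sometimes worked: whichever address happened to be reachable from the browser in question.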
    13 replies · Michael Adkins, +1
• haf

    1 year ago
    Has anyone encountered and solved
    Unexpected error while running flow: KeyError('Task slug resolve_profiles_dir-1 not found in the current Flow; this is usually caused by changing the Flow without reregistering it with the Prefect API.')
    — I set up all agents/CLIs two days ago at their latest versions, and the agent is a k8s one. I'm using Prefect Cloud. This happens when I register the flow as follows and then run it from the UI (I verified the UI archives the old version and I'm triggering a run of the updated version):
    #!/usr/bin/env python
    
    from logging import getLogger
    from datetime import timedelta
    from os import getenv
    from pathlib import Path
    from pendulum import today
    from prefect.engine.state import Failed
    from prefect.schedules import IntervalSchedule
    from prefect.storage import Docker
    from prefect.utilities.notifications import slack_notifier
    from prefect.utilities.storage import extract_flow_from_file
    
    logger = getLogger("dbt.deploy")
    
    with open('requirements.txt') as file:
        packages = list(line.strip() for line in file.readlines())

    docker = Docker(
        registry_url="europe-docker.pkg.dev/projecthere/cd",
        # dockerfile='Dockerfile', # Uncomment to use own Dockerfile with e.g. dependencies installed
        image_name="analytics-dbt",
        image_tag="0.14.11",
        python_dependencies=packages
    )

    slack = slack_notifier(
        only_states=[Failed],
        webhook_secret='SLACK_WEBHOOK_URL')

    every_hour = IntervalSchedule(
        start_date=today('utc'),
        interval=timedelta(hours=1))

    flows = sorted(Path('flows').glob('*.py'))

    # Add flows
    for file in flows:
        flow = extract_flow_from_file(file_path=file)
        logger.info('Extracted flow from file before build')

        docker.add_flow(flow)

    # Build storage with all flows
    docker = docker.build()

    # Update storage in flows and register
    for file in flows:
        flow = extract_flow_from_file(file_path=file)
        logger.info('Extracted flow from file after build')

        flow.storage = docker
        flow.state_handlers.append(slack)
        flow.schedule = every_hour

        logger.info('Registering...')
        flow.register(
            project_name='dbt',
            build=False,
            labels=['prod'],
            idempotency_key=flow.serialized_hash(),
        )
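    One thing worth checking (an assumption, not confirmed in this thread): `flow.serialized_hash()` may not change when only the storage image changes, so `idempotency_key=flow.serialized_hash()` can cause the API to skip re-registration even though the newly pushed image contains different task slugs, which matches this KeyError. A stdlib sketch of a key that also varies with the image tag (`registration_key` is a hypothetical helper, not a Prefect API):

```python
import hashlib

def registration_key(flow_hash: str, image_tag: str) -> str:
    """Mix the serialized-flow hash with the storage image tag so that
    pushing a new image always forces a fresh registration."""
    return hashlib.sha256(f"{flow_hash}:{image_tag}".encode()).hexdigest()
```

    Used as, e.g., `idempotency_key=registration_key(flow.serialized_hash(), docker.image_tag)`.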
    3 replies
• Jack Sundberg

    1 year ago
    How does your team advise scaling concurrency in Agents? My LocalAgent crashes once too many flows (roughly 300) are submitted and running concurrently -- and I believe this is because too many subprocesses are created, causing my machine to kill the parent process.
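    Not Prefect API, just an illustration of the failure mode described above: each submitted run becomes a child process of one parent, so ~300 concurrent runs means ~300 subprocesses, and the OS may kill the parent under memory pressure. Bounding the fan-out with a fixed-size worker pool is the generic fix; with Prefect the analogous options would be spreading labels across several agents or moving to a container-based agent.

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_all(commands, max_workers=50):
    """Run each command in a subprocess, but never more than max_workers
    at once, so the parent's child-process count stays bounded."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(
            lambda cmd: subprocess.run(cmd, capture_output=True).returncode,
            commands,
        ))
```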
    12 replies · Samuel Hinton, +1
• Justin Chavez

    1 year ago
    Hi all, I am trying to run multiple linked StartFlowRuns that share results from some of the tasks of one flow with the next. I was looking at the previous answer (https://prefect-community.slack.com/archives/CL09KU1K7/p1607733221071000?thread_ts=1607727047.066800&cid=CL09KU1K7), but it is not working for me, as StartFlowRun returns a Task, not a State, in my instance. Is there a new solution for this?
    7 replies · Michael Adkins
• haf

    1 year ago
    Thanks for bearing with all my questions. I have one about k8s agents. Seeing that they are stateless pods, is there a way to set the agent id? https://docs.prefect.io/orchestration/agents/kubernetes.html#kubernetes-agent — this page doesn't mention anything about it, and it would seem that once the pod with the agent is restarted it gets a new id.
    19 replies · Michael Adkins
• John Grubb

    1 year ago
    Hi there 👋 - a question about scheduling and parameters in the UI. What we'd really like to do is set up multiple schedules for a given flow, each with its own set of parameters. Has this ever been mentioned before? Our use case is that we want to reuse a flow we've built without having to register it multiple times or copy it in the codebase.
    3 replies · nicholas
• Charles Liu

    1 year ago
    Hey all, just looking for some further clarification on the proper usage of storage = Docker() and KubernetesRun() together. Could someone shed some light on why I can deploy and run a flow with storage=Docker(...dockerfile="my dockerfile") [with confirmed EC2 activity and successful front-end runs], but when I mute that dockerfile param and instead try to use KubernetesRun(image="my image here") with the same kind of registry as the Docker storage destination, my private packages are missing? In my case, both images are built from the same Dockerfile; the latter scenario just involves pushing to a repo first and attempting to call it with KubernetesRun().
    4 replies · Chris White