  • Yusuf

    3 weeks ago
    Hey, I just finished going through the Get Started section of the Prefect 2.0 docs. I haven't gotten to the Concepts section yet, so bear with me if this is a dumb question. But how do I define dependencies between tasks? What is the equivalent of
    task.set_upstream
    from Prefect 1? I know Prefect 2.0 tries to remove the strict dependency on DAGs, but are DAGs gone altogether?
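    From skimming the API reference, it looks like wait_for might be what replaces set_upstream; is it something like this? (Just my guess at the pattern, not sure it's the intended way.)
    from prefect import flow, task

    @task
    def extract():
        ...

    @task
    def load():
        ...

    @flow
    def my_flow():
        e = extract.submit()
        # make load wait for extract even though no data is passed between them
        load.submit(wait_for=[e])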
    Taylor Curran
    2 replies
  • Aman Tripathi

    2 weeks ago
    Hi team, newbie to 2.0 here. We are in the process of upgrading to 2.0. We are using Prefect + PySpark in our project. In Prefect 1.0 we used resource_manager (the decorator) for the SparkContext, so the SparkSession is initialized at the beginning of the flow and remains available throughout the lifetime of the flow. What is the equivalent of the resource_manager decorator in 2.0? Thanks for your help.
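    For context, what I'm hoping to end up with in 2.0 is something like a plain context manager wrapping the session, roughly this sketch (no idea if this is the recommended replacement; names and paths are made up):
    from contextlib import contextmanager
    from prefect import flow, task
    from pyspark.sql import SparkSession

    @contextmanager
    def spark_session(app_name="my-app"):
        # create the session once at the start of the flow, stop it when the flow exits
        spark = SparkSession.builder.appName(app_name).getOrCreate()
        try:
            yield spark
        finally:
            spark.stop()

    @task
    def row_count(spark, path):
        return spark.read.parquet(path).count()

    @flow
    def my_flow():
        with spark_session() as spark:
            row_count(spark, "/data/input.parquet")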
    Anna Geller
    2 replies
  • gertjan

    2 weeks ago
    Hi everyone, I'm quite new to Prefect and testing something out, but I'm stuck. I want to use
    Deployment.build_from_flow
    ; it works when I put this in the same directory as the
    flows
    . But what I want is the following:
    - deployments:
       -- deployments.py (file with all my deployments)
    - flows:
       -- flow_a.py
       -- flow_b.py
    When I generate the deployments I get:
    ValueError: '..../updates/flows/flow_a.py' is not in the subpath of '.../updates/deployments' OR one path is relative and the other is absolute.
    It has to do with this line
    entry_path = Path(flow_file).absolute().relative_to(Path(".").absolute())
    It does not look like I can edit this path. Has anyone done this before? Is this even possible?
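    For reference, deployments.py currently looks roughly like this (flow and deployment names are simplified placeholders):
    from prefect.deployments import Deployment

    from flows.flow_a import flow_a
    from flows.flow_b import flow_b

    deployment_a = Deployment.build_from_flow(flow=flow_a, name="flow-a")
    deployment_b = Deployment.build_from_flow(flow=flow_b, name="flow-b")

    if __name__ == "__main__":
        deployment_a.apply()
        deployment_b.apply()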
    Anna Geller
    5 replies
  • Falk

    2 weeks ago
    Hey everyone, just wondering if anybody is familiar with https://github.com/ydataai/pandas-profiling and has managed to implement it in their Prefect flows? I'm looking for a way to run pandas-profiling on a DataFrame and show the results in the artifacts. Has something like this been implemented yet?
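    This is roughly the kind of task I have in mind; for now I just dump the report to an HTML file, since I'm not sure whether or how artifacts could display it (sketch only, data is a placeholder):
    import pandas as pd
    from pandas_profiling import ProfileReport
    from prefect import flow, task

    @task
    def profile_dataframe(df: pd.DataFrame) -> str:
        # build the pandas-profiling report and render it to HTML
        report = ProfileReport(df, title="Profiling Report", minimal=True)
        output_path = "profile.html"
        report.to_file(output_path)
        return output_path

    @flow
    def profiling_flow():
        df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})
        profile_dataframe(df)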
    Falk
    1 reply
  • Angel Acosta

    2 weeks ago
    Hello again, I am trying to use the GitHub storage block. I created the block; here is the URL: https://github.com/AngelSantiagoAcosta/prefect_sb . I run the agent and run the deployment with defaults. Here is the deployment command: prefect deployment build lis_qc.py:process_lis_data --name lis_qc_deployment -q "lis qc queue" -sb github/my-github --apply . The agent returns the following error: OSError: Failed to pull from remote: 'g' is not recognized as an internal or external command, operable program or batch file.
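    For reference, the equivalent Python for creating that block would be roughly this (same repo URL and block name as in the command above; just a sketch of what I set up):
    from prefect.filesystems import GitHub

    block = GitHub(repository="https://github.com/AngelSantiagoAcosta/prefect_sb")
    block.save("my-github")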
    Cole Murray
    7 replies
  • Yusuf

    2 weeks ago
    Looking for advice on a pattern I'm trying to implement in Prefect 1 that behaves like a local filesystem sensor. I don't want to rig it up to Lambda/Event Grid for this use case. I have files arriving in a local filesystem. My thought is to have a first task that runs every 10 minutes and checks whether a new file has arrived; it determines what is "new" by checking a log of processed file names that the last task in the flow updates. I can't move the files themselves to a landing/processed area because this sits on top of a process already in place that I can't change. My question is: when the first task runs and sees no new files, how do I make the remaining tasks not execute, while the flow and all its tasks still show as successful?
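    One idea I'm considering is raising SKIP from that first task and letting it cascade downstream, roughly like this (sketch only; the paths and log format are placeholders):
    import os
    from prefect import Flow, task
    from prefect.engine.signals import SKIP

    WATCH_DIR = "/data/incoming"            # placeholder landing directory
    PROCESSED_LOG = "processed_files.txt"   # placeholder log of already-processed names

    @task
    def check_for_new_files():
        processed = set()
        if os.path.exists(PROCESSED_LOG):
            with open(PROCESSED_LOG) as f:
                processed = {line.strip() for line in f}
        new_files = [name for name in os.listdir(WATCH_DIR) if name not in processed]
        if not new_files:
            # skipping should cascade to downstream tasks while the run still ends green
            raise SKIP("No new files found")
        return new_files

    @task
    def process_files(files):
        ...  # existing processing, then append the names to PROCESSED_LOG

    with Flow("file-sensor") as flow:
        files = check_for_new_files()
        process_files(files)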
    11 replies
  • gertjan

    2 weeks ago
    I know you can change profiles via the CLI, but can you change profiles in Python code? For example: I have one flow with 2 tasks, but I want to execute each task on a different profile (basically different env vars). Also, from the docs:
    prefect config set VAR2=Y VAR3=Z
    Gives:
    Unknown setting name 'VAR2'.
    And nothing is added in the
    profiles.toml
    file. What am I missing?
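    To make it concrete, the behaviour I'm after is roughly this (variable names are made up):
    import os
    from prefect import flow, task

    @task
    def task_a():
        # should see the env vars from one profile
        print(os.environ.get("VAR2"))

    @task
    def task_b():
        # should see the env vars from another profile
        print(os.environ.get("VAR3"))

    @flow
    def my_flow():
        task_a()
        task_b()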
    Jeff Hale
    2 replies
  • Marty Ko

    1 month ago
    Hi, new to Prefect. I'm curious whether V2 is actually production-ready? I can tell, at least from the UI, that a lot of features are missing, like creating a flow run, etc. Since this is my first time setting it up, is it better to start off with V1 first?
    Anna Geller
    +2
    13 replies
  • Lucas Cavalcanti Rodrigues

    2 weeks ago
    Hi, everyone. How can I run a flow several times using different parameters? I'm trying to use create_flow_run.map like this:
    dump_to_gcs_flow = create_flow_run.map(
        flow_name=unmapped(utils_constants.FLOW_DUMP_TO_GCS_NAME.value),
        project_name=unmapped(constants.PREFECT_DEFAULT_PROJECT.value),
        parameters=tables_to_zip,
        labels=unmapped(current_flow_labels),
        run_name=unmapped("Dump to GCS"),
    )
    Except for
    parameters
    , all the other arguments are constants, so I use
    unmapped
    on them.
    tables_to_zip
    is a list of dictionaries containing the parameter values for each table to be zipped. However, this didn't work. I'm currently receiving the error:
    prefect.exceptions.ClientError: [{'message': 'parsing UUID failed, expected String, but encountered Array', 'locations': [{'line': 2, 'column': 5}], 'path': ['flow_run'], 'extensions': {'path': '$.selectionSet.flow_run.args.where.id._eq', 'code': 'parse-failed', 'exception': {'message': 'parsing UUID failed, expected String, but encountered Array'}}}]
    What am I doing wrong here?
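    For context, tables_to_zip looks roughly like this (keys and values here are just placeholders), so I'd expect one child flow run per dictionary:
    tables_to_zip = [
        {"table_id": "table_a"},
        {"table_id": "table_b"},
    ]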
    1 reply