• Damien Ramunno-Johnson

    1 year ago
    I have a quick question about Cloud. Is the traffic from the endpoint to the agent one-way? Does the agent just poll the endpoint every 10 seconds and then tell the endpoint that it is running the job?
    Damien Ramunno-Johnson
    Kevin Kho
    5 replies
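    For orientation, a rough sketch of that polling loop (an editor's illustration, not Prefect's actual source; the helper names are hypothetical stand-ins for the real GraphQL calls):
    import time

    def query_ready_runs():
        # hypothetical: outbound HTTPS/GraphQL query to Cloud for scheduled runs
        return []

    def deploy_and_report(run):
        # hypothetical: start the flow run locally, then report its state back to Cloud
        pass

    def agent_loop(poll_interval=10):
        # all traffic originates from the agent; Cloud never opens an inbound connection
        while True:
            for run in query_ready_runs():
                deploy_and_report(run)
            time.sleep(poll_interval)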
  • Lukas N.

    1 year ago
    Hello 👋, I've hit an issue while retrying a mapped task: it always re-runs the first mapped task. Code and more details in thread.
    Lukas N.
    Jenny
    +1
    17 replies
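    A minimal mapped-task flow for orientation (an editor's illustration of the setup described above, not the poster's code, which is in the thread; assumes Prefect 1.x):
    from prefect import Flow, task

    @task
    def make_items():
        return [1, 2, 3]

    @task
    def process(item):
        # retrying one failed child should re-run only that map index,
        # not fall back to index 0
        return item * 2

    with Flow("mapped-example") as flow:
        process.map(make_items())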
  • Garret Cook

    1 year ago
    Will a ShellTask save its return value into the flow’s S3Result? If I have return_all=True set for the shell task, will all the logs get pickled and saved to S3?
    Garret Cook
    Michael Adkins
    3 replies
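    A sketch of the setup being asked about (an editor's illustration; the bucket name is a placeholder, and checkpointing is assumed to be enabled, as it is by default on Cloud/Server):
    from prefect import Flow
    from prefect.engine.results import S3Result
    from prefect.tasks.shell import ShellTask

    # return_all=True makes the task return every line of stdout as a list,
    # so a flow-level S3Result would persist (pickle) that whole list
    shell = ShellTask(return_all=True)

    with Flow("shell-example", result=S3Result(bucket="my-bucket")) as flow:
        lines = shell(command="echo hello && echo world")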
  • Raed

    1 year ago
    Hello, what would be the best way to pass a Bitbucket access token to prefect-server (i.e., set a secret in Prefect Server)? I set up the server using the instructions in the most recent stream.

    When trying the following for a given flow:
    import os

    from prefect.run_configs import KubernetesRun
    from prefect.storage import Bitbucket

    flow.run_config = KubernetesRun(  # note: the attribute is run_config, not run_configs
        image=<image>,
        env={
            # forward the token so the flow-run pod sees it as a local secret
            "PREFECT__CONTEXT__SECRETS__BITBUCKET_ACCESS_TOKEN": os.environ[
                "BITBUCKET_ACCESS_TOKEN"
            ]
        },
        image_pull_policy="Always",
    )
    flow.storage = Bitbucket(
        project=<project>,
        repo=<repo>,
        path="flows/example_flow.py",
        access_token_secret="BITBUCKET_ACCESS_TOKEN",  # name of the Prefect secret to read
    )
    I get the following error in the UI:
    Failed to load and execute Flow's environment: ValueError('Local Secret "BITBUCKET_ACCESS_TOKEN" was not found.')
    Wouldn't the access token secret have been set in the run config? I also have the same environment variable in a custom Docker image used by the run config.
    Raed
    Kevin Kho
    48 replies
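    One hedged note on the error text: in Prefect 1.x a "local secret" is resolved from prefect.context inside whichever process loads the storage, so the variable has to be present there. A minimal illustration (the token value is a placeholder):
    import os

    # PREFECT__CONTEXT__SECRETS__<NAME> surfaces <NAME> as a local secret
    # in the process where it is set, e.g. the one unpacking the flow's storage
    os.environ["PREFECT__CONTEXT__SECRETS__BITBUCKET_ACCESS_TOKEN"] = "<token>"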
  • Florian Kühnlenz

    1 year ago
    Hi. I have an issue where, when I change the default parameters on the settings page of the UI, the changes do not apply to already-scheduled flows. I believe there was a fix for this issue in the past, but maybe I misremember. Anyway, is this expected behavior, or should I file a bug?
    Florian Kühnlenz
    Kevin Kho
    6 replies
  • Krapi Shah

    1 year ago
    Hi. From what I understand, Prefect marks a task state as Failed only in case of an error or exception. But what if I want to set the state based on the return value? Can I do that?
    Krapi Shah
    Jenny
    2 replies
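    One common pattern for this (an editor's sketch, assuming Prefect 1.x; the names and condition are illustrative): raise a FAIL signal from inside the task based on the value.
    from prefect import task
    from prefect.engine import signals

    @task
    def check(value):
        if value < 0:  # illustrative condition
            raise signals.FAIL("negative value, marking the task as Failed")
        return value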
  • Garret Cook

    1 year ago
    How do I get the flow name at runtime? I was trying prefect.context.get("flow_name")
    Garret Cook
    Kevin Kho
    +1
    18 replies
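    That key does exist in the 1.x run context, but it is only populated while a flow is actually running; a minimal sketch (an editor's illustration):
    import prefect
    from prefect import Flow, task

    @task
    def report_flow_name():
        # context is populated at runtime, so this is None outside a flow run
        return prefect.context.get("flow_name")

    with Flow("who-am-i") as flow:
        report_flow_name()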
  • Tomás Emilio Silva Ebensperger

    1 year ago
    I changed to the new API key for Cloud, but I get this error when registering the flows:
    prefect.utilities.exceptions.ClientError: Malformed response received from Cloud - please ensure that you have an API token properly configured.
    Tomás Emilio Silva Ebensperger
    Chris White
    9 replies
  • Michael Hadorn

    1 year ago
    Hi all, I can't use the full CPU with mapped tasks on Prefect Server using the local Dask executor (threads). I have a machine running Prefect Server with 8 CPUs (on Ubuntu). A flow is executed as a docker run with mapped, CPU-intensive tasks using the local Dask executor with threads. But whatever I try, the maximum CPU peak in this flow container is 100% (taken from docker stats). I know from other containers running on the same server that the maximum is 100% per core, so in my case 800%. It looks like Dask (or the container execution) is limited to only one CPU. Also, if I look in top, only one CPU is really in use. Is there any limit on Prefect Server? Or is it Linux not using different CPUs for the threads?
    Michael Hadorn
    Michael Adkins
    17 replies
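    A hedged note on one likely culprit: with scheduler="threads" every mapped task shares a single Python process, so pure-Python CPU-bound work is pinned near one core by the GIL. Switching the local Dask executor to processes is one way to spread across all 8 CPUs (an editor's sketch, Prefect 1.x):
    from prefect import Flow
    from prefect.executors import LocalDaskExecutor

    with Flow("cpu-bound-example") as flow:
        pass  # the mapped CPU-intensive tasks would go here

    # separate processes sidestep the GIL, unlike threads in one interpreter
    flow.executor = LocalDaskExecutor(scheduler="processes", num_workers=8)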
  • Lukas N.

    1 year ago
    Hi all, I've observed different restart behaviour from the ZombieKiller and the UI. I have a task that uses the LOOP signal and output persisting with S3Result. My expectation is that if I do 4/10 iterations of the task, the result contains the output of the 4th iteration, which is also the input to the 5th one. If the 5th one fails (in my case the process dies and stops sending 💓), the ZombieKiller restarts the task, but it ignores the result and starts from the 1st iteration! This doesn't happen if I restart it from the UI; there it correctly picks up the result and continues with the 5th iteration.
    Lukas N.
    2 replies
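    A minimal LOOP task for orientation (an editor's sketch, not the poster's code; assumes Prefect 1.x): each iteration hands its payload forward through the signal's result, which is what a restart should pick back up.
    import prefect
    from prefect import task
    from prefect.engine.signals import LOOP

    @task
    def looped(n_iterations=10):
        payload = prefect.context.get("task_loop_result", 0)  # 0 on the first iteration
        if payload >= n_iterations:
            return payload
        raise LOOP(message=f"iteration {payload + 1}", result=payload + 1)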