• Daniel Sääf
    4 months ago
    Hi. I continue to ask my questions here - super grateful for the help I've gotten so far. I have now set up a flow that launches a number of subflows, which fetch data from Google Cloud Storage and write the data to BigQuery. I'm running the flow locally (Prefect 2.0) and it's connected to Prefect Cloud. However, I sometimes run into an error I cannot really understand and don't know how to troubleshoot. It only happens occasionally and I cannot reproduce it. I moved the trace to the thread. The log message at 16:34:10.826 is the last thing that happens in the task read_blob, in which the error occurs. So it looks to me like something goes wrong when reporting the task state. The error message doesn't tell me much - so any advice on how to troubleshoot this would be really helpful (or a guess at what might be wrong?)
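    A guess while the root cause is unclear, not a confirmed fix: since the trace suggests a transient failure while reporting task state to Prefect Cloud, adding retries lets an occasional hiccup recover on its own, and running with PREFECT_LOGGING_LEVEL=DEBUG surfaces the API calls around the failure. A minimal sketch, assuming a signature for read_blob:
    from prefect import task

    # Hypothetical signature for the read_blob task from the trace above;
    # retries re-run the task if a transient error marks it as failed.
    @task(retries=2, retry_delay_seconds=10)
    def read_blob(bucket: str, blob_name: str) -> bytes:
        ...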
    6 replies
  • Constantino Schillebeeckx
    4 months ago
    Is something funky going on with logging in the Prefect UI? For flows that failed, I'm not getting the full logs (in the UI). I send those logs to AWS CloudWatch and can confirm that the full expected logs are there 😞
    10 replies
  • Clément VEROVE
    4 months ago
    Hi everyone 👋 I'm having some trouble running my flow on Kubernetes. My flow includes Docker commands such as docker volume create / docker-compose up, so I need a Docker daemon, and it cannot live outside my job. Here is my job template:
    apiVersion: batch/v1
    kind: Job
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: flow-container
              # image and command are presumably injected by the Prefect agent
            - name: dind-daemon
              image: docker:stable-dind
              env:
                - name: DOCKER_TLS_CERTDIR
                  value: ""
              securityContext:
                privileged: true
          imagePullSecrets:
            - name: regcred
    It works, but my Docker daemon container never stops... Any ideas?
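    One possible explanation and fix, sketched rather than taken from the thread: a Kubernetes Job only completes once every container in the pod has exited, so the dind-daemon sidecar must be told to shut down when the flow container finishes. A common pattern is a shared emptyDir volume with a sentinel file - the daemon polls for the file and exits when it appears, and the flow container touches it as its last step:
    apiVersion: batch/v1
    kind: Job
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: flow-container
              # ...as before; its command must end with: touch /shared/done
              volumeMounts:
                - name: shared
                  mountPath: /shared
            - name: dind-daemon
              image: docker:stable-dind
              command: ["sh", "-c"]
              # run the daemon in the background, exit once the flow signals completion
              args: ["dockerd-entrypoint.sh & while [ ! -f /shared/done ]; do sleep 2; done"]
              env:
                - name: DOCKER_TLS_CERTDIR
                  value: ""
              securityContext:
                privileged: true
              volumeMounts:
                - name: shared
                  mountPath: /shared
          volumes:
            - name: shared
              emptyDir: {}
          imagePullSecrets:
            - name: regcred
    Since the agent controls the flow container's command, appending the touch may mean wrapping that container's entrypoint.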
    3 replies
  • Daniel Saxton
    4 months ago
    Any suggestions / best practices for triggering Prefect jobs within a CI/CD pipeline? Suppose we're building a Docker image within the pipeline and pushing it to a container registry, and we want to execute a flow using that image, guaranteeing that we're always using the latest. Can we do this with a Docker agent?
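    One approach sometimes used here, sketched under assumptions (the image name and project are made up): point the flow's run config at a mutable tag, have CI re-register the flow after pushing the image, and run a Docker agent, which pulls registry images before each run unless started with --no-pull:
    from prefect import Flow
    from prefect.run_configs import DockerRun

    with Flow("etl") as flow:
        ...  # tasks go here

    # "latest" is a mutable tag, so each run picks up whatever CI pushed last.
    flow.run_config = DockerRun(image="registry.example.com/team/etl:latest")
    The pipeline would then run something like prefect register -p flows/ --project my-project after docker push, so the registered flow stays in step with the image.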
    3 replies
  • Joshua Greenhalgh
    4 months ago
    Hi, I wonder if anyone could help me with a problem I have working with the Dask KubeCluster? The issue is that various secrets I have mounted into the usual flow jobs don't get carried over to the pods started by Dask. There is added complexity in that I am using two images, a dev one and a non-dev one, tied to two different Prefect projects. I am able to switch the image with something like this:
    DEV_TAG = os.environ.get("DEV", "") != ""
    
    JOB_IMAGE_NAME = f"blah/flows{':dev' if DEV_TAG else ''}"
    And then in each flow I reference JOB_IMAGE_NAME - this just changes the image but otherwise uses the job template I have defined on the agent:
    apiVersion: batch/v1
    kind: Job
    spec:
      template:
        spec:
          containers:
            - name: flow
              imagePullPolicy: Always
              env:
                - name: SOME_ENV
                  valueFrom:
                    secretKeyRef:
                      name: secret-env-vars
                      key: some_env
                      optional: false
    Now, when I specify the Dask setup, I do the following:
    executor=DaskExecutor(
        cluster_class=lambda: KubeCluster(make_pod_spec(image=JOB_IMAGE_NAME)),
        adapt_kwargs={"minimum": 2, "maximum": 3},
    )
    But this is obviously missing the env part of my default template, and I would like not to have to respecify it (it's much bigger than the above snippet). Is it possible to grab a handle on the default template and just override the image name?
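    One possible workaround, sketched (make_pod_spec takes an env dict; FORWARDED_VARS is an assumed name): the agent's template has already injected the secret values into the flow-run pod, so the flow can read them from its own environment and forward them to the worker pods instead of repeating the secretKeyRef blocks:
    import os

    from dask_kubernetes import KubeCluster, make_pod_spec
    from prefect.executors import DaskExecutor

    # Variables the agent's job template injected into the flow-run pod
    # and that the Dask worker pods also need.
    FORWARDED_VARS = ["SOME_ENV"]

    executor = DaskExecutor(
        cluster_class=lambda: KubeCluster(
            make_pod_spec(
                image=JOB_IMAGE_NAME,  # defined as in the snippet above
                env={var: os.environ[var] for var in FORWARDED_VARS},
            )
        ),
        adapt_kwargs={"minimum": 2, "maximum": 3},
    )
    Alternatively, KubeCluster also accepts a complete pod template (a dict or a path to a YAML file), so the agent's template could be loaded and only the image field swapped before passing it in.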
    11 replies
  • Kayvan Shah
    4 months ago
    I am trying to write a DeploymentSpec YAML config file referring to this example:
    $ prefect deployment inspect 'hello-world/hello-world-daily'
    {
        'id': '710145d4-a5cb-4e58-a887-568e4df9da88',
        'created': '2022-04-25T20:23:42.311269+00:00',
        'updated': '2022-04-25T20:23:42.309339+00:00',
        'name': 'hello-world-daily',
        'flow_id': '80768746-cc02-4d25-a01c-4e4a92797142',
        'flow_data': {
            'encoding': 'blockstorage',
            'blob': '{"data": "\\"f8e7f81f24512625235fe5814f1281ae\\"", "block_id":
    "c204821d-a44f-4b9e-aec3-fcf24619d22f"}'
        },
        'schedule': {
            'interval': 86400.0,
            'timezone': None,
            'anchor_date': '2020-01-01T00:00:00+00:00'
        },
        'is_schedule_active': True,
        'parameters': {},
        'tags': ['earth'],
        'flow_runner': {'type': 'universal', 'config': {'env': {}}}
    }
    Is there an extensive example available for writing the complete config for a flow?
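    There doesn't seem to be one exhaustive reference, but the fields in the inspect output map directly onto DeploymentSpec arguments, so a Python spec may be the easiest complete example to start from. A sketch against the beta API, with the flow location assumed:
    from datetime import timedelta

    from prefect.deployments import DeploymentSpec
    from prefect.flow_runners import UniversalFlowRunner
    from prefect.orion.schemas.schedules import IntervalSchedule

    DeploymentSpec(
        name="hello-world-daily",
        flow_location="./hello_world.py",  # path to the file defining the flow
        schedule=IntervalSchedule(interval=timedelta(days=1)),
        tags=["earth"],
        flow_runner=UniversalFlowRunner(env={}),
    )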
    3 replies
  • Kayvan Shah
    4 months ago
    I can't work out why so many late runs are piling up. I have scheduled about 6-7 flows on a single-node cluster via minikube.
    12 replies
  • Jan Domanski
    4 months ago
    Hi there, what's the best practice for passing database parameters into Prefect flows? I have a flow that I want to connect to different DBs (alpha/beta/prod) - should I just use the Parameter mechanism for this?
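    A common split, sketched in Prefect 1 style with made-up host names: a Parameter for the non-sensitive choice of environment, and a Secret for the credentials, so the same flow can be pointed at alpha/beta/prod per run:
    from prefect import Flow, Parameter, task
    from prefect.tasks.secrets import PrefectSecret

    @task
    def run_queries(env: str, password: str):
        # Hypothetical mapping from environment name to database host.
        host = {"alpha": "db-alpha", "beta": "db-beta", "prod": "db-prod"}[env]
        ...  # connect to `host` with `password` and do the work

    with Flow("db-flow") as flow:
        env = Parameter("env", default="alpha")   # overridable per flow run
        password = PrefectSecret("DB_PASSWORD")   # kept out of flow metadata
        run_queries(env, password)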
    3 replies
  • Marwan Sarieddine
    4 months ago
    Not sure if others have faced this as well, but we experienced an issue where all flows scheduled to run on Prefect Cloud at 4:00 PM EST were only picked up by our agents at 4:25 PM EST. Things seem to be back to normal now - runs are being picked up by our agents promptly.
    1 reply
  • Hui Zheng
    4 months ago
    FYI, we experienced flow run issues from 1:00 to 1:30 PM PDT: the scheduled flow runs were not triggered or picked up to run.
    1 reply