prefect-community
  • h

    Hui Zheng

    10/13/2020, 9:10 PM
    Hello, I am running into a strange failure at the
    Cloudpickle serialization check
    step of the health-check during flow.register(). Could you help? The error message is below
    Beginning health checks...
    System Version check: OK
    Traceback (most recent call last):
      File "/opt/prefect/healthcheck.py", line 147, in <module>
        flows = cloudpickle_deserialization_check(flow_file_paths)
      File "/opt/prefect/healthcheck.py", line 40, in cloudpickle_deserialization_check
        flows.append(cloudpickle.load(f))
      File "/usr/local/lib/python3.8/site-packages/environs/__init__.py", line 334, in __getattr__
        return functools.partial(self.__custom_parsers__[name], self)
      File "/usr/local/lib/python3.8/site-packages/environs/__init__.py", line 334, in __getattr__
        return functools.partial(self.__custom_parsers__[name], self)
      File "/usr/local/lib/python3.8/site-packages/environs/__init__.py", line 334, in __getattr__
        return functools.partial(self.__custom_parsers__[name], self)
      [Previous line repeated 994 more times]
    RecursionError: maximum recursion depth exceeded
    
    Removing intermediate container 98613182a8f2
    The command '/bin/sh -c python /opt/prefect/healthcheck.py '["/opt/prefect/flows/20-06-0-standard-ag-stage.prefect"]' '(3, 8)'' returned a non-zero code: 1
    Traceback (most recent call last):
      File "build_and_register.py", line 174, in <module>
        flow.register(
      File "/usr/local/lib/python3.8/site-packages/prefect/core/flow.py", line 1588, in register
        registered_flow = client.register(
      File "/usr/local/lib/python3.8/site-packages/prefect/client/client.py", line 734, in register
        serialized_flow = flow.serialize(build=build)  # type: Any
      File "/usr/local/lib/python3.8/site-packages/prefect/core/flow.py", line 1439, in serialize
        storage = self.storage.build()  # type: Optional[Storage]
      File "/usr/local/lib/python3.8/site-packages/prefect/environments/storage/docker.py", line 346, in build
        self._build_image(push=push)
      File "/usr/local/lib/python3.8/site-packages/prefect/environments/storage/docker.py", line 412, in _build_image
        raise ValueError(
    ValueError: Your docker image failed to build!  Your flow might have failed one of its deployment health checks - please ensure that all necessary files and dependencies have been included.
    Please see more context and the details in the thread.
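    For anyone hitting something similar: the environs __getattr__ frames in the traceback suggest an environs.Env instance is being cloudpickled along with the flow. A minimal sketch of one workaround, assuming the tasks currently close over an Env object (the setting name below is hypothetical): read the values into plain Python objects at import time and reference those inside the tasks instead.
    from environs import Env

    from prefect import Flow, task

    env = Env()
    env.read_env()

    # Read settings into plain values at import time so the flow pickles
    # strings/ints rather than the Env object itself.
    MY_SETTING = env.str("MY_SETTING", "default")  # hypothetical setting name

    @task
    def use_config():
        # Reference the plain value, not `env`, so the Env instance stays out
        # of the pickled flow.
        return MY_SETTING

    with Flow("example") as flow:
        use_config()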
    j
    • 2
    • 17
  • m

    Marwan Sarieddine

    10/13/2020, 9:51 PM
    Hi folks, I am trying to schedule a flow to run every 4th business day, so I created my own filter function, following a similar format to how Prefect's built-in schedule filter functions are implemented, but I am getting a marshmallow.exceptions.ValidationError.
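    One thing worth checking: Prefect serializes schedule filters by reference, so a schedule built on an arbitrary custom filter callable can fail validation at registration time, which may be where the marshmallow error comes from. A minimal sketch using only built-in filters (the interval, start date, and filter choice are illustrative, not a 4th-business-day implementation):
    from datetime import timedelta

    import pendulum
    from prefect.schedules import Schedule, filters
    from prefect.schedules.clocks import IntervalClock

    schedule = Schedule(
        clocks=[IntervalClock(timedelta(days=1), start_date=pendulum.datetime(2020, 10, 1, 9, 0, tz="UTC"))],
        filters=[filters.is_weekday],  # built-in filters serialize cleanly
    )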
    j
    • 2
    • 10
  • p

    Pedro Machado

    10/13/2020, 11:16 PM
    Hi there. Is it possible to use Prefect cloud to send a notification from a state handler? I need to send an email conditionally depending on the result of a task.
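    It is: a state handler fires on every state change, so it can inspect the new state (and, depending on result configuration, the task's return value) and send mail only when the condition holds. A minimal sketch using plain smtplib; the addresses, SMTP host, and the condition are placeholders, and Prefect's notification tasks or Cloud hooks could be swapped in:
    import smtplib
    from email.message import EmailMessage

    from prefect import task


    def notify_on_result(task_obj, old_state, new_state):
        # Runs on every state change; only act once the task has finished.
        if new_state.is_finished():
            # new_state.result availability depends on your result configuration.
            if new_state.is_failed() or new_state.result == "needs attention":  # hypothetical condition
                msg = EmailMessage()
                msg["Subject"] = f"Task {task_obj.name} finished: {type(new_state).__name__}"
                msg["From"] = "alerts@example.com"  # placeholder
                msg["To"] = "me@example.com"        # placeholder
                msg.set_content(repr(new_state.result))
                with smtplib.SMTP("smtp.example.com") as server:  # placeholder SMTP host
                    server.send_message(msg)
        return new_state


    @task(state_handlers=[notify_on_result])
    def my_task():
        return "ok"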
    c
    • 2
    • 9
  • k

    Krzysztof Nawara

    10/14/2020, 8:05 AM
    Hi, I need all of the tasks in the flow to have some shared configuration, so I decided to follow the Prefect example and create a custom decorator, but I have a problem with correctly forwarding the arguments to the task. EDIT: details (code and stack traces) moved to the thread. Any suggestions on how to make it work?
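    A minimal sketch of one way to write such a decorator: apply the shared settings and let Prefect's own task decorator handle the argument forwarding. The retry settings are just examples of shared config:
    import functools
    from datetime import timedelta

    from prefect import Flow, task


    def shared_task(fn=None, **overrides):
        # Support both @shared_task and @shared_task(max_retries=5).
        if fn is None:
            return functools.partial(shared_task, **overrides)
        settings = dict(max_retries=2, retry_delay=timedelta(seconds=30))  # example shared config
        settings.update(overrides)
        # task(**settings)(fn) returns a FunctionTask whose __call__ forwards
        # *args/**kwargs into fn when used inside a Flow context.
        return task(**settings)(fn)


    @shared_task
    def add(x, y):
        return x + y


    with Flow("shared-config") as flow:
        total = add(1, 2)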
    j
    c
    • 3
    • 13
  • a

    Alberto de Santos

    10/14/2020, 8:50 AM
    Hi guys, Can anyone give me a piece of advice on how to run a flow from python without manually inserting the flow_id?
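    A minimal sketch of one way, assuming the standard GraphQL schema: look the flow id up by name with the Client, then create the run (the flow name and the version ordering are placeholders/assumptions):
    from prefect import Client

    client = Client()
    result = client.graphql(
        """
        query {
          flow(
            where: {name: {_eq: "my-flow"}, archived: {_eq: false}}
            order_by: {version: desc}
            limit: 1
          ) {
            id
          }
        }
        """
    )
    flow_id = result.data.flow[0].id
    client.create_flow_run(flow_id=flow_id)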
    j
    j
    • 3
    • 21
  • a

    as

    10/14/2020, 11:43 AM
    Hi, I was wondering what is supposed to happen when the target parameter is set but not the checkpoint parameter. The target here is set to the path of a file that is produced by the task (an onnx file in this case), and the result of the task is a string with the file path to that object. Since checkpointing is not defined, the result of the task is not saved, but the task is not run because the target exists. So is there a conflict here? If the target exists, how is the result passed on to the next task in the flow? This is a bit confusing to me. Can somebody explain what happens in this situation? Thanks. EDIT: I just figured out that the object generated by the task (whose path is specified in the target parameter) is overwritten with the result of the task. Ideally I would expect to be able to somehow define an artifact generated by the task as a target: if the object exists, do not redo the task and pass the path to the object on to the next step in the flow. Is this something that would make sense? Or does this already exist and I am doing it wrong?
    j
    • 2
    • 6
  • r

    Raphaël Riel

    10/14/2020, 12:44 PM
    Hi all! 👋 I’m facing some trouble executing parallel/concurrent Tasks on my agent. I have a Task that I start in my flow using
    the_task.map(list_of_ints)
    . The Task will have to Map >10 items. When A) running the flow directly from within the .py file using
    flow.run()
    AND B) setting
    executor=LocalDaskExecutor()
    I’m able to have it use more than 1 thread. But as soon as I try to run this Flow in an Agent, OR if I remove the Dask Executor (while executing the .py file directly), I can’t make it run in parallel! Recap: 1. Execute the flow from the .py file WITH
    executor=LocalDaskExecutor()
    = Works 2. Execute the flow from the .py file with the “default” Executor = Nope 3. Any combination of executor running in an agent = Nope. Any suggestion will be welcome! Thanks.
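    One thing that may explain cases 2 and 3: with the default executor everything runs serially, and when an agent submits the flow, flow.run(executor=...) is not what gets used; the executor attached to the flow's environment is. A minimal sketch, assuming 0.13-style environments, attaching the executor to your flow object before registering (worker count and project name are illustrative):
    from prefect.engine.executors import LocalDaskExecutor
    from prefect.environments import LocalEnvironment

    flow.environment = LocalEnvironment(
        executor=LocalDaskExecutor(scheduler="threads", num_workers=8)
    )
    flow.register(project_name="my-project")  # placeholder project name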
    j
    • 2
    • 13
  • j

    Jeff Brainerd

    10/14/2020, 12:54 PM
    Hi Prefect team, I am seeing consistent timeout issues trying to run GraphQL queries, either programmatically or in the UI. Other parts of the UI seem fine. Is this just me?
    {
      "graphQLErrors": [
        {
          "path": [
            "flow_run"
          ],
          "message": "Operation timed out",
          "extensions": {
            "code": "API_ERROR"
          }
        }
      ],
      "networkError": null,
      "message": "GraphQL error: Operation timed out"
    }
    j
    j
    • 3
    • 24
  • a

    Aaron Y

    10/14/2020, 3:57 PM
    I'm trying to build in testing, but am having trouble importing the tasks. Also, is this a proper setup?
    from ... import ??
    
    def test_say_hello():
        assert say_hello() == True
    PostProcessingTemplate/
      PostProcessingFlow.py
      PostProcessingTasks.py
      tests/
        test_PostProcessingTasks.py
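    A minimal sketch of how the test could import and exercise the task, assuming PostProcessingTemplate is importable (for example it has an __init__.py and pytest runs from the directory that contains it) and that say_hello is defined with the @task decorator in PostProcessingTasks.py:
    # tests/test_PostProcessingTasks.py
    from PostProcessingTemplate.PostProcessingTasks import say_hello


    def test_say_hello():
        # For a @task-decorated function, call .run() to execute the underlying
        # function directly, without building a flow.
        assert say_hello.run() == True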
    j
    • 2
    • 8
  • k

    Krzysztof Nawara

    10/14/2020, 5:54 PM
    I provided code examples, should be ready to take a look now 🙂 Thanks!
  • t

    Tom Augspurger

    10/14/2020, 5:56 PM
    Question about environments and storage: is it possible to use a
    DaskKubernetesEnvironment
    environment with a custom
    scheduler_spec_file
    /
    worker_spec_file
    , and
    GitHub
    storage together? For pangeo-forge, we don't want our users to worry about things like storage / execution environments if they don't need to, so we provide a default: https://github.com/pangeo-forge/pangeo-forge/pull/14/files#diff-467822c6f6378f68bea635c429827a2caf36c7f16cb25944cc7b5146262cf35aR32-R68. Users just write their pipeline and push it to GitHub (e.g. https://github.com/TomAugspurger/example-pipeline/blob/main/recipe/pipeline.py#L30-L41). When I register and run a flow with this setup, I notice that my custom spec files aren't being used (defined at https://github.com/pangeo-forge/pangeo-forge/pull/14/files#diff-267b30d97c826b0afcae2110fe8ca4acfe6f35a6321d80f5fcc74ea9b7547fc0). We just need to update the
    ServiceAccount
    to be
    pangeo-forge
    rather than
    default
    . So my questions would be: 1. Is it common to use DaskKubernetesEnvironment and GitHub storage, rather than Docker storage? 2. Any suggestions on debugging why my custom spec files aren't being used? When I used
    Docker
    storage they were used (but I've changed other things too).
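    For reference, the combination itself is expressible; a minimal sketch with placeholder repo and spec paths (whether the custom specs are actually honored at runtime is exactly the open question here):
    from prefect.environments import DaskKubernetesEnvironment
    from prefect.environments.storage import GitHub

    flow.storage = GitHub(
        repo="pangeo-forge/example-pipeline",  # placeholder repo
        path="recipe/pipeline.py",             # path to the flow file in the repo
        secrets=["GITHUB_ACCESS_TOKEN"],
    )
    flow.environment = DaskKubernetesEnvironment(
        scheduler_spec_file="recipe/scheduler_spec.yaml",  # placeholder spec paths
        worker_spec_file="recipe/worker_spec.yaml",
    )
    flow.register(project_name="pangeo-forge")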
    j
    • 2
    • 1
  • m

    Mitchell Bregman

    10/14/2020, 6:35 PM
    Using Docker storage, I am looking to create a custom base image which extends the Prefect base image by adding a few additional internal Python packages… any examples out there? Is there a different way you would suggest to
    pip install
    some internal dependencies that live within a private PyPI registry?
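    One common approach, sketched below with placeholder names: build and push a custom image yourself (e.g. FROM prefecthq/prefect:0.13.x-python3.8 plus a pip install against the private index, with credentials handled by your CI), then point Docker storage at it via base_image:
    from prefect.environments.storage import Docker

    flow.storage = Docker(
        registry_url="registry.example.com/team",                     # placeholder registry
        base_image="registry.example.com/team/prefect-base:2020.10",  # your custom image
        python_dependencies=["requests"],                              # optional extra pip installs
    )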
    m
    b
    • 3
    • 3
  • a

    Alberto de Santos

    10/14/2020, 7:51 PM
    Hi there,
  • a

    Alberto de Santos

    10/14/2020, 7:51 PM
    Can anyone tell me how to get rid of these labels:
    'azure-flow-storage', 'gcs-flow-storage', 's3-flow-storage', 'github-flow-storage', 'webhook-flow-storage'
    Even when setting up the TOML file with labels, they still appear.
    r
    j
    c
    • 4
    • 22
  • r

    Robin

    10/14/2020, 9:55 PM
    Hi everybody, we are experimenting with
    flow.run_config
    instead of
    flow.environment
    because the DEBUG logging on AWS EKS did not work with
    flow.environment
    . The good news: DEBUG level logging now works! The bad news: the tasks are not executed in parallel, even after setting
    flow.executor = DaskExecutor()
    Do you know how to enable parallelization?
    a
    • 2
    • 1
  • r

    Robin

    10/14/2020, 9:59 PM
    We were able to run flows with the following command on Linux, but not on Windows. 🤷‍♂️
    prefect run flow --name "e3dc-all_tasks-flow" --project "eks_test_01" -ps '{"system_id_index_range": [0, 200]}'
    However, Prefect suggests this command for Windows as well. Not a big issue for us, but for some users it might be ...
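    A hedged guess at the cause: cmd.exe does not treat single quotes as quoting characters, so the JSON needs to be wrapped in double quotes with the inner quotes escaped, something like the following (PowerShell quoting differs again):
    prefect run flow --name "e3dc-all_tasks-flow" --project "eks_test_01" -ps "{\"system_id_index_range\": [0, 200]}"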
  • o

    Ognjen Nikolic

    10/14/2020, 11:01 PM
    Hi everyone, my first post here as I am new to Prefect. For my use case, I am trying to construct a workflow running on a FargateCluster that reads a master file containing a list of large files (from S3), then maps over each large file with several custom Docker images and Python tasks. However, we haven't been able to get even a simple Docker image task running (see below); the CreateContainer task results in a DockerException "No such file or directory". Any help would be greatly appreciated.
    cluster = FargateCluster(
        image="<http://xxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/prefect-dask:latest|xxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/prefect-dask:latest>",
        scheduler_cpu=1024,
        scheduler_mem=4096,
        worker_cpu=256,
        worker_mem=512,
        cloudwatch_logs_group="prefect-dask-test",
        task_role_policies=['arn:aws:iam::aws:policy/AmazonS3FullAccess',
                            'arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess']
    )
    
    # Prefect task definitions
    create_container = CreateContainer(
        image_name="ubuntu",
        command="ls",
        volumes=['/var/run/docker.sock', '/var/run/docker.sock']
    )
    start = StartContainer()
    logs = GetContainerLogs()
    wait = WaitOnContainer()
    
    # Prefect Flow definition
    with Flow("Prefect Test Workflow") as flow:
        container_id = create_container()
        start = start(container_id=container_id)
        wait = wait(container_id=container_id)
        logs = logs(container_id=container_id)
        logs.set_upstream(wait)
    
    state = flow.run(executor=DaskExecutor(address=cluster.scheduler_address))
    
    Error encountered:
    [2020-10-14 22:45:03] INFO - prefect.TaskRunner | Task 'CreateContainer': Starting task run...
    [2020-10-14 22:45:03] ERROR - prefect.TaskRunner | Unexpected error: DockerException("Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))")
    Any help would be greatly appreciated for how to correctly run Docker images as tasks within Prefect on FargateCluster.
    n
    m
    • 3
    • 5
  • i

    Isaac Brodsky

    10/14/2020, 11:38 PM
    Hi, I was looking at the MySQLFetch task today and it seems like it does not allow parameterizing the connection details? That doesn’t seem to allow me to pass a parameter for the db name, or more to the point a secret parameter for the db password
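    As a workaround while the built-in task takes its connection details at init time, one option is a small custom task that pulls the password from a Secret at runtime and opens the connection itself; a minimal sketch, assuming pymysql is available (host, user, secret and database names are placeholders):
    import pymysql
    from prefect import Flow, Parameter, task
    from prefect.tasks.secrets import PrefectSecret


    @task
    def fetch(db_name, password):
        conn = pymysql.connect(
            host="db.example.com",  # placeholder host
            user="reporting",       # placeholder user
            password=password,
            db=db_name,
        )
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT 1")
                return cur.fetchall()
        finally:
            conn.close()


    with Flow("mysql-fetch") as flow:
        db_name = Parameter("db_name", default="analytics")
        password = PrefectSecret("MYSQL_PASSWORD")  # placeholder secret name
        rows = fetch(db_name, password)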
    m
    • 2
    • 2
  • r

    Rob Fowler

    10/15/2020, 2:24 AM
    I have an issue with LocalDaskExecutor; I have reduced my code down to a simple example. In this example, if I run the tasks, each sleeping for 3 seconds, the flow does not map some of the tasks. I suspect it's my use of a mapped task. The range is simply a list of numbers. What happens is it does not run 'slow_task' for every item in the list.
    I get: Flow run FAILED: some reference tasks failed, on account of it never scheduling one of the slow_task runs. If there is anything but no load on the machine it fails; if the machine is at 1% CPU it works. Invocation: python slow.py --range=10 --sleep_time=3
    from time import sleep
    import argparse
    
    from prefect import Flow, Parameter, unmapped, task, context
    from prefect.engine.executors import LocalDaskExecutor
    
    
    @task(timeout=9)
    def slow_task(opts, item, scripts):
        logger = context.get('logger')
        logger.info(f"==== IN TASK {item} Sleeping {opts.sleep_time}")
        sleep(opts.sleep_time)
        logger.info(f"## Awake {item}")
        return item
    
    
    @task
    def produce_range(opts):
        return range(opts.range)
    
    
    with Flow("PS Version") as flow:
        scripts = Parameter('scripts')
        opts = Parameter('opts')
    
        nrange = produce_range(opts)
        results = slow_task.map(item=nrange,
                                scripts=unmapped(scripts),
                                opts=unmapped(opts))
    
    
    if __name__ == '__main__':
    
        parser = argparse.ArgumentParser(description='test pywinrm')
        parser.add_argument('--workers', type=int, default=10)
        parser.add_argument('--sleep_time', type=int, default=2)
        parser.add_argument('--range', type=int, default=10)
    
        opts = parser.parse_args()
    
        executor = LocalDaskExecutor(num_workers=opts.workers)
        flow.run(executor=executor,
                 scripts="hello",
                 opts=opts)
    c
    m
    j
    • 4
    • 38
  • r

    Rob Fowler

    10/15/2020, 7:55 AM
    Another thread: let's say I can't get the above fixed. I was thinking I could just use a concurrent-futures executor. The submit-and-wait model seems pretty normal, and I can submit a thread for execution simply with a concurrent.futures.ThreadPoolExecutor, but the wait needs a bit more than the documentation provides, as it does not just get passed the 'future' produced by the submit and expects something more complex returned (which I can wait on). Is there anyone who can point me to a shortcut on this: https://gist.github.com/mianos/b89e70d23b812930daf115d2b76cf71b
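    A minimal sketch of a thread-pool executor along these lines, assuming the 0.13 Executor interface is start()/submit()/wait() (worth checking against prefect/engine/executors/base.py for your version); the class and argument names are made up:
    import concurrent.futures
    from contextlib import contextmanager

    from prefect.engine.executors.base import Executor


    class ThreadPoolPrefectExecutor(Executor):
        def __init__(self, max_workers=8):
            self.max_workers = max_workers
            self._pool = None
            super().__init__()

        @contextmanager
        def start(self):
            # Open the pool for the duration of the flow run.
            with concurrent.futures.ThreadPoolExecutor(self.max_workers) as pool:
                self._pool = pool
                try:
                    yield
                finally:
                    self._pool = None

        def submit(self, fn, *args, **kwargs):
            return self._pool.submit(fn, *args, **kwargs)

        def wait(self, futures):
            # `futures` may be a single future or a nested structure of
            # lists/tuples/dicts (e.g. from mapped tasks), so resolve recursively.
            if isinstance(futures, concurrent.futures.Future):
                return futures.result()
            if isinstance(futures, list):
                return [self.wait(f) for f in futures]
            if isinstance(futures, tuple):
                return tuple(self.wait(f) for f in futures)
            if isinstance(futures, dict):
                return {k: self.wait(v) for k, v in futures.items()}
            return futures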
    m
    • 2
    • 9
  • y

    Yanghui Ou

    10/15/2020, 12:44 PM
    Hi, I am trying to replace a make-based workflow with Prefect. I was wondering how I can implement a file-centric workflow. If an upstream task doesn’t have a return value but generates a file instead, and the downstream task takes that file as input and processes it, what is the best way to specify such a dependency? Here’s a simple mock-up for my question:
    from prefect import Flow, Task

    class GenerateFile( Task ):
      def run( self ):
        with open( 'result.txt', 'w' ) as f:
          f.write( f'This file is generated by {self.name}.' )
    
    class ProcessFile( Task ):
      def run( self ):
        with open( 'result.txt', 'r' ) as f:
          print( f.read() )
    
    gen_task   = GenerateFile()
    print_task = ProcessFile()
    
    with Flow( 'test caching' ) as flow:
      gen_result   = gen_task()
      print_result = print_task( upstream_tasks=[ gen_result ] )
    Is there a better way to do it other than manually set the
    upstream_tasks
    ? Another question is how can I specify the generated file as target such that I get the same caching behavior as make? I tried
    gen_task   = GenerateFile( target='result.txt', checkpoint=True, result=LocalResult( dir='.' ) )
    but it does not seem to work.
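    A minimal sketch of the data-flow half of this: have the generating task return the path so the dependency is explicit, instead of wiring upstream_tasks by hand. For the make-like caching, target='result.txt' can be added to the generating task, but note (as discussed earlier in this channel) that the checkpointed result is written to the target path, so the artifact and the checkpoint can collide unless they are kept separate.
    from prefect import Flow, task


    @task
    def generate_file() -> str:
        # Return the path so the downstream dependency is expressed through data flow.
        path = "result.txt"
        with open(path, "w") as f:
            f.write("This file is generated by generate_file.")
        return path


    @task
    def process_file(path: str):
        with open(path) as f:
            print(f.read())


    with Flow("test caching") as flow:
        process_file(generate_file())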
    e
    • 2
    • 3
  • m

    Marwan Sarieddine

    10/15/2020, 1:19 PM
    Hi Dask folks - is there an environment variable I can set for dask-kubernetes not to use a nanny?
    m
    j
    • 3
    • 15
  • r

    raphBL

    10/15/2020, 1:54 PM
    Hi, I had the misfortune of advocating for Prefect against Airflow at a Data Scientist weekly meeting, and ended up with a small workshop/demo to do next time. Do you have any material to share with me? In particular, I found your weather (meteo) demo posted on Slack really cool, if it’s available somewhere. Of course, I’ll start with the Hello World and I’ll refer them to the awesome docs 🙌 Wish you all a nice day/night, around the world!
    🙌 1
    d
    k
    • 3
    • 9
  • n

    Newskooler

    10/15/2020, 1:57 PM
    Hi 👋, If I want to do a conditional retry of a task based on a provided parameter, is this the way to do it: https://docs.prefect.io/core/examples/parameterized_flow.html or is there a way to define this in the decorator itself (or otherwise)?
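    That example covers parameter-based branching; for the retry itself, one option is to raise the RETRY signal from inside the task based on the parameter value. A minimal sketch (the failure condition and the 30-second delay are illustrative):
    from datetime import timedelta

    import pendulum
    from prefect import Flow, Parameter, task
    from prefect.engine import signals


    @task
    def flaky_work(should_retry: bool):
        try:
            raise RuntimeError("simulated failure")  # placeholder for the real work
        except Exception:
            if should_retry:
                # start_time sets when the retry becomes eligible; note this
                # keeps retrying for as long as the work keeps failing.
                raise signals.RETRY(
                    "Retrying in 30 seconds",
                    start_time=pendulum.now("UTC") + timedelta(seconds=30),
                )
            raise


    with Flow("conditional-retry") as flow:
        should_retry = Parameter("should_retry", default=True)
        flaky_work(should_retry)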
    d
    • 2
    • 10
  • n

    Newskooler

    10/15/2020, 2:21 PM
    Hi (again) 👋 , A question regarding the execution order of mapped tasks (some of which fail and retry). I observed that with 1 worker on LocalDaskExecutor, if a task fails, then during its retry wait time no other mapped task is executed. Is this the expected behaviour, and is it possible to have the other mapped tasks keep going while the failed one is in its retry wait period? I read through here (https://docs.prefect.io/core/examples/retries_with_mapping.html) and it seems to not address this.
    d
    • 2
    • 5
  • a

    Alberto de Santos

    10/15/2020, 4:15 PM
    Hi community! Can anyone share a (best-practice) script to set up Agents, Flows and create_flow_run? Not the standard one, but a prod-like script, allowing for instance to set up an agent if it is down, ensure the agent is working properly, and so on…
    d
    • 2
    • 60
  • a

    Alexander

    10/15/2020, 8:29 PM
    Heya, community 🖖. I've been poking at Prefect for a couple of weeks already, trying to understand how it can be used in a production environment. I like the almost cloud-native support via Docker, but it has its quirks. The most difficult part of setting up a production CI process with Prefect is flow registration. I just don't get it. It works nicely in a local environment when you run the Prefect server and agents locally from the same Python venv and your code is in one place.
    1. To register a flow, you have to actually execute the Python file. This means your flow registration environment must be identical to your flow production execution environment, which gives you no choice but to use Docker for your production environment. With CI systems that do not support docker-in-docker, this makes everything harder.
    2. If you have many flows, you have to register them one by one: you either write a script which registers every flow in a folder, or maintain a single script where all flows are registered. I need to write a considerable amount of code to maintain more than one flow.
    3. A local agent is just not enough for production. If you use LocalAgent, it must run in the flow production environment. If you update the flow production environment (say, add a new dependency), you need to restart the local agent, but you can't because it may be executing some tasks.
    4. Docker agent - this is my favourite, but it has its own quirks. For example, I was extremely surprised when I found that it overrides some settings in the task execution container with its own settings (like logging level). The other issue is, again, multi-flow registration. You either have distinct Docker storage in every flow object, which means 100 flows = 100 Docker images built, or you have one Docker storage for all flows, which means you need a central flow registration script which creates the storage, assigns flows to it, builds it, then assigns the built storage to the flows and registers them. And you need to write this script yourself.
    5. Every time you register, you bump a new flow version in the UI. If you don't want that, you need to come up with checks or hash comparisons to understand whether the flow changed and needs a re-register. Again, you have to do it yourself.
    I was able to solve all of these problems with this workflow (a sketch of the registration script is below):
    1. Build the flow production environment Docker image.
    2. docker run this image and call the flow registration script (written myself).
    3. In this script, iterate over all Python scripts in the flows folder; import these scripts instead of the exec approach used in extract_flow_from_file; collect each flow object into a list.
    4. Create a Docker storage with the desired settings - it uses the same production-environment Dockerfile as step 1 - and add all flows to this storage.
    5. Build this storage.
    6. Assign the built storage object to all flows.
    7. Register all flows.
    I am lucky that all flows are in the same project and have the same registration settings (for now); it will be painful to come up with per-flow registration customization in such a generic script. All of this required significant experimentation and reading of the Prefect source code (it is magnificent, no jokes - I had a real pleasure reading it). I wish the Prefect docs had best practices for flow registration and production CI setup. So I am curious: what are the best practices in the Prefect community for registering flows in production? What is your preferred way of running tasks? How do you deliver flow source code to prod?
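    A condensed sketch of that central registration script, using extract_flow_from_file for brevity (importing the modules, as described above, works the same way); the registry, paths, and project name are placeholders:
    from pathlib import Path

    from prefect.environments.storage import Docker
    from prefect.utilities.storage import extract_flow_from_file

    flows = [extract_flow_from_file(str(p)) for p in Path("flows").glob("*.py")]

    storage = Docker(
        registry_url="registry.example.com/team",  # placeholder registry
        dockerfile="Dockerfile",                   # the production-environment image
    )
    for flow in flows:
        storage.add_flow(flow)

    storage = storage.build()  # one image build for all flows

    for flow in flows:
        flow.storage = storage
        flow.register(project_name="my-project", build=False)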
    👋 1
    👍 2
    👀 1
    d
    s
    • 3
    • 7
  • a

    Alexander

    10/15/2020, 8:33 PM
    I think a solution here could be some functionality in agents to monitor a directory and register flows from it when a file changes. This would solve several problems: the flow version is bumped only when the file changes; no need to write anything yourself; and flow registration is streamlined and similar to how the Airflow scheduler works - you just put the flow source code in a directory and don't care. What do you think? I would love to try to implement this myself 😎
    👍 2
    b
    j
    • 3
    • 4
  • j

    josh

    10/15/2020, 9:15 PM
    Hey team, Prefect version 
    0.13.11
     has been released and here are a few notable changes: 🕒 Per clock labels in schedules 📦 Gitlab storage ⬆️ Auto-upload flow scripts to S3 and GCS 🐞 Several bug fixes 🎃 So many Hacktoberfest contributors  A big thank you to our contributors who helped out with this release! Full changelog:
    :marvin: 7
    😍 5
    :party-parrot: 3
    r
    • 2
    • 1
  • r

    Robin

    10/15/2020, 10:30 PM
    Hey community, I am just getting started with Coiled, trying to get a minimal example running on the cloud (locally it was super simple to get started, based on the Prefect Coiled tutorial). First I need to log in to Coiled Cloud properly, but how? Do I have to call
    coiled.Cloud()
    explicitly? Or change the coiled config path in the docker image? 🙂 For further context, see failed task id: https://cloud.prefect.io/accure/flow-run/935509e3-a3d5-41db-8cb4-0e9ca36ff56a?logId=e0a0cc4e-1f70-46df-b4cd-078c67082ac3
    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/site-packages/prefect/engine/runner.py", line 48, in inner
        new_state = method(self, state, *args, **kwargs)
      File "/usr/local/lib/python3.8/site-packages/prefect/engine/flow_runner.py", line 418, in get_flow_run_state
        with self.check_for_cancellation(), executor.start():
      File "/usr/local/lib/python3.8/contextlib.py", line 113, in __enter__
        return next(self.gen)
      File "/usr/local/lib/python3.8/site-packages/prefect/engine/executors/dask.py", line 260, in start
        with self.cluster_class(**self.cluster_kwargs) as cluster:  # type: ignore
      File "/usr/local/lib/python3.8/site-packages/coiled/cluster.py", line 105, in __init__
        self.cloud = cloud or Cloud.current(asynchronous=asynchronous)
      File "/usr/local/lib/python3.8/site-packages/coiled/core.py", line 180, in current
        raise ValueError("Please first connect with coiled.Cloud(...)")
    ValueError: Please first connect with coiled.Cloud(...)
    Maybe the error message explicitly describes what to do, but it's not entirely forwarded by Prefect?
    minimal_coiled_flow.py
    j
    j
    • 3
    • 5
j

Jim Crist-Harif

10/15/2020, 10:37 PM
cc @James Bourbeau, @Matthew Rocklin?
j

James Bourbeau

10/16/2020, 3:30 AM
@Robin does your
/home/zaubeerer/.config/dask/coiled.yaml
configuration file have your Coiled token in it? You can obtain the token at https://cloud.coiled.io/profile
You can also pass the token in through the
DASK_COILED__TOKEN
environment variable
r

Robin

10/16/2020, 9:20 AM
Yes, I used
coiled login
which created the following
coiled.yaml
coiled:
  aws:
    region: us-east-2
  server: https://cloud.coiled.io
  token: asdf
  user: user_name
I added
DASK_COILED__TOKEN
additionally to the prefect secrets, but it did not do the trick 🤔
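A hedged sketch of another route, assuming a KubernetesRun run config on EKS: set the token as an environment variable on the flow run itself (rather than as a Prefect Secret), so coiled.Cloud() can read DASK_COILED__TOKEN inside the pod. COILED_TOKEN here is a hypothetical variable in the registering environment:
import os

from prefect.run_configs import KubernetesRun

flow.run_config = KubernetesRun(
    env={"DASK_COILED__TOKEN": os.environ["COILED_TOKEN"]},
)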