Stef van Dijk
06/02/2021, 10:12 AM

Snehotosh
06/02/2021, 11:45 AM

Mckelle
06/02/2021, 9:13 PM
Access to manifest at 'https://accounts.google.com/o/oauth2/v2/auth?…' (redirected from 'http://maindomain.com') from origin 'http://maindomain.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Does anyone have any experience with this or anything similar?

Chohang Ng
06/03/2021, 1:56 AMflow_1_flow = StartFlowRun(flow_name='flow_1',project_name='tester',wait = True)
flow_2_flow = StartFlowRun(flow_name='flow_2',project_name='tester',wait = True)
flow_3_flow = StartFlowRun(flow_name='flow_3',project_name='tester',wait = True)
flow_4_flow = StartFlowRun(flow_name='flow_4',project_name='tester',wait = True)
flow_5_flow = StartFlowRun(flow_name='flow_5',project_name='tester',wait = True)
with Flow("main-flow", schedule=weekday_schedule,executor=LocalExecutor(),
run_config=LocalRun()) as flow:
flow_3 = flow_3_flow()
flow_1_flow().set_upstream(flow_2_flow())
step_2 = flow_4_flow(upstream_tasks = [flow_1_flow(),flow_3])
step_3 = flow_5_flow(upstream_tasks= [step_2])
flow.register(project_name='tester')
Chohang Ng
06/03/2021, 1:57 AM

Chohang Ng
06/03/2021, 4:25 PM

Ismail Cenik
06/03/2021, 5:34 PM

Chohang Ng
06/03/2021, 8:18 PM

Chohang Ng
06/03/2021, 8:44 PM

Daniel Davee
06/03/2021, 9:39 PM

Joël Luijmes
06/04/2021, 8:43 AM

Johan Wåhlin
06/04/2021, 11:32 AM

ll
06/04/2021, 1:43 PM
# task 1
./my_cpp_executable file1.xyz
...
# task 1000
./my_cpp_executable file1000.xyz
Each task takes about 4-8 compute hours on 4 CPUs / ~32 GB memory, and our scheduled workloads take up about 20,000-40,000+ compute hours per day.
From what I can tell the only supported strategy for running a large batch of embarrassingly parallel tasks right now is to use Dask.
We have it working but I feel Dask is more oriented to (i) interactive analysis workloads, (ii) pure Python tasks, (iii) small jobs that fit onto local disk for each Dask node. Feels awkward to invoke a Dask executor for a one-line shell execution for a high-throughput, long-running, queued (num_tasks >> num_cluster_nodes) workload. We prefer not to have to support Dask on our infrastructure as it adds a whole other set of things that our sysengs have to maintain.
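The queued shell-task pattern described here does not strictly need a cluster scheduler at all. A minimal standard-library sketch (not Prefect's or Dask's mechanism; `my_cpp_executable` and the file names are the hypothetical inputs from the example above):

```python
import shlex
import subprocess
from concurrent.futures import ThreadPoolExecutor

def make_commands(n):
    """One shell command per input file: task i -> file{i}.xyz."""
    return [f"./my_cpp_executable file{i}.xyz" for i in range(1, n + 1)]

def run_command(cmd):
    """Run one command; the child process does the heavy lifting."""
    return subprocess.run(shlex.split(cmd)).returncode

def run_batch(commands, workers=4):
    # Threads suffice as workers here: each one just blocks on its
    # subprocess, so num_tasks >> workers queues naturally in the pool.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_command, commands))
```

On a real HPC setup the `run_command` step is what a queueing system like Slurm or SGE would take over, which is the gap being pointed out below.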
It would be a better fit if you supported the job queueing systems typically found in HPC environments, such as SGE, Slurm, or HTCondor. I figure many of your target users in the fintech, scientific computing, and meteorological spaces will already have an SGE or Mesos cluster set up in their environment, but not a Dask cluster.

Michael Brown
06/04/2021, 3:23 PM

Garret Cook
06/04/2021, 8:01 PM

Tomás Emilio Silva Ebensperger
06/05/2021, 2:39 PM
@task(log_stdout=True, state_handlers=[handler], timeout=1800)
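The `handler` referenced in the decorator above is not shown; a sketch of what it might look like follows. Prefect 1.x calls each state handler with the task and the state transition, and returning the new state lets the run proceed; the failure message printed here is illustrative.

```python
def handler(task, old_state, new_state):
    # Called on every state transition of the task.
    if new_state.is_failed():
        # e.g. notify someone, or log the failure with extra context
        print(f"Task {task.name!r} failed: {new_state.message}")
    return new_state
```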
Colin
06/05/2021, 3:48 PM

Kamil Gorszczyk
06/05/2021, 9:57 PM

Robert Hales
06/07/2021, 9:02 AM
An error occurred (ClientException) when calling the RegisterTaskDefinition operation: Too many concurrent attempts to create a new revision of the specified family.
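That ECS error is a transient throttle on concurrent task-definition revisions; a common workaround is to retry the registration with exponential backoff. A generic sketch (the function names here are illustrative, not a Prefect or boto3 API):

```python
import time

def call_with_backoff(fn, retries=5, base_delay=1.0):
    """Retry fn() on exception, doubling the delay each attempt."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries, surface the error
            time.sleep(base_delay * 2 ** attempt)
```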
Fabrice Toussaint
06/07/2021, 11:50 AM
Unexpected error: ModuleNotFoundError("No module named 'prefect'")
I did specify Prefect in the environment variables of the pod specification, so I do not know why it cannot be found. If anyone can help me, please let me know 🙂.
EDIT: Also, I am using the KubeCluster class (dask_kubernetes.KubeCluster).

...disabled account...
06/07/2021, 4:36 PM

Daniel Davee
06/07/2021, 7:41 PM

Kao Phetchareun
06/07/2021, 7:59 PM

Damien Ramunno-Johnson
06/07/2021, 11:26 PM

Lukas N.
06/08/2021, 5:03 PM

Garret Cook
06/08/2021, 10:05 PM

Raed
06/09/2021, 7:15 AM
flow.run_configs = KubernetesRun(
image=<image>,
env={
"PREFECT__CONTEXT__SECRETS__BITBUCKET_ACCESS_TOKEN": os.environ[
"BITBUCKET_ACCESS_TOKEN"
]
},
image_pull_policy="Always",
)
flow.storage = Bitbucket(
project=<project>,
repo=<repo>,
path="flows/example_flow.py",
access_token_secret="BITBUCKET_ACCESS_TOKEN",
)
I get the following error in the UI:
Failed to load and execute Flow's environment: ValueError('Local Secret "BITBUCKET_ACCESS_TOKEN" was not found.')
Wouldn't the access token secret have been set in the run config?
I also have the same environment variable in a custom Docker image used by the run config.

Florian Kühnlenz
06/09/2021, 1:41 PM

Krapi Shah
06/10/2021, 5:04 PM

Garret Cook
06/10/2021, 7:47 PM

Garret Cook
06/10/2021, 7:47 PM

Kevin Kho
06/10/2021, 7:48 PM
prefect.context.flow_name?

Garret Cook
06/10/2021, 7:49 PM

Kevin Kho
06/10/2021, 7:50 PM

Michael Adkins
06/10/2021, 7:53 PM
flow_run_name or the flow_name?

Garret Cook
06/10/2021, 7:54 PM

Michael Adkins
06/10/2021, 7:54 PM

Garret Cook
06/10/2021, 7:54 PM

Michael Adkins
06/10/2021, 7:55 PM

Garret Cook
06/10/2021, 7:56 PM

Michael Adkins
06/10/2021, 7:56 PM

Garret Cook
06/10/2021, 7:57 PM

Michael Adkins
06/10/2021, 7:58 PM

Garret Cook
06/10/2021, 7:58 PM

Michael Adkins
06/10/2021, 7:58 PM

Garret Cook
06/10/2021, 7:59 PM