Braun Reyes11/23/2022, 9:01 PM
Santiago Gonzalez11/23/2022, 9:45 PM
Main class from jar could not be found
Do you have any idea why these types of issues happen from time to time? BTW: I am using
The output directory does not exist, so it could not be synchronized to AWS S3
agent to handle EC2 instance creation, execution, and termination
Ryan Sattler11/24/2022, 4:42 AM
Deepanshu Aggarwal11/24/2022, 6:39 AM
Deepanshu Aggarwal11/24/2022, 7:02 AM
07:01:33.898 | ERROR | Task run 'run_executor-a1954751-160' - Crash detected! Execution was interrupted by an unexpected exception: AssertionError
Eden11/24/2022, 7:16 AM
It works perfectly fine. However, when I change the concurrency to, for example, 3, it fails to run jobs 😞
Deepanshu Aggarwal11/24/2022, 8:31 AM
iñigo11/24/2022, 8:43 AM
Deepanshu Aggarwal11/24/2022, 9:06 AM
Sylvain Hazard11/24/2022, 10:10 AM
method. It felt like a good way to encapsulate complex tasks and improve code readability. Creating abstract tasks was also something I did sometimes. Is this behavior gone or has it evolved? I couldn't find much in the docs regarding this, unfortunately.
Tim Galvin11/24/2022, 12:59 PM
I am running a known version of my workflow on a known dataset, which has worked perfectly fine dozens of times before. It seems to be saying that the
Encountered exception during execution:
Traceback (most recent call last):
  File "/software/projects/askaprt/tgalvin/setonix/miniconda3/envs/acesprefect2/lib/python3.9/site-packages/prefect/engine.py", line 612, in orchestrate_flow_run
    waited_for_task_runs = await wait_for_task_runs_and_report_crashes(
  File "/software/projects/askaprt/tgalvin/setonix/miniconda3/envs/acesprefect2/lib/python3.9/site-packages/prefect/engine.py", line 1317, in wait_for_task_runs_and_report_crashes
    if not state.type == StateType.CRASHED:
AttributeError: 'coroutine' object has no attribute 'type'
above is not an
model, but rather a coroutine. All my tasks are using the normal
decorator around normal non-async python functions.
Boris Tseytlin11/24/2022, 4:28 PM
on it, but when I try to retrieve it later by
I get error 404 from Prefect.
ValueError: Unable to find block document named test-minio-url for block type string
@pytest.fixture(autouse=True, scope="session")
def prefect_test_fixture():
    with prefect_test_harness():
        yield

@pytest.fixture(scope="session")
def minio_blocks(prefect_test_fixture):
    minio_creds_block = MinIOCredentials(
        minio_root_user=Config.MINIO_USER,
        minio_root_password=Config.MINIO_PASSWORD,
    )
    minio_creds_block.save("test-minio-creds")
    minio_url_block = String(Config.MINIO_URL)
    minio_url_block.save("test-minio-url")
    return minio_creds_block, minio_url_block

@pytest.fixture
def dummy_mission(minio_blocks):
    minio_creds_block, minio_url_block = minio_blocks
    minio_url = String.load(minio_url_block).value  # <- ERROR HERE
    minio_url = minio_url.split("/")[-1:]
    minio_creds = MinIOCredentials.load(minio_creds_block)
Sami Serbey11/24/2022, 5:23 PM
redsquare11/24/2022, 5:46 PM
davzucky11/24/2022, 11:51 PM
which is set from the context. You can find sample test code in the thread. The test keeps failing with the error
prefect.exceptions.MissingContextError: There is no active flow or task run context.
wonsun11/25/2022, 7:21 AM
Andrei Tulbure11/25/2022, 7:30 AM
Zinovev Daniil11/25/2022, 10:16 AM
roady11/25/2022, 10:24 AM
One state in c being failed means none of following do_something tasks run, whereas I would like all of the do_something tasks to run apart from ones where c is failed. I can get the desired behaviour by linking the tasks explicitly: changing the argument of do_something from a to c (and removing the wait_for kwarg).
# Prefect 2.6.9
# Python 3.8
from prefect import flow, task, get_run_logger

@task
def add_one(x):
    if x == 1:
        raise Exception("Raised exception")
    return x + 1

@task
def do_something(dummy):
    get_run_logger().info("Doing something")
    return

@flow
def mapped_flow_not_dependent():
    a = list([0, 2, 3])
    b = add_one.map(a, return_state=True)
    c = add_one.map(b, return_state=True)
    d = do_something.map(a, return_state=True, wait_for=[c])
    print(c)
    print(d)
    return "Flow completes"

if __name__ == "__main__":
    mapped_flow_not_dependent()
Joshua Greenhalgh11/25/2022, 11:30 AM
James Zhang11/25/2022, 1:43 PM
Thuy Tran11/25/2022, 4:00 PM
But I'm getting the error below that it's an unexpected keyword. Not sure what I'm doing wrong. It's running on prem using version 2.6.9.
Flow could not be retrieved from deployment.
Traceback (most recent call last):
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/opt/prefect/processor.py", line 3, in <module>
    from data_import import data_import_process
  File "/opt/prefect/data_import.py", line 8, in <module>
    from data_cleaning import cleaning_process
  File "/opt/prefect/data_cleaning.py", line 42, in <module>
    @task(cache_result_in_memory=False)
TypeError: task() got an unexpected keyword argument 'cache_result_in_memory'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/prefect/engine.py", line 256, in retrieve_flow_then_begin_flow_run
    flow = await load_flow_from_flow_run(flow_run, client=client)
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/prefect/client.py", line 103, in with_injected_client
    return await fn(*args, **kwargs)
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/prefect/deployments.py", line 69, in load_flow_from_flow_run
    flow = await run_sync_in_worker_thread(import_object, str(import_path))
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/prefect/utilities/asyncutils.py", line 57, in run_sync_in_worker_thread
    return await anyio.to_thread.run_sync(call, cancellable=True)
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/prefect/utilities/importtools.py", line 193, in import_object
    module = load_script_as_module(script_path)
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/prefect/utilities/importtools.py", line 156, in load_script_as_module
    raise ScriptError(user_exc=exc, path=path) from exc
prefect.exceptions.ScriptError: Script at 'processor.py' encountered an exception
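Since the traceback shows `@task(cache_result_in_memory=False)` being rejected at import time, the Prefect installed in the execution environment is likely older than the one the flow was written against; `cache_result_in_memory` is a newer Prefect 2 task argument. A small, dependency-free version-compare sketch for checking the runtime image (the 2.6.0 threshold is an assumption to verify against the release notes):

```python
def parse_version(v: str) -> tuple:
    # Naive parse; good enough for plain "X.Y.Z" version strings.
    return tuple(int(part) for part in v.split("."))

def at_least(installed: str, required: str) -> bool:
    return parse_version(installed) >= parse_version(required)

# Assumed minimum version for cache_result_in_memory -- verify it.
REQUIRED = "2.6.0"
# In the container: import prefect; at_least(prefect.__version__, REQUIRED)
print(at_least("2.6.9", REQUIRED))  # True
print(at_least("2.5.0", REQUIRED))  # False
```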
Tibs11/25/2022, 5:03 PM
Deepak Pilligundla11/25/2022, 5:25 PM
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/prefect/cli/build_register.py", line 134, in load_flows_from_script
    namespace = runpy.run_path(abs_path, run_name="<flow>")
  File "/usr/local/lib/python3.7/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/usr/local/lib/python3.7/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/repo/src/flows/4i_ssp_bene_data_shrng/4i_ssp_bene_data_shrng.py", line 14, in <module>
    import snowflake.connector as sf
  File "/usr/local/lib/python3.7/site-packages/snowflake/connector/__init__.py", line 16, in <module>
    from .connection import SnowflakeConnection
  File "/usr/local/lib/python3.7/site-packages/snowflake/connector/connection.py", line 25, in <module>
    from . import errors, proxy
  File "/usr/local/lib/python3.7/site-packages/snowflake/connector/errors.py", line 18, in <module>
    from .telemetry_oob import TelemetryService
  File "/usr/local/lib/python3.7/site-packages/snowflake/connector/telemetry_oob.py", line 20, in <module>
    from .vendored import requests
  File "/usr/local/lib/python3.7/site-packages/snowflake/connector/vendored/requests/__init__.py", line 119, in <module>
    from ..urllib3.contrib import pyopenssl
  File "/usr/local/lib/python3.7/site-packages/snowflake/connector/vendored/urllib3/contrib/pyopenssl.py", line 50, in <module>
    import OpenSSL.SSL
  File "/usr/local/lib/python3.7/site-packages/OpenSSL/__init__.py", line 8, in <module>
    from OpenSSL import SSL, crypto
  File "/usr/local/lib/python3.7/site-packages/OpenSSL/SSL.py", line 19, in <module>
    from OpenSSL.crypto import (
  File "/usr/local/lib/python3.7/site-packages/OpenSSL/crypto.py", line 3232, in <module>
    name="load_pkcs7_data",
TypeError: deprecated() got an unexpected keyword argument 'name'
eddy davies11/25/2022, 6:10 PM
Thuy Tran11/25/2022, 7:23 PM
argument to enable this?
Trevor Campbell11/25/2022, 8:11 PM
. Any of them can raise a SKIP signal, and if that happens, I want to skip all downstream tasks. It isn't a failure or a success, it's more of a "I'm not ready to run this yet, so don't run anything that depends on my output yet". Is that possible to do in Orion? I saw one earlier thread here about it, but the outcome was inconclusive...
• one option is just to return a cancelled state, but that seems to suggest failure (which in my case would prompt a message to the admin, which I definitely don't want to happen for SKIPs. SKIPs happen very often in my particular case -- far more common than any other outcome)
• another is to return a completed state, but then I need annoying
A -> B -> C -> D
statements everywhere checking the outcome of previous tasks (skip vs. was actually run successfully). Actually, the whole reason I started using Prefect in the first place was for its ability to easily control flows where things get skipped 😉
Anqi Lu11/28/2022, 4:03 AM
Mahesh11/28/2022, 8:27 AM
Sylvain Hazard11/28/2022, 8:43 AM
as well as the very simple flow copied below. Running the flow with
prefect orion start --port 5000
randomly ends up crashing with this error:
. Is this an issue related to the server? Am I forgetting to await something? Anything I could do to fix it?
RuntimeError: The connection pool was closed while 2 HTTP requests/responses were still in-flight