Braun Reyes
11/23/2022, 9:01 PM

Santiago Gonzalez
11/23/2022, 9:45 PM
• The main class from the jar could not be found
• The output directory does not exist, so it could not be synchronized to AWS S3
Do you have any idea why these kinds of issues happen from time to time?
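For the output-directory symptom above, one common defence is to create the directory up front instead of assuming the job produced it, so the follow-up S3 sync always has a source to read from. A minimal stdlib sketch (the path name and `ensure_output_dir` helper are hypothetical; the actual boto3 upload would follow the guard):

```python
from pathlib import Path

def ensure_output_dir(path: str) -> Path:
    """Create the output directory if the job did not produce it,
    so a follow-up sync to S3 has something to upload from."""
    out = Path(path)
    out.mkdir(parents=True, exist_ok=True)  # no-op if it already exists
    return out

# Guard before handing the directory to the boto3 S3 upload step:
out_dir = ensure_output_dir("/tmp/job-output")
```

Intermittent failures like these often come down to a race between the job finishing and the sync starting; making the directory creation explicit removes one source of nondeterminism.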
BTW: I am using the boto3 SSM agent to handle EC2 instance creation, execution, and termination.

Ryan Sattler
11/24/2022, 4:42 AM

Deepanshu Aggarwal
11/24/2022, 6:39 AM

Deepanshu Aggarwal
11/24/2022, 7:02 AM
07:01:33.898 | ERROR | Task run 'run_executor-a1954751-160' - Crash detected! Execution was interrupted by an unexpected exception: AssertionError
Eden
11/24/2022, 7:16 AM
unlimited
It works perfectly fine. However, when I change the concurrency to, for example, 3, it fails to run jobs 😞

Deepanshu Aggarwal
11/24/2022, 8:31 AM

iñigo
11/24/2022, 8:43 AM

Deepanshu Aggarwal
11/24/2022, 9:06 AM

Sylvain Hazard
11/24/2022, 10:10 AM
run method. It felt like a good way to encapsulate complex tasks and improve code readability. Creating abstract tasks was also something I did sometimes. Is this behavior gone, or has it evolved? I couldn't find much in the docs about this, unfortunately.

Tim Galvin
11/24/2022, 12:59 PM
Encountered exception during execution:
Traceback (most recent call last):
  File "/software/projects/askaprt/tgalvin/setonix/miniconda3/envs/acesprefect2/lib/python3.9/site-packages/prefect/engine.py", line 612, in orchestrate_flow_run
    waited_for_task_runs = await wait_for_task_runs_and_report_crashes(
  File "/software/projects/askaprt/tgalvin/setonix/miniconda3/envs/acesprefect2/lib/python3.9/site-packages/prefect/engine.py", line 1317, in wait_for_task_runs_and_report_crashes
    if not state.type == StateType.CRASHED:
AttributeError: 'coroutine' object has no attribute 'type'
I am running a known version of my workflow on a known dataset, which has worked perfectly fine dozens of times before. It seems to be saying that the state above is not an Orion model, but rather a coroutine. All my tasks use the normal task decorator around ordinary non-async Python functions.

Boris Tseytlin
11/24/2022, 4:28 PM
.save on it, but when I try to retrieve it later with load, I get a 404 error from Prefect:
ValueError: Unable to find block document named test-minio-url for block type string
@pytest.fixture(autouse=True, scope="session")
def prefect_test_fixture():
    with prefect_test_harness():
        yield

@pytest.fixture(scope="session")
def minio_blocks(prefect_test_fixture):
    minio_creds_block = MinIOCredentials(
        minio_root_user=Config.MINIO_USER,
        minio_root_password=Config.MINIO_PASSWORD,
    )
    minio_creds_block.save("test-minio-creds")
    minio_url_block = String(Config.MINIO_URL)
    minio_url_block.save("test-minio-url")
    return minio_creds_block, minio_url_block

@pytest.fixture
def dummy_mission(minio_blocks):
    minio_creds_block, minio_url_block = minio_blocks
    minio_url = String.load(minio_url_block).value  # <- ERROR HERE
    minio_url = minio_url.split("/")[-1:][0]
    minio_creds = MinIOCredentials.load(minio_creds_block)
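The 404 above is consistent with load() receiving the block *object* where it expects the *name* the block was saved under (String.load(minio_url_block) rather than String.load("test-minio-url")). Stripped of Prefect, the name-keyed save/load pattern looks like this sketch (Registry is a hypothetical stand-in, not Prefect API):

```python
class Registry:
    """Hypothetical stand-in for a block type: instances are saved under a
    string name and must be loaded back by that same string name."""
    _store: dict = {}

    def save(self, name: str) -> str:
        type(self)._store[name] = self
        return name  # hand the *name* around, not the object itself

    @classmethod
    def load(cls, name: str):
        try:
            return cls._store[name]
        except KeyError:
            # analogous to the 404 "Unable to find block document named ..."
            raise LookupError(f"Unable to find document named {name!r}") from None

block = Registry()
block.save("test-minio-url")
same = Registry.load("test-minio-url")  # load by the saved name: works
# Registry.load(block) would fail the lookup, just as String.load(minio_url_block) does
```

The fixture returns block objects, but the lookup key that the backing store knows about is the string slug passed to save().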
Sami Serbey
11/24/2022, 5:23 PM

redsquare
11/24/2022, 5:46 PM

davzucky
11/24/2022, 11:51 PM
get_run_logger(), which is set from the context. You can find sample test code in the thread.
The tests keep failing with the error:
prefect.exceptions.MissingContextError: There is no active flow or task run context.
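A common way around MissingContextError in unit tests is to keep the logic callable without the runtime context; Prefect 2 exposes the undecorated function on decorated tasks as task.fn. The shape of that pattern, with a hypothetical context-requiring decorator standing in for @task:

```python
import functools

_ACTIVE_RUN = None  # stands in for the framework's run context

def task(fn):
    """Hypothetical decorator that, like Prefect's @task, refuses to run
    without an active context, but keeps the raw function on .fn."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        if _ACTIVE_RUN is None:
            raise RuntimeError("There is no active flow or task run context.")
        return fn(*args, **kwargs)
    wrapper.fn = fn  # tests can call the undecorated logic directly
    return wrapper

@task
def transform(x):
    return x * 2

# In a unit test, bypass the context requirement entirely:
result = transform.fn(21)  # returns 42 without any run context
```

If your Prefect version has it, prefect.logging also provides a disable_run_logger context manager aimed at exactly this testing situation.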
wonsun
11/25/2022, 7:21 AM

Andrei Tulbure
11/25/2022, 7:30 AM

Zinovev Daniil
11/25/2022, 10:16 AM

roady
11/25/2022, 10:24 AM
# Prefect 2.6.9
# Python 3.8
from prefect import flow, task, get_run_logger

@task
def add_one(x):
    if x == 1:
        raise Exception("Raised exception")
    return x + 1

@task
def do_something(dummy):
    get_run_logger().info("Doing something")
    return

@flow
def mapped_flow_not_dependent():
    a = list([0, 2, 3])
    b = add_one.map(a, return_state=True)
    c = add_one.map(b, return_state=True)
    d = do_something.map(a, return_state=True, wait_for=[c])
    print(c)
    print(d)
    return "Flow completes"

if __name__ == "__main__":
    mapped_flow_not_dependent()
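For contrast, the per-element dependency the flow above lacks can be sketched without Prefect, using stdlib futures: each downstream call consumes exactly one upstream future, so a single upstream failure skips only its own descendant (add_one and do_something mirror the tasks above; this is an illustration of the dependency shape, not Prefect's mapping semantics):

```python
from concurrent.futures import ThreadPoolExecutor

def add_one(x):
    if x == 1:
        raise Exception("Raised exception")
    return x + 1

def do_something(upstream):
    try:
        value = upstream.result()  # re-raises if the upstream call failed
    except Exception:
        return "skipped"           # only this element is skipped
    return f"did something with {value}"

with ThreadPoolExecutor() as pool:
    a = [0, 2, 3]
    b = [pool.submit(add_one, x) for x in a]
    c = [pool.submit(add_one, fb.result()) for fb in b]  # element-wise chaining
    d = [pool.submit(do_something, fc) for fc in c]
    results = [fd.result() for fd in d]
# results == ["skipped", "did something with 4", "did something with 5"]
```

The key difference from wait_for=[c] is that each do_something is handed a single upstream future rather than gating on the whole batch.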
One state in c being failed means none of the following do_something tasks run, whereas I would like all of the do_something tasks to run apart from the ones where c failed. I can get the desired behaviour by linking the tasks explicitly: changing the argument of do_something from a to c (and removing the wait_for kwarg).

Joshua Greenhalgh
11/25/2022, 11:30 AM

James Zhang
11/25/2022, 1:43 PM

Thuy Tran
11/25/2022, 4:00 PM
@task(cache_result_in_memory=False)
But I'm getting the error below saying it's an unexpected keyword. Not sure what I'm doing wrong. It's running on-prem using version 2.6.9.
Flow could not be retrieved from deployment.
Traceback (most recent call last):
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/opt/prefect/processor.py", line 3, in <module>
    from data_import import data_import_process
  File "/opt/prefect/data_import.py", line 8, in <module>
    from data_cleaning import cleaning_process
  File "/opt/prefect/data_cleaning.py", line 42, in <module>
    @task(cache_result_in_memory=False)
TypeError: task() got an unexpected keyword argument 'cache_result_in_memory'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/prefect/engine.py", line 256, in retrieve_flow_then_begin_flow_run
    flow = await load_flow_from_flow_run(flow_run, client=client)
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/prefect/client.py", line 103, in with_injected_client
    return await fn(*args, **kwargs)
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/prefect/deployments.py", line 69, in load_flow_from_flow_run
    flow = await run_sync_in_worker_thread(import_object, str(import_path))
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/prefect/utilities/asyncutils.py", line 57, in run_sync_in_worker_thread
    return await anyio.to_thread.run_sync(call, cancellable=True)
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/prefect/utilities/importtools.py", line 193, in import_object
    module = load_script_as_module(script_path)
  File "/opt/conda/envs/prefect/lib/python3.10/site-packages/prefect/utilities/importtools.py", line 156, in load_script_as_module
    raise ScriptError(user_exc=exc, path=path) from exc
prefect.exceptions.ScriptError: Script at 'processor.py' encountered an exception
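An "unexpected keyword argument" raised by a decorator at import time usually means the execution environment is running an older version of the library than the one the code was written against, so a quick check worth running inside the deployment's container is to confirm what is actually installed there. A stdlib sketch (the package names printed are illustrative):

```python
from importlib import metadata

def installed_version(package: str) -> str:
    """Report what is actually importable in this environment."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return "not installed"

# Run inside the deployment's image and compare with your dev environment:
print(installed_version("prefect"))
```

If the container reports an older prefect than 2.6.9, rebuilding the image so both environments match would be the first thing to try.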
Tibs
11/25/2022, 5:03 PM

Deepak Pilligundla
11/25/2022, 5:25 PM
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/prefect/cli/build_register.py", line 134, in load_flows_from_script
    namespace = runpy.run_path(abs_path, run_name="<flow>")
  File "/usr/local/lib/python3.7/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/usr/local/lib/python3.7/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/repo/src/flows/4i_ssp_bene_data_shrng/4i_ssp_bene_data_shrng.py", line 14, in <module>
    import snowflake.connector as sf
  File "/usr/local/lib/python3.7/site-packages/snowflake/connector/__init__.py", line 16, in <module>
    from .connection import SnowflakeConnection
  File "/usr/local/lib/python3.7/site-packages/snowflake/connector/connection.py", line 25, in <module>
    from . import errors, proxy
  File "/usr/local/lib/python3.7/site-packages/snowflake/connector/errors.py", line 18, in <module>
    from .telemetry_oob import TelemetryService
  File "/usr/local/lib/python3.7/site-packages/snowflake/connector/telemetry_oob.py", line 20, in <module>
    from .vendored import requests
  File "/usr/local/lib/python3.7/site-packages/snowflake/connector/vendored/requests/__init__.py", line 119, in <module>
    from ..urllib3.contrib import pyopenssl
  File "/usr/local/lib/python3.7/site-packages/snowflake/connector/vendored/urllib3/contrib/pyopenssl.py", line 50, in <module>
    import OpenSSL.SSL
  File "/usr/local/lib/python3.7/site-packages/OpenSSL/__init__.py", line 8, in <module>
    from OpenSSL import SSL, crypto
  File "/usr/local/lib/python3.7/site-packages/OpenSSL/SSL.py", line 19, in <module>
    from OpenSSL.crypto import (
  File "/usr/local/lib/python3.7/site-packages/OpenSSL/crypto.py", line 3232, in <module>
    name="load_pkcs7_data",
TypeError: deprecated() got an unexpected keyword argument 'name'
eddy davies
11/25/2022, 6:10 PM

Thuy Tran
11/25/2022, 7:23 PM
--memory="[memory_limit]"
argument to enable this?

Trevor Campbell
11/25/2022, 8:11 PM
A -> B -> C -> D
Any of them can raise a SKIP signal, and if that happens, I want to skip all downstream tasks. It isn't a failure or a success; it's more of a "I'm not ready to run this yet, so don't run anything that depends on my output yet."
Is that possible to do in Orion? I saw one earlier thread here about it, but the outcome was inconclusive...
• One option is just to return a cancelled state, but that seems to suggest failure (which in my case would prompt a message to the admin, which I definitely don't want to happen for SKIPs; SKIPs happen very often in my particular case -- far more common than any other outcome).
• Another is to return a completed state, but then I need annoying if statements everywhere checking the outcome of previous tasks (skipped vs. actually run successfully). Actually, the whole reason I started using Prefect in the first place was its ability to easily control flows where things get skipped 😉

Anqi Lu
11/28/2022, 4:03 AM

Mahesh
11/28/2022, 8:27 AM

Sylvain Hazard
11/28/2022, 8:43 AM
prefect orion start --port 5000
as well as the very simple flow copied below. Running the flow with python log_flow.py randomly ends up crashing with this error:
RuntimeError: The connection pool was closed while 2 HTTP requests/responses were still in-flight
Is this an issue related to the server? Am I forgetting to await something? Anything I could do to fix it?