Hi I’m trying to use Blocks for using secrets in my ETL. However, I keep on running into errors whe...

Milan Valadou

over 2 years ago
Hi, I’m trying to use Blocks to handle secrets in my ETL. However, I keep running into errors when trying to load them:
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Input In [1], in <cell line: 6>()
      3 secret_block = Secret.load("etlharvestqaspassword")
      5 # Access the stored secret
----> 6 secret_block.get()

AttributeError: 'coroutine' object has no attribute 'get'
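For reference, the cell is essentially the snippet from the docs, just with my block name substituted in:

from prefect.blocks.system import Secret

secret_block = Secret.load("etlharvestqaspassword")

# Access the stored secret
secret_block.get()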
I defined the block within the Orion UI, because when I tried to define it via code in a simple script (as suggested here), I got the following kind of error:
prefect.exceptions.PrefectHTTPStatusError: Client error '422 Unprocessable Entity' for url '<http://ephemeral-orion/api/block_documents/>'
Response: {'exception_message': 'Invalid request received.', 'exception_detail': [{'loc': ['body', 'name'], 'msg': 'name must only contain lowercase letters, numbers, and dashes', 'type': 'value_error'}, {'loc': ['body', '__root__'], 'msg': 'Names must be provided for block documents.', 'type': 'value_error'}], 'request_body': {'name': 'test2_password', 'data': {'value': 'test2'}, 'block_schema_id': '8019abd6-409a-4f91-9367-bc8343c31763', 'block_type_id': '29fb0ec8-f7e9-4527-984c-48f8675f2bc4', 'is_anonymous': False}}
For more information check: <https://httpstatuses.com/422>
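The script version was along these lines (the name and value are just throwaway test values, the same ones that show up in the request body above):

from prefect.blocks.system import Secret

Secret(value="test2").save(name="test2_password")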
I’m mostly using Prefect from within a Jupyter notebook and a virtualenv. Thanks in advance to anyone who can point me to what’s going on 🙂
Hey folks :wave: I’m getting these warnings while running tests on a new collection I’m working on `...

ale

about 2 years ago
Hey folks 👋 I’m getting these warnings while running tests on a new collection I’m working on:
../../../.pyenv/versions/3.10.9/envs/prefect-huggingface/lib/python3.10/site-packages/prefect/testing/standard_test_suites/task_runners.py:155
  /Users/alessandro.lollo/.pyenv/versions/3.10.9/envs/prefect-huggingface/lib/python3.10/site-packages/prefect/testing/standard_test_suites/task_runners.py:155: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo?  You can register custom marks to avoid this warning - for details, see <https://docs.pytest.org/en/stable/how-to/mark.html>
    @pytest.mark.flaky(max_runs=4)  # Threads do not consistently yield
Not sure what the issue is here; is anyone else experiencing this?
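If it’s only about the unregistered mark, I’d guess something like this in a conftest.py would silence it (the marker description text here is my own placeholder), though I’d still like to understand why it shows up at all:

# conftest.py - register the "flaky" mark so pytest stops flagging it as unknown
def pytest_configure(config):
    config.addinivalue_line("markers", "flaky: test may be rerun if it fails")

There are also these SQLAlchemy-related warnings: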
tests/test_tasks.py::test_get_inference_result_fails
  /Users/alessandro.lollo/.pyenv/versions/3.10.9/envs/prefect-huggingface/lib/python3.10/site-packages/prefect/orion/database/migrations/versions/sqlite/2022_04_25_135207_b75d279ba985_replace_version_with_checksum.py:102: RemovedIn20Warning: Deprecated API features detected! These feature(s) are not compatible with SQLAlchemy 2.0. To prevent incompatible upgrades prior to updating applications, ensure requirements files are pinned to "sqlalchemy<2.0". Set environment variable SQLALCHEMY_WARN_20=1 to show all deprecation warnings.  Set environment variable SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message. (Background on SQLAlchemy 2.0 at: <https://sqlalche.me/e/b8d9>)
    meta_data = sa.MetaData(bind=connection)

tests/test_tasks.py::test_get_inference_result_fails
  /Users/alessandro.lollo/.pyenv/versions/3.10.9/lib/python3.10/contextlib.py:142: SAWarning: Skipped unsupported reflection of expression-based index ix_flow_run__coalesce_start_time_expected_start_time_desc
    next(self.gen)

tests/test_tasks.py::test_get_inference_result_fails
  /Users/alessandro.lollo/.pyenv/versions/3.10.9/lib/python3.10/contextlib.py:142: SAWarning: Skipped unsupported reflection of expression-based index ix_flow_run__coalesce_start_time_expected_start_time_asc
    next(self.gen)

-- Docs: <https://docs.pytest.org/en/stable/how-to/capture-warnings.html>
These seem to be related to SQLAlchemy 2.0. However, in my environment I have
prefect==2.7.11
and
sqlalchemy==1.4.46
Is anyone experiencing the same? Is this something I should be worried about? 😅
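For now I could probably silence the SQLAlchemy notice the way the message itself suggests, though that wouldn’t tell me whether it actually matters:

# conftest.py (sketch) - set the env var the warning mentions; it may need to be
# in place before SQLAlchemy is imported, otherwise export it in the shell before pytest
import os

os.environ.setdefault("SQLALCHEMY_SILENCE_UBER_WARNING", "1")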