# ask-community
j
hey guys, has anyone seen this kind of strange Numba error before? I was trying to train a Top2Vec model, which uses UMAP and Numba at the lower level. What's strange is that starting from a certain amount of training data, the training throws this error
...
File "/opt/prefect/workflow/tasks/model_train.py", line 64, in train_model
    model = Top2Vec(
  File "/opt/conda/envs/prefect/lib/python3.9/site-packages/top2vec/Top2Vec.py", line 668, in __init__
    umap_model = umap.UMAP(**umap_args).fit(self.document_vectors)
  File "/opt/conda/envs/prefect/lib/python3.9/site-packages/umap/umap_.py", line 2516, in fit
    ) = nearest_neighbors(
  File "/opt/conda/envs/prefect/lib/python3.9/site-packages/umap/umap_.py", line 328, in nearest_neighbors
    knn_search_index = NNDescent(
  File "/opt/conda/envs/prefect/lib/python3.9/site-packages/pynndescent/pynndescent_.py", line 920, in __init__
    self._neighbor_graph = nn_descent(
  File "/opt/conda/envs/prefect/lib/python3.9/site-packages/numba/core/dispatcher.py", line 468, in _compile_for_args
    error_rewrite(e, 'typing')
  File "/opt/conda/envs/prefect/lib/python3.9/site-packages/numba/core/dispatcher.py", line 409, in error_rewrite
    raise e.with_traceback(None)
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Failed in nopython mode pipeline (step: nopython frontend)
Untyped global name 'print': Cannot determine Numba type of <class 'function'>

File "../conda/envs/prefect/lib/python3.9/site-packages/pynndescent/pynndescent_.py", line 252:
def nn_descent_internal_low_memory_parallel(
    <source elided>
        if verbose:
            print("\t", n + 1, " / ", n_iters)
            ^

During: resolving callee type: type(CPUDispatcher(<function nn_descent_internal_low_memory_parallel at 0x7fe7ea5adee0>))
During: typing of call at /opt/conda/envs/prefect/lib/python3.9/site-packages/pynndescent/pynndescent_.py (358)

During: resolving callee type: type(CPUDispatcher(<function nn_descent_internal_low_memory_parallel at 0x7fe7ea5adee0>))
During: typing of call at /opt/conda/envs/prefect/lib/python3.9/site-packages/pynndescent/pynndescent_.py (358)

File "../conda/envs/prefect/lib/python3.9/site-packages/pynndescent/pynndescent_.py", line 358:
def nn_descent(
    <source elided>
    if low_memory:
        nn_descent_internal_low_memory_parallel(
        ^
The whole thing runs perfectly fine on my local machine, and the Python environment is the same, but on the Prefect KubernetesJob it has this problem… I'm sure I've given the job enough resources (CPU & RAM)… could it have anything to do with the parallelism? I don't know much about the Numba CPUDispatcher, could it be that it's not supported in a Prefect task? I've been stuck on this for days… 😣
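Roughly, the setup being described looks like this (a minimal sketch with hypothetical task and flow names, not the actual project code; the real training code is workflow/tasks/model_train.py per the traceback):

```python
# Minimal sketch of the setup described above; names and arguments are
# hypothetical. Top2Vec internally calls umap.UMAP(...).fit(...), which goes
# through pynndescent's nn_descent and numba's nopython compilation.
from prefect import flow, task
from top2vec import Top2Vec

@task
def train_model(documents: list) -> Top2Vec:
    return Top2Vec(documents)

@flow
def training_flow(documents: list):
    return train_model(documents)
```

Running the flow locally works; the error above only appears when the same flow runs on the Prefect KubernetesJob infrastructure.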
z
Weird 😕 if you can open an issue with a minimal reproducible example we might be able to help
Untyped global name 'print': Cannot determine Numba type of <class 'function'>
Is very weird… they must have support for print
j
the whole thing is really weird… i'll try to make an MRE
THANKS FOR YOUR TIP! It was log_prints!!
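The workaround that follows from this: disable log_prints for the Numba-heavy task so builtins.print stays the real builtin while pynndescent compiles. A hedged sketch, assuming a recent Prefect 2.x where log_prints is accepted on both the flow and task decorators (names are illustrative):

```python
from prefect import flow, task

@task(log_prints=False)        # keep the real builtins.print for numba/pynndescent
def train_model(documents):
    from top2vec import Top2Vec
    return Top2Vec(documents)  # UMAP -> pynndescent -> numba compiles cleanly

@flow(log_prints=True)         # other tasks can still have their prints captured as logs
def training_flow(documents):
    return train_model(documents)
```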
z
Ah that makes more sense
Yeah, probably don’t use that with numba 😄
j
i wouldn’t have thought about that…
not sure if that’s a bug?
z
I’m not sure either, it seems pretty unlikely that we can dynamically patch print while using numba
j
not sure how log_prints is implemented, but it seems to break numba… 🤔
z
We patch builtins.print
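Which explains the failure: Numba's nopython mode can type calls to the real builtin print, but once builtins.print is replaced with a plain Python function it can no longer determine its type, which is exactly the "Untyped global name 'print'" error above. A minimal sketch (not Prefect's actual patching code) that reproduces the same TypingError:

```python
# Sketch only: patch builtins.print the way a print-capturing feature might,
# then compile a numba function that calls print.
import builtins
from numba import njit

@njit
def count_iters(n):
    for i in range(n):
        print(i)  # fine while `print` is the real builtin

count_iters(3)  # compiles and runs

_original_print = builtins.print
# Crude stand-in for a log-capturing patch; numba cannot type this function.
builtins.print = lambda *args, **kwargs: _original_print("[log]", *args, **kwargs)

@njit
def count_iters_again(n):
    for i in range(n):
        print(i)

try:
    count_iters_again(3)  # numba.core.errors.TypingError: Untyped global name 'print'
except Exception as exc:
    _original_print(type(exc).__name__)
finally:
    builtins.print = _original_print  # restore the builtin
```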