runtime error

Exit code: 1. Reason:

ion_pytorch_model.safetensors: 100%|██████████| 167M/167M [00:03<00:00, 43.7MB/s]
Loading pipeline components...: 100%|██████████| 7/7 [00:16<00:00, 2.33s/it]

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/_ctypes_extensions.py", line 81, in load_shared_library
    return ctypes.CDLL(str(lib_path), **cdll_args)  # type: ignore
  File "/usr/local/lib/python3.10/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libcuda.so.1: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/app.py", line 6, in <module>
    from llmdolphin import (get_llm_formats, get_dolphin_model_format,
  File "/app/llmdolphin.py", line 12, in <module>
    from llama_cpp import Llama
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/llama_cpp.py", line 8, in <module>
    from ._ggml import (
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/_ggml.py", line 21, in <module>
    libggml = ctypes_ext.load_shared_library("ggml", libggml_base_path)
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/_ctypes_extensions.py", line 83, in load_shared_library
    raise RuntimeError(f"Failed to load shared library '{lib_path}': {e}")
RuntimeError: Failed to load shared library '/usr/local/lib/python3.10/site-packages/llama_cpp/lib/libggml.so': libcuda.so.1: cannot open shared object file: No such file or directory
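The traceback suggests the installed llama-cpp-python wheel was built against CUDA, so loading its libggml.so tries to dlopen libcuda.so.1, which only exists where an NVIDIA driver is present; on a CPU-only container the import itself fails. The usual remedy is to install a CPU build of llama-cpp-python in that environment, but if the app should also degrade gracefully, the import in llmdolphin.py can be guarded. A minimal sketch (the `HAS_LLAMA_CPP` flag is a hypothetical name, not part of the original code):

```python
# Guarded import: llama_cpp raises OSError/RuntimeError at import time
# when its shared library (e.g. a CUDA build missing libcuda.so.1)
# cannot be loaded, and ImportError when the package is absent.
try:
    from llama_cpp import Llama
    HAS_LLAMA_CPP = True
except (ImportError, OSError, RuntimeError):
    Llama = None
    HAS_LLAMA_CPP = False

# Callers can then check the flag instead of crashing at startup.
if not HAS_LLAMA_CPP:
    print("llama_cpp unavailable; GGUF models disabled")
```

This only hides the symptom; to actually run GGUF models on a CPU-only machine, reinstall llama-cpp-python so it builds without CUDA support (the default when no CUDA-specific build flags or prebuilt CUDA wheel index are used).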
