LLaVA-φ: Efficient Multi-Modal Assistant with Small Language Model
Paper: arXiv:2401.02330
This is a multimodal implementation of the Phi-2 model, inspired by LLaVA-Phi.
Use the code below to get started with the model.

```bash
git clone https://github.com/zhuyiche/llava-phi.git
cd llava-phi
conda create -n llava_phi python=3.10 -y
conda activate llava_phi
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
```
```bash
python llava_phi/eval/run_llava_phi.py --model-path="RaviNaik/Llava-Phi2" \
    --image-file="https://huggingface.co/Navyabhat/Llava-Phi2/resolve/main/people.jpg?download=true" \
    --query="How many people are there in the image?"
```
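If you want to drive the evaluation script from Python instead of the shell (for example, to batch several questions), the CLI can be wrapped with the standard library. This is a minimal sketch, not part of the repository: the script path, flags, model path, and image URL are taken verbatim from the command above, while the helper name `ask_llava_phi` and the working-directory assumption are hypothetical.

```python
import subprocess

def ask_llava_phi(image_file: str, query: str,
                  model_path: str = "RaviNaik/Llava-Phi2") -> str:
    """Hypothetical helper: shells out to the llava-phi eval script
    for one image/question pair and returns whatever it prints."""
    cmd = [
        "python", "llava_phi/eval/run_llava_phi.py",
        f"--model-path={model_path}",
        f"--image-file={image_file}",
        f"--query={query}",
    ]
    # Assumes the current working directory is the cloned llava-phi repo
    # with the llava_phi conda environment active.
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    answer = ask_llava_phi(
        "https://huggingface.co/Navyabhat/Llava-Phi2/resolve/main/people.jpg?download=true",
        "How many people are there in the image?",
    )
    print(answer)
```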
This implementation is based on the wonderful work done in:
- LLaVA-Phi
- LLaVA
- Phi-2