How to install Phi-3-vision 128k ONNX

Posted on June 17, 2024


Open-source text-only LLMs are locked in fierce competition among AI companies. Quantization-optimized small LLMs are also available (e.g. via llama.cpp or Ollama) that run in real time on low-memory consumer GPUs (a 10GB 4070 in my case) and even on CPUs (though inference is slow). In contrast, open-source multimodal LLMs lag behind. For example, my favorite LLaVA model has not been updated in Ollama for more than half a year; its latest version there is still 1.6 (although the authors have newer versions at LLaVA: Large Language and Vision Assistant). That is why it was so exciting to read that Microsoft released Phi-3 vision 128K with amazing performance compared with GPT-4o. Although it can be loaded on a 10GB GPU, it fails with an out-of-memory error when the prompt gets longer. I am waiting for the llama.cpp or Ollama community to add support, but that looks quite slow in coming. Up to now, the only available quantized version (~2GB) is the ONNX version released by Microsoft.

Following the ONNX install instructions, you can install Phi-3-vision 128k on your local desktop, but it needs a little effort. Following those instructions, I installed it on an Apple M1 (generation is slow, about 100s to process a query) and on Ubuntu 22.04 with an RTX 4070 (about 3s to process a query).

Installing on Apple M1 is quite easy compared with Ubuntu, based on my experience, so I will mainly focus on installing on Ubuntu. Getting the environment configuration right is a must for success.

  • Step-1: clone the source repositories (building from source is recommended)

    • git clone --recursive https://github.com/Microsoft/onnxruntime.git
    • git clone https://github.com/microsoft/onnxruntime-genai

  • Step-2: create a new conda environment

    • When I first installed, I used an existing conda environment, but the build always reported a protoc version incompatibility. This cost me a lot of time (compiling the onnxruntime lib is very slow, about an hour) only to end in a compile failure, and Googling did not turn up a good solution. After investigating, I found that protoc on Ubuntu 22.04 is 3.20 while onnxruntime uses 3.12. So I removed protoc 3.20 and created a new conda env (you need to install numpy once the env is ready).
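Before kicking off the hour-long build, it is worth failing fast on this protoc mismatch. Below is a minimal sketch of such a preflight check; the function name and the way the required version is passed are my own illustrative choices, not part of the onnxruntime tooling:

```python
import re
import subprocess
from typing import Optional

def protoc_matches(required: str, version_output: Optional[str] = None) -> bool:
    """Return True if the installed protoc's major.minor version matches
    `required` (e.g. "3.12").

    `protoc --version` prints something like "libprotoc 3.20.3"; pass
    `version_output` directly to skip running the binary (handy for testing).
    """
    if version_output is None:
        version_output = subprocess.run(
            ["protoc", "--version"], capture_output=True, text=True
        ).stdout
    found = re.search(r"(\d+)\.(\d+)", version_output)
    want = re.search(r"(\d+)\.(\d+)", required)
    return bool(found and want and found.groups() == want.groups())
```

Running this with `required="3.12"` before Step-3 would have flagged my Ubuntu 22.04 system (protoc 3.20) immediately instead of after an hour of compiling.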

  • Step-3: compile onnxruntime

    • cd onnxruntime
    • ./build.sh --build_shared_lib --skip_tests --parallel 8 --use_cuda --config Release --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES="89"

      • --parallel: set the number of parallel threads based on how many cores your desktop has. If you do not set it, the default uses the maximum, and you will find your desktop unresponsive.
      • --cmake_extra_defines: MUST be set. The default is 80, which may not be compatible with your GPU. For the RTX 4070, the CUDA architecture is 89. You can Google the value for your GPU. If you do not set it, you will get a "cuda architecture not match" error.
      • CUDA version: 12.3 on my system
      • This step is very slow. Just take a rest.

    • After compilation completes, copy the library and header files to onnxruntime-genai.

      • cp include/onnxruntime/core/session/onnxruntime_c_api.h ../onnxruntime-genai/ort/include
      • cp build/Linux/Release/libonnxruntime*.so* ../onnxruntime-genai/ort/lib
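To save the Google trip for the CMAKE_CUDA_ARCHITECTURES value, here is a small illustrative lookup; the values are the published compute capabilities for these GPU generations, but the table and function name are my own, and on recent drivers you can also query the value directly with `nvidia-smi --query-gpu=compute_cap --format=csv,noheader`:

```python
# Map GPU name fragments to CMAKE_CUDA_ARCHITECTURES values.
# Compute capability is fixed per GPU generation.
_CUDA_ARCH = {
    "RTX 40": "89",  # Ada Lovelace (e.g. RTX 4070)
    "RTX 30": "86",  # Ampere (consumer)
    "RTX 20": "75",  # Turing
    "A100": "80",    # Ampere (data center)
    "H100": "90",    # Hopper
}

def cuda_arch_for(gpu_name: str) -> str:
    """Return the CUDA architecture string for a GPU name, e.g. 'RTX 4070' -> '89'."""
    for fragment, arch in _CUDA_ARCH.items():
        if fragment in gpu_name:
            return arch
    raise ValueError(
        f"Unknown GPU: {gpu_name}; check NVIDIA's compute capability table"
    )
```

This is why the build command above passes CMAKE_CUDA_ARCHITECTURES="89" for my RTX 4070.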

  • Step-4: compile onnxruntime-genai

    • cd ../onnxruntime-genai
    • python build.py --use_cuda --config Release --parallel 8 --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES="89"
    • cd build/Linux/Release/wheel

      • pip install *.whl

  • Step-5: Download the Phi-3 vision 128K ONNX model (CUDA)

    • huggingface-cli download microsoft/Phi-3-vision-128k-instruct-onnx-cuda --local-dir YOUR_LOCAL_DIR

      • I removed --include cuda-int4-rtn-block-32/* because the command fails when it is used.

    • Download the sample code to run a test

      • curl https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi3v.py -o phi3v.py
      • python phi3v.py -m cuda-int4-rtn-block-32 (the model path).

        • Please note: the demo code only accepts image files on your local PC. So if you want to input other formats, such as base64 or an image URL, you need to write your own wrapper function to handle them.

  • Step-6: Wrap up your own service. I built a server based on Phi-3 vision, with a compatible API interface. The following image shows a streamlit app I built: the user uploads a local image or inputs an image URL, then enters a prompt, and the app calls the Phi-3 vision service and returns the result.
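The wrapper mentioned in Step-5 and Step-6 mostly amounts to normalizing whatever the client sends (a local path, an image URL, or base64 data) into a local file, which is what the phi3v.py demo expects. A minimal sketch, where the function name and the temp-file approach are my own assumptions rather than anything from the official example:

```python
import base64
import os
import tempfile
import urllib.request

def to_local_image(src: str) -> str:
    """Normalize an image reference (local path, http(s) URL, or base64
    string / data URI) into a local file path for the demo code to load."""
    if os.path.exists(src):                      # already a local file
        return src
    if src.startswith(("http://", "https://")):  # fetch remote image
        data = urllib.request.urlopen(src).read()
    elif src.startswith("data:image"):           # strip "data:image/...;base64," prefix
        data = base64.b64decode(src.split(",", 1)[1])
    else:                                        # assume raw base64
        data = base64.b64decode(src)
    fd, path = tempfile.mkstemp(suffix=".png")
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    return path
```

With this in place, the service can accept any of the three input formats and simply pass `to_local_image(src)` to the model-loading code.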

Example 1

Prompt: tell me about the image

Example 2

Prompt: tell me a story based on the image

Depending on your application, you can tune the prompt to generate the output you want.