Deploy a local ChatGPT- or DeepSeek-like application using gptme

Posted on March 15, 2025


If you are concerned about privacy or the cost of running agent-based applications, you can now set one up locally using gptme, a personal AI assistant/agent in your terminal. The upstream repository does not yet support local models well. If you want to use local models through Ollama, you can install from my fork, gptme: support Ollama local models such as gemma3, llama3.2-vision, and more. The main addition in my version is support for local models in model-meta. The following local models are supported:

"local": {
        "gemma3": {
            "context": 128_000,
            "max_output": 32_768,
            "price_input": 0.0,
            "price_output": 0.0,
            "supports_vision": True,
        },
        "llama3.2-vision": {
            "context": 128_000,
            "max_output": 32_768,
            "price_input": 0.0,
            "price_output": 0.0,
            "supports_vision": True,
        },
        "deepseek-r1": {
            "context": 128_000,
            "max_output": 32_768,
            "price_input": 0.0,
            "price_output": 0.0,
            "supports_vision": False,
        },
        "qwen2.5": {
            "context": 128_000,
            "max_output": 32_768,
            "price_input": 0.0,
            "price_output": 0.0,
            "supports_vision": False,
        },        
    },
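To illustrate how a model string such as local/gemma3 maps to this metadata, here is a minimal sketch of a provider/model lookup. The MODELS dict and get_model_meta helper are hypothetical simplifications for illustration, not gptme's actual API:

```python
# Minimal sketch of resolving a "provider/model" string to its metadata.
# MODELS and get_model_meta are hypothetical simplifications.
MODELS = {
    "local": {
        "gemma3": {
            "context": 128_000,
            "max_output": 32_768,
            "price_input": 0.0,
            "price_output": 0.0,
            "supports_vision": True,
        },
        "deepseek-r1": {
            "context": 128_000,
            "max_output": 32_768,
            "price_input": 0.0,
            "price_output": 0.0,
            "supports_vision": False,
        },
    },
}

def get_model_meta(full_name: str) -> dict:
    """Split 'provider/model' and look up the model's metadata."""
    provider, _, model = full_name.partition("/")
    try:
        return MODELS[provider][model]
    except KeyError:
        raise ValueError(f"unknown model: {full_name}")
```

For example, get_model_meta("local/gemma3")["supports_vision"] is True, which is how a vision-capable local model can be selected.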

Step-by-step guide to installing gptme

  • Install pipx if it is not already installed: pip install --user pipx
  • Install gptme:

    • pipx install gptme
    • Or, to include the browser tool and server support: pipx install 'gptme[server,browser]'

  • Set up the configuration file ~/.config/gptme/config.toml; an example is shown below.

    • MODEL can be changed at run time; e.g., to use the model llama3.2-vision, start with gptme --model local/llama3.2-vision

[prompt]
about_user = "I am a curious human programmer."
response_preference = "Basic concepts don't need to be explained."

[prompt.project]
activitywatch = "ActivityWatch is a free and open-source automated time-tracker that helps you track how you spend your time on your devices."
gptme = "gptme is a CLI to interact with large language models in a Chat-style interface, enabling the assistant to execute commands and code on the local machine, letting them assist in all kinds of development and terminal-based work."

[env]
MODEL="local/gemma3"
OPENAI_BASE_URL="http://localhost:11434/v1"

#TOOL_FORMAT = "markdown" # Select the tool format. One of `markdown`, `xml`, `tool`
TOOL_ALLOWLIST = "save,append,patch,ipython,shell,browser"  # Comma-separated list of allowed tools
TOOL_MODULES = "gptme.tools,custom.tools"  # Comma-separated list of Python module paths
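The OPENAI_BASE_URL above points gptme at Ollama's OpenAI-compatible endpoint. As a sketch of what happens under the hood, the following builds (but does not send) a chat-completions request against that endpoint; build_chat_request is a hypothetical helper for illustration, not part of gptme:

```python
import json

# Sketch: construct an OpenAI-style chat-completions request targeting
# Ollama's OpenAI-compatible endpoint at http://localhost:11434/v1.
# build_chat_request is a hypothetical helper; gptme does this internally.
BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Return the URL and JSON body for a chat-completions call."""
    # gptme's "local/" prefix selects the provider; Ollama itself only
    # sees the bare model name, e.g. "gemma3".
    ollama_model = model.removeprefix("local/")
    url = f"{BASE_URL}/chat/completions"
    body = json.dumps({
        "model": ollama_model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body

url, body = build_chat_request("local/gemma3", "Hello!")
```

Note that nothing is sent here; with an Ollama server running on the default port 11434, POSTing this body to the URL would return a standard chat-completions response.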

  • Start gptme by running gptme. To change the default model, run gptme --model local/MODEL_NAME

    • If your local model is not found, run ollama pull MODEL_NAME to download it
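To check which models are already pulled before starting gptme, you can inspect the output of ollama list. The parser below is a hypothetical sketch assuming the usual column layout (NAME, ID, SIZE, MODIFIED); it is not part of gptme or Ollama:

```python
# Hypothetical sketch: decide which models still need `ollama pull`,
# given the text output of `ollama list` (a header row, then one model
# per row with the name like "gemma3:latest" in the first column).
def missing_models(ollama_list_output: str, wanted: list[str]) -> list[str]:
    pulled = set()
    for line in ollama_list_output.splitlines()[1:]:  # skip header row
        if line.strip():
            name = line.split()[0]
            pulled.add(name.split(":")[0])  # drop the ":tag" suffix
    return [m for m in wanted if m not in pulled]

# Hypothetical sample output from `ollama list`:
sample = """NAME               ID            SIZE    MODIFIED
gemma3:latest      abc123        3.3 GB  2 days ago
qwen2.5:latest     def456        4.7 GB  5 days ago
"""
```

With this sample, missing_models(sample, ["gemma3", "deepseek-r1"]) reports that deepseek-r1 still needs to be pulled.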

Here is an example testing the local gemma3 vision model:

And an example testing the browser tool:

Many interesting applications can be built on top of gptme, but the current performance and robustness with local models are not yet good. A lot of work is still needed before this is production-ready.