Tired of complex AI setups? 😩 llama.ui is an open-source desktop application that provides a beautiful ✨, user-friendly interface for interacting with large language models (LLMs) powered by llama.cpp. Designed for simplicity and privacy 🔒, this project lets you chat with powerful quantized models on your local machine - no cloud required! 🚫☁️
This repository is a fork of llama.cpp WebUI with:
- Fresh new styles 🎨
- Extra functionality ⚙️
- Smoother experience ✨
- ✨ Open our hosted UI instance
- ⚙️ Click the gear icon → General settings
- 🌐 Set "Base URL" to your local llama.cpp server (e.g. `http://localhost:8080`)
- 🎉 Start chatting with your AI!
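Under the hood, the Base URL you configure is joined with llama.cpp's OpenAI-compatible chat endpoint (`/v1/chat/completions`). As a rough sketch - the helper below is hypothetical, not part of llama.ui - the request looks like this:

```typescript
// Hypothetical helper: join a user-supplied Base URL with the
// OpenAI-compatible chat endpoint that llama.cpp's server exposes.
function chatCompletionsUrl(baseUrl: string): string {
  // Strip trailing slashes so we don't produce "...//v1/chat/completions".
  return baseUrl.replace(/\/+$/, "") + "/v1/chat/completions";
}

// A request body the server understands. The "model" field is informational
// for llama.cpp, which serves whatever model it was started with.
const body = {
  model: "llama-2-7b",
  messages: [{ role: "user", content: "Hello!" }],
};

// Usage (assumes a llama.cpp server is running on port 8080):
// fetch(chatCompletionsUrl("http://localhost:8080/"), {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// }).then((r) => r.json()).then(console.log);

console.log(chatCompletionsUrl("http://localhost:8080/")); // → http://localhost:8080/v1/chat/completions
console.log(JSON.stringify(body));
```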
🔧 Need HTTPS magic for your local instance? Try this mitmproxy hack!
Uh-oh! Browsers block HTTP requests from HTTPS sites 😤. Since llama.cpp serves plain HTTP, we need a bridge 🌉. Enter mitmproxy - our traffic wizard! 🧙‍♂️
Local setup:
```shell
mitmdump -p 8443 --mode reverse:http://localhost:8080/
```
Docker quickstart:
```shell
docker run -it -p 8443:8443 mitmproxy/mitmproxy mitmdump -p 8443 --mode reverse:http://localhost:8080/
```
Pro-tip with Docker Compose:
```yaml
services:
  mitmproxy:
    container_name: mitmproxy
    image: mitmproxy/mitmproxy:latest
    ports:
      - '8443:8443' # 🔁 Port magic happening here!
    command: mitmdump -p 8443 --mode reverse:http://localhost:8080/
    # ... (other config)
```
⚠️ Certificate Tango Time!
- Visit https://localhost:8443
- Accept the browser's certificate warning ("Trust this certificate") 🤝
- Reload the 🦙 llama.ui page 🔄
- Profit! 💸
Voilà! You've hacked the HTTPS barrier! 🎩✨
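In effect, the reverse proxy only swaps the scheme and port: anything the UI would have sent to `http://localhost:8080` now travels over `https://localhost:8443` instead. A hypothetical helper (not part of llama.ui) makes that mapping explicit:

```typescript
// Hypothetical illustration of the URL rewrite the mitmproxy bridge implies:
// same host, but HTTPS on the proxy port instead of HTTP on the server port.
function toBridgeUrl(httpBase: string, bridgePort: number): string {
  const url = new URL(httpBase);
  url.protocol = "https:";
  url.port = String(bridgePort);
  // URL.toString() appends a trailing slash for the root path; trim it.
  return url.toString().replace(/\/$/, "");
}

console.log(toBridgeUrl("http://localhost:8080", 8443)); // → https://localhost:8443
```

So in llama.ui's settings, the Base URL becomes `https://localhost:8443` once the bridge is up.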
- 📦 Grab the latest release from our releases page
- 🗜️ Unpack the archive (feel that excitement! 🤩)
- ⚡ Fire up your llama.cpp server:
Linux/macOS:
```shell
./llama-server --host 0.0.0.0 \
  --port 8080 \
  --path "/path/to/llama.ui" \
  -m models/llama-2-7b.Q4_0.gguf \
  --ctx-size 4096
```
Windows:
```batch
llama-server ^
  --host 0.0.0.0 ^
  --port 8080 ^
  --path "C:\path\to\llama.ui" ^
  -m models\mistral-7b.Q4_K_M.gguf ^
  --ctx-size 4096
```
- 🌐 Visit http://localhost:8080 and meet your new AI buddy! 🤖❤️
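Not sure the server is ready? llama.cpp's server exposes a `/health` endpoint that reports its status as JSON. A minimal sketch of checking it - the parsing helper below is ours, not part of llama.ui:

```typescript
// Minimal sketch: llama.cpp's server answers GET /health with a small JSON
// document; a status of "ok" means the model is loaded and ready.
function isHealthy(healthJson: string): boolean {
  try {
    const parsed = JSON.parse(healthJson) as { status?: string };
    return parsed.status === "ok";
  } catch {
    return false; // not JSON → not a healthy llama.cpp server
  }
}

// Usage (assumes the server from the step above is listening on port 8080):
// fetch("http://localhost:8080/health")
//   .then((r) => r.text())
//   .then((t) => console.log(isHealthy(t) ? "ready 🎉" : "still loading…"));

console.log(isHealthy('{"status":"ok"}')); // → true
```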
We're building something special together! 🚀
- 🎯 PRs are welcome! (Seriously, we high-five every contribution! ✋)
- 🐛 Bug squashing? Yes please! 🧯
- 📚 Documentation heroes needed! 🦸
- ✨ Make magic with your commits! (Follow Conventional Commits)
Prerequisites:
- 💻 macOS/Windows/Linux
- ⬢ Node.js >= 22
- 🦙 Local llama.cpp server humming along
Build the future:
```shell
npm ci         # 📦 Grab dependencies
npm run build  # 🔨 Craft the magic
npm start      # 🎬 Launch dev server (http://localhost:5173) for live-coding bliss! 🔥
```
llama.ui is proudly MIT licensed - go build amazing things! 🚀 See LICENSE for details.
Made with ❤️ and ☕ by humans who believe in private AI