The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, no-code agent builder, MCP compatibility, and more.
kubewall - Single-Binary Kubernetes Dashboard with Multi-Cluster Management & AI Integration. (OpenAI / Claude 4 / Gemini / DeepSeek / OpenRouter / Ollama / Qwen / LMStudio)
ComfyUI-IF_AI_tools is a set of custom nodes for ComfyUI that allows you to generate prompts using a local Large Language Model (LLM) via Ollama. This tool enables you to enhance your image generation workflow by leveraging the power of language models.
DevoxxGenie is a plugin for IntelliJ IDEA that uses local LLMs (Ollama, LMStudio, GPT4All, Jan, and Llama.cpp) and cloud-based LLMs to help review, test, and explain your project code.
RAGLight is a modular framework for Retrieval-Augmented Generation (RAG). It makes it easy to plug in different LLMs, embeddings, and vector stores, and now includes seamless MCP integration to connect external tools and data sources.
OADIN is production-grade AI service infrastructure for AI PC development. It provides unified APIs for chat, embedding, text generation, and text-to-image services, with support for both local (Ollama) and cloud AI providers (OpenAI, DeepSeek, etc.), enabling AI applications to share resources without bundling their own AI stacks.
A persistent local memory for AI, LLMs, or Copilot in VS Code.
Python app for LM Studio-enhanced voice conversations with local LLMs. Uses Whisper for speech-to-text and offers a privacy-focused, accessible interface.
A simple, locally hosted web search MCP server for use with local LLMs.
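MCP servers communicate over JSON-RPC 2.0, where a client invokes a server-side tool with a `tools/call` request. A minimal sketch of serializing such a request, assuming a hypothetical tool named `web_search` with illustrative arguments (not this server's actual API):

```python
import itertools
import json

# JSON-RPC requires a unique id per request; a simple counter suffices here.
_ids = itertools.count(1)

def make_tool_call(tool_name: str, arguments: dict) -> str:
    """Serialize an MCP `tools/call` request as a JSON-RPC 2.0 message."""
    request = {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical tool name and argument schema, for illustration only.
msg = make_tool_call("web_search", {"query": "local LLM inference", "max_results": 5})
```

In practice the serialized message would be written to the server's transport (stdio or HTTP), and the matching response carries the same `id`.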
RetroChat is a powerful command-line interface for interacting with various AI language models. It provides a seamless experience for engaging with different chat providers while offering robust features for managing and customizing your conversations. The code in this repo is 100% AI generated. Nothing has been written by a human.
Lightweight & fast AI inference proxy for self-hosted LLM backends like Ollama, LM Studio, and others. Designed for speed, simplicity, and local-first deployments.
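Proxies like this forward OpenAI-compatible requests to a local backend. A sketch of building such a request payload, assuming LM Studio's default local endpoint (`http://localhost:1234/v1`) and a placeholder model name (both assumptions); the payload is constructed but not sent:

```python
import json

# Assumption: LM Studio's local server listens on http://localhost:1234/v1
# and exposes an OpenAI-compatible chat-completions API.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Build an OpenAI-compatible chat-completions payload.

    The shape (model / messages / temperature) follows the OpenAI Chat
    Completions schema, which LM Studio and Ollama both accept.
    """
    return {
        "model": model,  # placeholder name; use whatever model is loaded
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }

payload = build_chat_request("Why is the sky blue?")
body = json.dumps(payload)  # ready to POST to f"{BASE_URL}/chat/completions"
```

A proxy's job is then mostly routing: accept this payload on one port and relay it unchanged to whichever backend is configured.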
MESH-AI - Off-grid LM Studio / Ollama / OpenAI integration & Home Assistant API control for Meshtastic, with custom commands, inbound/outbound Twilio SMS routing, Discord channel routing, and GPS emergency alerts over SMS, email & Discord. Now with Windows, Linux & Docker support!
Whisper STT + Orpheus TTS + Gemma 3 using LM Studio to create a virtual assistant.