Official Python SDK for MemFuse - the lightning-fast open-source memory layer that gives LLMs persistent, queryable memory across conversations and sessions.

MemFuse Logo

MemFuse Python SDK
The official Python client for MemFuse, the open-source memory layer for LLMs.
Explore the Docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. About MemFuse
  2. Installation
  3. Quick Start
  4. Examples
  5. Documentation
  6. Community & Support
  7. License

About MemFuse

Large language model applications are inherently stateless. When the context window reaches its limit, previous conversations, user preferences, and critical information simply disappear.

MemFuse bridges this gap by providing a persistent, queryable memory layer between your LLM and storage backend, enabling AI agents to:

  • Remember user preferences and context across sessions
  • Recall facts and events from thousands of interactions later
  • Optimize token usage by avoiding re-sending redundant chat history
  • Learn continuously and improve performance over time

This repository contains the official Python SDK for seamless integration with MemFuse servers. For comprehensive information about the MemFuse server architecture and advanced features, please visit the MemFuse Server repository.

Recent Updates

  • Enhanced Testing: Comprehensive E2E testing with semantic memory validation
  • Better Error Handling: Improved error messages and logging for easier debugging
  • Prompt Templates: Structured prompt management system for consistent LLM interactions
  • Performance Benchmarks: MSC dataset accuracy testing with 95% validation threshold

Installation

Note: This is the standalone client SDK repository. The SDK requires a running MemFuse server; to install and run one, please visit the MemFuse Server repository.

You can install the MemFuse Python SDK using one of the following methods:

Option 1: Install from PyPI (Recommended)

pip install memfuse

Option 2: Install from Source

git clone https://github.com/memfuse/memfuse-python.git
cd memfuse-python
pip install -e .
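
With either method, you can confirm the package is visible to your interpreter using only the standard library. The helper below is illustrative, not part of the SDK:

```python
from importlib import metadata

def installed_version(dist_name: str):
    """Return the installed version of a distribution, or None if it is absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

print(installed_version("memfuse") or "memfuse is not installed; run: pip install memfuse")
```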

Quick Start

Here's a comprehensive example demonstrating how to use the MemFuse Python SDK with OpenAI:

from memfuse.llm import OpenAI
from memfuse import MemFuse
import os


memfuse_client = MemFuse(
    # api_key=os.getenv("MEMFUSE_API_KEY"),
    # base_url=os.getenv("MEMFUSE_BASE_URL"),
)

memory = memfuse_client.init(
    user="alice",
    # agent="agent_default",
    # session="<randomly-generated-uuid>",
)

# Initialize your LLM client with the memory scope
llm_client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),  # Your OpenAI API key
    memory=memory
)

# Make a chat completion request
response = llm_client.chat.completions.create(
    model="gpt-4o", # Or any model supported by your LLM provider
    messages=[{"role": "user", "content": "I'm planning a trip to Mars. What is the gravity there?"}]
)

print(f"Response: {response.choices[0].message.content}")
# Example Output: Response: Mars has a gravity of about 3.721 m/s², which is about 38% of Earth's gravity.

Contextual Follow-up

Now, ask a follow-up question. MemFuse will automatically recall relevant context from the previous conversation:

# Ask a follow-up question. MemFuse automatically recalls relevant context.
followup_response = llm_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What are some challenges of living on that planet?"}]
)

print(f"Follow-up: {followup_response.choices[0].message.content}")
# Example Output: Follow-up: Some challenges of living on Mars include its thin atmosphere, extreme temperatures, high radiation levels, and the lack of liquid water on the surface.

MemFuse automatically manages the retrieval of relevant information and storage of new memories from conversations within the specified memory scope.
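
To make that flow concrete, here is a minimal, purely conceptual sketch of what a memory layer does: recall relevant past turns, inject them into the prompt, then store the new exchange. None of the names below (`SimpleMemory`, `recall`, `store`, `chat_with_memory`) are MemFuse APIs, and a real system uses embeddings and a vector index rather than keyword overlap:

```python
class SimpleMemory:
    """Toy illustration of a memory scope: stores turns, recalls relevant ones."""

    def __init__(self):
        self.turns = []  # stored {"role": ..., "content": ...} dicts

    def store(self, role, content):
        self.turns.append({"role": role, "content": content})

    def recall(self, query, k=3):
        # Naive keyword-overlap ranking; real systems use embedding similarity.
        words = set(query.lower().split())
        scored = sorted(
            self.turns,
            key=lambda t: len(words & set(t["content"].lower().split())),
            reverse=True,
        )
        return scored[:k]

def chat_with_memory(memory, user_message, llm):
    """Recall context, call the model, then persist both sides of the exchange."""
    context = memory.recall(user_message)
    messages = context + [{"role": "user", "content": user_message}]
    reply = llm(messages)
    memory.store("user", user_message)
    memory.store("assistant", reply)
    return reply
```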

Advanced Features

Memory Validation & Testing

The SDK includes comprehensive testing capabilities to validate memory accuracy:

  • E2E Memory Tests: Automated tests that verify conversational context retention
  • Semantic Similarity Validation: Uses RAGAS framework for intelligent response verification
  • Performance Benchmarks: MSC (Multi-Session Chat) dataset testing with accuracy metrics

Error Handling & Debugging

Enhanced error messages provide clear guidance:

  • Connection Issues: Helpful instructions for starting the MemFuse server
  • API Errors: Detailed error responses with actionable information
  • Logging: Comprehensive logging for troubleshooting and monitoring
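
A common way to make connection issues less painful is to wrap calls to the server in a retry with exponential backoff. The helper below is a generic pattern under stated assumptions, not a MemFuse API; the error message mirrors the SDK's guidance to check that the server is running:

```python
import time

def with_retries(call, attempts=3, base_delay=0.5, retry_on=(ConnectionError,)):
    """Invoke `call`, retrying on transient errors with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return call()
        except retry_on as exc:
            if attempt == attempts:
                raise RuntimeError(
                    "Could not reach the MemFuse server; is it running? "
                    "See the MemFuse Server repository for setup instructions."
                ) from exc
            # Back off: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * 2 ** (attempt - 1))
```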

Examples

Explore comprehensive examples in the examples/ directory of this repository, featuring:

  • Basic Operations: Fundamental usage patterns and asynchronous operations
  • Conversation Continuity: Maintaining context across multiple interactions
  • UI Integrations: Gradio-based chatbot implementations with streaming support

Documentation

  • Server Documentation: For detailed information about the MemFuse server architecture and advanced configuration, visit the MemFuse online documentation
  • SDK Documentation: Comprehensive API references and guides will be available soon

Community & Support

Join our growing community on GitHub.

If MemFuse enhances your projects, please ⭐ star both the server repository and this SDK repository!

License

This MemFuse Python SDK is licensed under the Apache 2.0 License. See the LICENSE file for complete details.
