M-Maze Documentation

🧩 Overview

Understanding M-Maze's core architecture

M-Maze is a memory-augmented AI assistant that maintains long-term memory and context awareness across sessions. Unlike traditional chatbots that start each conversation from scratch, M-Maze remembers your preferences, conversation history, and personal information through our IMDMR technology.

Key Innovation: IMDMR System

M-Maze uses Intelligent Multi-Dimensional Memory Retrieval (IMDMR) technology to provide context-aware, personalized responses based on your entire conversation history. This 6-step process makes every interaction intelligent and personalized.

The 6 Steps of IMDMR Technology

1. Memory Analysis & Entity Extraction

Every message you send is analyzed using AWS Bedrock to extract key information like names, professions, locations, and interests. This creates a rich understanding of what you're sharing.

Example: "I'm Rajani from Mumbai, I work as a software engineer" → Extracts: Name (Rajani), Location (Mumbai), Profession (Software Engineer)

2. Intent Classification & Context Understanding

M-Maze determines what you're trying to achieve - whether you're introducing yourself, asking questions, or sharing information. It understands conversation flow and references.

Example: "What do you remember about me?" → Intent: Memory Retrieval

3. Multi-Dimensional Memory Storage

Your information is stored in our Qdrant vector database across multiple dimensions: semantic meaning, time, entities, categories, and quality scores. This enables intelligent retrieval.

Storage: 1024-dimensional embeddings with metadata (personal, professional, interests)
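
A minimal sketch of this storage step, assuming the qdrant-client library and Titan Embeddings v2 on Bedrock; the collection name, payload fields, and model IDs are illustrative assumptions.

# Illustrative sketch only: 1024D embedding + payload storage in Qdrant.
# Collection name, payload fields, and model IDs are assumptions.
import json
import uuid
import boto3
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
qdrant = QdrantClient(host="localhost", port=6333)

def embed(text: str) -> list[float]:
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",  # Titan Embeddings v2
        body=json.dumps({"inputText": text, "dimensions": 1024}),
    )
    return json.loads(response["body"].read())["embedding"]

qdrant.create_collection(
    collection_name="memories",
    vectors_config=VectorParams(size=1024, distance=Distance.COSINE),
)
qdrant.upsert(
    collection_name="memories",
    points=[PointStruct(
        id=str(uuid.uuid4()),
        vector=embed("Rajani is a software engineer from Mumbai"),
        payload={"text": "Rajani is a software engineer from Mumbai",
                 "category": "professional",
                 "entities": ["Rajani", "Mumbai", "software engineer"],
                 "timestamp": "2024-01-01T00:00:00Z",
                 "quality_score": 0.9},
    )],
)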

4. Intelligent Query Processing

When you ask questions, M-Maze analyzes your query and selects the best search strategy: entity-based, category-based, semantic similarity, or context-aware search.

Strategy: "Tell me about my work" → Category-based search (professional) + recent conversations

5. Advanced Memory Retrieval

Using multiple search strategies, M-Maze finds the most relevant memories, ranks them by relevance and quality, and provides context-aware information retrieval.

Process: Combines semantic similarity + metadata filtering + quality scoring
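
Assuming the Qdrant collection and embed() helper from the Step 3 sketch, a combined retrieval-and-ranking pass could look like this; the 70/30 weighting between similarity and quality score is an illustrative assumption.

# Illustrative sketch only: filtered vector search plus quality-aware re-ranking.
# Reuses the "memories" collection and embed() helper from the Step 3 sketch;
# the 0.7/0.3 weighting is an assumption.
from qdrant_client import QdrantClient
from qdrant_client.models import FieldCondition, Filter, MatchValue

qdrant = QdrantClient(host="localhost", port=6333)

def retrieve(query: str, category: str | None = None, top_k: int = 5) -> list[dict]:
    query_filter = None
    if category:
        query_filter = Filter(must=[
            FieldCondition(key="category", match=MatchValue(value=category))
        ])
    hits = qdrant.search(
        collection_name="memories",
        query_vector=embed(query),      # semantic similarity
        query_filter=query_filter,      # metadata filtering
        limit=top_k * 2,
    )
    ranked = sorted(
        hits,
        key=lambda h: 0.7 * h.score + 0.3 * h.payload.get("quality_score", 0.0),
        reverse=True,
    )
    return [hit.payload for hit in ranked[:top_k]]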

6. Context-Aware Response Generation

Finally, M-Maze generates a personalized response using Llama 3.1 8B on AWS Bedrock, incorporating retrieved memories to maintain conversation continuity and tailor its replies to you.

Result: "I remember you're Rajani, a software engineer from Mumbai. How's work going?"

How It Works in Your M-Maze Experience

Even though the chat interface shows a fresh start each time you log in, the IMDMR system maintains all your conversation context in the backend. When you chat, M-Maze processes your messages through these 6 steps, creating intelligent, personalized responses that remember everything about you while providing a clean, focused user experience.

Core Features

  • Long-term Memory: Remembers conversations across sessions
  • Context Awareness: Understands conversation flow and references
  • Personalization: Adapts responses based on your preferences
  • Entity Recognition: Identifies and remembers names, places, and concepts

🏗️ System Architecture

Complete technical architecture of the IMDMR-powered M-Maze system

High-Level System Overview

🖥️ Frontend Layer

Next.js 14 + React 18
TypeScript + Tailwind CSS
Real-time chat interface
JWT authentication

⚙️ Backend Layer

FastAPI + Python 3.11
SQLAlchemy ORM
RESTful API endpoints
Async request handling

🧠 Memory Layer

Qdrant Vector DB
Llama 3.1 8B
1024D embeddings
Titan Embeddings v2
IMDMR algorithms

IMDMR System Architecture

Data Processing Pipeline

1. Input Processing

User message received via FastAPI endpoint

Tech: FastAPI + JWT validation + SQLite user lookup

2. Memory Analysis

Entity extraction & intent classification

Tech: Llama 3.1 8B + Custom prompt engineering

3. Memory Storage

Vector embedding generation & storage

Tech: Qdrant + Titan Embeddings v2 (1024D) + Metadata indexing
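
Wired together behind a FastAPI route, the processing half of the pipeline might look like the sketch below; the route path mirrors the API reference further down, while the auth stub and the extract_entities()/embed() helpers from the step sketches above are assumptions.

# Illustrative sketch only: the ingestion half of the pipeline as a FastAPI route.
# The auth stub and the extract_entities()/embed() helpers (defined in the step
# sketches above) stand in for the real JWT validation and IMDMR internals.
from fastapi import Depends, FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatMessage(BaseModel):
    message: str

def current_user() -> str:
    return "demo-user"  # placeholder for JWT validation + SQLite user lookup

@app.post("/proj/m-maze/chat/message")
async def chat_message(body: ChatMessage, user: str = Depends(current_user)) -> dict:
    entities = extract_entities(body.message)  # Step 2: memory analysis
    vector = embed(body.message)               # Step 3: Titan v2 embedding (1024D)
    # ...upsert `vector` + metadata payload into Qdrant, keyed to `user`...
    return {"status": "stored", "entities": entities}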

Memory Retrieval Pipeline

4. Query Processing

Intelligent search strategy selection

Tech: Multi-strategy search + Context analysis

5. Memory Retrieval

Semantic search & relevance ranking

Tech: Qdrant similarity search + Quality scoring

6. Response Generation

Context-aware AI response creation

Tech: Llama 3.1 8B + Retrieved memories + Context fusion
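
Chaining the Step 4 to 6 sketches from the overview gives a rough picture of the retrieval half; the orchestration shown here is an assumption about how the pieces fit together.

# Illustrative sketch only: chaining the Step 4-6 helpers sketched earlier
# (select_strategy, retrieve, generate_response). The orchestration is an assumption.
def answer(query: str) -> str:
    plan = select_strategy(query)                     # Step 4: pick a search strategy
    memories = retrieve(query, plan.get("category"))  # Step 5: Qdrant search + ranking
    facts = [m.get("text", "") for m in memories]
    return generate_response(query, facts)            # Step 6: context-aware generation

print(answer("Tell me about my work"))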

Data Flow Architecture

User Input → API Gateway → IMDMR Processing → Memory Analysis → Vector Generation → Qdrant Storage → Memory Retrieval → AI Generation → User Response

Technology Stack & Infrastructure

Development & Runtime

Frontend Framework: Next.js 14 + React 18
Backend Framework: FastAPI + Python 3.11
Database: SQLite + Qdrant Vector DB
AI Model: Llama 3.1 8B
Embeddings: Titan Embeddings v2
AI Platform: AWS Bedrock

🚀 Infrastructure & Deployment

Containerization: Docker + Docker Compose
Reverse Proxy: Traefik
Authentication: JWT + bcrypt
API Design: RESTful + OpenAPI

Performance & Scalability Features

Performance

  • Async request handling
  • Vector similarity search
  • Memory caching
  • Optimized embeddings

📈 Scalability

  • Horizontal scaling
  • Load balancing
  • Database sharding
  • Microservices ready

🔒 Security

  • JWT authentication
  • Password hashing
  • CORS protection
  • Input validation
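
To make the JWT and bcrypt items concrete, here is a minimal sketch using the bcrypt and PyJWT packages; the secret key, expiry, and claim names are placeholders, not M-Maze's actual configuration.

# Illustrative sketch only: bcrypt password hashing and JWT issue/validate.
# Secret key, expiry, and claim names are placeholders.
from datetime import datetime, timedelta, timezone

import bcrypt
import jwt  # PyJWT

SECRET_KEY = "change-me"  # placeholder; load from the environment in practice

def hash_password(password: str) -> bytes:
    return bcrypt.hashpw(password.encode(), bcrypt.gensalt())

def verify_password(password: str, hashed: bytes) -> bool:
    return bcrypt.checkpw(password.encode(), hashed)

def issue_token(username: str) -> str:
    claims = {"sub": username,
              "exp": datetime.now(timezone.utc) + timedelta(hours=1)}
    return jwt.encode(claims, SECRET_KEY, algorithm="HS256")

def validate_token(token: str) -> str:
    return jwt.decode(token, SECRET_KEY, algorithms=["HS256"])["sub"]

hashed = hash_password("s3cret")
assert verify_password("s3cret", hashed)
print(validate_token(issue_token("alex")))  # -> alex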

API Reference

POST /proj/m-maze/auth/signup

Create a new user account

{
  "username": "string",
  "password": "string"
}

POST /proj/m-maze/chat/message

Send a message to M-Maze

{
  "message": "string"
}

GET /proj/m-maze/chat/history

Retrieve conversation history

Returns: User profile and conversation history
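
A small client sketch using the requests library against the endpoints above; the base URL is a placeholder, and how the JWT is returned and attached is an assumption, since the auth flow is not spelled out here.

# Illustrative sketch only: calling the endpoints above with requests.
# The base URL is a placeholder; token handling is an assumption.
import requests

BASE = "https://example.com/proj/m-maze"  # placeholder host

session = requests.Session()
session.post(f"{BASE}/auth/signup", json={"username": "alex", "password": "s3cret"})
# Assumption: a signup/login response carries a JWT that is sent as a Bearer header.
# session.headers["Authorization"] = f"Bearer {token}"

reply = session.post(f"{BASE}/chat/message", json={"message": "Hi, my name is Alex."})
print(reply.json())

history = session.get(f"{BASE}/chat/history")
print(history.json())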

Usage Examples

Basic Conversation

User: "Hi, my name is Alex and I'm a software developer."

M-Maze: "Nice to meet you, Alex! I'll remember that you're a software developer. What kind of projects do you work on?"

Memory Retrieval

User: "What do you remember about me?"

M-Maze: "I remember that you're Alex, a software developer. We discussed your work on web applications and your interest in AI technologies."

Technical Specs

Backend: FastAPI + Python 3.11
Database: Qdrant Vector DB
AI Model: Llama 3.1 8B
Embeddings: Titan v2 (1024D)
AI Platform: AWS Bedrock

Ready to Try?

Experience the power of memory-augmented AI conversations

Launch M-Maze

Contributors & Developers

Tejas Pawar

Lead Developer & Project Architect

Full-stack development: From ideation to deployment, covering backend API, frontend interface, AI integration, and comprehensive documentation.