An MCP (Model Context Protocol) server that provides graph-based memory tools for AI agents using Dgraph as the backend database. Includes specialized tooling for ingesting and benchmarking against the Locomo-10 AI agent memory dataset. Built with TypeScript, Vercel AI SDK, and the Vercel MCP adapter.
- Entity Extraction: Automatically extracts entities from user messages using LLMs
- Vector Search: Semantic search through memories using embeddings
- Graph Relationships: Stores entities and memories with relationships in Dgraph
- Graph Algorithms: Analyzes graph structure (community detection and centrality)
- Locomo Benchmark Integration: Purpose-built ingestion for AI agent memory benchmarking
- Two MCP Tools:
  - `save_user_message`: Process and save messages with entity extraction
  - `graph_memory_search`: Vector-based search through stored memories
- Node.js 20+
- Dgraph database instance (local or cloud)
- OpenAI or Anthropic API key
The project uses standard Dgraph connection strings with `dgraph.open()`:
- Local: `dgraph://localhost:9080`
- With auth: `dgraph://user:password@localhost:9080`
- Cloud: `dgraph://your-instance.cloud:443?sslmode=verify-ca&bearertoken=your-token`
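These strings are passed straight to the client. Below is a minimal TypeScript sketch of opening a connection and running a sanity-check query, assuming the `dgraph-js` client's `open()` helper mentioned above; the exact initialization in this project may differ.

```typescript
// Minimal connection sketch (assumes dgraph-js exposes open(); details may differ in this project).
import * as dgraph from "dgraph-js";

async function connect(): Promise<void> {
  // dgraph.open() accepts the same dgraph:// connection strings shown above.
  const client = await dgraph.open(
    process.env.DGRAPH_CONNECTION_STRING ?? "dgraph://localhost:9080"
  );

  // Run a trivial query to confirm the connection works.
  const txn = client.newTxn({ readOnly: true });
  try {
    const res = await txn.query("{ total(func: has(dgraph.type)) { count(uid) } }");
    console.log(res.getJson());
  } finally {
    await txn.discard();
  }
}

connect().catch(console.error);
```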
- Clone and install dependencies:
  npm install
- Configure environment variables:
  cp .env.example .env
  Edit `.env` with your configuration:
  DGRAPH_CONNECTION_STRING=dgraph://localhost:9080
  AI_PROVIDER=openai
  OPENAI_API_KEY=your_openai_api_key
  EMBEDDING_MODEL=text-embedding-3-small
  LLM_MODEL=gpt-4o-mini
  For cloud instances, use the full connection string:
  DGRAPH_CONNECTION_STRING=dgraph://your-instance.cloud:443?sslmode=verify-ca&bearertoken=your-token
- Start Dgraph (if running locally):
  docker run --rm -it -p 8080:8080 -p 9080:9080 -p 8000:8000 dgraph/standalone:latest
# Build the project
npm run build
# Run in development mode
npm run dev
# Start production server
npm start
# Lint code
npm run lint
# Type check
npm run type-check
# Run tests
npm test
# Run tests in watch mode
npm run test:watch
# Run tests with coverage
npm run test:coverage
# Run tests for CI
npm run test:ci
# Launch MCP Inspector for testing
npm run inspector
# Quick MCP server test
npm run test:mcp
The MCP Inspector is a debugging tool that allows you to test and interact with your MCP server directly. It provides a web interface to call tools, inspect responses, and debug your server implementation.
- Install MCP Inspector:
  npm install -g @modelcontextprotocol/inspector
- Start Dgraph (if testing with real database):
  docker run --rm -it -p 8080:8080 -p 9080:9080 -p 8000:8000 dgraph/standalone:latest
- Configure your API keys in `.env`:
  OPENAI_API_KEY=your_actual_openai_api_key
  # or
  ANTHROPIC_API_KEY=your_actual_anthropic_api_key
- Start the MCP Inspector:
  npx @modelcontextprotocol/inspector
- Configure the connection in the inspector web interface:
  - Server Command: `node`
  - Server Arguments: `["dist/index.js"]`
  - Working Directory: `/path/to/your/graph-fetch/project`

  Or for development mode:
  - Server Command: `npm`
  - Server Arguments: `["run", "dev"]`
  - Working Directory: `/path/to/your/graph-fetch/project`
- Test the tools:

  Save User Message Tool:
  { "message": "I met John Smith at Google headquarters in Mountain View yesterday to discuss the new AI project." }

  Graph Memory Search Tool:
  { "query": "meetings with Google employees", "limit": 5 }
- save_user_message: Should extract entities (John Smith, Google, Mountain View, AI project) and save them to Dgraph with relationships
- graph_memory_search: Should return semantically similar memories based on vector embeddings
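For orientation, the tool results can be pictured with roughly the following TypeScript shapes. These are illustrative only, based on the sample responses in this README; the authoritative definitions live in `src/types/index.ts`.

```typescript
// Illustrative shapes only; the real types in src/types/index.ts may differ.
interface SaveUserMessageResult {
  memoryId: string;         // e.g. "0x753d"
  entityCount: number;      // entities linked to the saved memory
  newEntityCount: number;   // entities newly created by this message
  relationshipCount: number;
}

interface GraphMemorySearchResult {
  summary: string;          // AI-generated summary of the matches
  memories: { uid: string; content: string; score?: number }[];
}
```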
- Connection issues: Ensure the server builds successfully with `npm run build`
- API errors: Verify your AI provider API key is correctly set in `.env`
- Database errors: Make sure Dgraph is running and accessible on the configured port (`docker run --rm -it -p 8080:8080 -p 9080:9080 -p 8000:8000 dgraph/standalone:latest`)
- Tool errors: Check the inspector console and server logs for detailed error messages
- Server startup fails: The MCP server requires Dgraph to be running to initialize. Start Dgraph before testing the server.
You can also test the server manually using stdio:
# Start Dgraph (in separate terminal)
docker run --rm -it -p 8080:8080 -p 9080:9080 -p 8000:8000 dgraph/standalone:latest
# Build and start the server (in main terminal)
npm run build
echo '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}' | node dist/index.js
Here's a complete workflow to test the MCP server:
# 1. Start Dgraph
docker run --rm -it -p 8080:8080 -p 9080:9080 -p 8000:8000 dgraph/standalone:latest
# 2. In a new terminal, build the project
npm run build
# 3. Start MCP Inspector
npm run inspector
# 4. Open the inspector in your browser (usually http://localhost:3000)
# 5. Configure connection with: node dist/index.js
# 6. Test the tools with sample data
The project includes a comprehensive test suite with:
- DgraphService: Database operations, schema initialization, vector search
- AIService: Entity extraction, embedding generation, summary creation
- MCP Tools: save_user_message and graph_memory_search functionality
- MCP Server: End-to-end server functionality and tool integration
tests/
├── fixtures/ # Test data and mock objects
├── mocks/ # Service mocks (Dgraph, AI)
├── integration/ # Integration tests
└── setup.ts # Global test configuration
src/__tests__/ # Unit tests alongside source code
├── lib/ # Service unit tests
└── tools/ # Tool unit tests
# Run all tests
npm test
# Run tests with coverage report
npm run test:coverage
# Run tests in watch mode during development
npm run test:watch
# Run tests for CI (no watch, with coverage)
npm run test:ci
- Jest with TypeScript support via ts-jest
- ESM modules support for modern JavaScript
- Mocking of external dependencies (Dgraph, AI services)
- Coverage reporting with HTML and LCOV formats
- GitHub Actions CI pipeline for automated testing
- Install Vercel CLI: `npm i -g vercel`
- Deploy: `vercel`
- Set environment variables in the Vercel dashboard
- Configure Dgraph Cloud or a hosted Dgraph instance
Processes a user message, extracts entities, and saves them to Dgraph with relationships.
Parameters:
- `message` (string): The user message to process
Example:
{
"message": "I met John Smith at Google headquarters in Mountain View yesterday."
}
Searches for relevant memories using vector similarity on entity embeddings.
Parameters:
- `query` (string): Search query
- `limit` (number, optional): Max results (default: 10)
Example:
{
"query": "meetings with Google employees",
"limit": 5
}
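The same tools can be exercised programmatically. Here is a hedged sketch using the MCP TypeScript SDK client over stdio, mirroring the inspector configuration above; adjust the transport if you run the server over HTTP.

```typescript
// Sketch of calling both tools from an MCP client (assumes the official TypeScript SDK).
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({ command: "node", args: ["dist/index.js"] });
const client = new Client({ name: "graph-fetch-demo", version: "0.0.1" });
await client.connect(transport);

// Save a message; entities are extracted and written to Dgraph.
const saved = await client.callTool({
  name: "save_user_message",
  arguments: { message: "I met John Smith at Google headquarters in Mountain View yesterday." },
});
console.log(JSON.stringify(saved, null, 2));

// Search memories by semantic similarity.
const found = await client.callTool({
  name: "graph_memory_search",
  arguments: { query: "meetings with Google employees", limit: 5 },
});
console.log(JSON.stringify(found, null, 2));

await client.close();
```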
Graph Fetch includes a specialized ingestion script for the Locomo-10 AI agent memory benchmark dataset. This benchmark contains 10 realistic multi-session conversations designed to test an AI agent's ability to build and maintain long-term memory across interactions.
# 1. Start Dgraph and MCP server
docker run --rm -it -p 8080:8080 -p 9080:9080 -p 8000:8000 dgraph/standalone:latest
npm run build && npm start
# 2. Test with small sample
node scripts/ingest-locomo.js --max-conversations 1 --max-sessions 1 --max-messages 5
# 3. Process larger batches
node scripts/ingest-locomo.js --max-conversations 2 --max-sessions 3 --max-messages 10
- 10 conversations between different speaker pairs
- ~190 sessions total (spanning days/weeks per conversation)
- ~4000+ messages with rich contextual information
- Realistic scenarios: Work stress, relationships, life events, personal growth
The ingestion script processes conversations through the Graph Fetch pipeline to:
- Extract entities (people, places, organizations, events, emotions)
- Build relationships between entities automatically
- Generate embeddings for semantic search
- Store everything in Dgraph as a connected knowledge graph
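Conceptually, the script just replays each benchmark utterance through `save_user_message`. A simplified sketch of that loop follows; the dataset field names below are illustrative rather than the exact Locomo-10 JSON schema, and `scripts/ingest-locomo.ts` remains the reference implementation.

```typescript
// Simplified ingestion sketch; the dataset field names are illustrative only.
import { readFile } from "node:fs/promises";

type Utterance = { speaker: string; text: string; timestamp: string };

async function ingest(saveUserMessage: (message: string) => Promise<void>) {
  const raw = JSON.parse(await readFile("eval/locomo/Locomo-10.json", "utf8"));
  for (const conversation of raw.slice(0, 1)) {                      // --max-conversations 1
    for (const session of conversation.sessions.slice(0, 1)) {       // --max-sessions 1
      for (const turn of session.turns.slice(0, 5) as Utterance[]) { // --max-messages 5
        // Each turn is framed the way the sample output shows: "[<time>] <speaker>: <text>"
        await saveUserMessage(`[${turn.timestamp}] ${turn.speaker}: ${turn.text}`);
      }
    }
  }
}
```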
📋 Processing conversation 1: Caroline & Melanie
[SAVE] [1:56 pm on 8 May, 2023] Caroline: Hey Mel! Good to see you!
✅ Saved - Successfully saved message with 3 entities (1 new) and 2 relationships. Memory ID: 0x753d
✅ Ingestion completed!
📈 Total processed: 10 messages from 1 conversations
✅ Successfully saved: 10
❌ Errors: 0
- Benchmark AI agent memory systems against standardized dataset
- Test graph-based memory retrieval with complex multi-session contexts
- Evaluate entity extraction on realistic conversation data
- Research long-term memory patterns in AI agent interactions
After ingestion, explore the generated knowledge graph with powerful DQL queries:
- Relationship analysis: Find support networks, advocacy patterns, community connections
- Entity centrality: Discover most connected people and concepts
- Temporal tracking: Analyze relationship evolution across conversations
- Semantic search: Query by relationship types like "expressed gratitude for", "member of", "advocates for"
📖 Complete Locomo Ingestion Guide →
🔍 Example DQL Queries & Analysis →
Graph Fetch includes a powerful Python companion service for advanced graph analytics that can discover semantic communities within your memory graph and enrich the data model with dedicated community nodes.
The service uses Label Propagation algorithm to automatically discover communities of related entities in your graph data. This unsupervised approach identifies clusters of entities that are semantically connected, revealing the underlying structure of your AI agent's memory.
How it works:
- Analyzes entity relationships in your existing Dgraph memory graph
- Discovers semantic communities using NetworkX label propagation algorithm
- Creates dedicated Community nodes with type `Community` in Dgraph
- Establishes member relationships connecting each community to its member entities
- Enriches queries enabling community-based memory retrieval and analysis
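For intuition, label propagation repeatedly gives each node the label most common among its neighbours until labels stop changing; nodes that end up sharing a label form a community. Below is a toy TypeScript sketch of the idea (the service itself uses NetworkX's implementation, not this code).

```typescript
// Toy synchronous label propagation over an undirected adjacency list.
// For intuition only; the graph-algos service uses NetworkX's implementation.
function labelPropagation(adj: Map<string, string[]>, maxIter = 20): Map<string, string> {
  const label = new Map<string, string>();
  for (const node of adj.keys()) label.set(node, node); // each node starts as its own community

  for (let iter = 0; iter < maxIter; iter++) {
    let changed = false;
    for (const [node, neighbours] of adj) {
      if (neighbours.length === 0) continue;
      // Count neighbour labels and adopt the most frequent one.
      const counts = new Map<string, number>();
      for (const n of neighbours) {
        const l = label.get(n) ?? n;
        counts.set(l, (counts.get(l) ?? 0) + 1);
      }
      const best = [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
      if (best !== label.get(node)) {
        label.set(node, best);
        changed = true;
      }
    }
    if (!changed) break;
  }
  return label; // nodes sharing a label belong to the same community
}

// Example: two obvious clusters {a, b, c} and {x, y}.
const communities = labelPropagation(new Map([
  ["a", ["b", "c"]], ["b", ["a", "c"]], ["c", ["a", "b"]],
  ["x", ["y"]], ["y", ["x"]],
]));
console.log(communities);
```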
The community detection transforms your flat entity-relationship structure into a rich, hierarchical graph:
Before: Entity ←→ relatedTo ←→ Entity
After: Entity ←→ relatedTo ←→ Entity
↕
Community
Each discovered community becomes a first-class entity with:
- Semantic groupings: Related people, concepts, events clustered together
- Metadata: Algorithm used, community size, execution timestamp
- Queryable structure: Find all members of a community or which communities an entity belongs to
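In TypeScript terms, a community node can be pictured roughly like this; the field names follow the DQL examples later in this README, and the authoritative schema is managed by the graph-algos service.

```typescript
// Illustrative shape of a Community node as used in the DQL examples below.
interface CommunityNode {
  uid: string;
  name: string;            // e.g. "label_propagation_community_2"
  algorithm: string;       // algorithm that produced the community
  member_count: number;    // number of member entities
  members: { uid: string; name: string; type: string }[]; // member Entity nodes
}
```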
Prerequisites:
- Python 3.11+ with UV package manager
- Existing Dgraph instance with Entity/Memory data
# Navigate to the graph algorithms service
cd graph-algos
# Install dependencies
uv sync
# Configure connection to your Dgraph instance
cp .env.example .env
# Edit .env with your DGRAPH_CONNECTION_STRING
# Run label propagation with community node creation
uv run graph-algos community --algorithm label_propagation --write --create-communities
# Verify communities were created
uv run python verify_communities.py
When run on an AI agent memory graph with 134 entities:
✅ Found 17 community nodes
📊 Community Statistics:
Total communities: 17
Total member relationships: 134
Average community size: 7.9
📊 Example Discovered Communities:
Community 0: Caroline, Mel, conversation dates (37 members)
Community 1: Melanie, inspiring stories, creative concepts (11 members)
Community 2: Mental health topics, charity events (3 members)
Community 4: Family relationships, community involvement (35 members)
- Contextual Memory Retrieval: Find memories by community themes rather than individual entities
- Semantic Organization: Understand how your AI agent naturally groups related information
- Conversation Analysis: Identify topic clusters and relationship patterns across sessions
- Memory Summarization: Generate community-level summaries and insights
- Graph Exploration: Navigate memory graph by semantic communities rather than individual connections
🔧 Complete Graph Algorithms Documentation →
Graph Fetch consists of a TypeScript MCP server for AI agent memory operations and a Python companion service for advanced graph analytics.
fetch/
├── src/ # TypeScript MCP server source code
│ ├── lib/
│ │ ├── dgraph.ts # Dgraph database operations and schema management
│ │ └── ai.ts # AI operations (entity extraction, embeddings, summaries)
│ ├── tools/
│ │ ├── save-user-message.ts # MCP tool for processing and saving user messages
│ │ └── graph-memory-search.ts # MCP tool for vector-based memory search
│ ├── types/
│ │ └── index.ts # TypeScript type definitions and interfaces
│ ├── test-fixtures/
│ │ └── test-data.ts # Shared test data and mock objects
│ ├── __tests__/ # Unit tests alongside source code
│ │ ├── lib/
│ │ │ ├── ai.test.ts # AIService unit tests with mocked providers
│ │ │ └── dgraph.test.ts # DgraphService tests with mocked database
│ │ └── tools/
│ │ ├── save-user-message.test.ts # Save message tool integration tests
│ │ └── graph-memory-search.test.ts # Search tool functionality tests
│ ├── test-setup.ts # Global test configuration and setup
│ └── index.ts # MCP server initialization and HTTP transport
├── tests/
│ ├── integration/ # End-to-end MCP server integration tests
│ ├── fixtures/ # Test data and mock objects
│ └── mocks/ # Service mocks (Dgraph, AI SDK)
├── scripts/
│ ├── ingest-locomo.ts # Locomo-10 benchmark dataset ingestion script
│ ├── ingest-locomo.js # Compiled JavaScript version
│ └── README.md # Ingestion guide and usage examples
├── eval/
│ ├── README.md # Analysis guides and example DQL queries
│ └── locomo/
│ └── Locomo-10.json # Complete AI agent memory benchmark dataset
├── graph-algos/ # Python companion service for graph analytics
│ ├── src/
│ │ └── graph_algos/
│ │ ├── core/
│ │ │ ├── config.py # Pydantic configuration management
│ │ │ ├── dgraph_client.py # Python Dgraph client with auth support
│ │ │ └── logger.py # Structured logging configuration
│ │ ├── algorithms/
│ │ │ ├── base.py # Abstract base class for all algorithms
│ │ │ ├── centrality.py # NetworkX centrality implementations
│ │ │ ├── community.py # Community detection algorithms
│ │ │ └── graph_builder.py # Dgraph to NetworkX graph conversion
│ │ ├── api/
│ │ │ └── server.py # Flask REST API server
│ │ ├── schedulers/
│ │ │ └── periodic_runner.py # APScheduler cron-style execution
│ │ └── cli.py # Click-based command-line interface
│ ├── tests/ # Python test suite
│ ├── examples/
│ │ ├── api_client.py # API usage examples
│ │ ├── community_analysis.py # Community detection examples
│ │ └── run_pagerank.py # Centrality algorithm examples
│ ├── config/ # Configuration templates
│ ├── docs/ # Additional documentation
│ ├── pyproject.toml # Python project and UV dependency configuration
│ ├── uv.lock # UV dependency lock file
│ ├── .env.example # Environment variable template
│ ├── .gitignore # Python-specific gitignore rules
│ ├── Dockerfile # Docker container configuration
│ ├── docker-compose.yml # Multi-service Docker setup
│ └── README.md # Graph algorithms service documentation
├── img/
│ ├── fetch.png # Project logo
│ ├── fetch-schema.png # Graph data model visualization
│ └── arrows/
│ └── fetch-schema.json # Arrows graph editor schema file
├── dist/ # Compiled JavaScript output
├── node_modules/ # Node.js dependencies
├── package.json # Node.js project configuration and dependencies
├── package-lock.json # Node.js dependency lock file
├── tsconfig.json # TypeScript compiler configuration
├── jest.config.js # Jest testing framework configuration
├── eslint.config.js # ESLint code quality configuration
├── nodemon.json # Development server auto-reload configuration
├── vercel.json # Vercel deployment configuration
├── .env.example # Environment variables template
├── CLAUDE.md # AI assistant project context and instructions
└── README.md # Main project documentation
- `src/lib/dgraph.ts`: `DgraphService` class handling all Dgraph database operations
  - Connection management with `dgraph.open()` connection strings
  - Schema initialization and management (Entity, Memory, Community types)
  - CRUD operations for entities, memories, and relationships
  - Vector similarity search using HNSW index
  - Faceted relationship storage with metadata
- `src/lib/ai.ts`: `AIService` class for AI operations using the Vercel AI SDK
  - Multi-provider support (OpenAI, Anthropic) with automatic provider switching
  - Entity extraction from messages using structured LLM prompts
  - Vector embedding generation for semantic search
  - Memory summarization and relationship extraction
  - Configurable models for different AI providers
- `src/tools/save-user-message.ts`: `SaveUserMessageTool` class
  - Processes user messages through the complete entity extraction pipeline
  - Extracts entities (people, places, organizations, events, emotions, concepts)
  - Generates embeddings for each entity and the message
  - Stores entities with relationships and faceted edges in Dgraph
  - Creates Memory nodes linked to extracted entities
  - Returns a structured response with entity counts and memory ID
- `src/tools/graph-memory-search.ts`: `GraphMemorySearchTool` class
  - Vector-based semantic search through stored memories
  - Generates query embeddings and performs HNSW similarity search
  - Retrieves relevant entities and their connected memories
  - AI-powered summarization of search results
  - Configurable result limits and similarity thresholds
- `src/index.ts`: MCP server initialization and HTTP transport setup
  - Express server with CORS support for web clients
  - MCP protocol implementation with tool registration
  - Environment-based configuration loading
  - Service initialization and dependency injection
  - Error handling and graceful shutdown
- `src/types/index.ts`: TypeScript type definitions
  - `Entity`, `Memory`, `EntityRelationship` data models
  - `DgraphConfig`, `AIConfig` configuration interfaces
  - MCP tool argument and response types
  - Vector search and graph operation types
- `src/__tests__/`: Comprehensive test suite with Jest
  - `lib/ai.test.ts`: AIService unit tests with mocked providers
  - `lib/dgraph.test.ts`: DgraphService tests with mocked database
  - `tools/save-user-message.test.ts`: Tool integration tests
  - `tools/graph-memory-search.test.ts`: Search functionality tests
- `src/test-fixtures/test-data.ts`: Shared test data and fixtures
- `tests/integration/`: End-to-end MCP server integration tests
- `scripts/ingest-locomo.ts`: Locomo-10 benchmark dataset ingestion
  - Processes 10 multi-session AI agent conversations (~4000 messages)
  - Batch processing with configurable conversation/session/message limits
  - Progress tracking and error reporting
  - Integration with the Graph Fetch entity extraction pipeline
  - Designed for AI agent memory benchmarking and evaluation
- `eval/`: Benchmark analysis and DQL query examples
  - `README.md`: Analysis guides and example queries
  - `locomo/Locomo-10.json`: Complete benchmark dataset
- `graph-algos/src/graph_algos/core/`:
  - `config.py`: Pydantic-based configuration management with environment variables
  - `dgraph_client.py`: Python Dgraph client with `dgraph://` connection string support and SSL/bearer token auth
  - `logger.py`: Structured logging configuration with JSON/text output formats
- `graph-algos/src/graph_algos/algorithms/`:
  - `base.py`: `BaseAlgorithm` abstract class with timing, error handling, and result storage
  - `centrality.py`: NetworkX centrality implementations (PageRank, Betweenness, Closeness, Eigenvector)
  - `community.py`: Community detection algorithms (Louvain, Label Propagation, Leiden, Greedy Modularity)
  - `graph_builder.py`: `GraphBuilder` class for converting Dgraph data to NetworkX graphs
- `graph-algos/src/graph_algos/api/server.py`: Flask REST API server
  - Endpoints for running algorithms (`/centrality/run`, `/community/run`)
  - Graph information and health check endpoints
  - JSON request/response handling with error management
- `graph-algos/src/graph_algos/cli.py`: Click-based command-line interface
  - Commands for centrality, community detection, and batch processing
  - Support for the `--create-communities` flag to create Community nodes
  - Configuration via CLI arguments or environment variables
- `graph-algos/src/graph_algos/schedulers/periodic_runner.py`: APScheduler cron-style execution
  - Configurable periodic algorithm execution
  - Multiple scheduler backends (BlockingScheduler, BackgroundScheduler)
- Community Node Creation: Transforms community detection results into first-class Dgraph nodes
- Multi-Algorithm Support: Runs multiple algorithms in parallel with result aggregation
- NetworkX Integration: Full compatibility with NetworkX ecosystem and algorithms
- Production Ready: Comprehensive error handling, logging, and configuration management
- `package.json`: Node.js dependencies, scripts, and MCP server configuration
- `pyproject.toml`: Python project configuration with UV dependency management
- `tsconfig.json`: TypeScript compiler configuration with ESM modules
- `jest.config.js`: Jest testing framework setup with TypeScript support
- `vercel.json`: Vercel deployment configuration for the MCP server
Graph Fetch creates a rich knowledge graph that you can explore using Dgraph's DQL (Dgraph Query Language). Here are practical examples of queries to explore your agent memory data, tested and verified with actual results.
Count total entities and memories:
{
entity_count(func: type(Entity)) {
count(uid)
}
memory_count(func: type(Memory)) {
count(uid)
}
}
Result:
{
"entity_count": [{"count": 134}],
"memory_count": [{"count": 85}]
}
Get entity types:
{
entity_types(func: type(Entity)) {
type
}
}
Result: Shows distribution of PERSON, CONCEPT, EVENT, ORGANIZATION, PLACE, and DATE entities across 134 total entities.
Sample PERSON entities:
{
persons(func: type(Entity)) @filter(eq(type, "PERSON")) {
uid
name
type
description
createdAt
}
}
Result:
{
"persons": [
{
"uid": "0x7558",
"name": "Caroline",
"type": "PERSON",
"description": "One of the individuals communicating in the text.",
"createdAt": "2025-08-09T18:31:51.16Z"
},
{
"uid": "0x7559",
"name": "Mel",
"type": "PERSON",
"description": "The other individual being addressed by Caroline.",
"createdAt": "2025-08-09T18:31:51.721Z"
},
{
"uid": "0x755d",
"name": "Melanie",
"type": "PERSON",
"description": "A person engaging in a conversation.",
"createdAt": "2025-08-09T18:31:57.489Z"
}
]
}
Sample CONCEPT entities:
{
concepts(func: type(Entity)) @filter(eq(type, "CONCEPT")) {
uid
name
type
description
createdAt
}
}
Result: Returns 89 concept entities including LGBTQ, support group, inspiring stories, transgender stories, support, painting, group, edu, career options, jobs, mental health, self-care, family, running, reading, playing my violin, fam, summer break, camping, adoption agencies, kids, kids in need, future family, friends, mentors, LGBTQ community, transgender journey, trans community, gender identity, inclusion, inclusivity, acceptance, community, hope, understanding, courage, vulnerability, stories, love, inclusive, change, journey, impact, life's challenges, banker, business, dance studio, dancing, biz, Dance, contemporary dance, Contemporary dance, dance class, clothing store, store, dance floor, Dance floors, Marley flooring, dance studios, Marley, dance, trendy pieces, furniture, decor, chandelier, customers, cool oasis, shopping experience, family road trip, aerial yoga, kickboxing, Kickboxing, local politics, education, infrastructure, funding, campaign, networking, Networking, community's education, future generations, tools for success, foundation of progress and opportunity, system, positive difference, passion, online group, supportive community, group of people, advice, encouragement, like-minded individuals.
Sample EVENT entities:
{
events(func: type(Entity)) @filter(eq(type, "EVENT")) {
uid
name
type
description
createdAt
}
}
Result: Returns 23 event entities including dates (8 May, 2023, 25 May, 2023, 9 June 2023, 20 January, 2023, 29 January, 2023, 1 February, 2023, 17 December, 2022, 22 December, 2022, 1 January, 2023), times (3:56 pm, 4:04 pm, 8:30 pm), and activities (charity race, last Saturday, event, my school event, pic, toy drive).
Get memories with their connected entities:
{
memories(func: type(Memory)) {
uid
content
timestamp
entities {
uid
name
type
}
}
}
Result: Returns 85 memories with their content, timestamps, and connected entities. Each memory shows the conversation content and links to relevant entities (people, concepts, events, places).
Find memories by specific person:
{
caroline_memories(func: type(Memory)) @filter(anyoftext(content, "Caroline")) {
uid
content
timestamp
entities {
name
type
}
}
}
Result: Returns memories mentioning Caroline, showing her conversations about LGBTQ support groups, transgender stories, adoption plans, and school events.
Find highly connected entities:
{
connected_entities(func: type(Entity)) {
uid
name
type
relatedTo @facets(weight) {
uid
name
type
}
}
}
Result: Shows entities with their relationship networks. Caroline has 35+ connections, Melanie has 25+ connections, Jon has 30+ connections, Gina has 25+ connections, and John has 30+ connections to various concepts, events, and other entities.
Search for specific concept relationships:
{
dance_entities(func: type(Entity)) @filter(eq(name, "dance")) {
uid
name
type
description
}
dancing_entities(func: type(Entity)) @filter(eq(name, "dancing")) {
uid
name
type
description
}
}
Result:
{
"dance_entities": [
{
"uid": "0x75de",
"name": "dance",
"type": "CONCEPT",
"description": "An art form that Jon is passionate about and aiming to pursue professionally."
}
],
"dancing_entities": [
{
"uid": "0x75bc",
"name": "dancing",
"type": "CONCEPT",
"description": "The art of movement of the body, usually to music."
}
]
}
Get entities in chronological order:
{
timeline_entities(func: type(Entity)) @filter(ge(createdAt, "2025-08-09T18:31:00Z")) {
uid
name
type
createdAt
}
}
Result: Returns all 134 entities ordered by creation time, showing the progression of conversation topics and entity extraction over time.
Get memories in chronological order:
{
timeline_memories(func: type(Memory)) @filter(ge(timestamp, "2025-08-09T18:31:00Z")) {
uid
content
timestamp
}
}
Result: Returns all 85 memories ordered by ingestion timestamp, showing the conversation flow across sessions (the conversation dates embedded in the content span December 2022 through June 2023).
With community nodes created, you can perform sophisticated graph queries:
# Find all communities and their members
{
communities(func: type(Community)) {
uid name algorithm member_count
members { uid name type }
}
}
# Find which communities a person belongs to
{
caroline(func: eq(name, "Caroline")) {
name
~members { name algorithm community_id }
}
}
# Search memories by community context
{
mental_health_community(func: eq(name, "label_propagation_community_2")) {
name member_count
members {
name type
~relatedTo {
dgraph.type
content # If linked to Memory nodes
}
}
}
}
Find memories containing specific concepts:
{
lgbtq_memories(func: type(Memory)) @filter(anyoftext(content, "LGBTQ")) {
uid
content
entities {
name
type
}
}
}
Result:
{
"lgbtq_memories": [
{
"uid": "0x7561",
"content": "[1:56 pm on 8 May, 2023] Caroline: I went to a LGBTQ support group yesterday and it was so powerful.",
"entities": [
{"name": "Caroline", "type": "PERSON"},
{"name": "8 May, 2023", "type": "EVENT"},
{"name": "LGBTQ", "type": "CONCEPT"},
{"name": "support group", "type": "CONCEPT"}
]
},
{
"uid": "0x7592",
"content": "[7:55 pm on 9 June, 2023] Caroline: Hey Melanie! How's it going? I wanted to tell you about my school event last week. It was awesome! I talked about my transgender journey and encouraged students to get involved in the LGBTQ community. It was great to see their reactions. It made me reflect on how far I've come since I started transitioning three years ago.",
"entities": [
{"name": "Caroline", "type": "PERSON"},
{"name": "Melanie", "type": "PERSON"},
{"name": "LGBTQ community", "type": "CONCEPT"},
{"name": "my school event", "type": "EVENT"},
{"name": "transgender journey", "type": "CONCEPT"}
]
}
]
}
Note: The `anyoftext()` function only works on fields with fulltext indexing (like `content`), not on `name` fields.
Find shortest path between entities:
{
q(func: eq(name, "Caroline")) {
a as uid
}
q1(func: eq(name, "Melanie")) {
b as uid
}
path as shortest(from: uid(a), to: uid(b), numpaths: 5) {
relatedTo @facets(weight)
}
path(func: uid(path)) {
uid
name
type
}
}
Result: Returns empty path results, indicating no direct relationship path exists between Caroline and Melanie in the current graph structure.
Entity type distribution:
{
type_stats(func: type(Entity)) {
type
count(uid)
}
}
Result: Returns entity types but count aggregation doesn't work as expected in this version. Use the basic count query instead.
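Depending on the Dgraph version, a `@groupby` query may produce the per-type breakdown directly. A hedged alternative, shown as it would be run from the TypeScript client (unverified against this dataset):

```typescript
// Hedged alternative: group entities by type and count them.
// @groupby support and output shape can vary between Dgraph versions.
import * as dgraph from "dgraph-js";

async function entityTypeDistribution(client: dgraph.DgraphClient) {
  const query = `{
    types(func: type(Entity)) @groupby(type) {
      count(uid)
    }
  }`;
  const res = await client.newTxn({ readOnly: true }).query(query);
  console.log(JSON.stringify(res.getJson(), null, 2));
}
```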
Memory timeline analysis:
{
memory_timeline(func: type(Memory)) {
timestamp
count(uid)
}
}
Result: Similar count aggregation issue. Use the basic memory count query for accurate totals.
Find related concepts:
{
related_concepts(func: eq(name, "dance")) {
name
type
relatedTo {
name
type
description
}
}
}
Result:
{
"related_concepts": [
{
"name": "dance",
"type": "CONCEPT"
}
]
}
Discover entity clusters:
{
entity_clusters(func: type(Entity)) {
name
type
relatedTo {
name
type
}
}
}
Result: Returns all entities with their relationships, showing the complete knowledge graph structure.
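The exploration queries above are structural or full-text; the vector index can also be queried directly. Here is a hedged sketch combining the Vercel AI SDK's `embed()` with Dgraph's `similar_to` function; the embedding predicate name `embedding` is an assumption about this project's schema, and the query vector must come from the same embedding model used at ingestion.

```typescript
// Hedged vector-search sketch: the predicate name "embedding" is an assumption about the schema.
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";
import * as dgraph from "dgraph-js";

async function vectorSearch(client: dgraph.DgraphClient, query: string) {
  // Embed the query with the same model configured for ingestion (EMBEDDING_MODEL).
  const { embedding } = await embed({
    model: openai.embedding("text-embedding-3-small"),
    value: query,
  });

  // similar_to(<vector predicate>, <top K>, <query vector>) uses the HNSW index.
  const dql = `{
    matches(func: similar_to(embedding, 5, "[${embedding.join(",")}]")) {
      uid
      name
      type
      description
    }
  }`;
  const res = await client.newTxn({ readOnly: true }).query(dql);
  return res.getJson();
}
```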
Graph Fetch includes geospatial data for PLACE entities, enabling location-based queries and analysis. Some places have precise coordinates while others represent conceptual locations.
Find all places with geographic coordinates:
{
places_with_coords(func: type(Entity)) @filter(eq(type, "PLACE") AND has(location)) {
uid
name
type
description
location
}
}
Result:
{
"places_with_coords": [
{
"uid": "0x75cf",
"name": "Paris",
"type": "PLACE",
"description": "A city in France that Jon visited.",
"location": {
"type": "Point",
"coordinates": [2.3522, 48.8566]
}
},
{
"uid": "0x75d1",
"name": "Rome",
"type": "PLACE",
"description": "Capital city of Italy.",
"location": {
"type": "Point",
"coordinates": [12.4964, 41.9028]
}
}
]
}
Find all place entities (with and without coordinates):
{
all_places(func: type(Entity)) @filter(eq(type, "PLACE")) {
uid
name
type
description
location
}
}
Result: Returns 7 place entities including:
- With coordinates: Paris (France), Rome (Italy)
- Conceptual places: downtown, a great spot, homeless shelter, neighborhood, area
Find places by specific name:
{
paris_info(func: eq(name, "Paris")) {
uid
name
type
description
location
relatedTo {
name
type
description
}
}
}
Result:
{
"paris_info": [
{
"uid": "0x75cf",
"name": "Paris",
"type": "PLACE",
"description": "A city in France that Jon visited.",
"location": {
"type": "Point",
"coordinates": [2.3522, 48.8566]
}
}
]
}
Find memories mentioning specific places:
{
paris_memories(func: type(Memory)) @filter(anyoftext(content, "Paris")) {
uid
content
timestamp
entities {
name
type
location
}
}
}
Result:
{
"paris_memories": [
{
"uid": "0x75d0",
"content": "[2:32 pm on 29 January, 2023] Jon: Hey Gina! Thanks for asking. I'm on the hunt for the ideal spot for my dance studio and it's been quite a journey! I've been looking at different places and picturing how the space would look. I even found a place with great natural light! Oh, I've been to Paris yesterday! It was sooo cool.",
"timestamp": "2025-08-09T18:36:34.214Z",
"entities": [
{"name": "Gina", "type": "PERSON"},
{"name": "Jon", "type": "PERSON"},
{"name": "dance studio", "type": "CONCEPT"},
{"name": "Paris", "type": "PLACE", "location": {"type":"Point","coordinates":[2.3522,48.8566]}}
]
}
]
}
Find places mentioned in conversations:
{
place_mentions(func: type(Entity)) @filter(eq(type, "PLACE")) {
name
description
location
~relatedTo {
name
type
content
}
}
}
Result: Shows places and their connections to other entities and memories in the conversation graph.
Find memories and entities near specific coordinates:
{
# Find places near Rome (12.4964, 41.9028)
# Rome coordinates: [12.4964, 41.9028]
# Paris coordinates: [2.3522, 48.8566] - roughly 1,100 km from Rome
places_near_rome(func: type(Entity)) @filter(eq(type, "PLACE")) {
uid
name
type
description
location
}
# Find memories mentioning places near Rome
memories_near_rome(func: type(Memory)) @filter(anyoftext(content, "Rome") OR anyoftext(content, "Paris")) {
uid
content
timestamp
entities {
name
type
location
}
}
}
Result:
{
"places_near_rome": [
{
"uid": "0x75cf",
"name": "Paris",
"type": "PLACE",
"description": "A city in France that Jon visited.",
"location": {
"type": "Point",
"coordinates": [2.3522, 48.8566]
}
},
{
"uid": "0x75d1",
"name": "Rome",
"type": "PLACE",
"description": "Capital city of Italy.",
"location": {
"type": "Point",
"coordinates": [12.4964, 41.9028]
}
},
{
"uid": "0x75d3",
"name": "downtown",
"type": "PLACE",
"description": "A central area of a city, known for accessibility."
},
{
"uid": "0x75e5",
"name": "a great spot",
"type": "PLACE",
"description": "A location that is being referred to as cozy and inviting."
},
{
"uid": "0x75fc",
"name": "homeless shelter",
"type": "PLACE",
"description": "A facility providing assistance to homeless individuals."
},
{
"uid": "0x760a",
"name": "place",
"type": "PLACE",
"description": "A residential area where people live."
},
{
"uid": "0x7616",
"name": "area",
"type": "PLACE",
"description": "The unspecified geographic location where John aims to improve education."
}
],
"memories_near_rome": [
{
"uid": "0x75d0",
"content": "[2:32 pm on 29 January, 2023] Jon: Hey Gina! Thanks for asking. I'm on the hunt for the ideal spot for my dance studio and it's been quite a journey! I've been looking at different places and picturing how the space would look. I even found a place with great natural light! Oh, I've been to Paris yesterday! It was sooo cool.",
"timestamp": "2025-08-09T18:36:34.214Z",
"entities": [
{"name": "Gina", "type": "PERSON"},
{"name": "Jon", "type": "PERSON"},
{"name": "dance studio", "type": "CONCEPT"},
{"name": "Paris", "type": "PLACE", "location": {"type":"Point","coordinates":[2.3522,48.8566]}}
]
},
{
"uid": "0x75d2",
"content": "[2:32 pm on 29 January, 2023] Gina: Wow, nice spot! Where is it? Got any other features you want to think about before you decide? Paris?! That is really great Jon! Never had a chance to visit it. Been only to Rome once.",
"timestamp": "2025-08-09T18:36:42.49Z",
"entities": [
{"name": "Gina", "type": "PERSON"},
{"name": "Jon", "type": "PERSON"},
{"name": "29 January, 2023", "type": "EVENT"},
{"name": "Paris", "type": "PLACE", "location": {"type":"Point","coordinates":[2.3522,48.8566]}},
{"name": "Rome", "type": "PLACE", "location": {"type":"Point","coordinates":[12.4964,41.9028]}}
]
}
]
}
Geographic Analysis:
- Rome: [12.4964, 41.9028] - Central Italy
- Paris: [2.3522, 48.8566] - Northern France
- Distance: Paris is approximately 1,100 km from Rome
- Conversation Context: Jon visited Paris, Gina has been to Rome
- Memory Connections: Both cities mentioned in dance studio planning conversation
The geospatial data in Graph Fetch follows the GeoJSON Point format:
{
"location": {
"type": "Point",
"coordinates": [longitude, latitude]
}
}
Available coordinates:
- Paris, France: [2.3522, 48.8566] (longitude, latitude)
- Rome, Italy: [12.4964, 41.9028] (longitude, latitude)
Conceptual places (without coordinates) include:
- downtown, homeless shelter, neighborhood, area, a great spot
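The ~1,100 km figure above can be sanity-checked from the stored coordinates with a standard haversine calculation; this helper is illustrative and not part of the project.

```typescript
// Great-circle distance between two GeoJSON-style [longitude, latitude] points.
function haversineKm([lon1, lat1]: [number, number], [lon2, lat2]: [number, number]): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const R = 6371; // mean Earth radius in km
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Paris and Rome as stored above (GeoJSON order: [longitude, latitude]).
console.log(Math.round(haversineKm([2.3522, 48.8566], [12.4964, 41.9028]))); // ≈ 1105 km
```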
User Message → MCP Server → AI Service → Entity Extraction
↓
DgraphService → Store Entities & Memories
↓
[Optional] Python Service → Community Detection → Community Nodes
↓
DQL Queries ← Graph Memory Search ← Vector Similarity
This architecture provides a complete pipeline from raw conversational data to sophisticated graph analytics, enabling AI agents to build, search, and analyze long-term semantic memory.
- Rasmussen, Preston. "Zep: A Temporal Knowledge Graph Architecture for Agent Memory." arXiv, 20 Jan. 2025, arxiv.org/abs/2501.13956.
- Packer, Charles, et al. "MemGPT: Towards LLMs as Operating Systems." arXiv, 12 Oct. 2023, arxiv.org/abs/2310.08560.
- Wigg, Danny, et al. "Temporal Agents with Knowledge Graphs." OpenAI Cookbook, 2025.
- Chhikara, Prateek, et al. "Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory." arXiv, 28 Apr. 2025, arxiv.org/abs/2504.19413.
- Jain, Manish. "Dgraph: Synchronously Replicated, Transactional and Distributed Graph Database." Version 0.8, Dgraph Labs, Inc., 1 Mar. 2021.