
AI-Nexus Agent


This repo provides a ReAct-style agent with a tool for saving memories, a simple way to let an agent persist important information for later reuse. All memories are scoped to a configurable user_id, which lets the bot learn a user's preferences across conversational threads.
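For intuition, here is a minimal sketch of how user-scoped memories play out against LangGraph's store interface. The ("memories", user_id) namespace and the values are illustrative, not the repo's exact layout:

from langgraph.store.memory import InMemoryStore

store = InMemoryStore()

# Memories live under a namespace derived from user_id, so different
# users never see each other's entries.
store.put(
    ("memories", "user123"),
    "ceb9b38e-7082-4eb0-b51f-05f8accbca5a",
    {"content": "My name is John Doe", "context": "Asked at the DMV"},
)

# Any conversational thread running as user123 can find them again.
for item in store.search(("memories", "user123")):
    print(item.key, item.value["content"])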

Memory Diagram

Using Memory in Your Agent

To add semantic memory capability to your existing agent:

  1. Create a custom agent by extending the AgentGraph base class:
from typing import Optional

from langchain.chat_models import init_chat_model
from langgraph.graph import StateGraph
from langgraph.store.base import BaseStore
from langgraph.types import Checkpointer

from common.graph import AgentGraph
from common.configuration import AgentConfiguration
from common.components.memory import MemoryConfiguration, SemanticMemory


class MyCustomAgent(AgentGraph):
    def __init__(
        self,
        *,
        agent_config: Optional[AgentConfiguration] = None,
        checkpointer: Optional[Checkpointer] = None,
        store: Optional[BaseStore] = None,
    ):
        # Create a config with memory settings enabled
        agent_config = agent_config or AgentConfiguration()
        agent_config.memory.use_memory = True  # Enable memory
        agent_config.memory.user_id = "user123"  # Namespace for memories

        # Alternatively, build the whole config inline (note: with the
        # assignment above in place, this branch would never run, so it
        # is shown commented out):
        # agent_config = agent_config or AgentConfiguration(
        #     memory=MemoryConfiguration(use_memory=True, user_id="user123"),
        #     system_prompt=OTHER_SYSTEM_PROMPT,
        # )

        super().__init__(
            name="My Custom Agent",
            agent_config=agent_config,
            checkpointer=checkpointer,
            store=store,
        )

    def create_builder(self) -> StateGraph:
        # Create the graph builder (State is your agent's state schema)
        builder = StateGraph(State)

        # Initialize the LLM
        llm = init_chat_model(self._agent_config.model)

        # Collect memory tools; self._memory is initialized by the
        # parent class when use_memory is enabled
        all_tools = []
        if self._memory:
            all_tools += self._memory.get_tools()

        # Add your own custom tools here
        # all_tools += [my_custom_tool1, my_custom_tool2]

        # Bind tools to the LLM
        if all_tools:
            llm = llm.bind_tools(all_tools)

        # Create the graph and add nodes (see the wiring sketch below)
        # ...

        return builder
  2. Or simply enable memory in a configuration for an existing agent:
from agent_template.configuration import Configuration

# Create configuration with memory settings
config = Configuration()
config.memory.use_memory = True  # Enable memory
config.memory.load_static_memories = True  # Load static memories if available
config.memory.user_id = "user123"  # Set namespace for memories

# Initialize your agent with this configuration
# (AgentTemplateGraph is the template agent's graph class)
agent = AgentTemplateGraph(agent_config=config)

That's it! Your agent can now create, search, and manage semantic memories with the built-in tools.
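If you went with option 1, the sketch below shows one way to finish wiring create_builder, using LangGraph's prebuilt ToolNode and tools_condition. The node names and the call_model body are illustrative, not the repo's exact implementation:

from langgraph.graph import START
from langgraph.prebuilt import ToolNode, tools_condition

def call_model(state: State):
    # Invoke the tool-bound LLM on the running message history
    return {"messages": [llm.invoke(state["messages"])]}

builder.add_node("agent", call_model)
builder.add_node("tools", ToolNode(all_tools))
builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", tools_condition)  # to "tools" or END
builder.add_edge("tools", "agent")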

Running Locally

Update .env

Create a .env file.

cp .env.example .env

Update the following required environment variables in .env:

  • GOOGLE_API_KEY

The remaining variables may be omitted unless they're needed for local testing.
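For reference, a minimal .env could look like this (placeholder value, not a real key):

# Required for local runs
GOOGLE_API_KEY=<your-google-api-key>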

Install uv

Install uv. It manages virtual environments and Python interpreters automatically.

Installed commands can be run through uv as uv run -- <CMD>.

For example LangGraph can be run via:

uv run -- langgraph dev

Run

Run the agent locally via:

make run

This automatically invokes the sync and deps targets first.

Clean

make clean

Lint + Spell Check

make check

Linting and Formatting

make lint
make fmt

Spell Check

make spell_check
make spell_fix

Local Demo

make demo-ai

# or

make demo-human

Local GitHub Integration

Requires: smee for forwarding GitHub payloads to your local server.

npm install --global smee-client
  1. Install a GitHub App on a repo or organization and subscribe to events as needed.
  2. Run make serve to start the langgraph api server and interact with it.
  3. Run make smee or make smee-[CUSTOM_TAG] to open a smee endpoint to your local server.
  4. Use the smee endpoint as the webhook URL in your GitHub App.
  5. Create issues, PRs, comments, etc.

Example Agent Usage

make run should open LangGraph Studio with the default model, ready to interact with.

Navigate to the agent_template graph and have a conversation with it! Try sending messages with your name and other details the bot should remember.

Try saving memories via the upsert_memory tool:

<action>  # "create memory" or "update memory"
content: The main content of the memory. For example:
         "User expressed interest in learning about French."
context: Additional context for the memory. For example:
         "This was mentioned while discussing career options in Europe."
memory_id: ONLY PROVIDE IF UPDATING AN EXISTING MEMORY.
           The memory to overwrite.

Example:

create memory
content: My name is John Doe
context: I was asked my name at the local DMV

> ... The memory ID is `ceb9b38e-7082-4eb0-b51f-05f8accbca5a`.  Is there anything else I can assist you with today?

update memory
content: My name is John Doe
context: This was mentioned because someone asked if I was Jill Dove's brother.
memory_id: ceb9b38e-7082-4eb0-b51f-05f8accbca5a
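For intuition, a tool with this shape could be declared roughly as follows, using LangChain's @tool decorator. The persistence step is elided, and this is not the repo's exact implementation:

import uuid
from typing import Optional

from langchain_core.tools import tool

@tool
def upsert_memory(content: str, context: str, memory_id: Optional[str] = None) -> str:
    """Create a new memory, or overwrite an existing one when memory_id is given."""
    mem_id = memory_id or str(uuid.uuid4())
    # ... persist {"content": content, "context": context} under mem_id ...
    return f"Stored memory {mem_id}"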

Assuming the bot saved some memories, create a new thread using the + icon. Then chat with the bot again. If you've completed your setup correctly, the bot should now have access to the memories you've saved!

You can review the saved memories by clicking the "memory" button.

Memories Explorer

Adding Agents

To experiment:

  1. Copy the agent_template and name it accordingly.
  2. Update the package paths.
  3. Add the new agent package in pyproject.toml.
  4. Run make run and navigate to the new agent.

Note: You can change the default_graph key in langgraph.json locally to set the default graph to open.

How it works

This chat bot reads from your memory graph's Store to list extracted memories. When the model calls a memory tool, LangGraph routes to the store_memory node, which saves the information to the store.
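A rough sketch of that node, assuming LangGraph's store injection (the namespace and state access are illustrative):

import uuid

from langchain_core.messages import ToolMessage
from langgraph.store.base import BaseStore

def store_memory(state: State, *, store: BaseStore):
    # Persist the arguments of each upsert_memory tool call and
    # report the stored IDs back to the model.
    results = []
    for call in state["messages"][-1].tool_calls:
        args = call["args"]
        mem_id = args.get("memory_id") or str(uuid.uuid4())
        store.put(
            ("memories", "user123"),
            mem_id,
            {"content": args["content"], "context": args["context"]},
        )
        results.append(ToolMessage(f"Stored memory {mem_id}", tool_call_id=call["id"]))
    return {"messages": results}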

How to evaluate

Memory management can be challenging to get right, especially if you add additional tools for the bot to choose between. To tune the frequency and quality of memories your bot is saving, we recommend starting from an evaluation set, adding to it over time as you find and address common errors in your service.

We have provided a few example evaluation cases in the test file. As you can see, the metrics themselves don't have to be terribly complicated, especially not at the outset.

We use LangSmith's @unit decorator to sync all the evaluations to LangSmith so you can better optimize your system and identify the root cause of any issues that may arise.
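For example, a case in that style might look like this. run_agent_and_collect_memories is a hypothetical helper that runs one conversation turn and returns the memories written to the store:

from langsmith import unit

@unit
def test_remembers_user_name():
    # Hypothetical helper, not part of this repo's public API
    memories = run_agent_and_collect_memories("Hi, my name is John Doe.")
    assert any("John Doe" in m["content"] for m in memories)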

How to customize

  1. Customize memory content: we've defined a simple memory structure (content: str, context: str) for each memory, but you could structure memories in other ways.
  2. Provide additional tools: the bot will be more useful if you connect it to other functions.
  3. Select a different model: we default to google_genai:gemini-1.5-flash. You can select a compatible chat model using provider/model-name via configuration, e.g. openai/gpt-4 (see the sketch after this list).
  4. Customize the prompts: we provide a default prompt in the prompts.py file. You can easily update it via configuration.
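For instance, items 3 and 4 might combine like this, assuming Configuration exposes the model and system_prompt fields shown in the earlier examples (MY_SYSTEM_PROMPT is a placeholder for your own prompt string):

from agent_template.configuration import Configuration

config = Configuration(
    model="openai/gpt-4",  # provider/model-name
    system_prompt=MY_SYSTEM_PROMPT,
)
agent = AgentTemplateGraph(agent_config=config)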
