This repository contains the research paper and the experimental prototype for the SYNAPSE (Synthetic-data Native Adaptive Process for Software Engineering) framework.
SYNAPSE is a novel framework for software development that employs an autonomous AI agent to orchestrate the entire engineering lifecycle. Unlike traditional methods that rely on static metrics, the SYNAPSE agent dynamically adapts its own success criteria and decision-making models at each iteration.
The core innovation is adaptive governance: the agent moves beyond simple code generation to strategically manage trade-offs between performance, security, and maintainability, guided by high-level project goals.
This repository includes:

- `paper.md`: The full research paper.
- `synapse_experiment/`: A Python-based simulation to validate the framework's core hypotheses.
The SYNAPSE agent operates in a continuous feedback loop:
```mermaid
graph TD;
    A["Human (Strategist)<br>Defines High-Level Goal"] --> B["SYNAPSE Agent<br>(LLM/RL-based)"];
    B -- "Executes Loop" --> C["1. Code & Test Generation"];
    C -- "Evaluates against..." --> D["2. Dynamic Metric & Decision Selection<br>(MCDM/RL, Risk Assessment)"];
    D -- "Proposes Refinement" --> B;
    C -- "Commits Solution" --> E["Version Control<br>(Git, DB)"];
    B -- "Reports & Seeks Clarification" --> A;
    %% All nodes white background, black border for GitHub dark/light theme compatibility
    style A fill:#fff,stroke:#222,stroke-width:2px
    style B fill:#fff,stroke:#222,stroke-width:2px
    style C fill:#fff,stroke:#222,stroke-width:2px
    style D fill:#fff,stroke:#222,stroke-width:2px
    style E fill:#fff,stroke:#222,stroke-width:2px
```
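The loop above can be sketched in code. All names and the numeric "quality" stand-in below are invented for illustration; the prototype's real interfaces live in `synapse_experiment/`:

```python
# Minimal, self-contained sketch of the SYNAPSE feedback loop.
# Candidate generation, scoring, and refinement are illustrative
# stand-ins, not the prototype's actual API.
import random

def generate_candidate(rng, quality):
    # Stand-in for "1. Code & Test Generation": sample a solution score.
    return quality + rng.random() * 0.2

def evaluate(candidate, weights):
    # Stand-in for "2. Dynamic Metric & Decision Selection":
    # a weighted score clamped to [0, 1].
    return min(candidate * sum(weights.values()) / len(weights), 1.0)

def synapse_loop(target=0.9, max_iterations=20, seed=42):
    rng = random.Random(seed)
    quality = 0.5
    weights = {"performance": 1.0, "security": 1.0, "maintainability": 1.0}
    for step in range(max_iterations):
        candidate = generate_candidate(rng, quality)
        score = evaluate(candidate, weights)
        if score >= target:
            return step, score        # "commit" the solution
        quality += 0.05               # "propose refinement"
    return None, score                # report back to the human strategist
```

In the real agent, the refinement step would update goals and metric weights rather than a scalar quality value, but the control flow is the same.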
We validate SYNAPSE using a synthetic experiment centered on a resource-constrained pathfinding problem for a simulated drone. The goal is to find an optimal path that balances delivery time, energy consumption, safety, and payload integrity.
The experiment compares two agents:
- Static Agent: A control group agent that uses a fixed set of metrics.
- SYNAPSE Agent: An experimental agent that dynamically adapts its metrics and strategies based on the scenario.
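The static/adaptive contrast can be illustrated with a toy multi-objective path cost. The weight values and metric names below mirror the paper's four criteria but are invented for this sketch; the prototype's actual scoring may differ:

```python
# Illustrative multi-objective path cost for the simulated drone scenario.
def path_cost(metrics, weights):
    """Lower is better; `metrics` holds normalized values in [0, 1]."""
    return sum(weights[k] * metrics[k] for k in weights)

# A static agent scores every scenario with fixed weights...
static_weights = {"delivery_time": 0.25, "energy": 0.25,
                  "safety": 0.25, "payload_risk": 0.25}

# ...while an adaptive agent re-weights per scenario, e.g. in high winds
# it trades delivery time for safety (hypothetical adaptation rule):
def adapt_weights(scenario):
    w = dict(static_weights)
    if scenario.get("high_wind"):
        w["safety"], w["delivery_time"] = 0.40, 0.10
    return w

cost = path_cost({"delivery_time": 0.3, "energy": 0.5,
                  "safety": 0.2, "payload_risk": 0.1},
                 adapt_weights({"high_wind": True}))
```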
Prerequisites:

- Python 3.11+

Setup:

1. Clone the repository:

   ```bash
   git clone https://github.com/chernistry/synapse.git
   cd synapse/synapse_experiment
   ```

2. Create and activate a virtual environment:

   ```bash
   python3 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```
Execution:

1. Run the main experiment script:

   ```bash
   python main.py
   ```

2. View the results: The script generates a timestamped `.csv` file (e.g., `results/experiment_results_YYYYMMDD_HHMMSS.csv`) in the `results` directory.

3. Analyze the results with Jupyter: For a deeper dive into the results, including visualizations and performance comparisons, open and run the analysis notebook:

   ```bash
   jupyter notebook synapse_experiment/analysis_notebook.ipynb
   ```
This major update reflects the evolution of the experiment from a simple proof-of-concept to a more rigorous research prototype. Key changes include:
- Expanded Related Work: The literature review was updated to include recent 2023–2025 studies.
- Enhanced Agent Capabilities: The agent's decision-making was upgraded with more advanced metric selection models (PROMETHEE II & ELECTRE Tri-C) and a PPO-CRL policy layer.
- LLM Integration for Adaptation: Implemented LLM-powered metric adaptation using a local Ollama instance with the phi3.5 model, enabling more nuanced, context-aware decision-making than rule-based approaches.
- Governance & Ethics: A new section was added to discuss alignment with emerging standards like the EU AI Act and ISO/IEC 5338.
- Richer Benchmarking: The experiment was extended to 10,000 stochastic scenarios, with results validated using Welch's t-test and Cliff's δ for statistical significance.
- Improved Results: The SYNAPSE agent demonstrated a 28% increase in PPS and a 35% reduction in strategic risk.
- Reproducibility: Added documentation for reproducibility assets (Seed-Locker 1.2, Zenodo DOI).
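The changelog mentions Cliff's δ as the effect-size measure reported alongside Welch's t-test. For reference, it can be computed in a few lines of pure Python; this is a sketch of the standard definition, not the repository's own implementation:

```python
# Cliff's delta: a non-parametric effect size in [-1, 1], defined as
# P(x > y) - P(x < y) over all pairs drawn from the two samples.
def cliffs_delta(xs, ys):
    gt = sum(1 for x in xs for y in ys if x > y)  # pairs where x wins
    lt = sum(1 for x in xs for y in ys if x < y)  # pairs where y wins
    return (gt - lt) / (len(xs) * len(ys))

delta = cliffs_delta([6, 7, 8, 9], [1, 2, 3, 9])  # 0.5625
```

Values of |δ| above roughly 0.474 are conventionally read as a large effect, which is why it pairs well with a significance test like Welch's t over 10,000 scenarios.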
This project is licensed under the MIT License; see the LICENSE file for details.