This package offers a framework for researchers to log and classify interactions between students and LLM-based tutors in educational settings. It supports structured, objective evaluation through classification, simulation, and visualization utilities, and is designed for flexible use across tasks of any scale. The framework supports both researchers with pre-collected datasets and those operating in data-sparse contexts, and it is designed as a modular tool that can be integrated at any stage of the evaluation process.
The package is designed to:
- Synthesize a labeled classification framework using user-defined categories
 - Simulate multi-turn student–tutor dialogues via role-based prompting and structured seed messages
 - Wrap direct student–tutor interactions with locally hosted LLMs through a terminal-based interface
 - Fine-tune and apply classification models to label conversational turns
 - Visualize dialogue patterns with summary tables, frequency plots, temporal trends, and sequential dependencies
 
Overview of the underlying framework architecture:
Note that the framework and dialogue generation are integrated with LM Studio, while the wrapper and classifiers build on Hugging Face.
The package currently requires Python 3.12 due to version constraints in core dependencies, particularly `outlines`.
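A quick, generic way to confirm that the active interpreter satisfies this constraint before installing (a plain version check, not part of the package itself):

```python
import sys

def python_version_ok(version_info=None):
    """Return True if the interpreter is Python 3.12, as required by the pinned dependencies."""
    info = version_info if version_info is not None else sys.version_info
    return (info[0], info[1]) == (3, 12)

if not python_version_ok():
    print(f"Warning: Python 3.12 required, found {sys.version.split()[0]}")
```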
EduChatEval can be installed via pip from PyPI:
pip install educhateval
Or from GitHub:
pip install git+https://github.com/laurawpaaby/EduChatEval.git
Below is the simplest example of how to use the package. For more detailed and explained application examples, see the user guides in the documentation or explore the tutorial notebooks.
Import each module:
# import modules
from educhateval import (
    FrameworkGenerator,
    DialogueSimulator,
    PredictLabels,
    Visualizer,
)

1. Generate Label Framework
An annotated dataset is created using an LLM downloaded and served via LM Studio, together with a prompt template defining the desired labels (1.1).
The data is then quality-assessed and filtered with a few-shot classifier (1.2).
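The `train_data` file used for filtering in step 1.2 is a small set of hand-labeled examples. A hypothetical sketch of such a file, using the `text` and `category` column names that the pipeline expects later (the label names here are illustrative, not prescribed by the package):

```python
import csv
from pathlib import Path

# A tiny hand-labeled seed set; the category names are illustrative only.
rows = [
    {"text": "Can you explain photosynthesis step by step?", "category": "question"},
    {"text": "Thanks, that makes sense now!", "category": "feedback"},
    {"text": "Solve 2x + 3 = 11 for x.", "category": "task"},
    {"text": "I feel stuck on this assignment.", "category": "affect"},
]

# Write the rows to a CSV that can be passed as train_data.
path = Path("manual_labeled.csv")
with path.open("w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "category"])
    writer.writeheader()
    writer.writerows(rows)
```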
# 1.1
# initiate generator 
generator = FrameworkGenerator(
    model_name="llama-3.2-3b-instruct", # the model already downloaded and loaded via LM Studio
    api_url="http://localhost:1234/v1/completions" # address of the locally hosted LM Studio API endpoint that handles generation requests; consists of server host, port, and path
)
# apply generator to synthesize data
df_4 = generator.generate_framework(
    prompt_path="../templates/prompt_default_4types.yaml", # path to prompt template, can also be a direct dictionary
    num_samples=200                                      # number of samples per category to simulate
)
# 1.2 
# quality check and filter the data with classifier trained on a few true examples
filtered_df = generator.filter_with_classifier(
    train_data="../templates/manual_labeled.csv", # manually labeled training data
    synth_data=df_4                               # the data to quality check
)

2. Synthesize Interaction
Dialogues between two agents, a student and a tutor, are simulated to mimic student–chatbot interactions in real deployments.
A seed message and system prompts are defined to guide agent behavior.
# initiate simulator
simulator = DialogueSimulator(
    backend="mlx",                                       # choose either HF or MLX driven setup
    model_id="mlx-community/Qwen2.5-7B-Instruct-1M-4bit" # load model
)
# define seed_message and prompt scheme + mode
custom_prompts = {
    "conversation_types": { 
        "general_task_solving": { # the mode
            "student": "You are a student asking for help with your Biology homework.",
            "tutor": "You are a helpful tutor assisting a student. Provide short precise answers."
        },
    }
}
prompt = custom_prompts["conversation_types"]["general_task_solving"]
seed_message = "I'm trying to understand some basic concepts of human biology, can you help?" 
# Simulate the student-tutor dialogue
df_sim = simulator.simulate_dialogue(
    mode="general_task_solving",
    turns=10,                       # number of turns 
    seed_message_input=seed_message,
    system_prompts=prompt
)
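The returned `df_sim` is a per-turn log of the exchange. Its exact schema is set by the package, but the downstream classification step expects per-turn `student_msg` and `tutor_msg` columns; a toy stand-in to illustrate the expected shape:

```python
# Illustrative stand-in for df_sim: one row per turn, each holding the
# student and tutor messages. The real output may contain more fields.
toy_sim = [
    {"turn": 1,
     "student_msg": "I'm trying to understand some basic concepts of human biology, can you help?",
     "tutor_msg": "Of course. Which topic should we start with?"},
    {"turn": 2,
     "student_msg": "How does the circulatory system work?",
     "tutor_msg": "The heart pumps blood through arteries, capillaries, and veins."},
]

# Each agent contributes one message per turn.
n_turns = len(toy_sim)
```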
3. Classify and Predict
The annotated data generated in Step 1 is used to train a classification model, which is then deployed to classify the messages of the dialogues from Step 2.
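Before fine-tuning, it can be worth sanity-checking the label balance of the training data. A minimal stdlib sketch over toy rows, assuming the `text`/`category` columns used below:

```python
from collections import Counter

# Toy rows standing in for the filtered training data from step 1.
train_rows = [
    {"text": "What is osmosis?", "category": "question"},
    {"text": "Great explanation, thank you!", "category": "feedback"},
    {"text": "Why do cells divide?", "category": "question"},
]

# Count each label and compute its share of the training set.
counts = Counter(row["category"] for row in train_rows)
proportions = {label: n / len(train_rows) for label, n in counts.items()}
```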
# initiate module to classify and predict labels
predictor = PredictLabels(model_name="distilbert/distilroberta-base") # model to be trained and used for predictions
annotaded_df = predictor.run_pipeline(
    train_data=filtered_df,         # the annotated data for training above
    new_data=df_sim,                # the generated dialogues 
    text_column="text",
    label_column="category",
    columns_to_classify=["student_msg", "tutor_msg"],
    split_ratio=0.2
)

4. Visualize
The predicted dialogue classes of Step 3 are summarised and visualized for interpretation.
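The summary table essentially reports per-agent category frequencies (counts and percentages). As a cross-check, the same numbers can be computed by hand; this sketch uses toy predictions and is not the package's implementation:

```python
from collections import Counter

# Toy predicted labels standing in for the two prediction columns.
student_preds = ["question", "question", "feedback", "task"]
tutor_preds = ["answer", "answer", "answer", "feedback"]

def frequency_table(labels):
    """Return {label: (count, percent)} for a list of predicted labels."""
    counts = Counter(labels)
    total = len(labels)
    return {lab: (n, round(100 * n / total, 1)) for lab, n in counts.items()}

student_summary = frequency_table(student_preds)
tutor_summary = frequency_table(tutor_preds)
```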
# initiate the module for descriptive visualizations 
viz = Visualizer()
# table of predicted categories (n, %) 
summary = viz.create_summary_table(
    df=annotaded_df,
    student_col="predicted_labels_student_msg",
    tutor_col="predicted_labels_tutor_msg"
)
# bar chart matching the table
viz.plot_category_bars(
    df=annotaded_df,
    student_col="predicted_labels_student_msg",
    tutor_col="predicted_labels_tutor_msg"
)
# line plot of predicted categories over turns
viz.plot_turn_trends(
    df=annotaded_df,
    student_col="predicted_labels_student_msg",
    tutor_col="predicted_labels_tutor_msg"
)
# bar chart of sequential category dependencies between agents
# (the only plot requiring both student and tutor data)
viz.plot_history_interaction(
    df=annotaded_df,
    student_col="predicted_labels_student_msg",
    tutor_col="predicted_labels_tutor_msg",
    focus_agent="student"   # the agent to visualize category dependencies for
)
)

| Documentation | Description |
|---|---|
| User Guide | Instructions on how to run the entire pipeline provided in the package |
| Prompt Templates | Overview of system prompts, role behaviors, and instructional strategies |
| API References | Full reference for the educhateval API: classes, methods, and usage |
| About | Learn more about the thesis project, context, and contributors |
The package was developed by Laura Wulff Paaby.
Feel free to reach out via:
This project builds on existing tools and ideas from the open-source community. While specific references are provided within the relevant scripts throughout the repository, the key sources of inspiration are also acknowledged here to highlight the contributions that have shaped the development of this package.
- Constraint-Based Data Generation (Outlines package): Willard, Brandon T. & Louf, Rémi (2023). Efficient Guided Generation for LLMs.
- Chat Interface and Wrapper (Textual): McGugan, W. (2024, Sep). Anatomy of a Textual User Interface.
- Package Design Inspiration: Thea Rolskov Sloth & Astrid Sletten Rybner
- Code Debugging and Conceptual Feedback: Mina Almasi and Ross Deans Kristensen-McLachlan
├── data/
│   ├── generated_dialogue_data/           # Generated dialogue samples
│   ├── generated_tuning_data/             # Generated framework data for fine-tuning
│   ├── logged_dialogue_data/              # Logged real dialogue data
│   ├── Final_output/                      # Final classified data
│   └── templates/                         # Prompt and seed templates
│
├── docs/                                  # Markdown files published with MkDocs
│
├── src/educhateval/                       # Main source code for all components
│   ├── chat_ui.py                         # CLI interface for wrapping interactions
│   ├── classification_utils.py            # Functions to run the deployed classification models
│   ├── core.py                            # Main script wrapping all functions as callable classes
│   ├── descriptive_results/               # Scripts and tools for result analysis
│   ├── dialogue_classification/           # Tools and models for dialogue classification
│   ├── dialogue_generation/
│   │   ├── agents/                        # Agent definitions and role behaviors
│   │   ├── models/                        # Model classes and loading mechanisms
│   │   ├── txt_llm_inputs/                # Prompt loading functions
│   │   ├── chat_model_interface.py        # Interface layer for model communication
│   │   ├── chat.py                        # Script orchestrating chat logic
│   │   └── simulate_dialogue.py           # Script simulating full dialogues between agents
│   └── framework_generation/
│       ├── outline_prompts/               # Prompt templates for outlines
│       ├── outline_synth_LMSRIPT.py       # Synthetic outline generation pipeline
│       └── train_tinylabel_classifier.py  # Trains a small classifier on manually labeled data
│
├── tutorials/                             # Tutorials on using the package in different settings
│
├── mkdocs.yml                             # MkDocs configuration file
├── LICENSE                                # MIT License
├── .python-version                        # Python version file (Poetry)
├── poetry.lock                            # Locked dependency versions (Poetry)
├── pyproject.toml                         # Main project config and dependencies
│
├── models/                                # (ignored) Folder for trained models
├── results/                               # (ignored) Folder for training checkpoints
└── site/                                  # (ignored) MkDocs files for documentation

