
OPENIA: Correctness Assessment of Code Generated by Large Language Models Using Internal Representations

License: MIT Python 3.8 arXiv

Introduction

In this paper, we introduce OPENIA, a novel white-box (open-box) framework that leverages the internal representations of LLMs to assess the correctness of LLM-generated code. By systematically analyzing the intermediate states of representative open-source code LLMs, including DeepSeek-Coder, Code Llama, and Magicoder, across diverse code generation benchmarks, we find that these internal representations encode latent information that strongly correlates with the correctness of the generated code.

Our results show that OPENIA consistently outperforms baseline models, achieving higher accuracy, precision, recall, and F1-Scores with up to a 2X improvement in standalone code generation and a 3X enhancement in repository-specific scenarios. By unlocking the potential of in-process signals, OPENIA paves the way for more proactive and efficient quality assurance mechanisms in LLM-assisted code generation.
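The core idea can be sketched in a few lines: collect the hidden states an open-source code LLM produces while processing a generated solution, pool them into a fixed-size vector, and train a lightweight classifier to predict correctness. The snippet below is a minimal illustration, not the official OPENIA pipeline; the checkpoint name, last-layer mean pooling, and logistic-regression probe are assumptions, and the toy labels stand in for the pass/fail outcomes normally obtained by running the benchmark test suites.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

# Illustrative checkpoint; any of the open-source code LLMs named above could be used.
model_name = "deepseek-ai/deepseek-coder-1.3b-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def embed(code, layer=-1):
    """Mean-pool one layer's hidden states over all tokens of the given code."""
    inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.hidden_states is a tuple with one (1, seq_len, hidden_dim) tensor per layer.
    return outputs.hidden_states[layer].mean(dim=1).squeeze(0)

# Toy supervision: generated solutions paired with correctness labels (1 = passes its tests).
snippets = ["def add(a, b):\n    return a + b", "def add(a, b):\n    return a - b"]
labels = [1, 0]

features = torch.stack([embed(s) for s in snippets]).numpy()
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print(probe.predict(features))  # predicted correctness for each snippet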

The architecture

Project Overview

Quickstart

Download dataset

Download the full repository here

Prepare Environment

First, set up a Python environment. This code base has been tested with Python 3.8.

$ conda create -n openia python=3.8
$ conda activate openia
$ pip install -r requirements.txt

Citation

If you're using OPENIA in your research or applications, please consider citing our paper:

@article{bui2025correctness,
  title={Correctness Assessment of Code Generated by Large Language Models Using Internal Representations},
  author={Bui, Tuan-Dung and Vu, Thanh Trong and Nguyen, Thu-Trang and Nguyen, Son and Vo, Hieu Dinh},
  journal={arXiv preprint arXiv:2501.12934},
  year={2025}
}

Contact us

If you have any questions, comments or suggestions, please do not hesitate to contact us.

License

MIT License
