Mokav: Execution-driven differential testing with LLMs

Bibliographic Details
Published in: The Journal of Systems and Software, Vol. 230, p. 112571
Main Authors: Etemadi, Khashayar; Mohammadi, Bardia; Su, Zhendong; Monperrus, Martin
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.12.2025
ISSN: 0164-1212, 1873-1228
DOI: 10.1016/j.jss.2025.112571

Summary: It is essential to detect functional differences between programs in various software engineering tasks, such as automated program repair, mutation testing, and code refactoring. The problem of detecting functional differences between two programs can be reduced to searching for a difference exposing test (DET): a test input that results in different outputs on the subject programs. In this paper, we propose Mokav, a novel execution-driven tool that leverages LLMs to generate DETs. Mokav takes two versions of a program (P and Q) and an example test input. When successful, Mokav generates a valid DET, a test input that leads to provably different outputs on P and Q. Mokav iteratively prompts an LLM with a specialized prompt to generate new test inputs. At each iteration, Mokav provides execution-based feedback from previously generated tests until the LLM produces a DET. We evaluate Mokav on 1,535 pairs of Python programs collected from the Codeforces competition platform and 32 pairs of programs from the QuixBugs dataset. Our experiments show that Mokav outperforms the state of the art, Pynguin and Differential Prompting, by a large margin. Mokav can generate DETs for 81.7% (1,255/1,535) of the program pairs in our benchmark (versus 4.9% for Pynguin and 37.3% for Differential Prompting). We demonstrate that the iterative and execution-driven feedback components of the system contribute to its high effectiveness.

Highlights:
• Mokav is the first LLM-based tool for difference exposing test generation
• C4DET is a curated dataset of 1,535 pairs of programs with small semantic differences
• The iterative, execution-driven approach of Mokav outperforms the state of the art
• Example tests that indicate input structure significantly improve Mokav's effectiveness
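
The summary describes an iterative, execution-driven search: both program versions are run on each candidate input, and the observed outputs are fed back to the LLM until a candidate triggers divergent behavior. The following is a minimal Python sketch of such a loop for programs that read stdin and write stdout; the names run_program and search_det, the feedback format, and the query_llm callable are illustrative assumptions for this sketch, not Mokav's actual interface.

import subprocess

def run_program(path: str, test_input: str, timeout: int = 5) -> str:
    """Run a Python program on the given stdin input and return its stdout."""
    result = subprocess.run(
        ["python3", path],
        input=test_input,
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout

def search_det(p_path: str, q_path: str, example_input: str,
               query_llm, max_iterations: int = 10):
    """Search for a difference exposing test (DET): an input on which
    P and Q produce different outputs. On every iteration, the execution
    results of all previous candidates are fed back to the LLM."""
    out_p = run_program(p_path, example_input)
    out_q = run_program(q_path, example_input)
    if out_p != out_q:
        return example_input  # the example test already exposes a difference
    feedback = [(example_input, out_p, out_q)]
    for _ in range(max_iterations):
        # query_llm is a placeholder: it should prompt the model with both
        # programs and the (input, output_P, output_Q) history gathered so far.
        candidate = query_llm(p_path, q_path, feedback)
        out_p = run_program(p_path, candidate)
        out_q = run_program(q_path, candidate)
        if out_p != out_q:
            return candidate  # DET found: observably different outputs on P and Q
        feedback.append((candidate, out_p, out_q))
    return None  # no DET within the iteration budget

Comparing raw stdout strings is the simplest possible output-equivalence check and is chosen here only for brevity; a real tool might normalize whitespace or handle crashes and timeouts as distinct outcomes.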