Consistent Graph Model Generation with Large Language Models

Bibliographic Details
Published in: Proceedings of the IEEE/ACM International Conference on Software Engineering Companion (Online), pp. 218–219
Main Author: Chen, Boqi
Format: Conference Proceeding
Language: English
Published: IEEE, 27.04.2025
ISSN: 2574-1934
DOI: 10.1109/ICSE-Companion66252.2025.00067


Summary: Graph model generation from natural language requirements is an essential task in software engineering, for which large language models (LLMs) have become increasingly popular. A key challenge is ensuring that the generated graph models are consistent with domain-specific well-formedness constraints. LLM-generated graphs are often only partially correct because they violate these constraints, which limits their practical use. To address this, we propose a novel abstraction-concretization framework, motivated by self-consistency, for generating consistent models. Our approach first abstracts candidate models into a probabilistic partial model and then concretizes this abstraction into a consistent graph model. Preliminary evaluations on taxonomy generation demonstrate that our method significantly enhances both the consistency and quality of the generated graph models.
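The summary describes a two-step pipeline: abstract several candidate models into a probabilistic partial model, then concretize it into one consistent graph. The sketch below illustrates one way such a pipeline could look for taxonomy generation; it is not the paper's implementation, and the edge-voting scheme, the single-parent rule, and the acyclicity check are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): merge candidate taxonomy graphs
# (e.g., several LLM samples) into per-edge probabilities, then greedily
# concretize a graph that satisfies an assumed well-formedness constraint:
# each node has at most one parent and the result is acyclic.
from collections import Counter


def abstract_candidates(candidates):
    """Abstraction step: edge -> fraction of candidate models containing it."""
    counts = Counter(edge for cand in candidates for edge in set(cand))
    n = len(candidates)
    return {edge: c / n for edge, c in counts.items()}


def creates_cycle(edges, child, parent):
    """Would adding the edge child->parent close a cycle of parent links?"""
    if child == parent:
        return True
    up = dict(edges)  # maps child -> its chosen parent
    node = parent
    while node in up:
        node = up[node]
        if node == child:
            return True
    return False


def concretize(partial_model):
    """Concretization step: keep the most probable edges that stay consistent."""
    parent_of, chosen = {}, []
    for (child, parent), _p in sorted(partial_model.items(),
                                      key=lambda kv: -kv[1]):
        if child in parent_of:                      # single-parent constraint
            continue
        if creates_cycle(chosen, child, parent):    # acyclicity constraint
            continue
        parent_of[child] = parent
        chosen.append((child, parent))
    return chosen


candidates = [
    [("dog", "mammal"), ("mammal", "animal")],
    [("dog", "mammal"), ("mammal", "animal"), ("animal", "dog")],  # cyclic sample
    [("dog", "animal"), ("mammal", "animal")],
]
pm = abstract_candidates(candidates)
print(concretize(pm))  # a consistent tree-shaped taxonomy
```

Here the low-probability cyclic edge ("animal", "dog") and the redundant parent ("dog", "animal") are filtered out during concretization, leaving a well-formed taxonomy built from the edges most candidates agree on.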