A Neural Model for Regular Grammar Induction
| Main Authors | |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 23.09.2022 |
| DOI | 10.48550/arxiv.2209.11628 |
| Summary: | Grammatical inference is a classical problem in computational learning theory and a topic of wider influence in natural language processing. We treat grammars as a model of computation and propose a novel neural approach to induction of regular grammars from positive and negative examples. Our model is fully explainable, its intermediate results are directly interpretable as partial parses, and it can be used to learn arbitrary regular grammars when provided with sufficient data. We find that our method consistently attains high recall and precision scores across a range of tests of varying complexity. |
|---|---|
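The summary describes the standard grammatical-inference setting: a learner is given positive and negative example strings and must induce a regular grammar (equivalently, a finite automaton) consistent with both. The sketch below illustrates only that problem setup with a hand-written DFA for the language (ab)*; it is a hypothetical example, not the paper's neural model.

```python
def dfa_accepts(transitions, start, accepting, string):
    """Run a DFA given as a dict {(state, symbol): next_state}; reject on a missing edge."""
    state = start
    for symbol in string:
        if (state, symbol) not in transitions:
            return False
        state = transitions[(state, symbol)]
    return state in accepting

# Hypothetical target automaton for the regular language (ab)*:
# state 0 is both the start state and the sole accepting state.
transitions = {(0, "a"): 1, (1, "b"): 0}

positive = ["", "ab", "abab"]   # strings the induced grammar must accept
negative = ["a", "ba", "abb"]   # strings the induced grammar must reject

# An induced grammar is consistent with the sample iff it accepts every
# positive example and rejects every negative one.
consistent = (all(dfa_accepts(transitions, 0, {0}, s) for s in positive)
              and not any(dfa_accepts(transitions, 0, {0}, s) for s in negative))
print(consistent)  # True: this automaton is consistent with the sample
```

A neural inducer such as the one summarized above replaces the hand-written automaton with a learned, differentiable model, but the consistency criterion it is trained toward is the same.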