Learning Logic Programs Using Neural Networks by Exploiting Symbolic Invariance
| Published in | Inductive Logic Programming Vol. 13191; pp. 203 - 218 |
|---|---|
| Main Authors | , |
| Format | Book Chapter |
| Language | English |
| Published | Switzerland: Springer International Publishing AG, 01.01.2022 |
| Series | Lecture Notes in Computer Science |
| Subjects | |
| ISBN | 3030974537 9783030974534 |
| ISSN | 0302-9743 1611-3349 |
| DOI | 10.1007/978-3-030-97454-1_15 |
| Summary: | Learning from Interpretation Transition (LFIT) is an unsupervised learning algorithm that learns the dynamics of a system just by observing its state transitions. LFIT algorithms have mainly been implemented with symbolic methods, but these are not robust to noisy or missing data. Recently, research combining logical operations with neural networks has received a lot of attention. Most of this work takes an extraction-based approach, in which a single neural network model is trained to solve the problem and a logic model is then extracted from it. However, most of these approaches suffer from combinatorial explosion when scaling up to larger problems. In particular, much of the invariance that holds in the symbolic world goes unused on the neural network side. In this work, we present a model that exploits symbolic invariance in our problem. We show that our model scales up to larger tasks than previous work. |
|---|---|
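To illustrate what "learning the dynamics just by observing state transitions" means in practice, here is a minimal sketch. It is not the chapter's method: the toy Boolean network, the variable names `p`/`q`/`r`, and the naive shortest-consistent-body search are assumptions made only to show the shape of the input data and of the learned rules.

```python
# Illustrative sketch only: a toy Boolean network and a naive rule search,
# not the chapter's neural model and not the LFIT algorithm itself.
from itertools import combinations, product

VARS = ["p", "q", "r"]

def step(state):
    """Toy deterministic dynamics: p(t+1)=q(t), q(t+1)=p(t)&r(t), r(t+1)=not p(t)."""
    return {"p": state["q"], "q": state["p"] and state["r"], "r": not state["p"]}

# Observe every state transition of the toy system (the kind of data LFIT consumes).
transitions = [(dict(zip(VARS, bits)), step(dict(zip(VARS, bits))))
               for bits in product([False, True], repeat=len(VARS))]

def holds(body, state):
    """True when every literal in the rule body is satisfied by the state."""
    return all(state[v] == val for v, val in body)

def consistent(head, body):
    """Keep `head(t+1) :- body(t)` only if it never fires in a state whose
    successor makes the head false."""
    return all(nxt[head] for cur, nxt in transitions if holds(body, cur))

# Naive search: for each head variable, keep the shortest bodies that are
# consistent and explain at least one transition where the head becomes true.
for head in VARS:
    rules = []
    for size in range(len(VARS) + 1):
        for vs in combinations(VARS, size):
            for vals in product([False, True], repeat=size):
                body = tuple(zip(vs, vals))
                fires = any(holds(body, cur) and nxt[head] for cur, nxt in transitions)
                if fires and consistent(head, body):
                    rules.append(body)
        if rules:
            break  # prefer the most general (shortest) bodies
    for body in rules:
        lits = " & ".join(("" if val else "not ") + v for v, val in body) or "true"
        print(f"{head}(t+1) :- {lits}.")
```

Running the sketch prints one propositional rule per variable (e.g. `q(t+1) :- p & r.`), which is roughly the kind of logic program that LFIT-style methods aim to recover from observed transitions.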