Learning Logic Programs Using Neural Networks by Exploiting Symbolic Invariance

Bibliographic Details
Published in: Inductive Logic Programming, Vol. 13191, pp. 203-218
Main Authors: Phua, Yin Jun; Inoue, Katsumi
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 01.01.2022
Series: Lecture Notes in Computer Science
ISBN: 3030974537; 9783030974534
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-97454-1_15

More Information
Summary: Learning from Interpretation Transition (LFIT) is an unsupervised learning algorithm that learns the dynamics of a system just by observing state transitions. LFIT has mainly been implemented with symbolic methods, but these are not robust to noisy or missing data. Recently, research combining logical operations with neural networks has received a lot of attention, with most works taking an extraction-based approach: a single neural network model is trained to solve the problem, and a logic model is then extracted from it. However, most of this work suffers from combinatorial explosion when scaling up to larger problems. In particular, many invariances that hold in the symbolic world are not exploited on the neural network side. In this work, we present a model that exploits symbolic invariance in our problem, and we show that it scales to larger tasks than previous work.
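To make the LFIT setting concrete, here is a minimal, illustrative sketch (not the authors' method) of learning propositional rule bodies from observed Boolean state transitions: for each variable that becomes true, we intersect the literals of all its predecessor states, a crude least-general generalisation. All function and variable names are hypothetical.

```python
def learn_rules(transitions):
    """transitions: list of (state, next_state) pairs, each a dict var -> bool.

    Returns, for each variable that is ever true in a successor state, the set
    of (var, value) literals shared by every predecessor state -- readable as
    the body of a rule "head(t+1) :- body(t)."
    """
    bodies = {}
    for state, nxt in transitions:
        literals = {(v, val) for v, val in state.items()}
        for head, val in nxt.items():
            if val:  # head is true in the successor state
                # intersect with previously seen bodies (generalisation step)
                bodies[head] = bodies.get(head, literals) & literals
    return bodies

# Toy two-variable system: 'b' becomes true whenever 'a' was true.
obs = [({"a": True, "b": False}, {"a": False, "b": True}),
       ({"a": True, "b": True},  {"a": False, "b": True})]
rules = learn_rules(obs)
# rules["b"] == {("a", True)}  -> reads as "b(t+1) :- a(t)."
```

Symbolic LFIT algorithms refine such bodies exactly; the paper's point is that a purely neural learner, without exploiting invariances like variable permutation symmetry, must rediscover this structure for every variable, which is where the combinatorial explosion arises.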