Neural Models for Syllogistic Logic
Abstract. Despite the success of neural networks on numerous cognitive tasks, few studies have addressed reasoning, and it remains an open question whether neural networks can learn logic. We study the ability of neural models to learn syllogistic reasoning with multiple premises. We artificially created a consistent knowledge base, a set of premises for training neural networks; given a hypothesis, the task is to select from the knowledge base the premises necessary to derive the hypothesis, whenever a proof exists. Furthermore, we assess how robust our models are at finding proofs in two scenarios: when there are more premises than those seen during training, and when premises are replaced by different but equivalent ones. This lets us determine how well the models generalize by performing compositionality tests on the structure of reasoning.
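The premise-selection task described in the abstract can be illustrated with a minimal sketch. The example below restricts itself to chaining universal premises of the form "All X are Y" (a simplifying assumption; the knowledge base, term names, and the breadth-first search are illustrative, not the paper's actual setup) and returns the subset of premises that proves a hypothesis, or None when no proof exists.

```python
from collections import deque

def find_proof(premises, hypothesis):
    """Search for a chain of 'All X are Y' premises deriving the hypothesis.

    premises: list of (x, y) pairs, each read as 'All x are y'.
    hypothesis: a (start, goal) pair to derive by transitivity.
    Returns the list of premises used, or None if no proof exists.
    """
    start, goal = hypothesis
    queue = deque([(start, [])])  # frontier of (current term, premises used so far)
    seen = {start}
    while queue:
        term, chain = queue.popleft()
        if term == goal:
            return chain
        for (x, y) in premises:
            # 'All term are y' extends the chain one step further
            if x == term and y not in seen:
                seen.add(y)
                queue.append((y, chain + [(x, y)]))
    return None

# Toy knowledge base: All dogs are mammals; all mammals are animals; all birds are animals.
kb = [("dog", "mammal"), ("mammal", "animal"), ("bird", "animal")]
print(find_proof(kb, ("dog", "animal")))
# → [('dog', 'mammal'), ('mammal', 'animal')]
```

A neural model trained on this task would have to recover such premise chains from the hypothesis alone, which is what makes robustness to larger knowledge bases and to equivalent premise substitutions a meaningful test of generalization.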