Publication: Learning in Order to Reason
Date
1995
Authors
Roth, Dan
Citation
Roth, Dan. Learning in Order to Reason. Harvard Computer Science Group Technical Report TR-02-95.
Abstract
Any theory aimed at understanding commonsense reasoning, the process that humans use to cope with the mundane but complex aspects of the world in evaluating everyday situations, should account for its flexibility, its adaptability, and the speed with which it is performed. In this thesis we analyze current theories of reasoning and argue that they do not satisfy these requirements. We then develop a new framework for the study of reasoning, in which a learning component plays a principal role. We show that our framework efficiently supports significantly more reasoning than traditional approaches and, at the same time, matches our expectations of plausible patterns of reasoning in cases where other theories do not.

In the first part of this thesis we present a computational study of the knowledge-based system approach, the generally accepted framework for reasoning in intelligent systems. We give a comprehensive study of several methods used in approximate reasoning, as well as some reasoning techniques that use approximations in an effort to avoid computational difficulties. We show that these are computationally even harder than exact reasoning tasks. More surprisingly, even the approximate versions of these approximate reasoning tasks are intractable, and these severe hardness results hold even for very restricted knowledge representations.

Motivated by these computational considerations, we argue that a central question in developing computational models of commonsense reasoning is how an intelligent system acquires its knowledge, and how this process of interaction with its environment influences the performance of the reasoning system. The Learning to Reason framework developed and studied in the rest of the thesis exhibits the role of inductive learning in achieving efficient reasoning, and the importance of studying reasoning and learning phenomena together.
The framework is defined in a way that is intended to overcome the main computational difficulties in the traditional treatment of reasoning, and indeed we exhibit several positive results that do not hold in the traditional setting. We develop Learning to Reason algorithms for classes of theories for which no efficient reasoning algorithm exists when they are represented as a traditional (formula-based) knowledge base. We also exhibit Learning to Reason algorithms for a class of theories that is not known to be learnable in the traditional sense. Many of our results rely on the theory of model-based representations that we develop in this thesis. In this representation, the knowledge base is a set of models (satisfying assignments) rather than a logical formula. We show that in many cases reasoning with a model-based representation is more efficient than reasoning with a formula-based representation and, more significantly, that it suggests a new view of reasoning, and in particular of logical reasoning.

In the final part of this thesis, we address another fundamental criticism of the knowledge-based system approach. We suggest a new approach to the study of the non-monotonicity of human commonsense reasoning, within the Learning to Reason framework. The theory developed is shown to support efficient reasoning with incomplete information, and to avoid many of the representational problems that existing default reasoning formalisms face.
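The model-based view of deduction mentioned above can be illustrated with a minimal sketch: a knowledge base stored as an explicit set of satisfying assignments, where a query is entailed exactly when it holds in every stored model. This is only a toy illustration under simplifying assumptions (a small propositional KB, queries given as Boolean functions, and exhaustive model storage); the thesis's actual algorithms work with carefully chosen subsets of models to keep the representation compact.

```python
# Illustrative sketch of model-based reasoning (not the thesis's algorithms):
# the knowledge base is a set of models (satisfying assignments), and a
# query alpha is entailed iff alpha holds in every stored model.

def entails(models, alpha):
    """Model-based deduction: KB |= alpha iff alpha is true in every model."""
    return all(alpha(m) for m in models)

# Hypothetical KB over variables p, q, r: the models of (p AND (q OR r)).
kb_models = [
    {"p": True, "q": True,  "r": True},
    {"p": True, "q": True,  "r": False},
    {"p": True, "q": False, "r": True},
]

# Queries are Boolean functions over an assignment.
print(entails(kb_models, lambda m: m["p"]))             # True: p follows
print(entails(kb_models, lambda m: m["q"] and m["r"]))  # False: not entailed
```

Note the contrast with formula-based reasoning: here each query costs one pass over the stored models, whereas deciding entailment from an arbitrary propositional formula is intractable in general.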