
Extending Explanation-Based Learning by Generalizing the Structure of Explanations presents several fully implemented computer systems that embody theories of how to extend explanation-based learning, a subfield of machine learning. The book discusses the need for generalizing explanation structures, the relevance of this work to research areas outside machine learning, and schema-based problem solving. It also elaborates the results of standard explanation-based learning, the BAGGER generalization algorithm, and an empirical analysis of explanation-based learning, and covers the effect of increased problem complexity, rule access strategies, an empirical study of BAGGER2, and related work in similarity-based learning. The book is suitable for readers interested in machine learning, especially explanation-based learning.
Explanation-based learning is a recently developed approach to concept acquisition by computer. In this type of machine learning, a specific problem's solution is generalized into a form that can later be used to solve conceptually similar problems. A number of explanation-based generalization algorithms have been developed. Most do not alter the structure of the explanation of the specific problem: no additional objects or inference rules are incorporated. Instead, these algorithms generalize by converting constants in the observed example to variables with constraints. However, many important concepts, in order to be properly learned, require that the structure of explanations be generalized. This can involve generalizing such things as the number of entities involved in a concept or the number of times some action is performed. For example, concepts such as momentum and energy conservation apply to arbitrary numbers of physical objects, clearing the top of a desk can require an arbitrary number of object relocations, and setting a table can involve an arbitrary number of guests. Two theories of extending explanations during the generalization process have been developed, and computer implementations have been created to test these approaches computationally. The Physics 101 system exploits characteristics of mathematically based problem solving to extend mathematical calculations in a psychologically plausible way, while the BAGGER system implements a domain-independent approach to generalizing explanation structures. Both systems are described and the details of their algorithms presented. An approach to the operationality/generality trade-off and an empirical analysis of explanation-based learning are also presented.
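The "constants to variables" step that standard explanation-based generalization performs can be sketched in a few lines. This is a minimal illustration, not the algorithm of any particular system described above; the expression format and variable names are assumptions made for the example.

```python
# A minimal sketch of the variablization step in standard explanation-based
# generalization: constants in a specific explanation are replaced by
# variables, with repeated occurrences of the same constant mapped to the
# same variable. Expressions are nested tuples of (predicate, arg, ...).

def variablize(expr, mapping=None):
    """Replace every constant in a nested expression with a variable,
    reusing the same variable for repeated occurrences of a constant."""
    if mapping is None:
        mapping = {}
    if isinstance(expr, tuple):           # (predicate, arg1, arg2, ...)
        head, *args = expr
        return (head, *(variablize(a, mapping) for a in args))
    if expr not in mapping:               # constant seen for the first time
        mapping[expr] = f"?x{len(mapping)}"
    return mapping[expr]

# A specific explanation fragment: block A is on B, and B is on the table.
specific = ("and", ("on", "A", "B"), ("on", "B", "Table"))
general = variablize(specific)
# general == ("and", ("on", "?x0", "?x1"), ("on", "?x1", "?x2"))
```

Note that the structure of the explanation is untouched: the generalized form still mentions exactly two "on" relations, which is precisely the limitation the structure-generalizing systems address.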
Explanation-Based Learning (EBL) can generally be viewed as substituting background knowledge for the large training set of exemplars needed by conventional, empirical machine learning systems. The background knowledge is used automatically to construct an explanation of a few training exemplars, and the learned concept is generalized directly from this explanation. The first EBL systems of the modern era were Mitchell's LEX2, Silver's LP, and De Jong's KIDNAP natural language system. Two of these systems, Mitchell's and De Jong's, have led to extensive follow-up research in EBL. This book outlines the significant steps in the EBL research of the Illinois group under De Jong. The volume describes theoretical research and computer systems that use a broad range of formalisms: schemas, production systems, qualitative reasoning models, non-monotonic logic, situation calculus, and some home-grown ad hoc representations. This breadth is deliberate, so as not to sacrifice research significance for the expediency of any particular formalism. The ultimate goal, of course, is to adopt (or devise) the right formalism.
Machine Learning: An Artificial Intelligence Approach, Volume III presents a sample of machine learning research representative of the period between 1986 and 1989. The book is organized into six parts. Part One introduces some general issues in the field of machine learning. Part Two presents new developments in the area of empirical learning methods, such as flexible learning concepts, the Protos learning apprentice system, and the WITT system, which implements a form of conceptual clustering. Part Three gives an account of various analytical learning methods and how analytic learning can be applied to specific problems. Part Four describes efforts to integrate different learning strategies. These include the UNIMEM system, which empirically discovers similarities among examples, and the DISCIPLE multistrategy system, which is capable of learning with imperfect background knowledge. Part Five provides an overview of research in the area of subsymbolic learning methods. Part Six presents two types of formal approaches to machine learning: the first is an improvement over Mitchell's version space method; the second deals with the learning problem faced by a robot in an unfamiliar, deterministic, finite-state environment.
Lists citations with abstracts for aerospace-related reports obtained from worldwide sources and announces documents that have recently been entered into the NASA Scientific and Technical Information Database.
The ability to learn is a fundamental characteristic of intelligent behavior. Consequently, machine learning has been a focus of artificial intelligence since the beginnings of AI in the 1950s. The 1980s saw tremendous growth in the field, and this growth promises to continue with valuable contributions to science, engineering, and business. Readings in Machine Learning collects the best of the published machine learning literature, including papers that address a wide range of learning tasks, and that introduce a variety of techniques for giving machines the ability to learn. The editors, in cooperation with a group of expert referees, have chosen important papers that empirically study, theoretically analyze, or psychologically justify machine learning algorithms. The papers are grouped into a dozen categories, each of which is introduced by the editors.
This comprehensive encyclopedia, in A-Z format, provides easy access to relevant information for those seeking entry into any aspect within the broad field of Machine Learning. Most of the entries in this preeminent work include useful literature references.
Most research in explanation-based learning involves relaxing constraints on the variables in the explanation of a specific example, rather than generalizing the structure of the explanation itself. However, this precludes the acquisition of concepts in which an iterative process is implicitly represented in the explanation by a fixed number of rule applications. Such explanations must be reformulated during generalization. The fully implemented BAGGER system analyzes explanation structures and detects repeated, interdependent rule applications that can be extended. When any are found, the explanation is extended so that an arbitrary number of repeated applications of the original rule are supported. The final structure is then generalized and a new rule produced which embodies a crucial shift in representation. An important property of the extended rules is that their preconditions are expressed in terms of the initial state: they do not depend on the results of intermediate applications of the original rule. BAGGER's generalization algorithm is presented, and empirical results that demonstrate the value of generalizing to N are reported. To illustrate the approach, the acquisition of a plan for building towers of arbitrary height is discussed in detail. Keywords: Artificial intelligence, Machine learning, Explanation-based learning, Empirical analysis.
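The tower-building example can be used to illustrate the shape of a rule "generalized to N". The sketch below is an assumption-laden illustration of the idea, not BAGGER's actual algorithm or notation: the learned rule is parameterized by the list of blocks, and, as the abstract emphasizes, its preconditions are checked only against the initial state, never against intermediate states.

```python
# A hedged sketch of a rule "generalized to N": rather than a plan fixed to
# the number of rule applications seen in the training example, the learned
# rule accepts any number of blocks. Its preconditions refer only to the
# initial state. The predicates ("clear", "on-table") and the "move" action
# are illustrative placeholders.

def tower_rule(initial_state, blocks):
    """Learned rule: stack any number of blocks into a tower, provided every
    block is clear and on the table in the *initial* state."""
    for b in blocks:                      # preconditions on the initial state only
        if ("clear", b) not in initial_state or ("on-table", b) not in initial_state:
            return None                   # rule does not apply
    # plan body: n-1 repeated applications of the original single-move rule
    return [("move", blocks[i + 1], blocks[i]) for i in range(len(blocks) - 1)]

state = {("clear", b) for b in "ABC"} | {("on-table", b) for b in "ABC"}
plan = tower_rule(state, ["A", "B", "C"])
# plan == [("move", "B", "A"), ("move", "C", "B")]
```

The same rule applies unchanged to a tower of two blocks or of twenty, which is exactly the representational shift a fixed-structure generalization of the original explanation could not achieve.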