Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence: Papers from the Ray Solomonoff 85th Memorial Conference, Melbourne, VIC, Australia, November 30 - December 2, 2011

These proceedings of the Ray Solomonoff 85th Memorial Conference present 35 papers on universal Bayesian prediction and artificial intelligence (machine learning). The volume is a tribute to Solomonoff's work, which continues to influence modern data mining, econometrics and related fields.

Bibliographic Details
Main Author: Dowe, David L.
Format: eBook
Language: English
Published: Berlin, Heidelberg: Springer Berlin Heidelberg, 2013
Edition: 1
Series: Lecture Notes in Computer Science
ISBN: 9783642449581, 3642449581, 3642449573, 9783642449574
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/978-3-642-44958-1


Table of Contents:
  • Intro -- Preface -- Organization -- Table of Contents -- Introduction -- Introduction to Ray Solomonoff 85th Memorial Conference -- 1 Introduction - and Summary -- 1.1 Short Summary -- 1.2 (Universal) Turing Machines and Prediction -- 1.3 Technological Singularity (and Training Sequences) -- 2 Papers - Beginning in 1950 -- 3 Birth of the Theory in 1960 - and Onwards -- 3.1 End of the 1970s, and Fundamental Convergence Result -- 3.2 Notes on Papers from the 1980s -- 3.3 Notes on Papers from the 1990s -- 3.4 Notes on Papers from the 2000s -- 4 Further Notes (And Perhaps Some Afterthoughts) -- 4.1 Uniqueness of Logarithm-Loss Information-Theoretic Cost -- 4.2 Prediction, Inference, Induction, Explanation -- 4.3 How to Choose a Bayesian Prior? -- 4.4 Information Theory, (Artificial) Intelligence and Recognising It -- 4.5 A Music Note -- 4.6 Originality, Creativity, Humour, Illusion -- 4.7 Some Further Work -- 4.8 From Here -- References -- Invited Papers -- Ray Solomonoff and the New Probability -- 1 Introduction -- 2 Early Years -- 3 From the University to the Birth of AI -- 4 The Beginnings of AI -- 5 The Discovery of Algorithmic Probability -- 6 The Guerrilla Workshop -- 7 Later Work -- References -- Universal Heuristics: How Do Humans Solve "Unsolvable" Problems? -- Partial Match Distance -- 1 Introduction -- 2 Partial Matching -- 3 The Dmin Distance -- 4 Question Answering -- 5 Voice Recognition Correction -- References -- Long Papers -- Falsification and Future Performance -- 1 Introduction -- 2 Measurement -- 2.1 Semantics -- 2.2 Risk -- 3 Statistical Learning Theory -- 4 Falsification -- 4.1 Empirical VC Entropy -- 4.2 Empirical Rademacher Complexity -- 5 Discussion -- References -- The Semimeasure Property of Algorithmic Probability - "Feature" or "Bug"? -- 1 Introduction -- 2 Notation -- 3 Algorithmic Probability (ALP)
  • 4 The Semimeasure Property of ALP -- 5 ALP's Application to Induction, and the Semimeasure Problem -- 6 "Bug" or "Feature"? -- 7 Another Way of Tackling the Semimeasure Problem -- References -- Inductive Inference and Partition Exchangeability in Classification -- 1 Introduction -- 2 Supervised Predictive Classification under Partition Exchangeability -- 3 Asymptotic Properties of Supervised Classifiers under Partition Exchangeability -- 4 Discussion -- References -- Learning in the Limit: A Mutational and Adaptive Approach -- 1 Introduction -- 2 The First-Order Adaptive Automaton -- 2.1 Notations and Technical Preliminaries -- 2.2 Automata Transformations -- 3 The Second-Order Adaptive Automaton -- 4 Second-Order Adaptive Automata and Learning in the Limit -- 4.1 Illustrating Example -- 5 Conclusion -- 5.1 Future Work -- References -- Algorithmic Simplicity and Relevance -- 1 Complexity, Simplicity and the Human Mind -- 2 Relevance -- 3 Simplicity Theory -- 4 Relevance from an Algorithmic Perspective -- 4.1 First-Order Relevance -- 4.2 Second-Order Relevance -- 5 Examples -- 5.1 The 'Nude Model' Story -- 5.2 The 'Rally' Discussion -- 6 Discussion -- References -- Categorisation as Topographic Mapping between Uncorrelated Spaces -- 1 Introduction -- 2 Topographic Mappings in the Brain -- 2.1 Easy to See Mappings -- 2.2 Continuous and Discrete - Ocular Dominance Stripes -- 3 The Topographic Extrapolation -- 3.1 Measuring Topographicity -- 3.2 Extrapolation -- 3.3 A Normal Similarity Measure -- 3.4 Extrapolation from Highly Topographic Functions -- 3.5 Independently Varying Spaces -- 4 Explaining the Categorical Nature of Language -- 5 Synaesthesia -- 6 Conclusion -- References -- Algorithmic Information Theory and Computational Complexity -- 1 Introduction -- 2 Tools from Algorithmic Information Theory
  • 3 Mirage Codes Using Algorithmic Information Theory -- 4 Probabilistic Automata -- 5 Definitions of Transducers -- 6 Frequency Transducers -- References -- A Critical Survey of Some Competing Accounts of Concrete Digital Computation -- 1 Introduction -- 2 The Formal Symbol Manipulation Account -- 3 The Physical Symbol Systems Account -- 4 The Mechanistic Account of Computation -- 5 Discussion -- 6 Conclusion -- References -- Further Reflections on the Timescale of AI -- References -- Towards Discovering the Intrinsic Cardinality and Dimensionality of Time Series Using MDL -- 1 Introduction -- 2 Definitions and Notation -- 3 MDL Modeling of Time Series -- 4 Experimental Evaluation -- 4.1 An Example Application in Physiology -- 5 Discussion of Time and Space Complexity and Conclusions -- References -- Complexity Measures for Meta-learning and Their Optimality -- 1 Introduction -- 2 Complexity Measures for Learning Machines -- 3 Example of an Application -- 4 Summary -- References -- Design of a Conscious Machine -- 1 Introduction -- 2 Functional Requirements -- 2.1 Worldly Facts -- 2.2 Smaller Factal Groups -- 2.3 Cognitive Functions -- 2.4 Macro Structures of the Whole Cortex (Brodmann) -- 3 Processing in an Information Domain -- 3.1 Caveats -- 4 Implementation Notes -- 4.1 The Modeling Strategy -- 4.2 Coincidence Records -- 4.3 Hippocampal Routing -- References -- No Free Lunch versus Occam's Razor in Supervised Learning -- 1 Introduction -- 2 Preliminaries -- 3 No Free Lunch Theorem -- 4 Complexity-Based Classification -- 5 Discussion -- References -- An Approximation of the Universal Intelligence Measure -- 1 Introduction -- 2 Background -- 2.1 Universal Intelligence Tests -- 2.2 Universal Intelligence Measure -- 3 Algorithmic Intelligence Quotient -- 3.1 Environment Sampling -- 3.2 Environment Simulation -- 3.3 Temporal Preference
  • 3.4 Reference Machine Selection -- 3.5 BF Reference Machine -- 3.6 Variance Reduction Techniques for AIQ Estimation -- 4 Empirical Results -- 4.1 Comparison of Artificial Agents -- 4.2 Measuring Agent Scalability -- 4.3 Environment Distribution -- 5 Related Work and Discussion -- 6 Conclusion -- References -- Minimum Message Length Analysis of the Behrens-Fisher Problem -- 1 Introduction -- 2 Minimum Message Length (MML) -- 3 MML and the Behrens-Fisher Problem -- 3.1 Shared Population Mean -- 3.2 Different Population Means -- 3.3 MML Hypothesis Testing -- 4 Simulation and Discussion -- 5 Extensions -- References -- MMLD Inference of Multilayer Perceptrons -- 1 Introduction -- 2 Minimum Message Length (MML) -- 2.1 The Wallace-Freeman Approximation -- 2.2 The MMLD Approximation -- 3 A General Algorithm for Computing MMLD Codelengths -- 3.1 Spherical Uncertainty Region -- 3.2 Ellipsoidal Uncertainty Region -- 3.3 A Simple Example: Univariate Normal Distribution -- 4 MMLD Inference of Multilayer Perceptrons -- 4.1 Prior Density for the Model Parameters -- 4.2 Prior Density for the Network Architecture -- 5 Discussion and Results -- References -- An Optimal Superfarthingale and Its Convergence over a Computable Topological Space -- 1 Introduction -- 2 Preliminaries -- 2.1 Algorithmic Probability -- 2.2 Algorithmic Randomness -- 2.3 Game-Theoretic Probability -- 2.4 Computable Topology -- 3 Optimal Test -- 3.1 Approximation -- 3.2 Existence of an Optimal Test -- 4 Optimal Integral Test -- 4.1 Computable Bound -- 4.2 Computable Enumeration -- 4.3 The Existence of an Optimal Integral Test -- 5 Optimal Superfarthingale -- 5.1 Effectivization of Game-Theoretic Probability -- 5.2 Convergence to a Measure -- References -- Diverse Consequences of Algorithmic Probability -- 1 Introduction -- 2 Solomonoff Induction -- 3 The Axiomatization of Artificial Intelligence
  • 4 Incremental Machine Learning -- 5 Cognitive Architecture -- 6 Philosophical Foundation and Consequences -- 7 Intellectual Property towards Infinity Point -- 8 Conclusion -- References -- An Adaptive Compression Algorithm in a Deterministic World -- 1 Introduction -- 2 Adaptive Compression -- 3 Excess and the RC-Frontier -- 4 Discussion -- References -- Toward an Algorithmic Metaphysics -- 1 The Toy World W -- 2 Things in W -- 2.1 Composition and Division -- 2.2 Scattered Objects -- 2.3 Object Overlap and Coincidence -- 3 Properties in W -- References -- Limiting Context by Using the Web to Minimize Conceptual Jump Size -- 1 Introduction -- 1.1 Common Sense Knowledge as a Contextual Filter -- 1.2 Subjectivity -- 1.3 What is Conceptual Jump Size -- 2 Our Trials with Commonsense Knowledge -- Self-correcting Universal Dialog System. -- Toward Concept Search and Manipulation. -- Generating Chains of Concepts. -- Evaluating Concept Triplets. -- Limiting Context. -- Experiment and Its Results. -- 3 Object-Oriented Programming between Artificial and Natural Languages -- 4 Conclusions -- References -- Minimum Message Length Order Selection and Parameter Estimation of Moving Average Models -- 1 Introduction -- 1.1 The Minimum Message Length Principle -- 2 Message Lengths of Moving Average Models -- 2.1 Fisher Information Matrix -- 2.2 Minimising the Message Length -- 2.3 Properties of the MML87 Estimator -- 3 Evaluation -- 3.1 Parameter Estimation -- 3.2 Order Selection -- 3.3 The Southern Oscillation Index Time Series -- References -- Abstraction Super-Structuring Normal Forms: Towards a Theory of Structural Induction -- 1 Introduction -- 2 Abstraction Super-Structuring Normal Forms -- 2.1 Generative Grammars and Turing Machines -- 2.2 Structural Induction, Generative Grammars and Motivation -- 2.3 ASNF (Abstraction SuperStructuring Normal Form) Theorems