1st Edition

Why Machines Will Never Rule the World: Artificial Intelligence without Fear

By Jobst Landgrebe, Barry Smith
Copyright 2022
    354 Pages 2 B/W Illustrations
    by Routledge


    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is impossible for mathematical reasons. It offers two specific reasons for this claim:

    1. Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system.
    2. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer.

    In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence from mathematics, physics, computer science, philosophy, linguistics, and biology, setting up their book around three central questions: What are the essential marks of human intelligence? What is it that researchers try to do when they attempt to achieve "artificial intelligence" (AI)? And why, after more than 50 years, are our most common interactions with AI, for example with our bank’s computers, still so unsatisfactory? 

    Landgrebe and Smith show how a widespread fear about AI’s potential to bring about radical changes in the nature of human beings and in the human social order is founded on an error. There is still, as they demonstrate in a final chapter, a great deal that AI can achieve which will benefit humanity. But these benefits will be achieved without the aid of systems that are more powerful than humans, which are as impossible as AI systems that are intrinsically "evil" or able to "will" a takeover of human society.

    Foreword

    1. Introduction
    1.1 The Singularity
    1.2 Approach
    1.3 Limits to the modelling of animate nature
    1.4 The AI hype cycle
    1.5 Why machines will not inherit the earth
    1.6 How to read this book

    Part I: Properties of the human mind

    2. The human mind
    2.1 Basic characteristics of the human mind
    2.2 The mind-body problem: Monism and its varieties

    3. Human and machine intelligence
    3.1 Capabilities and dispositions
    3.2 Intelligence
    3.3 AI and human intelligence

    4. The nature of human language
    4.1 Why conversation matters
    4.2 Aspects of human language

    5. The variance and complexity of human language
    5.1 Conversations: An overview
    5.2 Levels of language production and interpretation
    5.3 Conversation contexts
    5.4 Discourse economy: implicit meaning
    5.5 Structural elements of conversation
    5.6 How humans pass the Turing test

    6. Social and ethical behaviour
    6.1 Can we engineer social capabilities?
    6.2 Intersubjectivity
    6.3 Social norms
    6.4 Moral norms
    6.5 Power

    Part II: The limits of mathematical models

    7. Complex systems
    7.1 Models
    7.2 Computability
    7.3 Systems
    7.4 The scope of extended Newtonian mathematics
    7.5 Complex systems
    7.6 Examples of complex systems

    8. Mathematical models of complex systems
    8.1 Multivariate distributions
    8.2 Deterministic and stochastic computable system models
    8.3 Newtonian limits of stochastic models of complex systems
    8.4 Descriptive and interpretative models of complex systems
    8.5 Predictive models of complex systems
    8.6 Naïve approaches to complex system modelling
    8.7 Refined approaches
    8.8 The future of complex system modelling

    Part III: The limits and potential of AI

    9. Why there will be no machine intelligence
    9.1 Brain emulation and machine evolution
    9.2 Intentions and drivenness
    9.3 Consciousness
    9.4 Philosophy of mind, computation and AI
    9.5 Objectifying intelligence and theoretical thinking

    10. Why machines will not master human language
    10.1 Language as a necessary condition for AGI
    10.2 Why machine language production always falls short
    10.3 AI conversation emulation
    10.4 Mathematical models of human conversations
    10.5 Why conversation machines are doomed to fail

    11. Why machines will not master social interaction
    11.1 No AI emulation of social behaviour
    11.2 AI and legal norms
    11.3 No machine emulation of morality

    12. Digital immortality
    12.1 Infinity stones
    12.2 What is a mind?
    12.3 Transhumanism
    12.4 Back to Bostrom

    13. AI spring eternal
    13.1 AI for non-complex systems
    13.2 AI for complex systems

    13.3 AI boundaries
    13.4 How AI will change the world

    Glossary

    Biographies

    Jobst Landgrebe is a scientist and entrepreneur with a background in philosophy, mathematics, neuroscience, and bioinformatics. He is the founder of Cognotekt, a German AI company that has, since 2013, provided working systems used by companies in areas such as insurance claims management, real estate management, and medical billing. After more than 10 years in the AI industry, he has developed an exceptional understanding of the limits and future potential of AI.

    Barry Smith is one of the most widely cited contemporary philosophers. He has made influential contributions to the foundations of ontology and data science, especially in the biomedical domain. Most recently, his work has led to the creation of an international standard in the ontology field (ISO/IEC 21838), which is the first example of a piece of philosophy that has been subjected to the ISO standardization process.

    "It’s a highly impressive piece of work that makes a new and vital contribution to the literature on AI and AGI. The rigor and depth with which the authors make their case is compelling, and the range of disciplinary and scientific knowledge they draw upon is particularly remarkable and truly novel."
    Shannon Vallor, Baillie Gifford Chair, Edinburgh Futures Institute, The University of Edinburgh

    "The alluring nightmare in which machines take over running the planet and humans are reduced to drudges is not just far off or improbable: the authors argue that it is mathematically impossible. While drawing on a remarkable array of disciplines for their evidence, the argument of Landgrebe and Smith is in essence simple. Compulsory reading for those who fear the worst, but also for those inadvertently trying to bring it about."
    Peter M. Simons, Professor, Department of Philosophy, Trinity College Dublin

    "Just one year ago, Elon Musk claimed that AI will overtake humans “in less than five years”. Not so, say Landgrebe and Smith, who argue forcefully that it is mathematically impossible for machines to emulate the human mind. This is a timely, important, and thought-provoking contribution to the contemporary debate about AI’s consequences for the future of humanity."
    Berit Brogaard, Professor, Department of Philosophy, University of Miami

    "This book challenges much linguistically underinformed AI optimism, documenting many foundational aspects of language that are seemingly intractable to computation, including its capacity for vagueness, its interrelation
    with context, and its vast underpinning of implicit assumptions about physical, interpersonal, and societal phenomena."
    Len Talmy, Professor, Center for Cognitive Science University at Buffalo

    "Landgrebe and Smith orchestrate a battery of arguments from philosophy, biology, computer science, linguistics, mathematics and physics in order to argue effectively and with great brio that AI has been oversold. The result is a model of how to bring together results from many different fields and argue for an important thesis."
    Kevin Mulligan, Professor, Department of Philosophy, University of Geneva

    "Why, after 50 years, are AI systems so bad at communicating with human beings? This book provides an entirely original answer to this question—but it can be summed up in just one short phrase: there is too much haphazardness in our language use. AI works by identifying patterns, for instance in dialogue, and then applying those same atterns to new dialogue. But every dialogue is different. The old patterns never work. AI must fail. If you care, read the book.’
    Ernie Lepore, Distinguished Professor, Department of Philosophy, Rutgers University