1st Edition

Algebraic Structures in Natural Language

Edited by Shalom Lappin and Jean-Philippe Bernardy. Copyright 2023.
    308 Pages, 25 Color & 14 B/W Illustrations
    by CRC Press

    Algebraic Structures in Natural Language addresses a central problem in cognitive science: the learning procedures through which humans acquire and represent natural language. Until recently, algebraic systems dominated the study of natural language in formal and computational linguistics, AI, and the psychology of language, with linguistic knowledge seen as encoded in formal grammars, model theories, proof theories, and other rule-driven devices. Recent work on deep learning has produced an increasingly powerful set of general learning mechanisms that do not rely on rule-based algebraic models of representation. The success of deep learning in NLP has led some researchers to question the role of algebraic models in the study of human language acquisition and linguistic representation. Psychologists and cognitive scientists have also been exploring explanations of language evolution and language acquisition that rely on probabilistic methods, social interaction, and information theory, rather than on formal models of grammar induction.

    The book brings together leading researchers from computational linguistics, psychology, behavioral science, and mathematical linguistics to consider the significance of non-algebraic methods for the study of natural language. The contributions represent a wide spectrum of views, from the claim that algebraic systems are largely irrelevant to the contrary position that non-algebraic learning methods are engineering devices for efficiently identifying the patterns that underlying grammars and semantic models generate for natural language input. Interesting and important perspectives fall at intermediate points between these opposing approaches, combining elements of both. The book will appeal to researchers and advanced students in each of these fields, as well as to anyone who wants to learn more about the relationship between computational models and natural language.

    1. On the Proper Role of Linguistically Oriented Deep Net Analysis in Linguistic Theorizing

    by Marco Baroni.

    2. What Artificial Neural Networks Can Tell Us About Human Language Acquisition

    by Alex Warstadt and Samuel R. Bowman.

    3. Grammar through Spontaneous Order

    by Nick Chater and Morten H. Christiansen.

    4. Language is Acquired in Interaction

    by Eve V. Clark.

    5. Why Algebraic Systems Aren’t Sufficient for Syntax

    by Ben Ambridge.

    6. Learning Syntactic Structures from String Input

    by Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, and Roger Levy.

    7. Analyzing Discourse Knowledge in Pre-Trained LMs

    by Sharid Loáiciga.

    8. Linguistically Guided Multilingual NLP

    by Olga Majewska, Ivan Vulić, and Anna Korhonen.

    9. Word Embeddings Are Word Story Embeddings (and That’s Fine)

    by Katrin Erk and Gabriella Chronis.

    10. Algebra and Language: Reasons for (Dis)content

    by Lawrence S. Moss.

    11. Unitary Recurrent Networks

    by Jean-Philippe Bernardy and Shalom Lappin.


    Shalom Lappin is Professor of Computational Linguistics at the University of Gothenburg, Professor of Natural Language Processing at Queen Mary University of London, and Emeritus Professor of Computational Linguistics at King’s College London. His research focuses on the application of machine learning and probabilistic models to the representation and acquisition of linguistic knowledge.

    Jean-Philippe Bernardy is a researcher at the University of Gothenburg. His main research interest is in interpretable linguistic models, in particular those built from first principles of algebra, probability, and geometry.

    “Lappin and Bernardy have assembled a great set of researchers who work on linguistics, cognitive science, and natural language processing in deep neural network approaches to language. The result is a state-of-the-art collection of interest to anyone interested in DNNs and their connection to human language.” --Edward A. F. Gibson, Professor, MIT Department of Brain & Cognitive Sciences