1st Edition

Supervised Machine Learning for Text Analysis in R

By Emil Hvitfeldt and Julia Silge. Copyright 2022.
    402 Pages, 57 Color & 8 B/W Illustrations
    Published by Chapman & Hall

    Text data is important for many domains, from healthcare to marketing to the digital humanities, but specialized approaches are necessary to create features for machine learning from language. Supervised Machine Learning for Text Analysis in R explains how to preprocess text data for modeling, train models, and evaluate model performance using tools from the tidyverse and tidymodels ecosystem. Models like these can be used to make predictions for new observations, to understand what natural language features or characteristics contribute to differences in the output, and more. If you are already familiar with the basics of predictive modeling, use the comprehensive, detailed examples in this book to extend your skills to the domain of natural language processing.
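
    To make this concrete, here is a minimal sketch, in the tidymodels style the book uses, of specifying one such model: a lasso-regularized logistic regression via parsnip. The penalty value, the glmnet engine, and the object name are illustrative assumptions, not code from the book.

        library(parsnip)  # part of tidymodels; also re-exports %>%

        # A hypothetical model specification: lasso-regularized logistic
        # regression, a common choice for text classification tasks.
        # Fitting it would additionally require the glmnet package.
        lasso_spec <- logistic_reg(penalty = 0.01, mixture = 1) %>%
          set_engine("glmnet") %>%
          set_mode("classification")

        lasso_spec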

    This book provides practical guidance and directly applicable knowledge for data scientists and analysts who want to integrate unstructured text data into their modeling pipelines. Learn how to use text data for both regression and classification tasks, and how to apply both more straightforward algorithms, such as regularized regression or support vector machines, and deep learning approaches. Natural language must be dramatically transformed to be ready for computation, so we explore typical text preprocessing and feature engineering steps, like tokenization and word embeddings, from the ground up. These steps influence model results in ways we can measure, both in terms of model metrics and in terms of other tangible consequences, such as how fair or appropriate the model results are.
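
    As a minimal sketch of that preprocessing step (invented for illustration, not an excerpt from the book), the code below tokenizes two made-up sentences into words with tidytext and removes common stop words:

        library(tidytext)  # unnest_tokens(), get_stopwords()
        library(dplyr)
        library(tibble)

        # A tiny invented corpus for illustration
        text_df <- tibble(
          doc = 1:2,
          text = c(
            "Text data is important for many domains.",
            "Specialized approaches create features from language."
          )
        )

        # One word token per row, then drop stop words and count frequencies;
        # get_stopwords() requires the stopwords package to be installed
        text_df %>%
          unnest_tokens(word, text) %>%
          anti_join(get_stopwords(), by = "word") %>%
          count(word, sort = TRUE)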

    I Natural Language Features

    1. Language and modeling

    Linguistics for text analysis

    A glimpse into one area: morphology

    Different languages

    Other ways text can vary

    Summary

    2. Tokenization

    What is a token?

    Types of tokens

    Character tokens

    Word tokens

    Tokenizing by n-grams

    Line, sentence, and paragraph tokens

    Where does tokenization break down?

    Building your own tokenizer

    Tokenize to characters, only keeping letters

    Allow for hyphenated words

    Wrapping it in a function

    Tokenization for non-Latin alphabets

    Tokenization benchmark

    Summary

    3. Stop words

    Using premade stop word lists

    Stop word removal in R

    Creating your own stop words list

    All stop word lists are context-specific

    What happens when you remove stop words

    Stop words in languages other than English

    Summary

    4. Stemming

    How to stem text in R

    Should you use stemming at all?

    Understand a stemming algorithm

    Handling punctuation when stemming

    Compare some stemming options

    Lemmatization and stemming

    Stemming and stop words

    Summary

    5. Word Embeddings

    Motivating embeddings for sparse, high-dimensional data

    Understand word embeddings by finding them yourself

    Exploring CFPB word embeddings

    Use pre-trained word embeddings

    Fairness and word embeddings

    Using word embeddings in the real world

    Summary

    II Machine Learning Methods

    6. Regression

    A first regression model

    Building our first regression model

    Evaluation

    Compare to the null model

    Compare to a random forest model

    Case study: removing stop words

    Case study: varying n-grams

    Case study: lemmatization

    Case study: feature hashing

    Text normalization

    What evaluation metrics are appropriate?

    The full game: regression

    Preprocess the data

    Specify the model

    Tune the model

    Evaluate the model

    Summary

    7. Classification

    A first classification model

    Building our first classification model

    Evaluation

    Compare to the null model

    Compare to a lasso classification model

    Tuning lasso hyperparameters

    Case study: sparse encoding

    Two-class or multiclass?

    Case study: including non-text data

    Case study: data censoring

    Case study: custom features

    Detect credit cards

    Calculate percentage censoring

    Detect monetary amounts

    What evaluation metrics are appropriate?

    The full game: classification

    Feature selection

    Specify the model

    Evaluate the model

    Summary

    III Deep Learning Methods

    8. Dense neural networks

    Kickstarter data

    A first deep learning model

    Preprocessing for deep learning

    One-hot sequence embedding of text

    Simple flattened dense network

    Evaluation

    Using bag-of-words features

    Using pre-trained word embeddings

    Cross-validation for deep learning models

    Compare and evaluate DNN models

    Limitations of deep learning

    Summary

    9. Long short-term memory (LSTM) networks

    A first LSTM model

    Building an LSTM

    Evaluation

    Compare to a recurrent neural network

    Case study: bidirectional LSTM

    Case study: stacking LSTM layers

    Case study: padding

    Case study: training a regression model

    Case study: vocabulary size

    The full game: LSTM

    Preprocess the data

    Specify the model

    Summary

    10. Convolutional neural networks

    What are CNNs?

    Kernel

    Kernel size

    A first CNN model

    Case study: adding more layers

    Case study: byte pair encoding

    Case study: explainability with LIME

    Case study: hyperparameter search

    The full game: CNN

    Preprocess the data

    Specify the model

    Summary

    IV Conclusion

    Text models in the real world

    Appendix

    A Regular expressions

    Literal characters

    Meta characters

    Full stop, the wildcard

    Character classes

    Shorthand character classes

    Quantifiers

    Anchors

    Additional resources

    B Data

    Hans Christian Andersen fairy tales

    Opinions of the Supreme Court of the United States

    Consumer Financial Protection Bureau (CFPB) complaints

    Kickstarter campaign blurbs

    C Baseline linear classifier

    Read in the data

    Split into test/train and create resampling folds

    Recipe for data preprocessing

    Lasso regularized classification model

    A model workflow

    Tune the workflow

    Biographies

    Emil Hvitfeldt is a clinical data analyst working in healthcare and an adjunct professor at American University, where he teaches statistical machine learning with tidymodels. He is also an open source R developer and the author of the textrecipes package.

    Julia Silge is a data scientist and software engineer at RStudio, PBC, where she works on open source modeling tools. She is an author, an international keynote speaker and educator, and a real-world practitioner focused on data analysis and machine learning.

    "I find this book very useful, as predictive modelling with text is an important field in data science and statistics, and yet one that has been consistently under-represented in technical literature. Given the growing volume, complexity, and accessibility of unstructured data sources, as well as the rapid development of NLP algorithms, knowledge and skills in this domain are in increasing demand. In particular, there’s a demand for pragmatic guidelines that offer not just the theoretical background to the NLP issues but also explain the end-to-end modelling process and good practices, supported with code examples, just like 'Supervised Machine Learning for Text Analysis in R' does. Data scientists and computational linguists would be a prime audience for this kind of publication and would most likely use it as both a (coding) reference and a textbook."
    ~Kasia Kulma, data science consultant

    "This book fills a critical gap between the plethora of text mining books (even in R) that are too basic for practical use and the more complex text mining books that are not accessible to most data scientists. In addition, this book uses statistical techniques to do text mining and text prediction and classification. Not all text mining books take this approach, and given the level of this book, it is one of its strongest features."
    ~Carol Haney, Qualtrics

    "This book would be valuable for advanced undergraduates and early PhD students in a wide range of areas that have started using text as data…The main strength of the book is its connection to the tidyverse environment in R. It's relatively easy to pick up and do powerful things."
    ~David Mimno, Cornell University

    "The authors do a great job of presenting R programmers with a variety of deep learning applications to text-based problems. Perhaps one of the best parts of this book is the section on interpretability, where the authors showcase methods to diagnose the features on which these complex models rely to make their predictions. Considering how important the area of interpretability is to natural language processing research, and how often it is skipped in applied textbooks, the authors should be commended for incorporating it in this book."
    ~Kanishka Misra, Purdue University

    "In conclusion, the presented book is extremely useful for graduate students, advanced researchers, and practitioners of statistics and data science who are interested in learning cutting-edge supervised ML techniques for text data. By utilizing the tidyverse environment and providing easy-to-understand R code examples with detailed case studies of real-world text mining problems, this book stands out and is a worthwhile read."
    ~Han-Ming Wu, National Chengchi University, Biometrics, September 2022

    "The volume is a valuable methodological resource, primarily for students interested in data science, concerned with: understanding the fundamentals of the preprocessing steps required to transform a corpus, not always large, into a structure that is a good fit for modeling; and implementing machine learning and deep learning algorithms for building text predictive models in the research contexts in which they have to be integrated."
    ~Anca Vitcu, ISCB Book Reviews, September 2022