Advancing Natural Language Processing in Educational Assessment
1st Edition

  • Available for pre-order on May 10, 2023. Item will ship after May 31, 2023
ISBN 9781032244525
Forthcoming from Routledge, May 31, 2023
264 Pages, 52 B/W Illustrations

USD $52.95




Book Description

Advancing Natural Language Processing in Educational Assessment examines the use of natural language technology in educational testing, measurement, and assessment. Recent developments in natural language processing (NLP) have enabled large-scale educational applications, though scholars and professionals may lack a shared understanding of the strengths and limitations of NLP in assessment, as well as the challenges that testing organizations face in implementation. This first-of-its-kind book provides evidence-based practices for the use of NLP-based approaches to automated text and speech scoring, language proficiency assessment, technology-assisted item generation, gamification, learner feedback, and beyond. Spanning historical context, validity and fairness issues, emerging technologies, and implications for feedback and personalization, these chapters offer the most robust treatment of NLP yet available for education measurement researchers, psychometricians, testing professionals, and policymakers.

Table of Contents

Preface

by Victoria Yaneva and Matthias von Davier

Section I: Automated Scoring

Chapter 1: The Role of Robust Software in Automated Scoring

by Nitin Madnani, Aoife Cahill, and Anastassia Loukina

Chapter 2: Psychometric Considerations when Using Deep Learning for Automated Scoring

by Susan Lottridge, Chris Ormerod, and Amir Jafari

Chapter 3: Speech Analysis in Assessment

by Jared C. Bernstein and Jian Cheng

Chapter 4: Assessment of Clinical Skills: A Case Study in Constructing an NLP-Based Scoring System for Patient Notes

by Polina Harik, Janet Mee, Christopher Runyon, and Brian E. Clauser

Section II: Item Development

Chapter 5: Automatic Generation of Multiple-Choice Test Items from Paragraphs Using Deep Neural Networks

by Ruslan Mitkov, Le An Ha, Halyna Maslak, Tharindu Ranasinghe, and Vilelmini Sosoni

Chapter 6: Training Optimus Prime, M.D.: A Case Study of Automated Item Generation using Artificial Intelligence – From Fine-Tuned GPT2 to GPT3 and Beyond

by Matthias von Davier

Chapter 7: Computational Psychometrics for Digital-first Assessments: A Blend of ML and Psychometrics for Item Generation and Scoring

by Geoff LaFlair, Kevin Yancey, Burr Settles, and Alina A. von Davier

Section III: Validity and Fairness

Chapter 8: Validity, Fairness, and Technology-based Assessment

by Suzanne Lane

Chapter 9: Evaluating Fairness of Automated Scoring in Educational Measurement

by Matthew S. Johnson and Daniel F. McCaffrey

Section IV: Emerging Technologies

Chapter 10: Extracting Linguistic Signal from Item Text and Its Application to Modeling Item Characteristics

by Victoria Yaneva, Peter Baldwin, Le An Ha, and Christopher Runyon

Chapter 11: Stealth Literacy Assessment: Leveraging Games and NLP in iSTART

by Ying Fang, Laura K. Allen, Rod D. Roscoe, and Danielle S. McNamara

Chapter 12: Measuring Scientific Understanding Across International Samples: The Promise of Machine Translation and NLP-based Machine Learning Technologies

by Minsu Ha and Ross H. Nehm

Chapter 13: Making Sense of College Students’ Writing Achievement and Retention with Automated Writing Evaluation

by Jill Burstein, Daniel McCaffrey, Steven Holtzman, and Beata Beigman Klebanov

Contributor Biographies


Editor Biographies

Victoria Yaneva is a Senior Data Scientist at the National Board of Medical Examiners, USA.

Matthias von Davier is Monan Professor of Education in the Lynch School of Education and Executive Director of the TIMSS & PIRLS International Study Center at Boston College, USA.