Handbook of Automated Scoring

Theory into Practice, 1st Edition

Edited by Duanli Yan, André A. Rupp, Peter W. Foltz

Chapman and Hall/CRC

450 pages

Hardback: 9781138578272

pub: 2020-02-20


"Automated scoring engines…require a careful balancing of the contributions of technology, NLP, psychometrics, artificial intelligence, and the learning sciences. This volume is evidence that the theories, methodologies, and underlying technology that surround automated scoring have reached maturity, and that there is a growing acceptance by the field of these technologies." (From the Foreword by Alina von Davier, ACTNext Senior Vice President)

Handbook of Automated Scoring: Theory into Practice provides a scientifically grounded overview of the key research efforts required to move automated scoring systems into operational practice. It examines the field of automated scoring from the viewpoint of the scientific disciplines that serve as its foundation, the latest computational methodologies used in automated scoring, and several large-scale real-world applications of automated scoring for complex learning and assessment systems. The book is organized into three sections that cover (1) scientific foundations, (2) operational methodologies, and (3) practical illustrations. It also contains an overall introduction and synthesis chapter, a glossary, and a commentary for each section.

Table of Contents

Foreword - Alina von Davier, ACTNext

1. The past, present, and future of automated scoring for complex tasks - Peter Foltz, Pearson

Theoretical Foundations

2. Cognitive foundations of automated scoring - Malcolm Bauer, ETS

3. Assessment design with automated scoring in mind - Kristen DiCerbo, Pearson

4. Human rating with automated scoring in mind - Ed Wolfe, ETS

5. Natural language processing for writing and speaking - Aoife Cahill, ETS

6. Multimodal analytics for automated assessment - Sidney D'Mello, University of Colorado

7. International applications of automated essay scoring - Mark Shermis, University of Houston-Clear Lake

8. Public perception and communication around automated essay scoring - Scott Wood, Pacific Metrics

9. An evidentiary-reasoning perspective on automated scoring: Commentary on Section 1 - Bob Mislevy, ETS

Operational Methodologies

10. Operational human scoring at scale - Katie Pedley, ETS

11. System architecture design for scoring and delivery - Sue Lottridge, AIR

12. Design, development, and implementation of automated scoring systems - Christina Schneider, NWEA

13. Quality control for automated scoring in large-scale assessment contexts - Dan Shaw, ACT

14. A continuous flow system for a seamless integration of automated and human scoring - Kyle Habermehl, Pearson

15. Deep learning networks for automated scoring applications - Saad Khan, ACTNext

16. Validation of automated scoring systems - Duanli Yan, ETS

17. Operational considerations for automated scoring systems: Commentary on Section 2 - David Williamson, ETS

Practical Illustrations

18. Expanding automated writing evaluation - Jill Burstein, ETS

19. Automated writing process analysis - Paul Deane, ETS

20. Automated scoring of extended spontaneous speech - Klaus Zechner, ETS

21. Conversation-based learning and assessment environments - Art Graesser, University of Memphis

22. Automated scoring in intelligent tutoring systems - Bob Mislevy, ETS

23. Scoring of streaming data in game-based assessment - Russell Almond, Florida State University

24. Automated scoring in medical licensing - Melissa Margolis, NBME

25. At the birth of the future: Commentary on Section 3 - John Behrens, Pearson

26. Theory into practice: Reflections on the handbook - André A. Rupp, ETS

About the Editors

Duanli Yan is Director of Data Analysis and Computational Research in the Psychometrics, Statistics, and Data Sciences area at the Educational Testing Service, and Adjunct Professor at Fordham University and Rutgers University. She is a co-author of Bayesian Networks in Educational Assessment and Computerized Adaptive and Multistage Testing with R, editor for Practical Issues and Solutions for Computerized Multistage Testing, and co-editor for Computerized Multistage Testing: Theory and Applications. Her awards include the 2016 AERA Division D Significant Contribution to Educational Measurement and Research Methodology Award.

André A. Rupp is Research Director in the Psychometrics, Statistics, and Data Sciences area at the Educational Testing Service. He is co-author and co-editor of two award-winning interdisciplinary books entitled Diagnostic Measurement: Theory, Methods, and Applications and The Handbook of Cognition and Assessment: Frameworks, Methodologies, and Applications. His synthesis- and framework-oriented research has appeared in a wide variety of prestigious peer-reviewed journals.

Peter W. Foltz is Vice President in Pearson's AI and Products Solutions Organization and Research Professor at the University of Colorado’s Institute of Cognitive Science. His work covers machine learning and natural language processing for educational and clinical assessments, discourse processing, reading comprehension and writing skills, 21st Century skills learning, and large-scale data analytics. He has authored more than 150 journal articles, book chapters, and conference papers as well as multiple patents.

About the Series

Chapman & Hall/CRC Statistics in the Social and Behavioral Sciences

Subject Categories

BISAC Subject Codes/Headings:
COMPUTERS / Machine Theory
MATHEMATICS / Probability & Statistics / General