This book provides an up-to-date review of commonly undertaken methodological and statistical practices that are grounded partly in sound scientific rationale and partly in unfounded lore. Some examples of these “methodological urban legends” are characterized by manuscript critiques such as: (a) “your self-report measures suffer from common method bias”; (b) “your item-to-subject ratios are too low”; (c) “you can’t generalize these findings to the real world”; or (d) “your effect sizes are too low.”
What do these critiques mean, and what is their historical basis? More Statistical and Methodological Myths and Urban Legends catalogs several of these quirky practices and outlines proper research techniques. Topics covered include sample size requirements, missing data bias in correlation matrices, negative wording in survey research, and much more.
Table of Contents
Part I: General Issues
1. Is Ours a Hard Science (And Do We Care)? Ronald S. Landis and José M. Cortina
2. Publication Bias: Understanding the Myths Concerning Threats to the Advancement of Science George C. Banks, Sven Kepes, and Michael A. McDaniel
Part II: Design Issues
3. Red-Headed No More: Tipping Points in Qualitative Research in Management Anne D. Smith, Laura T. Madden, and Donde Ashmos Plowman
4. Two Waves of Measurement Do Not a Longitudinal Study Make Robert E. Ployhart and William I. MacKenzie Jr.
5. The Problem of Generational Change: Why Cross-Sectional Designs Are Inadequate for Investigating Generational Differences Brittany Gentile, Lauren A. Wood, Jean M. Twenge, Brian J. Hoffman, and W. Keith Campbell
6. Negatively Worded Items Negatively Impact Survey Research Dev K. Dalal and Nathan T. Carter
7. Missing Data Bias: Exactly How Bad Is Pairwise Deletion? Daniel A. Newman and Jonathan M. Cottrell
8. Size Matters... Just Not in the Way that You Think: Myths Surrounding Sample Size Requirements for Statistical Analyses Scott Tonidandel, Eleanor B. Williams, and James M. LeBreton
Part III: Analytical Issues
9. Weight a Minute... What You See in a Weighted Composite Is Probably Not What You Get! Frederick L. Oswald, Dan J. Putka, and Jisoo Ock
10. Debunking Myths and Urban Legends about How to Identify Influential Outliers Herman Aguinis and Harry Joo
11. Pulling the Sobel Test Up By Its Bootstraps Joel Koopman, Michael Howe, and John R. Hollenbeck
Part IV: Inferential Issues
12. “The” Reliability of Job Performance Ratings Equals 0.52 Dan J. Putka and Brian J. Hoffman
13. Use of “Independent” Measures Does Not Solve the Shared Method Bias Problem Charles E. Lance and Allison B. Siminovsky
14. The Not-So-Direct Cross-Level Direct Effect Alexander C. LoPilato and Robert J. Vandenberg
15. Aggregation Aggravation: The Fallacy of the Wrong Level Revisited David J. Woehr, Andrew C. Loignon, and Paul Schmidt
16. The Practical Importance of Measurement Invariance Neal Schmitt and Abdifatah A. Ali
Charles E. Lance is Principal, Organizational Research & Development and Professor Emeritus of Industrial/Organizational Psychology at the University of Georgia, USA.
Robert J. Vandenberg is the Robert O. Arnold Professor of Business in the Department of Management, Terry College of Business at the University of Georgia, USA.
“In science, there should be no shortcuts. Yet, as readers, authors, reviewers, and editors we often have knee-jerk reactions. This book serves as the perfect antidote against such reactions toward specific statistical and methodological practices.”—Filip Lievens, Professor of Personnel Management and Work and Organizational Psychology, Ghent University, Belgium
“Lance and Vandenberg’s collection provides a more complete understanding of everyday methodological decisions. Essential for graduate students and faculty alike.”—Donald D. Bergh, Louis D. Beaumont Chair of Business Administration and Professor of Management, The University of Denver, USA