1st Edition

Regression Analysis and Linear Models: Concepts, Applications, and Implementation

    Emphasizing conceptual understanding over mathematics, this user-friendly text introduces linear regression analysis to students and researchers across the social, behavioral, consumer, and health sciences. Coverage includes model construction and estimation, quantification and measurement of multivariate and partial associations, statistical control, group comparisons, moderation analysis, mediation and path analysis, and regression diagnostics, among other important topics. Engaging worked-through examples demonstrate each technique, accompanied by helpful advice and cautions. The use of SPSS, SAS, and STATA is emphasized, with an appendix on regression analysis using R. The companion website (www.afhayes.com) provides datasets for the book's examples as well as the RLM macro for SPSS and SAS.

    Pedagogical Features:
    *Chapters include SPSS, SAS, and STATA code pertinent to the analyses described, with each distinctively formatted for easy identification.
    *An appendix documents the RLM macro, which facilitates computations for estimating and probing interactions, dominance analysis, heteroscedasticity-consistent standard errors, and linear spline regression, among other analyses.
    *Students are guided to practice what they learn in each chapter using datasets provided online.
    *Addresses topics not usually covered, such as ways to measure a variable’s importance, coding systems for representing categorical variables, causation, and myths about testing interactions.

    Table of Contents

    List of Symbols and Abbreviations
    1. Statistical Control and Linear Models
    1.1 Statistical Control
    1.1.1 The Need for Control
    1.1.2 Five Methods of Control
    1.1.3 Examples of Statistical Control
    1.2 An Overview of Linear Models
    1.2.1 What You Should Know Already
    1.2.2 Statistical Software for Linear Modeling and Statistical Control
    1.2.3 About Formulas
    1.2.4 On Symbolic Representations
    1.3 Chapter Summary
    2. The Simple Regression Model
    2.1 Scatterplots and Conditional Distributions
    2.1.1 Scatterplots
    2.1.2 A Line through Conditional Means
    2.1.3 Errors of Estimate
    2.2 The Simple Regression Model
    2.2.1 The Regression Line
    2.2.2 Variance, Covariance, and Correlation
    2.2.3 Finding the Regression Line
    2.2.4 Example Computations
    2.2.5 Linear Regression Analysis by Computer
    2.3 The Regression Coefficient versus the Correlation Coefficient
    2.3.1 Properties of the Regression and Correlation Coefficients
    2.3.2 Uses of the Regression and Correlation Coefficients
    2.4 Residuals
    2.4.1 The Three Components of Y
    2.4.2 Algebraic Properties of Residuals
    2.4.3 Residuals as Y Adjusted for Differences in X
    2.4.4 Residual Analysis
    2.5 Chapter Summary
    3. Partial Relationship and the Multiple Regression Model
    3.1 Regression Analysis with More Than One Predictor Variable
    3.1.1 An Example
    3.1.2 Regressors
    3.1.3 Models
    3.1.4 Representing a Model Geometrically
    3.1.5 Model Errors
    3.1.6 An Alternative View of the Model
    3.2 The Best-Fitting Model
    3.2.1 Model Estimation with Computer Software
    3.2.2 Partial Regression Coefficients
    3.2.3 The Regression Constant
    3.2.4 Problems with Three or More Regressors
    3.2.5 The Multiple Correlation R
    3.3 Scale-Free Measures of Partial Association
    3.3.1 Semipartial Correlation
    3.3.2 Partial Correlation
    3.3.3 The Standardized Regression Coefficient
    3.4 Some Relations among Statistics
    3.4.1 Relations among Simple, Multiple, Partial, and Semipartial Correlations
    3.4.2 Venn Diagrams
    3.4.3 Partial Relationships and Simple Relationships May Have Different Signs
    3.4.4 How Covariates Affect Regression Coefficients
    3.4.5 Formulas for bⱼ, prⱼ, srⱼ, and R
    3.5 Chapter Summary
    4. Statistical Inference in Regression
    4.1 Concepts in Statistical Inference
    4.1.1 Statistics and Parameters
    4.1.2 Assumptions for Proper Inference
    4.1.3 Expected Values and Unbiased Estimation
    4.2 The ANOVA Summary Table
    4.2.1 Data = Model + Error
    4.2.2 Total and Regression Sums of Squares
    4.2.3 Degrees of Freedom
    4.2.4 Mean Squares
    4.3 Inference about the Multiple Correlation
    4.3.1 Biased and Less Biased Estimation of TR²
    4.3.2 Testing a Hypothesis about TR
    4.4 The Distribution of and Inference about a Partial Regression Coefficient
    4.4.1 Testing a Null Hypothesis about Tbⱼ
    4.4.2 Interval Estimates for Tbⱼ
    4.4.3 Factors Affecting the Standard Error of bⱼ
    4.4.4 Tolerance
    4.5 Inferences about Partial Correlations
    4.5.1 Testing a Null Hypothesis about Tprⱼ and Tsrⱼ
    4.5.2 Other Inferences about Partial Correlations
    4.6 Inferences about Conditional Means
    4.7 Miscellaneous Issues in Inference
    4.7.1 How Great a Drawback Is Collinearity?
    4.7.2 Contradicting Inferences
    4.7.3 Sample Size and Nonsignificant Covariates
    4.7.4 Inference in Simple Regression (When k = 1)
    4.8 Chapter Summary
    5. Extending Regression Analysis Principles
    5.1 Dichotomous Regressors
    5.1.1 Indicator or Dummy Variables
    5.1.2 Ŷ Is a Group Mean
    5.1.3 The Regression Coefficient for an Indicator Is a Difference
    5.1.4 A Graphic Representation
    5.1.5 A Caution about Standardized Regression Coefficients for Dichotomous Regressors
    5.1.6 Artificial Categorization of Numerical Variables
    5.2 Regression to the Mean
    5.2.1 How Regression Got Its Name
    5.2.2 The Phenomenon
    5.2.3 Versions of the Phenomenon
    5.2.4 Misconceptions and Mistakes Fostered by Regression to the Mean
    5.2.5 Accounting for Regression to the Mean Using Linear Models
    5.3 Multidimensional Sets
    5.3.1 The Partial and Semipartial Multiple Correlation
    5.3.2 What It Means If PR = 0 or SR = 0
    5.3.3 Inference Concerning Sets of Variables
    5.4 A Glance at the Big Picture
    5.4.1 Further Extensions of Regression
    5.4.2 Some Difficulties and Limitations
    5.5 Chapter Summary
    6. Statistical versus Experimental Control
    6.1 Why Random Assignment?
    6.1.1 Limitations of Statistical Control
    6.1.2 The Advantage of Random Assignment
    6.1.3 The Meaning of Random Assignment
    6.2 Limitations of Random Assignment
    6.2.1 Limitations Common to Statistical Control and Random Assignment
    6.2.2 Limitations Specific to Random Assignment
    6.2.3 Correlation and Causation
    6.3 Supplementing Random Assignment with Statistical Control
    6.3.1 Increased Precision and Power
    6.3.2 Invulnerability to Chance Differences between Groups
    6.3.3 Quantifying and Assessing Indirect Effects
    6.4 Chapter Summary
    7. Regression for Prediction
    7.1 Mechanical Prediction and Regression
    7.1.1 The Advantages of Mechanical Prediction
    7.1.2 Regression as a Mechanical Prediction Method
    7.1.3 A Focus on R Rather Than the Regression Weights
    7.2 Estimating True Validity
    7.2.1 Shrunken versus Adjusted R
    7.2.2 Estimating TRₛ
    7.2.3 Shrunken R Using Statistical Software
    7.3 Selecting Predictor Variables
    7.3.1 Stepwise Regression
    7.3.2 All Subsets Regression
    7.3.3 How Do Variable Selection Methods Perform?
    7.4 Predictor Variable Configurations
    7.4.1 Partial Redundancy (the Standard Configuration)
    7.4.2 Complete Redundancy
    7.4.3 Independence
    7.4.4 Complementarity
    7.4.5 Suppression
    7.4.6 How These Configurations Relate to the Correlation between Predictors
    7.4.7 Configurations of Three or More Predictors
    7.5 Revisiting the Value of Human Judgment
    7.6 Chapter Summary
    8. Assessing the Importance of Regressors
    8.1 What Does It Mean for a Variable to Be Important?
    8.1.1 Variable Importance in Substantive or Applied Terms
    8.1.2 Variable Importance in Statistical Terms
    8.2 Should Correlations Be Squared?
    8.2.1 Decision Theory
    8.2.2 Small Squared Correlations Can Reflect Noteworthy Effects
    8.2.3 Pearson’s r as the Ratio of a Regression Coefficient to Its Maximum Possible Value
    8.2.4 Proportional Reduction in Estimation Error
    8.2.5 When the Standard Is Perfection
    8.2.6 Summary
    8.3 Determining the Relative Importance of Regressors in a Single Regression Model
    8.3.1 The Limitations of the Standardized Regression Coefficient
    8.3.2 The Advantage of the Semipartial Correlation
    8.3.3 Some Equivalences among Measures
    8.3.4 Cohen’s f²
    8.3.5 Comparing Two Regression Coefficients in the Same Model
    8.4 Dominance Analysis
    8.4.1 Complete and Partial Dominance
    8.4.2 Example Computations
    8.4.3 Dominance Analysis Using a Regression Program
    8.5 Chapter Summary
    9. Multicategorical Regressors
    9.1 Multicategorical Variables as Sets
    9.1.1 Indicator (Dummy) Coding
    9.1.2 Constructing Indicator Variables
    9.1.3 The Reference Category
    9.1.4 Testing the Equality of Several Means
    9.1.5 Parallels with Analysis of Variance
    9.1.6 Interpreting Estimated Y and the Regression Coefficients
    9.2 Multicategorical Regressors as or with Covariates

    Biography

    Richard B. Darlington, PhD, is Emeritus Professor of Psychology at Cornell University. He is a Fellow of the American Association for the Advancement of Science and has published extensively on regression and related methods, the cultural bias of mental tests, the long-term effects of preschool programs, and, most recently, the neuroscience of brain development and evolution.

    Andrew F. Hayes, PhD, is Distinguished Research Professor in the Haskayne School of Business at the University of Calgary, Alberta, Canada. His research and writing on data analysis have been published widely. Dr. Hayes is the author of Introduction to Mediation, Moderation, and Conditional Process Analysis and Statistical Methods for Communication Science, as well as coauthor, with Richard B. Darlington, of Regression Analysis and Linear Models. He teaches data analysis, primarily at the graduate level, and frequently conducts workshops on statistical analysis throughout the world. His website is www.afhayes.com.

    "This is a thorough and accessible introduction to regression analysis as conducted and reported in the psychology research literature. In addition to the basics, there is up-to-date coverage of more advanced topics--for example, interaction effects, path analysis, and mediation. Accompanying examples of statistical software code and output enable students to quickly utilize linear models in the analysis of their own data. This is the right textbook for first-year psychology graduate students, and I plan to continue using it."--Daniel Ozer, PhD, Department of Psychology, University of California, Riverside

    "This fantastic introduction to the general linear model takes the reader from first principles through to widely used techniques such as mediation and path analysis. The clear writing makes it a pleasure to read. Students will find the book an invaluable resource. There are plenty of insights, too, for even seasoned researchers and data analysts. Instructors and students will appreciate the logical structure and chapters that break the material up into manageable chunks."--Andy Field, PhD, Professor of Child Psychopathology, University of Sussex, United Kingdom

    "If you want to get the most bang for your buck out of your statistical training, investing in learning regression and linear models is the way to go. Nonetheless, many people find linear modeling to be confusing at first. This book breaks down all walls to mastering this fundamental analysis by providing a complete guide in an approachable, conversational style. The book begins with a comprehensive introduction to linear models and continues on to cover the most useful advanced topics, such as logistic regression and mediation and path analysis. A 'must-have' desk reference for entry-level learners and long-time practitioners alike."--Elizabeth Page-Gould, PhD, Canada Research Chair in Social Psychophysiology, University of Toronto

    "A terrific addition to the regression literature. I am often asked, 'How do I determine which regressor(s) is/are the most important?' The treatment of this topic is excellent, and the authors have done a fantastic job of bringing important issues to light. The applied nature of the text and the interweaving of software syntax and output are major improvements over similar books. I like the fact that the book has software package information for SPSS, SAS, and STATA. It has a nice balance; not too technical on the statistical side, but not simply a 'how to' on the software side. I could see this book being used as the main text in our department's graduate-level regression course."--Scott C. Roesch, PhD, Department of Psychology, San Diego State University

    "This is a great textbook for students who have only basic knowledge of statistics yet would like to gain a deep conceptual understanding of regression. The book is up to date in current methods in regression, with strong examples using SAS/SPSS/STATA.”--T. Chris Oshima, PhD, Department of Educational Policy Studies, Georgia State University