Driven by the need for greater realism in models of human and organizational behavior in military simulations, interest has grown in research on integrative models of human performance, both within the cognitive science community generally and within the defense and aerospace industries in particular. This book documents the accomplishments and lessons learned in a multi-year project examining the ability of a range of integrated cognitive modeling architectures to explain and predict human behavior in a common task environment requiring multi-tasking and concept learning.
This unique project, called the Agent-Based Modeling and Behavior Representation (AMBR) Model Comparison, involved a series of human performance model evaluations in which the processes and performance levels of computational cognitive models were compared with one another and with those of human operators performing identical tasks. In addition to quantitative data comparing model performance against real human performance, the book presents a qualitative discussion of the practical and scientific considerations that arise in attempting this kind of model development and validation effort.
The primary audiences for this book are people in academia, industry, and the military who are interested in explaining and predicting complex human behavior using computational cognitive modeling approaches. The book should be of particular interest to individuals in any sector working in Psychology, Cognitive Science, Artificial Intelligence, Industrial Engineering, Systems Engineering, Human Factors, Ergonomics, and Operations Research. Any technically or scientifically oriented professional or student should find the material fully accessible without an extensive mathematical background.
Contents: Preface.
Part I: Overview, Experiments, and Software.
K.A. Gluck, R.W. Pew, M.J. Young, Background, Structure, and Preview of the Model Comparison.
Y.J. Tenney, D.E. Diller, S. Deutsch, K. Godfrey, The AMBR Experiments: Methodology and Human Benchmark Results.
S. Deutsch, D.E. Diller, B. Benyo, L. Feinerman, The Simulation Environment for the AMBR Experiments.
Part II: Models of Multitasking and Category Learning.
C. Lebiere, Constrained Functionality: Application of the ACT-R Cognitive Architecture to the AMBR Modeling Comparison.
W. Zachary, J. Ryder, J. Stokes, F. Glenn, J-C. Le Mentec, T. Santarelli, A COGNET/iGEN Cognitive Model That Mimics Human Performance and Learning in a Simulated Work Environment.
R.G. Eggleston, K.L. McCreight, M.J. Young, Distributed Cognition and Situated Behavior.
R.S. Chong, R.E. Wray, Inheriting Constraint in Hybrid Cognitive Architectures: Applying the EASE Architecture to Performance and Learning in a Simplified Air Traffic Control Task.
Part III: Conclusions, Lessons Learned, and Implications.
D.E. Diller, K.A. Gluck, Y.J. Tenney, K. Godfrey, Comparison, Convergence, and Divergence in Models of Multitasking and Category Learning, and in the Architectures Used to Create Them.
B.C. Love, In Vivo or In Vitro: Cognitive Architectures and Task-Specific Models.
G.E. Campbell, A.E. Bolton, HBR Validation: Integrating Lessons Learned From Multiple Academic Disciplines, Applied Communities, and the AMBR Project.
R.W. Pew, K.A. Gluck, S. Deutsch, Accomplishments, Challenges, and Future Directions for Human Behavior Representation.