This volume expands on ideas presented at a recent conference convened to identify the major strategies and most promising practices for assessing technology. The authors -- representing government, business, and university sectors -- help map the current boundaries of technology assessment by offering perspectives from computer science, cognitive and military psychology, and education. Their work explores both the use of techniques to assess technology and the use of technology to facilitate the assessment process.
The book's main purpose is to portray the state of the art in technology assessment and to provide conceptual options that help readers understand the power of technology. Technological innovation will continue to develop its own standards of practice and effectiveness. To the extent that these practices are empirically based, designers, supporters, and consumers will have better information on which to base their decisions.
Contents: Preface. H.F. O'Neil, Jr., E.L. Baker, Introduction. H.F. O'Neil, Jr., E.L. Baker, Y. Ni, A. Jacoby, K.M. Swigger, Human Benchmarking for the Evaluation of Expert Systems. J. Skrzypek, Machine Vision: Metrics for Evaluating. E.L. Baker, Human Benchmarking of Natural Language Systems. J.D. Moore, Assessment of Explanation Systems. A.M. Madni, Assessment of Enabling Technologies for Computer-Aided Concurrent Engineering (CACE). K.M. Swigger, Assessment of Software Engineering. R.J. Seidel, R.S. Perez, An Evaluation Model for Investigating the Impact of Innovative Educational Technology. W. Feurzeig, Visualization Tools for Model-Based Inquiry. H. Burns, Inventing Technology Assessments on Local Area Networks: An Estimate of the Importance of Motives and Collaborative Workplaces. J.D. Fletcher, What Networked Simulation Offers to the Assessment of Collectives.