Dependability benchmarking for computer systems /
edited by Karama Kanoun, Lisa Spainhower.
- 1 online resource (xviii, 362 pages) : illustrations (some color).
- Practitioners.
Includes bibliographical references and index.
Prologue: Dependability Benchmarking: A Reality or a Dream? / Karama Kanoun, Phil Koopman, Henrique Madeira, and Lisa Spainhower -- The Autonomic Computing Benchmark / Joyce Coleman, Tony Lau, Bhushan Lokhande, Peter Shum, Robert Wisniewski, and Mary Peterson Yost -- Analytical Reliability, Availability, and Serviceability Benchmarks / Richard Elling, Ira Pramanick, James Mauro, William Bryson, and Dong Tang -- System Recovery Benchmarks / Richard Elling, Ira Pramanick, James Mauro, William Bryson, and Dong Tang -- Dependability Benchmarking Using Environmental Test Tools / Cristian Constantinescu -- Dependability Benchmark for OLTP Systems / Marco Vieira, João Durães, and Henrique Madeira -- Dependability Benchmarking of Web Servers / João Durães, Marco Vieira, and Henrique Madeira -- Dependability Benchmarking of Automotive Control Systems / Juan-Carlos Ruiz, Pedro Gil, Pedro Yuste, and David de-Andrés -- Toward Evaluating the Dependability of Anomaly Detectors / Kymie M.C. Tan and Roy A. Maxion -- Vajra: Evaluating Byzantine-Fault-Tolerant Distributed Systems / Sonya J. Wierman and Priya Narasimhan -- User-Relevant Software Reliability Benchmarking / Mario R. Garzia -- Interface Robustness Testing: Experience and Lessons Learned from the Ballista Project / Philip Koopman, Kobey DeVale, and John DeVale -- Windows and Linux Robustness Benchmarks with Respect to Application Erroneous Behavior / Karama Kanoun, Yves Crouzet, Ali Kalakech, and Ana-Elena Rugina -- DeBERT: Dependability Benchmarking of Embedded Real-Time Off-the-Shelf Components for Space Applications / Diamantino Costa, Ricardo Barbosa, Ricardo Maia, and Francisco Moreira -- Benchmarking the Impact of Faulty Drivers: Application to the Linux Kernel / Arnaud Albinet, Jean Arlat, and Jean-Charles Fabre -- Benchmarking the Operating System against Faults Impacting Operating System Functions / Ravishankar Iyer, Zbigniew Kalbarczyk, and Weining Gu -- Neutron Soft Error Rate Characterization of Microprocessors / Cristian Constantinescu.
A comprehensive collection of benchmarks for measuring dependability in hardware-software systems. As computer systems have become more complex and mission-critical, it is imperative for systems engineers and researchers to have metrics for a system's dependability: its reliability, availability, and serviceability. Dependability benchmarks are useful for guiding development efforts by system providers, acquisition choices by system purchasers, and evaluations of new concepts by researchers in academia and industry. This book gathers all dependability benchmarks developed to date by industry and academia and explains the various principles and concepts of dependability benchmarking. It collects the expert knowledge of DBench, a research project funded by the European Union, and the IFIP Special Interest Group on Dependability Benchmarking, to shed light on this important area. It also provides a broad panorama of examples and recommendations for defining dependability benchmarks. Dependability Benchmarking for Computer Systems includes contributions from a credible mix of industrial and academic sources: IBM, Intel, Microsoft, Sun Microsystems, Critical Software, Carnegie Mellon University, LAAS-CNRS, Technical University of Valencia, University of Coimbra, and University of Illinois. It is an invaluable resource for engineers, researchers, system vendors, system purchasers, computer industry consultants, and system integrators.
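To illustrate the kind of metric the description refers to, the sketch below computes steady-state availability from mean time between failures (MTBF) and mean time to repair (MTTR), using the standard reliability-engineering formula A = MTBF / (MTBF + MTTR). This is a generic textbook example, not a benchmark or formula taken from the book itself.

```python
import math

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the system is up.

    Standard formula A = MTBF / (MTBF + MTTR); inputs are illustrative.
    """
    return mtbf_hours / (mtbf_hours + mttr_hours)

def nines(a: float) -> int:
    """Number of leading nines in an availability figure (0.999... -> 3)."""
    return int(math.floor(-math.log10(1.0 - a)))

# Example: a system that fails on average every 1000 hours and takes
# 1 hour to repair is available 1000/1001 of the time ("three nines").
a = availability(1000.0, 1.0)
```

Vendors often express such figures as "nines" (99.9%, 99.999%), which is why the helper above converts the raw ratio into that form.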
Electronic reproduction. [Place of publication not identified] : HathiTrust Digital Library, 2010.
Master and use copy. Digital master created according to Benchmark for Faithful Digital Reproductions of Monographs and Serials, Version 1. Digital Library Federation, December 2002. http://purl.oclc.org/DLF/benchrepro0212