1st Edition

Implementing Parallel and Distributed Systems

    426 Pages, 206 B/W Illustrations
    by Auerbach Publications

    Parallel and distributed systems (PADS) have evolved from the early days of computational science and supercomputing into a wide range of computing paradigms, each exploited to tackle specific problems or application needs; these include distributed systems, parallel computing, and cluster computing, collectively referred to as high-performance computing (HPC). Grid, Cloud, and Fog computing are the most important of these PADS paradigms, and they share many concepts in practice.

    In this era of exascale computers, many-core architectures, multi-core cluster-based supercomputers, and Cloud Computing paradigms have profoundly influenced the way computing is applied in science and academia (e.g., scientific computing and large-scale simulations). Implementing Parallel and Distributed Systems presents a PADS infrastructure known as Parvicursor that facilitates the construction of scalable, high-performance parallel and distributed systems such as HPC, Grid, and Cloud Computing platforms.

    This book covers parallel programming models, techniques, tools, development frameworks, and advanced concepts of parallel computer systems used in the construction of distributed and HPC systems. It specifies a roadmap for developing high-performance client-server applications for distributed environments and supplies step-by-step procedures for constructing a native and object-oriented C++ platform.

    FEATURES:

    • Hardware and software perspectives on parallelism
    • Parallel programming for many-core processors, computer networks, and storage systems
    • Parvicursor.NET Framework: a partial, native, and cross-platform C++ implementation of the .NET Framework
    • xThread: a distributed thread programming model that combines thread-level parallelism with distributed-memory programming models
    • xDFS: a native cross-platform framework for efficient file transfer
    • Parallel programming for HPC systems and supercomputers using the Message Passing Interface (MPI), as illustrated by the sketch after this list
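
    For readers new to MPI, the short program below illustrates the message-passing style referenced in the last bullet; it is an illustrative sketch, not code drawn from the book. Each process reports its rank, and rank 0 gathers the sum of all ranks with MPI_Reduce. It can typically be built with an MPI wrapper compiler (e.g., mpicxx) and launched with mpirun.

    #include <mpi.h>    // standard MPI C API, callable from C++
    #include <cstdio>

    int main(int argc, char* argv[])
    {
        MPI_Init(&argc, &argv);                  // start the MPI runtime

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    // this process's rank
        MPI_Comm_size(MPI_COMM_WORLD, &size);    // total number of processes

        std::printf("Hello from rank %d of %d\n", rank, size);

        // Each rank contributes its rank number; rank 0 receives the total.
        int local = rank, total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("Sum of ranks = %d\n", total);

        MPI_Finalize();                          // shut down the MPI runtime
        return 0;
    }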

    Focusing on high-speed data transmission that exploits the computing power of multi-core processors and cutting-edge system-on-chip (SoC) architectures, the book explains how to implement an energy-efficient infrastructure and examines how threads can be distributed amongst Cloud nodes. Taking a solid approach to design and implementation, this book is a complete reference for designing, implementing, and deploying these highly complicated systems.

    1. Introduction
    2. IoT and Distributed Systems
    3. Advanced Operating System Concepts in Distributed Systems Design
    4. Parallelism for Many-Core Era: Hardware and Software Perspectives
    5. Parallelisation for Many-Core Era: A Programming Perspective
    6. Storage Systems: A Parallel Programming Perspective
    7. Computer Networks - A Parallel Programming Approach
    8. Parvicursor.NET Framework: A Partial, Native and Cross-Platform C++ Implementation of the .NET Framework
    9. Parvicursor Infrastructure to Facilitate the Design of Grid and Cloud Computing and HPC Systems
    10. xDFS: A Native Cross-Platform Framework for Efficient File Transfers in Dynamic Cloud and Internet Environments
    11. Parallel Programming Languages for High-Performance Computing

    Biography

    Alireza Poshtkohi applies computer science and mathematics to tackle grand research challenges in engineering, physics, and medicine. He has worked internationally in both academia and industry in roles ranging from computer scientist, neuroscientist, university lecturer, electronics engineer, software engineer, IT consultant, and data centre architect to full-stack developer. He holds BSc and MSc degrees in electrical and electronics engineering and a PhD in computational neuroscience. To date, he has taught 17 courses (such as parallel algorithms, advanced algorithms, operating systems, and computer networks, to name just a few) in electrical and computer engineering departments at different universities. His current research interests include applied mathematics, biophysics, high-performance computing, and theoretical physics.

    M. B. Ghaznavi-Ghoushchi holds a BSc degree from Shiraz University, Shiraz, Iran (1993), and MSc and PhD degrees, both from Tarbiat Modares University (TMU), Tehran, Iran, in 1997 and 2003, respectively. During 2003–2004, he was a researcher at the TMU Institute of Information Technology. He is the founder and director of the High-Performance and Cloud Computing (HPCC) and Integrated Circuits and Systems (ICS) laboratories at Shahed University, Tehran, Iran, where he is currently an associate professor. His interests include VLSI design; low-power and energy-efficient circuits and systems; computer-aided design automation for mixed-signal design; and UML-based designs for SoC and mixed-signal systems.