This book presents the most important parallel algorithms for the solution of linear systems. Despite the evolution and significance of the field of parallel solution of linear systems, no book has been completely dedicated to the subject. People interested in the themes covered by this book belong to two different communities, numerical linear algebra and theoretical computer science, and this is the first effort to produce a useful tool for both. The book is organized as follows: after introducing the general features of parallel algorithms and the most important models of parallel computation, the authors analyze the complexity of solving linear systems in the circuit, PRAM, distributed, and VLSI models. The approach covers both the general case (dense linear systems without structure) and many important special cases (banded, sparse, Toeplitz, and circulant linear systems).
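To give a feel for the kind of complexity analysis such models support, here is a toy sketch (ours, not the book's) of the pairwise combining pattern a PRAM would use to sum n values: every pair-sum within a level is independent, so n values are reduced in O(log n) parallel steps with O(n) total work.

```python
def tree_sum(values):
    """Pairwise (tree) reduction: the combining pattern an EREW PRAM
    could run in O(log n) time with about n/2 processors.  Executed
    here sequentially, but each while-iteration corresponds to one
    parallel step, since all pair-sums in a level are independent."""
    xs = list(values)
    steps = 0
    while len(xs) > 1:
        level = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:                  # odd element carries over
            level.append(xs[-1])
        xs = level
        steps += 1
    return xs[0], steps

print(tree_sum(range(16)))               # (120, 4): 4 parallel steps
```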
Although the origins of parallel computing go back to the last century, it was only in the 1970s that parallel and vector computers became available to the scientific community. The first of these machines (the 64-processor Illiac IV and the vector computers built by Texas Instruments, Control Data Corporation, and then Cray Research Corporation) had a somewhat limited impact. They were few in number and available mostly to workers in a few government laboratories. By now, however, the trickle has become a flood. There are over 200 large-scale vector computers now installed, not only in government laboratories but also in universities and in an increasing diversity of industries. Moreover, the National Science Foundation's Supercomputing Centers have made large vector computers widely available to the academic community. In addition, smaller, very cost-effective vector computers are being manufactured by a number of companies. Parallelism in computers has also progressed rapidly. The largest supercomputers now consist of several vector processors working in parallel. Although the number of processors in such machines is still relatively small (up to 8), it is expected that an increasing number of processors will be added in the near future (to a total of 16 or 32). Moreover, there are a myriad of research projects to build machines with hundreds, thousands, or even more processors. Indeed, several companies are now selling parallel machines, some with as many as hundreds, or even tens of thousands, of processors.
"The suitability of different parallel architectures for solving randomly sparse linear systems is discussed. Based on the complexity of task scheduling, one parallel architecture, based on a broadcast bus, is presented and analyzed" -- abstract.
Since the first edition of this book was published in 1996, tremendous progress has been made in the scientific and engineering disciplines regarding the use of iterative methods for linear systems. The size and complexity of the new generation of linear and nonlinear systems arising in typical applications have grown. Solving the three-dimensional models of these problems using direct solvers is no longer effective. At the same time, parallel computing has penetrated these application areas as it became less expensive and standardized. Iterative methods are easier than direct solvers to implement on parallel computers, but they require approaches and solution algorithms that are different from classical methods. Iterative Methods for Sparse Linear Systems, Second Edition gives an in-depth, up-to-date view of practical algorithms for solving large-scale linear systems of equations. These equations can number in the millions and are sparse in the sense that each involves only a small number of unknowns. The methods described are iterative, i.e., they provide sequences of approximations that converge to the solution.
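As a concrete, if deliberately tiny, illustration of such a sequence of approximations, the sketch below implements the classical Jacobi iteration on a dense matrix. The solvers the book treats store the matrix in compressed sparse form and parallelize the matrix-vector product, but the iteration structure is the same; the function name and test system here are ours.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration x_{k+1} = D^{-1} (b - (A - D) x_k), where D is
    the diagonal of A.  Each component update is independent, which is
    what makes the method attractive on parallel machines."""
    D = np.diag(A)
    R = A - np.diagflat(D)               # off-diagonal part of A
    x = np.zeros_like(b)
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Diagonally dominant system, for which Jacobi is guaranteed to converge
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = jacobi(A, b)
print(iters, np.allclose(A @ x, b))
```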
This volume reviews, in the context of partial differential equations, algorithm development that has been specifically aimed at computers that exhibit some form of parallelism. Emphasis is on the solution of PDEs because these are typically the problems that generate high computational demands. The authors discuss architectural features of these computers insofar as they influence algorithm performance, and provide insight into algorithm characteristics that allow effective use of hardware.
This book deals with numerical methods for solving large sparse linear systems of equations, particularly those arising from the discretization of partial differential equations. It covers both direct and iterative methods. The direct methods considered are variants of Gaussian elimination and fast solvers for separable partial differential equations in rectangular domains. The book reviews the classical iterative methods, such as the Jacobi, Gauss-Seidel, and alternating-direction algorithms. Particular emphasis is put on the conjugate gradient method, as well as conjugate gradient-like methods for nonsymmetric problems. The most efficient preconditioners used to speed up convergence are studied. A chapter is devoted to the multigrid method, and the book ends with domain decomposition algorithms that are well suited for solving linear systems on parallel computers.
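The conjugate gradient method at the center of that discussion fits in a few lines; the following is a minimal unpreconditioned version for symmetric positive definite systems (a preconditioner would be applied to the residual before the search-direction update), written as an illustration rather than a library-quality solver.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Unpreconditioned CG for symmetric positive definite A.  In exact
    arithmetic it converges in at most n steps; in practice it is run
    with a preconditioner and stopped at a residual tolerance."""
    x = np.zeros_like(b)
    r = b - A @ x                        # residual
    p = r.copy()                         # search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))          # [1/11, 7/11]
```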
This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix Functions and Characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of parallel iterative linear system solvers with emphasis on scalable preconditioners, (b) parallel schemes for obtaining a few of the extreme eigenpairs or those contained in a given interval in the spectrum of a standard or generalized symmetric eigenvalue problem, and (c) parallel methods for computing a few of the extreme singular triplets. Part IV focuses on the development of parallel algorithms for matrix functions and special characteristics such as the matrix pseudospectrum and the determinant. The book also reviews the theoretical and practical background necessary when designing these algorithms and includes an extensive bibliography that will be useful to researchers and students alike. The book brings together many existing algorithms for the fundamental matrix computations that have a proven track record of efficient implementation in terms of data locality and data transfer on state-of-the-art systems, as well as several algorithms that are presented for the first time, focusing on the opportunities for parallelism and algorithm robustness.
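For one of the special classes named above, the structure can be exploited wholesale: a circulant matrix is diagonalized by the discrete Fourier transform, so a circulant system solves in O(n log n) work, and the per-frequency divisions are mutually independent, hence trivially parallel. A minimal sketch of this standard fact (the helper name is ours):

```python
import numpy as np

def solve_circulant(c, b):
    """Solve C x = b where C is circulant with first column c.  Since
    C x is the circular convolution of c with x, the FFT reduces the
    solve to n independent scalar divisions, one per Fourier mode."""
    return np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real

# Check against an explicitly assembled circulant matrix
n = 8
c = np.zeros(n); c[0], c[1], c[-1] = 4.0, -1.0, -1.0
C = np.column_stack([np.roll(c, k) for k in range(n)])
b = np.random.rand(n)
print(np.allclose(C @ solve_circulant(c, b), b))   # True
```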
Different algorithms, based on Gaussian elimination, for the solution of dense linear systems of equations are discussed for a multiprocessor ring. The number of processors is assumed not to exceed the problem size. A fairly general model for data transfer is proposed, and the algorithms are analysed with respect to their requirements of arithmetic as well as communication time. This paper makes no claim to being either exhaustive or complete. Its objective is to compare a variety of algorithms, which are fairly reasonable to program and to analyse, for the solution of a single problem on a certain class of parallel architectures, thereby leading to a more realistic approach to future algorithm development on multiprocessor machines.
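The load-balancing idea behind such ring algorithms can be mimicked without any message passing: assign matrix rows cyclically to p virtual processors and count the arithmetic each performs. The sketch below is our illustration of that wrap mapping, not a reconstruction of the paper's algorithms; a real ring implementation would pipeline each pivot row from processor to processor.

```python
import numpy as np

def ge_row_cyclic(A, b, p=4):
    """Gaussian elimination without pivoting, with row i owned by
    virtual processor i % p.  Only per-processor arithmetic is tracked,
    to show that the cyclic mapping stays balanced as the active
    submatrix shrinks."""
    A, b = A.astype(float), b.astype(float)
    n = len(b)
    flops = np.zeros(p, dtype=int)       # approximate flop counts
    for k in range(n - 1):               # pivot row k is "broadcast"
        for i in range(k + 1, n):        # row owners update in parallel
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
            flops[i % p] += 2 * (n - k) + 3
    x = np.zeros(n)                      # serial back substitution
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x, flops

A = np.random.rand(16, 16) + 16 * np.eye(16)   # safe without pivoting
b = np.random.rand(16)
x, flops = ge_row_cyclic(A, b)
print(np.allclose(A @ x, b), flops)            # True, near-equal counts
```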
The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm, which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. Here, the mapping of BCR onto distributed memory architectures is discussed, and its complexity is compared with that of other approaches, including the alternating-direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR. -- Gallopoulos, E. and Saad, Y.
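The core of cyclic reduction is easy to exhibit on a scalar tridiagonal system; the block case replaces the scalar divisions with small block solves. Each level eliminates the odd-indexed unknowns, and all of those eliminations are independent, so the levels form the O(log n) critical path that makes the method attractive in parallel. The recursive sketch below is our scalar illustration, not the paper's BCR.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system with sub-, main-, and super-diagonals
    a, b, c (a[0] = c[-1] = 0) and right-hand side d.  Each level
    eliminates the odd-indexed unknowns -- all independently, i.e. in
    one parallel step -- and recurses on the half-size even system."""
    n = len(b)
    if n == 1:
        return np.array([d[0] / b[0]])
    ev = np.arange(0, n, 2)              # unknowns kept at the next level
    na, nb, nc, nd = (np.zeros(len(ev)) for _ in range(4))
    for k, i in enumerate(ev):
        alpha = a[i] / b[i - 1] if i > 0 else 0.0
        gamma = c[i] / b[i + 1] if i < n - 1 else 0.0
        na[k] = -alpha * a[i - 1] if i > 0 else 0.0
        nc[k] = -gamma * c[i + 1] if i < n - 1 else 0.0
        nb[k] = b[i] - (alpha * c[i - 1] if i > 0 else 0.0) \
                     - (gamma * a[i + 1] if i < n - 1 else 0.0)
        nd[k] = d[i] - (alpha * d[i - 1] if i > 0 else 0.0) \
                     - (gamma * d[i + 1] if i < n - 1 else 0.0)
    x = np.zeros(n)
    x[ev] = cyclic_reduction(na, nb, nc, nd)
    for i in range(1, n, 2):             # back-substitute odd unknowns
        right = c[i] * x[i + 1] if i < n - 1 else 0.0
        x[i] = (d[i] - a[i] * x[i - 1] - right) / b[i]
    return x

# 1-D Poisson matrix tridiag(-1, 2, -1): recover a known solution
n = 15
a = -np.ones(n); a[0] = 0.0
c = -np.ones(n); c[-1] = 0.0
b = 2.0 * np.ones(n)
x_true = np.random.rand(n)
d = a * np.roll(x_true, 1) + b * x_true + c * np.roll(x_true, -1)
print(np.allclose(cyclic_reduction(a, b, c, d), x_true))   # True
```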