
We consider solving unconstrained least squares and equality constrained least squares problems on distributed memory multiprocessors. First, we examine some issues related to matrix computations in general on such architectures. We then describe three different algorithms for computing an orthogonal factorization of a matrix on a multiprocessor, all of which are well suited for dense matrices. As for sparse matrices, efficient solution of problems involving large, sparse matrices on distributed memory multiprocessors calls for the use of static data structures. At the same time, it is often critical to detect the rank of a matrix during the factorization in order to obtain accurate results. We describe a rank detection strategy, based on an incremental condition estimator, that computes a factorization using a predetermined static data structure. We present experimental evidence showing that the accuracy of the rank detection algorithm is comparable to that of column pivoting and of a recent procedure by Bischof. We further demonstrate that the algorithm is well suited to parallel sparse matrix factorization by showing good speedups on a hypercube with up to 128 processors. We use this algorithm to detect the rank of the constraint matrix when solving the equality constrained least squares problem. We solve the equality constrained least squares problem by the weighting approach, applying two iterations of a modified deferred correction technique to improve the accuracy of the original solution. (KR).
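The weighting approach mentioned in the abstract can be sketched in a few lines: stack the constraint equations, scaled by a large weight, on top of the data equations and solve a single unconstrained least squares problem. This is a dense NumPy sketch for illustration only; the function name `lse_weighting` and the weight parameter `tau` are our own, and the paper's parallel sparse factorization, rank detection, and deferred correction refinement are not reproduced here.

```python
import numpy as np

def lse_weighting(A, b, B, d, tau=1e6):
    """Approximately solve  min ||A x - b||_2  subject to  B x = d
    by the weighting approach: scale the constraints by a large
    weight tau, stack them above the data equations, and solve one
    unconstrained least squares problem.  (Illustrative dense sketch;
    tau and the function name are assumptions, not from the paper.)"""
    M = np.vstack([tau * B, A])                 # weighted constraints on top
    rhs = np.concatenate([tau * d, b])
    x, *_ = np.linalg.lstsq(M, rhs, rcond=None) # QR/SVD-based dense solve
    return x
```

For larger `tau` the constraint residual `B x - d` shrinks, at the cost of worse conditioning of the stacked matrix; the deferred correction iterations the abstract refers to are one way to recover accuracy without pushing `tau` to extremes.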
Proceedings -- Parallel Computing.
The past year has seen significant progress in algorithms and software for the solution of large-scale sparse systems of equations, least-squares problems, and optimization problems on advanced distributed-memory parallel machines. The progress made to date is described in this paper.
The method of least squares is the principal tool for reducing the influence of errors when fitting models to given observations.
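As a minimal illustration of the method, fitting a straight line to observations reduces to an overdetermined linear system solved in the least squares sense. This NumPy sketch uses invented data chosen to lie exactly on the line y = 1 + 2t:

```python
import numpy as np

# Observations (invented for the example): they satisfy y = 1 + 2t exactly.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

# Design matrix for the model y ≈ c0 + c1 * t.
A = np.column_stack([np.ones_like(t), t])

# Solve min ||A c - y||_2; lstsq uses an SVD-based least squares solve.
coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
# coef → [1.0, 2.0]
```

With noisy observations the same call returns the coefficients minimizing the sum of squared residuals, which is exactly the error-reducing property the blurb describes.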
This paper describes an efficient approach for solving sparse linear systems by a direct method on a shared-memory vector multiprocessor. The direct method is divided into three steps: LU factorization, forward substitution, and backward substitution. When the linear system is large, LU factorization is by far the most time-consuming step, so concurrency and vectorization are exploited to reduce its execution time. Parallelism in the LU factorization is obtained by partitioning the matrix with multilevel node-tearing techniques. The partitioned matrix is reordered into NBBD (Nested Bordered-Block Diagonal) form. A nested-block data structure is used to store the sparse matrix, enabling the use of vectorization as well as multiprocessing to achieve high performance. This approach is suitable for many applications that require the repeated direct solution of sparse linear systems with identical matrix structure, such as circuit simulation. The approach has been implemented in a program that runs on an Alliant FX/8 shared-memory vector multiprocessor. Speedups in execution time, compared with conventional serial computation without vectorization, reach 20 using eight processors. Keywords: Sparse matrix, Parallel solution, Multiprocessors, Vectorization, Linear systems, Nodes, Computers, Theses, Parallel orientation. (CP).
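The three-step direct method described above can be sketched for the dense, unpivoted case: factor once, then reuse the factors with forward and backward substitution for each right-hand side. This is an illustrative NumPy sketch under our own function names; the paper's sparse NBBD ordering, node tearing, vectorization, and multiprocessing are not reproduced here, and production code would use partial pivoting.

```python
import numpy as np

def lu_factor(A):
    """Doolittle LU factorization without pivoting (dense sketch only;
    the paper factors a sparse matrix reordered into NBBD form)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]     # multiplier
            U[i, k:] -= L[i, k] * U[k, k:]  # eliminate row i in column k
    return L, U

def forward_sub(L, b):
    """Solve L y = b for unit lower triangular L (step 2)."""
    y = np.zeros(len(b))
    for i in range(len(b)):
        y[i] = b[i] - L[i, :i] @ y[:i]
    return y

def backward_sub(U, y):
    """Solve U x = y for upper triangular U (step 3)."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```

Factoring once and reusing `L` and `U` across many right-hand sides is what makes this organization attractive for circuit simulation, where the matrix structure (and often the matrix itself) is fixed while the right-hand side changes.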
Mathematics of Computing -- Parallelism.