Parallelism Patterns and Performance in Iterative MRI Reconstruction

Magnetic Resonance Imaging (MRI) is a non-invasive and highly flexible medical imaging modality that does not expose patients to ionizing radiation. MR image acquisitions can be designed by varying a large number of contrast-generation parameters, and many clinical diagnostic applications exist. However, imaging speed is a fundamental limitation to many potential applications. Traditionally, MRI data have been collected at Nyquist sampling rates to produce alias-free images, but many recent scan acceleration techniques produce sub-Nyquist samplings. For example, Parallel Imaging is a well-established acceleration technique that receives the MR signal simultaneously from multiple receive channels. Compressed sensing leverages randomized undersampling and the compressibility of medical images (e.g. via Wavelet transforms or Total Variation) to allow more aggressive undersampling. Reconstruction of clinically viable images from these highly accelerated acquisitions requires powerful, usually iterative algorithms. Non-Cartesian pulse sequences that perform non-equispaced sampling of k-space further increase the computational intensity of reconstruction, as they preclude direct use of the Fast Fourier Transform (FFT). Most iterative algorithms can be understood by viewing MRI reconstruction as an inverse problem, in which measurements of unobservable parameters are made via an observation function that models the acquisition process. Traditional direct reconstruction methods attempt to invert this observation function, whereas iterative methods require its repeated computation along with computation of its adjoint. As a result, naïve sequential implementations of iterative reconstructions produce infeasibly long runtimes. Their computational intensity is a substantial barrier to their adoption in clinical MRI practice.
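The inverse-problem view can be made concrete with a minimal sketch (the names and the toy dense linear model here are illustrative, not the thesis's implementation): an iterative scheme such as Landweber iteration repeatedly applies the observation model and its adjoint, which is precisely the repeated computation described above.

```python
import numpy as np

def landweber(A, y, step, n_iters):
    """Landweber iteration for the linear inverse problem y = A x.

    Each iteration evaluates the observation model A (forward) and its
    adjoint A^H once -- the repeated computations that dominate the cost
    of iterative MRI reconstruction.
    """
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iters):
        residual = y - A @ x                     # forward model
        x = x + step * (A.conj().T @ residual)   # adjoint / gradient step
    return x

# Toy example: recover x from noiseless measurements of a random model.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
x_true = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = A @ x_true
x_hat = landweber(A, y, step=1.0 / np.linalg.norm(A, 2) ** 2, n_iters=5000)
```

In a real reconstruction, the matrix-vector products `A @ x` and `A.conj().T @ residual` are replaced by FFT- or nuFFT-based operators, which is why those operators dominate the runtime.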
A powerful new family of massively parallel microprocessor architectures has emerged simultaneously with the development of these new reconstruction techniques. Due to fundamental limitations in silicon fabrication technology, sequential microprocessors reached the power-dissipation limits of commodity cooling systems in the early 2000s. The techniques used by processor architects to extract instruction-level parallelism from sequential programs face ever-diminishing returns, and further performance improvement of sequential processors via increased clock frequency has become impractical. However, circuit density and process feature sizes still improve at Moore's Law rates. With every generation of silicon fabrication technology, a larger number of transistors is available to system architects. Consequently, all microprocessor vendors now exclusively produce multi-core parallel processors. Additionally, the move towards on-chip parallelism has allowed processor architects a larger degree of freedom in the design of multi-threaded pipelines and memory hierarchies. Many of the inefficiencies inherent in superscalar out-of-order design are being replaced by the high efficiency afforded by throughput-oriented designs. The move towards on-chip parallelism has resulted in a vast increase in the amount of computational power available in commodity systems. However, this move has also shifted the burden of computational performance towards software developers. In particular, the highly efficient implementation of MRI reconstructions on these systems requires manual parallelization and optimization. Thus, while ubiquitous parallelism provides a solution to the computational intensity of iterative MRI reconstructions, it also poses a substantial software productivity challenge. In this thesis, we propose that a principled approach to the design and implementation of reconstruction algorithms can ameliorate this software productivity issue.
We draw much inspiration from developments in the field of computational science, which has faced similar parallelization and software development challenges for several decades. We propose a Software Architecture for the implementation of reconstruction algorithms, which composes two Design Patterns that originated in the domain of massively parallel scientific computing. This architecture allows the most computationally intensive operations performed by MRI reconstructions to be implemented as re-usable libraries. Thus the software development effort required to produce highly efficient and heavily optimized implementations of these operations can be amortized over many different reconstruction systems. Additionally, the architecture prescribes several different strategies for mapping reconstruction algorithms onto parallel processors, easing the burden of parallelization. We describe the implementation of a complete reconstruction, $\ell_1$-SPIRiT, according to these strategies. $\ell_1$-SPIRiT is a general reconstruction framework that seamlessly integrates all three of the scan acceleration techniques mentioned above. Our implementation achieves substantial performance improvement over baseline, and has enabled substantial clinical evaluation of its approach to combining Parallel Imaging and Compressive Sensing. Additionally, we include an in-depth description of the performance optimization of the non-uniform Fast Fourier Transform (nuFFT), an operation used in all non-Cartesian reconstructions. This discussion complements our description of $\ell_1$-SPIRiT, which we have implemented only for Cartesian samplings.
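As a rough illustration of what a nuFFT adjoint involves (this toy gridding routine is our own sketch with a Gaussian kernel; production implementations add grid oversampling, Kaiser-Bessel kernels, and deapodization, and are the subject of the optimization work described above):

```python
import numpy as np

def gridding_adjoint(kx, ky, data, n, width=2.0, beta=8.0):
    """Toy gridding step of a nuFFT adjoint (a sketch, not an optimized kernel).

    Non-Cartesian samples at fractional grid coordinates (kx, ky), with the
    DC sample at (0, 0), are spread onto an n x n Cartesian grid with a
    truncated Gaussian kernel; a standard inverse FFT then produces an
    (unapodized) image.
    """
    grid = np.zeros((n, n), dtype=complex)
    for x, y, d in zip(kx, ky, data):
        for gx in range(int(np.floor(x - width)), int(np.ceil(x + width)) + 1):
            for gy in range(int(np.floor(y - width)), int(np.ceil(y + width)) + 1):
                w = np.exp(-beta * ((gx - x) ** 2 + (gy - y) ** 2))
                grid[gx % n, gy % n] += w * d    # wrap around (periodic k-space)
    return np.fft.ifft2(grid)

# A single DC sample yields a smooth low-frequency image.
img = gridding_adjoint([0.0], [0.0], [1.0 + 0j], 16)
```

The scattered, data-dependent memory accesses in the inner loops are what make gridding hard to parallelize efficiently, unlike the regular access pattern of a Cartesian FFT.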
Programming Massively Parallel Processors: A Hands-on Approach shows students and professionals alike the basic concepts of parallel programming and GPU architecture. Concise, intuitive, and practical, it is based on years of road-testing in the authors' own parallel computing courses. Various techniques for constructing and optimizing parallel programs are explored in detail, while case studies demonstrate the development process, which begins with computational thinking and ends with effective and efficient parallel programs. The new edition includes updated coverage of CUDA, including newer libraries such as cuDNN. New chapters on frequently used parallel patterns have been added, and case studies have been updated to reflect current industry practices.
- Parallel Patterns: Introduces new chapters on frequently used parallel patterns (stencil, reduction, sorting) and major improvements to previous chapters (convolution, histogram, sparse matrices, graph traversal, deep learning)
- Ampere: Includes a new chapter focused on GPU architecture and draws examples from recent architecture generations, including Ampere
- Systematic Approach: Incorporates major improvements to abstract discussions of problem decomposition strategies and performance considerations, with a new optimization checklist
This dissertation, "Advances in Parallel Imaging Reconstruction Techniques" by Peng, Qu, 瞿蓬, was obtained from The University of Hong Kong (Pokfulam, Hong Kong) and is being sold pursuant to Creative Commons: Attribution 3.0 Hong Kong License. The content of this dissertation has not been altered in any way. We have altered the formatting in order to facilitate the ease of printing and reading of the dissertation. All rights not granted by the above license are retained by the author. Abstract: Abstract of thesis entitled Advances in Parallel Imaging Reconstruction Techniques submitted by Qu Peng for the degree of Doctor of Philosophy at The University of Hong Kong in February 2006 In recent years, a new approach to magnetic resonance imaging (MRI), known as "parallel imaging," has revolutionized the field of fast MRI. By using sensitivity information from an RF coil array to perform some of the spatial encoding which is traditionally accomplished by magnetic field gradient, parallel imaging techniques allow reduction of phase encoding steps and consequently decrease the scan time. This thesis presents the author''s investigations in the reconstruction techniques of parallel MRI. After reviewing the conventional methods, such as the image-domain-based sensitivity encoding (SENSE), the k-space-based simultaneous acquisition of spatial harmonics (SMASH), generalized auto-calibrating partially parallel acquisition (GRAPPA), and the iterative SENSE method which is applicable to arbitrary k-space trajectories, the author proposes several advanced reconstruction strategies to enhance the performance of parallel imaging in terms of signal-to-noise (SNR), the power of aliasing artifacts, and computational efficiency. First, the conventional GRAPPA technique is extended in that the data interpolation scheme is tailored and optimized for each specific reconstruction. 
This novel approach extracts a subset of signal points corresponding to the most linearly independent base vectors in the coefficient matrix for the fit procedure, effectively preventing the incorporation of redundant signals that contribute little to the exactness of the fit while adding noise to the reconstruction. Phantom and in vivo MRI experiments demonstrate that this subset selection strategy can reduce residual artifacts in GRAPPA reconstruction. Second, a novel discrepancy-based method for regularization parameter choice is introduced into GRAPPA reconstruction. By this strategy, adaptive regularization in GRAPPA can be realized which automatically chooses nearly optimal parameters for the reconstructions, so as to achieve a good compromise between SNR and artifacts. It is demonstrated by MRI experiments that the discrepancy-based parameter choice strategy significantly outperforms those based on the L-curve or on a fixed singular value threshold. Third, the convergence behavior of the iterative non-Cartesian SENSE reconstruction is analyzed, and two different strategies are proposed to make reconstructions more stable and robust. One idea is to stop the iteration process in due time so that artifacts and SNR are well balanced and fine overall image quality is achieved; as an alternative, the inner-regularization method, in combination with the Lanczos iteration process, is introduced into non-Cartesian SENSE to mitigate the ill-conditioning effect and improve the convergence behavior. Finally, a novel multi-resolution successive iteration (MRSI) algorithm for non-Cartesian parallel imaging is proposed. The conjugate gradient (CG) iteration is performed in several successive phases with increasing resolution. It is demonstrated by spiral MRI results that the total reconstruction time can be reduced by over 30% by using low resolution in the initial stages of iteration.
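The CG iteration at the heart of such SENSE-type reconstructions can be sketched as follows (an illustrative implementation on a toy dense model, not the thesis's code; stopping early via `n_iters` plays the implicit-regularization role described above, since early iterations recover the well-conditioned components and later ones amplify noise):

```python
import numpy as np

def cg_normal_equations(A, y, n_iters=10, tol=1e-12):
    """Conjugate gradient on the normal equations A^H A x = A^H y.

    A sketch of the CG core of iterative SENSE-type reconstruction;
    in practice A^H A is applied via FFTs and coil sensitivities
    rather than formed explicitly.
    """
    b = A.conj().T @ y
    AhA = A.conj().T @ A
    x = np.zeros_like(b)
    r = b.copy()            # residual of the normal equations
    p = r.copy()            # search direction
    rs = np.vdot(r, r).real
    for _ in range(n_iters):
        Ap = AhA @ p
        alpha = rs / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < tol:    # early stopping acts as regularization
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy usage: a well-conditioned 6x4 complex system converges quickly.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))
x_true = rng.standard_normal(4) + 1j * rng.standard_normal(4)
x_hat = cg_normal_equations(A, A @ x_true, n_iters=20)
```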
In sum, the author describes several developments in image reconstruction for sensitivity-encoded MRI. The great potential of parallel imaging in modern applications can be further enhanced.
Regularization becomes an integral part of the reconstruction process in accelerated parallel magnetic resonance imaging (pMRI) due to the need for utilizing the most discriminative information, in the form of parsimonious models, to generate high quality images with reduced noise and artifacts. Apart from providing a detailed overview and implementation details of various pMRI reconstruction methods, Regularized image reconstruction in parallel MRI with MATLAB examples interprets regularized image reconstruction in pMRI as a means to effectively control the balance between two specific types of error signals, to either improve the accuracy in estimation of missing samples or speed up the estimation process. The first type corresponds to the modeling error between acquired samples and their estimated values. The second type arises due to the perturbation of k-space values in autocalibration methods or sparse approximation in the compressed sensing based reconstruction model. Features:
- Provides details for optimizing regularization parameters in each type of reconstruction.
- Presents a comparison of regularization approaches for each type of pMRI reconstruction.
- Includes discussion of case studies using clinically acquired data.
- Provides MATLAB codes for each reconstruction type.
- Contains a method-wise description of adapting regularization to optimize speed and accuracy.
This book serves as reference material for researchers and students involved in the development of pMRI reconstruction methods. Industry practitioners concerned with how to apply regularization in pMRI reconstruction will find this book most useful.
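The balance between the two error types can be illustrated with a minimal Tikhonov-regularized solver (a generic Python sketch rather than the book's MATLAB examples; the function name and toy problem are our own):

```python
import numpy as np

def tikhonov_recon(A, y, lam):
    """Tikhonov-regularized solution of y = A x (illustrative).

    Solves (A^H A + lam I) x = A^H y.  lam controls the balance: lam = 0
    reproduces the least-squares fit (minimal modeling error), while
    larger lam suppresses the amplification of perturbations in the
    measured k-space values at the cost of some bias.
    """
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ y)

# Toy usage on a small real-valued model.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
y = rng.standard_normal(6)
x_ls = tikhonov_recon(A, y, 0.0)    # plain least squares
x_reg = tikhonov_recon(A, y, 10.0)  # heavily regularized, smaller norm
```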
GPU Computing Gems Emerald Edition offers practical techniques in parallel computing using graphics processing units (GPUs) to enhance scientific research. The first volume in Morgan Kaufmann's Applications of GPU Computing Series, this book offers the latest insights and research in computer vision, electronic design automation, and emerging data-intensive applications. It also covers life sciences, medical imaging, ray tracing and rendering, scientific simulation, signal and audio processing, statistical modeling, and video and image processing. This book is intended to help those who are facing the challenge of programming systems to effectively use GPUs to achieve efficiency and performance goals. It offers developers a window into diverse application areas, and the opportunity to gain insights from others' algorithm work that they may apply to their own projects. Readers will learn from the leading researchers in parallel programming, who have gathered their solutions and experience in one volume under the guidance of expert area editors. Each chapter is written to be accessible to researchers from other domains, allowing knowledge to cross-pollinate across the GPU spectrum. Many examples leverage NVIDIA's CUDA parallel computing architecture, the most widely adopted massively parallel programming solution. The insights and ideas as well as practical hands-on skills in the book can be immediately put to use. Computer programmers, software engineers, hardware engineers, and computer science students will find this volume a helpful resource. For useful source codes discussed throughout the book, the editors invite readers to the following website: ...
- Covers the breadth of industry from scientific simulation and electronic design automation to audio/video processing, medical imaging, computer vision, and more
- Many examples leverage NVIDIA's CUDA parallel computing architecture, the most widely adopted massively parallel programming solution
- Offers insights and ideas as well as practical "hands-on" skills you can immediately put to use
The proposed method was validated on 25 MR data sets from a GE MR scanner. Six image quality metrics were used to evaluate performance. RMSE, normalized mutual information (NMI), and joint entropy (JE) relative to a reference image from a separate body-coil scan were used to verify the fidelity of the reconstruction to the reference. Region-of-interest (ROI) signal-to-noise ratio (SNR), two-data SNR, and background noise were used to validate the quality of the reconstruction. The proposed method showed higher ROI SNR and two-data SNR and lower background noise than the conventional method, with RMSE, NMI, and JE comparable to the reference image, at a reduced computational resource requirement.
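For reference, the fidelity metrics named above can be computed along these lines (an illustrative NumPy sketch with a hypothetical bin count; the study's exact implementation details are not given):

```python
import numpy as np

def rmse(img, ref):
    """Root-mean-square error between an image and a reference."""
    return np.sqrt(np.mean((img - ref) ** 2))

def joint_entropy(img, ref, bins=32):
    """Joint entropy (bits) of the intensity co-occurrence histogram."""
    h, _, _ = np.histogram2d(img.ravel(), ref.ravel(), bins=bins)
    p = h / h.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def normalized_mutual_information(img, ref, bins=32):
    """NMI = (H(img) + H(ref)) / H(img, ref); higher means more shared info."""
    h, _, _ = np.histogram2d(img.ravel(), ref.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    return (hx + hy) / joint_entropy(img, ref, bins)

# Sanity check: an image compared with itself has zero RMSE and NMI = 2.
x = np.tile(np.arange(4.0), 4)
```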
Programming Massively Parallel Processors: A Hands-on Approach, Second Edition, teaches students how to program massively parallel processors. It offers a detailed discussion of various techniques for constructing parallel programs. Case studies are used to demonstrate the development process, which begins with computational thinking and ends with effective and efficient parallel programs. This guide shows both student and professional alike the basic concepts of parallel programming and GPU architecture. Topics of performance, floating-point format, parallel patterns, and dynamic parallelism are covered in depth. This revised edition contains more parallel programming examples, commonly-used libraries such as Thrust, and explanations of the latest tools. It also provides new coverage of CUDA 5.0, improved performance, enhanced development tools, increased hardware support, and more; increased coverage of related technology, OpenCL and new material on algorithm patterns, GPU clusters, host programming, and data parallelism; and two new case studies (on MRI reconstruction and molecular visualization) that explore the latest applications of CUDA and GPUs for scientific research and high-performance computing. This book should be a valuable resource for advanced students, software engineers, programmers, and hardware engineers. - New coverage of CUDA 5.0, improved performance, enhanced development tools, increased hardware support, and more - Increased coverage of related technology, OpenCL and new material on algorithm patterns, GPU clusters, host programming, and data parallelism - Two new case studies (on MRI reconstruction and molecular visualization) explore the latest applications of CUDA and GPUs for scientific research and high-performance computing
Magnetic Resonance Imaging (MRI) is a powerful medical imaging modality used as a diagnostic tool, and the number of imaging examinations is rising steadily. Trends from 2000 to 2016 show that roughly 16 to 21 million patients were enrolled annually in various US health care systems, and the number of MRI examinations increased from 62 to 139 per 1000 patients over that period. MR images are usually stored in Picture Archiving and Communication Systems (PACS) in the Digital Imaging and Communications in Medicine (DICOM) format, which includes a header and imaging data. MRI k-space is the raw data obtained during MR signal acquisition. Because complex MR raw data files are large, the raw data is generally transformed into anatomical imaging data and then discarded rather than transferred to PACS. The abundant DICOM data has the potential to be used for training neural networks, since deep neural network models depend on extensive training datasets. However, DICOM images are magnitude images without the image phase, so it is essential to understand the effect of the missing phase information in order to use DICOM data effectively for this training task.

My thesis attempts to compare a deep neural network's performance for accelerated MRI reconstruction using k-space versus DICOM-only data. MR imaging offers the user a great deal of control over data acquisition and reconstruction of clinical images, but this comes at the cost of increased acquisition time. Typical scan times are between 30 and 40 minutes, and rise to 60 minutes if a contrast agent must be administered. Such long acquisition times are not only expensive but also inconvenient for the subject, as it is impossible to stay motionless in the bore for the whole duration. Two areas are of interest for reducing scan time: (i) accelerated acquisition and (ii) fast and efficient reconstruction. Methods like compressed sensing and parallel imaging are used to accelerate MRI acquisition.
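The phase information lost in DICOM magnitude images can be demonstrated with a small NumPy experiment (an illustrative toy, using a random complex array as a stand-in for a true MR image):

```python
import numpy as np

# The true MR signal is complex-valued (it carries phase).
rng = np.random.default_rng(1)
img = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))

kspace_raw = np.fft.fft2(img)               # raw complex k-space
img_dicom = np.abs(img)                     # magnitude-only, DICOM-style image
kspace_from_dicom = np.fft.fft2(img_dicom)  # k-space re-simulated from DICOM
```

The magnitude image is recoverable from raw k-space exactly, but the k-space re-simulated from the magnitude image no longer matches the raw acquisition; this is the information gap the thesis examines for network training.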
Compressed sensing achieves scan acceleration by overcoming the Nyquist sampling requirement. An undersampling pattern, such as a Poisson-disc pattern, is used to acquire an incoherent random sparse signal instead of the full k-space. The "sigpy.mri" Python library's "Poisson" API was used to simulate this undersampling; it generates a variable-density Poisson-disc sampling pattern. Compressed sensing theory states that image reconstruction is possible from fewer samples than the Nyquist criterion indicates, as long as the k-space undersampling is done incoherently so that it does not produce structured aliasing when the anatomical image is reconstructed. This algorithm combines the undersampling with partial Fourier imaging. The API uses a fully sampled calibration region at the center of k-space in addition to an acceleration factor, which governs the undersampling of the region outside the fully sampled center. Poisson-disc undersampling samples randomly while constraining the minimum and maximum distance between samples; this scheme yields incoherent sampling and avoids structured artifacts.

After image acquisition comes the reconstruction of the fully sampled k-space, or of an anatomical image with good SNR. A deep-learning neural network was trained to reconstruct the retrospectively undersampled data, and the training performance on undersampled raw k-space data was compared with that on undersampled k-space derived from DICOM data. Our experiments have shown that the magnitude obtained from raw k-space data consistently shows better initial training performance and faster convergence than the magnitude image obtained from the DICOM image. We also observed that after training for enough epochs, the performance of the model trained on raw data is comparable to that of the model trained on DICOM images.
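Retrospective undersampling with a fully sampled calibration region can be sketched as follows (a NumPy stand-in using a uniform-random mask; the actual sigpy.mri Poisson API produces a variable-density Poisson-disc pattern instead, and the function and parameter names here are our own):

```python
import numpy as np

def undersample_kspace(kspace, accel=4, calib=16, seed=0):
    """Retrospective undersampling sketch (NOT the sigpy Poisson-disc API).

    Keeps a fully sampled calib x calib region at the k-space center and
    randomly retains roughly 1/accel of the remaining samples -- a
    uniform-random stand-in for the variable-density Poisson-disc pattern.
    """
    rng = np.random.default_rng(seed)
    ny, nx = kspace.shape
    mask = rng.random((ny, nx)) < (1.0 / accel)
    cy, cx = ny // 2, nx // 2
    mask[cy - calib // 2: cy + calib // 2,
         cx - calib // 2: cx + calib // 2] = True    # fully sampled center
    return kspace * mask, mask

# Toy usage on a random complex "k-space".
rng = np.random.default_rng(5)
k = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
k_us, mask = undersample_kspace(k, accel=4, calib=16)
```

A Poisson-disc mask would additionally enforce minimum and maximum distances between retained samples, which is what suppresses coherent aliasing in practice.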
The significance of this finding is that the abundantly available DICOM data can be used to train a deep neural network to reconstruct undersampled k-space. FastMRI is a research project from Facebook AI Research (FAIR) and NYU Langone Health, and its dataset is publicly available. The dataset contains two types of scans, knee MRI and brain MRI; for this work, we used single-coil knee MRI data. For training, 2D slices are drawn from the single-coil knee MRI volumes of the training dataset, which has 973 volumes and a total of 34,742 slices.
This revised and updated second edition, now with two new chapters, is the only book to give a comprehensive overview of computer algorithms for image reconstruction. It covers the fundamentals of computerized tomography, including all the computational and mathematical procedures underlying data collection, image reconstruction, and image display. Among the new topics covered are spiral CT, fully 3D positron emission tomography, the linogram mode of backprojection, and state-of-the-art 3D imaging results. It also includes two new chapters on comparative statistical evaluation of 2D reconstruction algorithms and on alternative approaches to image reconstruction.
This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to:
• Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities
• Molecular Imaging Enhancement
• Data Analysis of Clinical & Pre-clinical Molecular Imaging
• Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.)
• Machine Learning and Data Mining in Molecular Imaging
Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers from academia and industry up to date on the most recent developments in this field.