
Many-body nuclear physics is the bridge that takes us from the fundamental laws governing individual nucleons to understanding how groups of them interact to form the nuclei that lie at the heart of all atoms, the building blocks of our universe. Many powerful techniques of classical computation have been developed over the years to study ever more complex nuclear systems. However, we appear to be approaching the limits of such classical techniques, as the complexity of many-body quantum systems grows exponentially with system size. The recent development of quantum computers offers new hope, as they are predicted to provide a significant advantage over classical computers on problems such as the quantum many-body problem. In this thesis, we focus on developing and applying algorithms for many-body nuclear physics that can be run on the near-term quantum computers of the current noisy intermediate-scale quantum (NISQ) era. Because these devices are small and noisy, we apply our algorithms to various many-body toy models in order to gain insight and create a foundation upon which future algorithms can be built to tackle the intractable problems of our time. In the first part, we tailor current quantum algorithms to run efficiently on NISQ devices and apply them to three models of many-body nuclear physics: the Lipkin model, the Richardson pairing model, and collective neutrino oscillations. For the first two models, we solve for the ground-state energy, while for the third, we simulate the time evolution and characterize the entanglement. In the second part, we develop novel algorithms to increase the efficiency and applicability of current algorithms on NISQ devices. These include an algorithm that compresses circuit depth to allow for less noisy computation and a variational method to prepare an important class of quantum states. Error mitigation techniques used to improve the accuracy of results are also discussed.
Altogether, this work provides a road map for applying the quantum computers of tomorrow to the nuclear phenomena that mystify us today.
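The variational approach described above can be illustrated with a minimal sketch: a quantum computer evaluates the energy of a parameterized trial state, while a classical optimizer adjusts the parameters. Here a toy two-level Hamiltonian stands in for the Lipkin or pairing models (the matrix elements `eps` and `V` are invented for illustration, not taken from the thesis), and plain linear algebra stands in for the quantum device.

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-level Hamiltonian in the spirit of a pairing model
# (hypothetical parameters, chosen only for illustration).
eps, V = 1.0, 0.5
H = np.array([[-eps, -V],
              [-V,   eps]])

def energy(params):
    """Variational cost function <psi(theta)|H|psi(theta)>."""
    theta = params[0]  # scipy passes a 1-D parameter array
    # Single-parameter "circuit": a real rotation of |0>.
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi

# Classical optimizer adjusts the circuit parameter.
result = minimize(energy, x0=[0.1], method="Nelder-Mead")
exact = np.linalg.eigvalsh(H)[0]
print(f"variational: {result.fun:.6f}, exact: {exact:.6f}")
```

The loop structure is the same on real hardware: only the evaluation of `energy` moves to the quantum device, sampled from repeated measurements.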
Quantum mechanics, the subfield of physics that describes the behavior of very small (quantum) particles, provides the basis for a new paradigm of computing. First proposed in the 1980s as a way to improve computational modeling of quantum systems, the field of quantum computing has recently garnered significant attention due to progress in building small-scale devices. However, significant technical advances will be required before a large-scale, practical quantum computer can be achieved. Quantum Computing: Progress and Prospects provides an introduction to the field, including the unique characteristics and constraints of the technology, and assesses the feasibility and implications of creating a functional quantum computer capable of addressing real-world problems. This report considers hardware and software requirements, quantum algorithms, drivers of advances in quantum computing and quantum devices, benchmarks associated with relevant use cases, the time and resources required, and how to assess the probability of success.
In the evolving landscape of quantum computing, the emergence of quantum computers in the Noisy Intermediate Scale Quantum (NISQ) regime marks a significant stride. Superconducting qubits have gained popularity in both academic and industrial groups. However, the journey towards a large-scale, fully error-corrected quantum computer faces challenges. This thesis addresses some of these challenges within an academic setup. One prominent challenge with superconducting qubits is Purcell decay. This work tackles the issue through the implementation of on-chip Purcell filters with transmon qubits. The overarching goal is to pave the way for further scalability by ensuring these designs are compatible with scaling plans. The thesis also introduces novel architectures for superconducting qudit processors, building on their previously demonstrated implementation in 3D cavities. Efforts are directed towards transitioning these processors to a planar platform for enhanced scalability. The coupling of these processors to the environment is explored using coplanar waveguides, with the system's physics governed by the principles of circuit quantum electrodynamics. Finally, the thesis delves into the packaging of planar qubit devices, aiming to facilitate easy scalability. This platform enables interfacing the devices with control equipment, provides shielding from stray fields, and offers the essential thermal link to the dilution refrigerator in which they are housed. Each section of the thesis presents results emphasizing potential areas for improvement and refinement of the systems.
Quantum computation will likely provide significant advantages relative to classical architectures for certain computational problems in number theory and physics, and potentially in other areas such as optimization and machine learning. While some key theoretical and engineering problems remain to be solved, experimental advances in recent years have demonstrated the first beyond-classical quantum computation as well as the first experiments in error-corrected quantum computation. In this thesis, we focus on quantum computers with around one hundred qubits that can implement around one thousand operations, the so-called noisy intermediate-scale quantum (NISQ) or kilo-scale quantum (KSQ) regime, and develop algorithms tailored to these devices as well as techniques for error mitigation that require significantly less overhead than fault-tolerant quantum computation. In the first part, we develop quantum algorithms for diagonalizing quantum states (density matrices) and compiling quantum circuits. These algorithms use a quantum computer to evaluate a cost function that is classically hard to compute and a classical computer to adjust the parameters of an ansatz circuit, similar to the variational principle in quantum mechanics and other variational quantum algorithms for chemistry and optimization. In the second part, we extend an error mitigation technique known as zero-noise extrapolation (ZNE) and introduce a new framework for error mitigation which we call logical shadow tomography. In particular, we adapt ZNE to the gate model and introduce new methods for noise scaling and (adaptive) extrapolation. Further, we analyze ZNE in the presence of time-correlated noise and experimentally show that ZNE increases the effective quantum volume of several quantum computers.
Finally, we develop a simple framework for error mitigation that enables (the composition of) several error mitigation techniques with significantly fewer resources than prior methods, and numerically show the advantages of our framework.
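The core idea of zero-noise extrapolation can be sketched in a few lines. The sketch below assumes a simple linear noise model (the decay function is invented for illustration; real hardware noise is more complicated): the observable is measured at several artificially amplified noise levels, and a polynomial fit is evaluated at zero noise.

```python
import numpy as np

def noisy_expectation(scale, true_value=1.0, error_rate=0.05):
    """Stand-in for a hardware measurement whose result decays with noise.
    On a device, 'scale' would be realized by stretching pulses or
    inserting extra gates; here we just model a linear decay."""
    return true_value * (1.0 - error_rate * scale)

scales = np.array([1.0, 2.0, 3.0])   # noise scale factors (>= 1)
values = np.array([noisy_expectation(s) for s in scales])

# Richardson-style extrapolation: fit a polynomial in the scale factor
# and evaluate the fit at scale = 0 (the zero-noise limit).
coeffs = np.polyfit(scales, values, deg=1)
zne_estimate = np.polyval(coeffs, 0.0)
print(f"raw (scale=1): {values[0]:.4f}, ZNE estimate: {zne_estimate:.4f}")
```

The thesis's contributions concern exactly the two knobs visible here: how the noise is scaled and which extrapolation model is fit.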
The field of atomic, molecular, and optical (AMO) science underpins many technologies and continues to progress at an exciting pace for both scientific discoveries and technological innovations. AMO physics studies the fundamental building blocks of functioning matter to help advance the understanding of the universe. It is a foundational discipline within the physical sciences, relating to atoms and their constituents, to molecules, and to light at the quantum level. AMO physics combines fundamental research with practical application, coupling fundamental scientific discovery to rapidly evolving technological advances, innovation and commercialization. Due to the wide-reaching intellectual, societal, and economic impact of AMO, it is important to review recent advances and future opportunities in AMO physics. Manipulating Quantum Systems: An Assessment of Atomic, Molecular, and Optical Physics in the United States assesses opportunities in AMO science and technology over the coming decade. Key topics in this report include tools made of light; emerging phenomena from few- to many-body systems; the foundations of quantum information science and technologies; quantum dynamics in the time and frequency domains; precision and the nature of the universe; and the broader impact of AMO science.
This book targets computer scientists and engineers who are familiar with concepts in classical computer systems but are curious to learn the general architecture of quantum computing systems. It gives a concise presentation of this new paradigm of computing from a computer systems point of view without assuming any background in quantum mechanics. As such, it is divided into two parts. The first part of the book provides a gentle overview of the fundamental principles of the quantum theory and their implications for computing. The second part is devoted to state-of-the-art research in designing practical quantum programs, building a scalable software systems stack, and controlling quantum hardware components. Most chapters end with a summary and an outlook for future directions. This book celebrates the remarkable progress that scientists across disciplines have made in the past decades and reveals what roles computer scientists and engineers can play to enable practical-scale quantum computing.
Quantum technologies promise to revolutionize many fields, ranging from precise sensing to fast computation. The success of novel technologies based on quantum effects rests on engineering quantum systems robust to decoherence, the uncontrollable decay of quantum coherence, one of the very features that empowers quantum computation. To date, the performance of quantum devices in the noisy intermediate-scale quantum (NISQ) era is still limited by decoherence. The long-term solution is universal quantum computers that run on fault-tolerant, error-corrected logical qubits, which are immune to decoherence. However, the substantial overhead in qubits and quantum gates that quantum error correction (QEC) imposes is thought to greatly limit its utility on NISQ devices. In this thesis, we address this challenge through a hardware-efficient approach: leveraging understanding of the quantum system to build more efficient and robust QEC protocols, which opens a potential avenue for useful QEC in near-term, pre-fault-tolerant devices. We are interested in the solid-state quantum register comprising the nitrogen-vacancy (NV) electronic spin and neighboring nitrogen and carbon nuclear spins. First, we developed techniques that provided precise knowledge of the system Hamiltonian and, in turn, fast, high-fidelity control. Next, we investigated and identified the decoherence mechanism of nuclear spins in the quantum register; the dominant noise turns out to be the thermal fluctuation of the NV electron. We demonstrated a dynamical decoupling approach to suppress the fluctuator noise and extended the nuclear spin coherence time. Furthermore, based on precise knowledge of the system Hamiltonian and decoherence model, we customized a hardware-efficient QEC code for dephasing induced by a common fluctuator. This QEC code requires exponentially less overhead than the usual repetition code and is robust to model imperfections.
Finally, we developed experimental building blocks for near-term applications of the hardware-efficient QEC.
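The repetition code that the hardware-efficient code is compared against can be sketched classically: one logical bit is encoded into three physical bits and decoded by majority vote. This toy simulation (bit-flip noise only, with an invented error rate, unrelated to the NV-center experiments) shows the basic payoff of any QEC code: the logical error rate falls below the physical one.

```python
import random

def encode(bit):
    """Three-qubit repetition encoding of one logical bit."""
    return [bit, bit, bit]

def apply_noise(codeword, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    """Majority vote: recovers the logical bit if at most one flip occurred."""
    return int(sum(codeword) >= 2)

random.seed(0)
p, trials = 0.05, 100_000
# Logical errors: decoding a noisy encoding of 0 yields 1.
logical_errors = sum(decode(apply_noise(encode(0), p)) for _ in range(trials))
# Baseline: a single unprotected bit flips with probability p.
physical_errors = sum(random.random() < p for _ in range(trials))
print(f"physical error rate ~{physical_errors / trials:.4f}, "
      f"logical error rate ~{logical_errors / trials:.4f}")
```

The logical error rate scales as roughly 3p² rather than p, which is why the exponential overhead savings of the thesis's tailored code matter.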
This book is designed for advanced undergraduate and graduate students in high energy heavy-ion physics. It is relevant for students who will work on topics being explored at RHIC and the LHC. In the first part, the basic principles of these studies are covered including kinematics, cross sections (including the quark model and parton distribution functions), the geometry of nuclear collisions, thermodynamics, hydrodynamics and relevant aspects of lattice gauge theory at finite temperature. The second part covers some more specific probes of heavy-ion collisions at these energies: high mass thermal dileptons, quarkonium and hadronization. The second part also serves as extended examples of concepts learned in the previous part. Both parts contain examples in the text as well as exercises at the end of each chapter.

- Designed for students and newcomers to the field
- Focuses on hard probes and QCD
- Covers all aspects of high energy heavy-ion physics
- Includes worked example problems and exercises
Rapid developments in experiments provide promising platforms for realising quantum computation and quantum simulation. This, in turn, opens new possibilities for developing useful quantum algorithms and explaining complex many-body physics. The advantages of quantum computation have been demonstrated in a small range of subjects, but the potential applications of quantum algorithms for solving complex classical problems are still under investigation. A deeper understanding of complex many-body systems can lead to quantum simulation of systems which are inaccessible by other means. This thesis studies different topics in quantum computation and quantum simulation. The first is improving a quantum algorithm in adiabatic quantum computing, which can be used to solve classical problems such as combinatorial optimisation and simulated annealing. We reach a new bound on the time cost of the algorithm, which has the potential to achieve a speed-up over standard adiabatic quantum computing. The second topic is understanding amplitude noise in optical lattices in the context of adiabatic state preparation and the thermalisation of the energy introduced to the system. We identify regimes where introducing a certain type of noise in experiments would improve the final fidelity of adiabatic state preparation, and demonstrate the robustness of the state preparation to imperfect noise implementations. We also discuss the competition between heating and dephasing effects, the energy introduced by non-adiabaticity and heating, and the thermalisation of the system after an application of amplitude noise on the lattice. The third topic is designing quantum algorithms to solve classical problems in fluid dynamics. We develop a quantum algorithm based on phase estimation that can be tailored to specific fluid dynamics problems and demonstrate a quantum speed-up over classical Monte Carlo methods.
This builds a new bridge between quantum physics and fluid dynamics engineering; it can be used to estimate the potential impact of quantum computers and provides feedback on the requirements for implementing quantum algorithms on quantum devices.
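The phase-estimation primitive underlying such algorithms can be illustrated in an idealized, noiseless form (this is textbook QPE on a diagonal unitary with invented parameters, not the thesis's fluid-dynamics algorithm): when the phase is exactly representable in the ancilla register, the measurement outcome recovers it with certainty.

```python
import numpy as np

# Idealized quantum phase estimation: estimate the phase phi of the
# eigenvalue exp(2*pi*i*phi) of a unitary, using n ancilla qubits.
n = 3                  # ancilla qubits
N = 2 ** n
phi = 3 / 8            # exactly representable with 3 bits (0.011 in binary)

# Amplitude for ancilla outcome k after the inverse quantum Fourier
# transform: (1/N) * sum_m exp(2*pi*i*m*(phi - k/N)).
m = np.arange(N)
amps = np.array([np.exp(2j * np.pi * m * (phi - k / N)).sum() / N
                 for k in range(N)])
probs = np.abs(amps) ** 2

k_best = int(np.argmax(probs))
print(f"estimated phase: {k_best / N}, probability: {probs[k_best]:.3f}")
```

When phi is not an exact multiple of 1/N, the distribution instead peaks at the nearest representable value, and more ancilla qubits sharpen the estimate, which is the precision-versus-resources trade-off such algorithms must manage on real devices.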