Impact of Process Variations on Soft Error Sensitivity of 32 nm VLSI Circuits in the Near-Threshold Region

This monograph is motivated by the challenges faced in designing reliable VLSI systems in modern VLSI processes. The reliable operation of integrated circuits (ICs) has become increasingly difficult to achieve in the deep submicron (DSM) era. With continuously decreasing device feature sizes, combined with lower supply voltages and higher operating frequencies, the noise immunity of VLSI circuits is decreasing alarmingly. Thus, VLSI circuits are becoming more vulnerable to noise effects such as crosstalk, power supply variations, and radiation-induced soft errors. Among these noise sources, soft errors (errors caused by radiation particle strikes) have become an increasingly troublesome issue for memory arrays as well as combinational logic circuits. Also, in the DSM era, process variations are increasing at a significant rate, making it more difficult to design reliable VLSI circuits. Hence, it is important to efficiently design robust VLSI circuits that are resilient to radiation particle strikes and process variations. The work presented in this research monograph comprises several analysis and design techniques with the goal of realizing VLSI circuits that are tolerant to radiation and process variations.
This book is intended for readers who are interested in the design of robust and reliable electronic digital systems. The authors cover emerging trends in the design of today’s reliable electronic systems, which are applicable to safety-critical applications such as automotive or healthcare electronic systems. The emphasis is on modeling approaches and algorithms for analysis and mitigation of soft errors in nano-scale CMOS digital circuits, using techniques that are the cornerstone of Computer Aided Design (CAD) of reliable VLSI circuits. The authors introduce software tools for analysis and mitigation of soft errors in electronic systems that can be integrated easily with design flows. In addition to discussing soft-error-aware analysis techniques for combinational logic, the authors also describe new soft error mitigation strategies targeting commercial digital circuits. Coverage includes novel Soft Error Rate (SER) analysis techniques, such as process-variation-aware SER estimation and GPU-accelerated SER analysis, in addition to SER reduction methods based on gate sizing and logic restructuring.
ABSTRACT: The occurrence of transient faults such as soft errors in computer circuits poses a significant challenge to the reliability of computer systems. A soft error, which occurs when energetic neutrons from space or alpha particles arising from packaging materials strike a transistor, may manifest itself as a bit flip in a memory element or as a transient glitch generated at an internal node of combinational logic, which may subsequently propagate to and be captured in a latch. Although soft errors were earlier a concern only for space applications, aggressive technology scaling trends have extended the problem to modern VLSI systems even for terrestrial applications. In this dissertation, we explore techniques at all levels of the design flow to reduce the vulnerability of VLSI systems to soft errors without compromising other design metrics such as delay, area, and power. We propose new models for estimating soft errors in storage structures and combinational logic. While soft errors in caches are estimated using the vulnerability metric, soft errors in logic circuits are estimated using two new metrics called the glitch enabling probability (GEP) and the cumulative probability of observability (CPO). These metrics, based on the signal probabilities of nets, accurately model soft errors in radiation-aware synthesis algorithms and help in efficient exploration of the design solution space during optimization. At the physical design level, we leverage larger net lengths to provide larger RC ladders that effectively filter out transient glitches. Towards this, a new heuristic is developed to selectively assign larger wirelengths to certain critical nets, which reduces the delay and area overhead while improving the immunity to soft errors. Based on this, we propose two placement algorithms, one based on simulated annealing and one on quadratic programming, which significantly reduce the soft error rates of circuits. At the circuit level, we develop techniques for hardening circuit nodes using a novel radiation jammer technique. The proposed technique is based on the principles of an RC differentiator and is used to isolate the driven cell from the driving cell being hit by a radiation strike. Since the blind insertion of radiation blocker cells on all circuit nodes is expensive, candidate nodes are selected for insertion of these cells using a new metric called the probability of radiation blocker circuit insertion (PRI). At the logic level, we investigate a gate sizing algorithm that simultaneously optimizes the soft error rate (SER) and crosstalk noise, in addition to the power and performance of circuits, while considering the effect of process variations. This reliability-centric gate sizing technique is formulated as a mathematical program and solved efficiently. At the architectural level, we develop solutions for the correction of multi-bit errors in large L2 caches by controlling or mining the redundancy in the memory hierarchy, along with methods to increase the amount of redundancy in the memory hierarchy by employing a redundancy-based replacement policy, in which the amount of redundancy is controlled using a user-defined redundancy threshold. The novel architectures and the new reliability-centric synthesis algorithms proposed for the various design abstraction levels have been shown to achieve significant reductions of soft error rates in current nanometer circuits.
The design techniques, algorithms, and architectures can be integrated into existing design flows. A VLSI system implementation can leverage the architectural solutions for the reliability of its caches, while the custom hardware synthesized for the system can be protected against radiation strikes by utilizing the circuit-level, logic-level, and layout-level optimization algorithms that have been developed.
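To make the flavor of such signal-probability-based metrics concrete, the sketch below propagates signal probabilities through a tiny hypothetical netlist and derives a crude logical-masking score for each internal net. It is only an illustration of the general idea: the GEP and CPO metrics above are the author's own and their exact formulations are not given here, so the example netlist, the input probabilities, and the enabling_score function are all assumptions made for demonstration.

```python
# Minimal sketch (not the dissertation's actual GEP/CPO formulation): propagate
# signal probabilities through a small combinational netlist under the usual
# independence assumption, then use them as a crude proxy for how likely a
# transient glitch at a node is logically enabled to propagate.

from typing import Dict, List, Tuple

# Netlist as: gate output -> (gate type, list of input nets). Hypothetical example circuit.
NETLIST: Dict[str, Tuple[str, List[str]]] = {
    "n1": ("AND", ["a", "b"]),
    "n2": ("OR",  ["b", "c"]),
    "out": ("AND", ["n1", "n2"]),
}

PRIMARY_INPUT_PROB = {"a": 0.5, "b": 0.5, "c": 0.5}  # P(net = 1) for primary inputs


def signal_prob(net: str, probs: Dict[str, float]) -> float:
    """Return P(net = 1), memoized in `probs`, assuming independent inputs."""
    if net in probs:
        return probs[net]
    gate, ins = NETLIST[net]
    p_in = [signal_prob(i, probs) for i in ins]
    if gate == "AND":
        p = 1.0
        for q in p_in:
            p *= q
    elif gate == "OR":
        p = 1.0
        for q in p_in:
            p *= (1.0 - q)
        p = 1.0 - p
    elif gate == "NOT":
        p = 1.0 - p_in[0]
    else:
        raise ValueError(f"unsupported gate type: {gate}")
    probs[net] = p
    return p


def enabling_score(net: str, probs: Dict[str, float]) -> float:
    """Crude stand-in for a glitch-enabling metric: probability that the first
    fan-out AND/OR gate of `net` does not logically mask a glitch on `net`
    (all side inputs at their non-controlling value)."""
    for out, (gate, ins) in NETLIST.items():
        if net in ins:
            side = [i for i in ins if i != net]
            if gate == "AND":   # non-controlling side value is 1
                score = 1.0
                for s in side:
                    score *= signal_prob(s, probs)
                return score
            if gate == "OR":    # non-controlling side value is 0
                score = 1.0
                for s in side:
                    score *= (1.0 - signal_prob(s, probs))
                return score
    return 0.0  # net drives no gate in this toy netlist


if __name__ == "__main__":
    probs = dict(PRIMARY_INPUT_PROB)
    for net in NETLIST:
        signal_prob(net, probs)
    for net in ["n1", "n2"]:
        print(f"P({net}=1) = {probs[net]:.3f}, enabling score = {enabling_score(net, probs):.3f}")
```

In a real radiation-aware flow, probabilities of this kind would be computed over the full netlist and combined with electrical and latching-window masking before entering the synthesis cost function alongside delay, area, and power.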
Soft errors are a multifaceted issue at the crossroads of applied physics and engineering sciences. They are by nature multiscale and multiphysics problems that combine not only nuclear and semiconductor physics, material sciences, circuit design, and chip architecture and operation, but also cosmic-ray physics, natural radioactivity issues, particle detection, and related instrumentation. Soft Errors: From Particles to Circuits addresses the problem of soft errors in digital integrated circuits subjected to the terrestrial natural radiation environment, one of the primary limits on modern digital electronic reliability. Covering the fundamentals of soft errors as well as engineering considerations and technological aspects, this text:
· Discusses the basics of the natural radiation environment, particle interactions with matter, and soft-error mechanisms
· Details instrumentation developments in the fields of environment characterization, particle detection, and real-time and accelerated testing
· Describes the latest computational developments, modeling, and simulation strategies for soft error rate estimation in digital circuits
· Explores trends for future technology nodes and emerging devices
Soft Errors: From Particles to Circuits presents the state of the art of this complex subject, providing comprehensive knowledge of the complete chain of the physics of soft errors. The book makes an ideal text for introductory graduate-level courses, offers academic researchers a specialized overview, and serves as a practical guide for semiconductor industry engineers or application engineers.
This book explores near-threshold computing (NTC), a design space in which digital chips (processors) are run near the lowest practical supply voltage. Readers will come away with specific techniques for designing chips that are extremely robust, tolerating variability and remaining resilient against errors. Variability-aware voltage and frequency allocation schemes are presented that provide performance guarantees when moving toward near-threshold manycore chips.
· Provides an introduction to near-threshold computing, giving the reader a variety of tools to face the challenges of the power/utilization wall;
· Demonstrates how to design efficient voltage regulation, so that each region of the chip can operate at the most efficient voltage and frequency point;
· Investigates how performance guarantees can be ensured when moving towards NTC manycores through variability-aware voltage and frequency allocation schemes.
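As a rough illustration of what a variability-aware voltage/frequency allocation can look like, the sketch below picks, for each chip region, the lowest discrete voltage level whose variation-degraded frequency still meets a target, which is one simple way to provide a per-region performance guarantee. The voltage levels, nominal frequency table, and per-region variation factors are invented for the example and are not taken from the book.

```python
# Minimal sketch (my own illustration, not the book's specific allocation scheme):
# per-region voltage/frequency selection for an NTC manycore, where each region's
# achievable frequency at a given voltage is degraded by a region-specific
# process-variation factor, and we pick the lowest voltage meeting a target frequency.

from dataclasses import dataclass
from typing import List, Optional

# Hypothetical discrete voltage levels (V) and the nominal frequency (MHz) at each.
V_LEVELS = [0.45, 0.50, 0.55, 0.60, 0.70]
F_NOMINAL = {0.45: 180, 0.50: 260, 0.55: 350, 0.60: 450, 0.70: 700}


@dataclass
class Region:
    name: str
    variation_factor: float  # < 1.0 means this region is slower than nominal


def allocate(regions: List[Region], f_target: float) -> dict:
    """For each region, choose the lowest voltage whose variation-adjusted
    frequency still meets f_target; None if no level suffices."""
    plan = {}
    for r in regions:
        chosen: Optional[float] = None
        for v in V_LEVELS:  # levels are sorted low to high
            if F_NOMINAL[v] * r.variation_factor >= f_target:
                chosen = v
                break
        plan[r.name] = (chosen, None if chosen is None
                        else F_NOMINAL[chosen] * r.variation_factor)
    return plan


if __name__ == "__main__":
    chip = [Region("cluster0", 0.95), Region("cluster1", 0.80), Region("cluster2", 1.05)]
    for name, (v, f) in allocate(chip, f_target=300).items():
        print(f"{name}: V = {v}, guaranteed f = {f}")
```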
Variability has become one of the vital challenges that designers of integrated circuits encounter. The imperfect manufacturing process manifests itself as variations in the design parameters. These variations, and those in the operating environment of VLSI circuits, result in unexpected changes in the timing, power, and reliability of the circuits. With scaling transistor dimensions, process and environmental variations have become significantly more important in modern VLSI design. A smaller feature size means that the physical characteristics of a device are more prone to these unaccounted-for changes. To achieve a robust design, the random and systematic fluctuations in the manufacturing process and the variations in the environmental parameters should be analyzed, and their impact on the parametric yield should be addressed. This thesis studies the challenges of, and presents solutions for, designing robust VLSI systems in the presence of variations. Initially, to gain insight into system design under variability, the parametric yield is examined for a small circuit. Understanding the impact of variations on the yield at the circuit level is vital to accurately estimating and optimizing the yield at the system granularity. Motivated by the observations and results found at the circuit level, statistical analyses are performed, and solutions are proposed, at the system level of abstraction to reduce the impact of the variations and increase the parametric yield. At the circuit level, the impact of supply and threshold voltage variations on the parametric yield is discussed, and a design centering methodology is proposed to maximize the parametric yield and optimize the power-performance trade-off under variations. In addition, the scaling trend in the yield loss is studied, and considerations for design centering in current and future CMOS technologies are explored. The investigation at the circuit level suggests that the operating temperature significantly affects the parametric yield. In addition, the yield is very sensitive to the magnitude of the variations in supply and threshold voltage. Therefore, the spatial nature of process and environmental variations makes it necessary to analyze the yield at a finer granularity. Here, temperature and voltage variations are mapped across the chip to accurately estimate the yield loss at the system level. At the system level, the impact of process-induced temperature variations on the power grid design is first analyzed, and an efficient verification method is provided that ensures the robustness of the power grid in the presence of variations. Then, a statistical analysis of the timing yield is conducted, taking into account both the process and environmental variations. By considering the statistical profile of the temperature and supply voltage, the process variations are mapped to delay variations across a die, ensuring an accurate estimation of the timing yield. In addition, a method is proposed to accurately estimate the power yield considering process-induced temperature and supply voltage variations, which helps verify the robustness of the circuits early in the design process. Lastly, design solutions are presented to reduce the power consumption and increase the timing yield under the variations. In the first solution, a guideline for floorplanning optimization in the presence of temperature variations is offered.
Non-uniformity in the thermal profile of integrated circuits is an issue that impacts the parametric yield and threatens chip reliability. Therefore, the correlation between the total power consumption and the temperature variations across a chip is examined. As a result, floorplanning guidelines are proposed that use this correlation to efficiently optimize the chip's total power while taking thermal uniformity into account. The second design solution provides an optimization methodology for assigning the power supply pads across the chip to maximize the timing yield. A mixed-integer nonlinear programming (MINLP) optimization problem, subject to voltage drop and current constraints, is efficiently solved to find the optimum number and locations of the pads.
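A minimal sketch of the kind of statistical yield estimation discussed above: Monte Carlo sampling of supply- and threshold-voltage variations through a simple alpha-power-law delay model to estimate the timing yield against a delay margin. The model constants and sigma values are assumptions chosen for illustration, not parameters from the thesis.

```python
# Minimal sketch (illustrative only, not the thesis's methodology): Monte Carlo
# estimation of parametric timing yield under supply- and threshold-voltage
# variation, using the alpha-power-law delay model delay ~ Vdd / (Vdd - Vth)**alpha.

import random

ALPHA = 1.3                          # alpha-power-law exponent (assumed)
VDD_NOM, VTH_NOM = 1.0, 0.3          # nominal supply / threshold voltage (V)
SIGMA_VDD, SIGMA_VTH = 0.03, 0.03    # assumed standard deviations (V)
N_SAMPLES = 100_000


def norm_delay(vdd: float, vth: float) -> float:
    """Gate delay normalized to the nominal corner."""
    d = vdd / (vdd - vth) ** ALPHA
    d_nom = VDD_NOM / (VDD_NOM - VTH_NOM) ** ALPHA
    return d / d_nom


def timing_yield(max_slowdown: float, seed: int = 0) -> float:
    """Fraction of sampled (Vdd, Vth) corners whose delay stays within
    `max_slowdown` times the nominal delay."""
    rng = random.Random(seed)
    passing = 0
    for _ in range(N_SAMPLES):
        vdd = rng.gauss(VDD_NOM, SIGMA_VDD)
        vth = rng.gauss(VTH_NOM, SIGMA_VTH)
        if norm_delay(vdd, vth) <= max_slowdown:
            passing += 1
    return passing / N_SAMPLES


if __name__ == "__main__":
    print(f"Estimated timing yield at 10% delay margin: {timing_yield(1.10):.3f}")
```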
This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It introduces the most prominent reliability concerns from today’s point of view and briefly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across the different levels, starting from the physical level all the way up to the system level (cross-layer approaches). The book aims to demonstrate how new hardware/software co-design solutions can effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors.
· Provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems;
· Describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers;
· Explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.
This book provides a comprehensive presentation of the most advanced research results and technological developments enabling the understanding, qualification, and mitigation of soft errors in advanced electronics. It covers the fundamental physical mechanisms of radiation-induced soft errors; the various steps that lead to a system failure; the modeling and simulation of soft errors at various levels (including physical, electrical, netlist, event-driven, RTL, and system-level modeling and simulation); hardware fault injection; accelerated radiation testing and natural-environment testing; soft-error-oriented test structures; and process-level, device-level, cell-level, circuit-level, architectural-level, software-level, and system-level soft error mitigation techniques. These recent advances are presented by academia and industry experts in reliability, fault tolerance, EDA, processor, SoC, and system design, and in particular by experts from companies that have faced the soft error impact in terms of product reliability and related business issues and were at the forefront of the countermeasures taken at multiple levels to mitigate soft error effects at a cost acceptable for commercial products. In this fast-moving field, where the impact on ground-level electronics is very recent and its severity is steadily increasing at each new process node, affecting various industry sectors one after another (as an example, the Automotive Electronics Council has come to publish qualification requirements on soft errors), research, technology developments, and industrial practices have evolved very fast, outdating even the most recent books on the subject, published back in 2004.
As the semiconductor industry continues to follow Moore's law of a doubled device count every 18 months, it is challenged by rising uncertainties in the manufacturing process for nanometer technologies. Manufacturing imperfections lead to random variation in physical parameters such as dopant density, critical dimensions, and oxide thickness. These physical variations manifest themselves as variations in device process parameters such as the threshold voltage and effective channel length of transistors. The randomness in process parameters affects the performance of VLSI circuits, which leads to a loss in parametric yield. Conventional design methodologies, with their corner-case-based analysis techniques, fail to predict the performance of circuits reliably in the presence of random process variations. Moreover, analysis techniques that detect defects in the later stages of the design cycle result in significant cost overhead due to re-spins. In recent times, VLSI computer-aided design methodologies have shifted to statistical analysis techniques for performance measurement with specific yield targets. However, the adoption of statistical techniques in commercial design flows has been limited by the complexity of their usage and the need for specially characterized models, which also makes them unsuitable for the repeated loops of the synthesis process. In this dissertation, we present an alternate approach to model and optimize the performance of digital and analog circuits in the presence of random process variations. Our work targets a bottom-up methodology providing incremental tolerance to circuits under the impact of random process variations. The methodologies presented can be used to generate fast-evaluating, accurate macromodels that compute the bounds on performance due to the underlying variations in device parameters. The primary goal of our methodology is to capture the statistical aspects of variation at the lower levels of abstraction, while aiding deterministic analysis during top-level design optimization. We also attempt to build our solutions as a wrapper around a conventional design flow, without requiring special characterization. The modeling and optimization techniques are scalable across technology generations and can find practical use during variation-tolerant synthesis of VLSI circuits.
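To illustrate the macromodeling idea in miniature (not the dissertation's actual method), the sketch below builds a first-order sensitivity macromodel of a normalized delay around the nominal corner via finite differences, then propagates assumed Gaussian parameter variations through it to obtain fast 3-sigma performance bounds. The "detailed" delay function and the sigma values are hypothetical placeholders for whatever detailed model or simulation a real flow would use.

```python
# Minimal sketch (an illustration of the general idea, not the dissertation's
# macromodeling method): build a first-order sensitivity macromodel of a circuit
# performance metric around the nominal process corner, then use it to get fast
# +/- 3-sigma performance bounds without re-running the detailed model.

import math


# Hypothetical "detailed" performance model: normalized gate delay as a function
# of threshold voltage vth and effective channel length leff (alpha-power style).
def detailed_delay(vth: float, leff: float, vdd: float = 1.0, alpha: float = 1.3) -> float:
    return leff * vdd / (vdd - vth) ** alpha


# Nominal parameter values and assumed 1-sigma variations.
VTH0, LEFF0 = 0.30, 1.00
SIG_VTH, SIG_LEFF = 0.02, 0.03


def build_macromodel(h: float = 1e-4):
    """Return (nominal value, sensitivity to vth, sensitivity to leff) computed
    by central finite differences around the nominal corner."""
    d0 = detailed_delay(VTH0, LEFF0)
    s_vth = (detailed_delay(VTH0 + h, LEFF0) - detailed_delay(VTH0 - h, LEFF0)) / (2 * h)
    s_leff = (detailed_delay(VTH0, LEFF0 + h) - detailed_delay(VTH0, LEFF0 - h)) / (2 * h)
    return d0, s_vth, s_leff


def three_sigma_bounds():
    """Propagate independent Gaussian parameter variations through the linear
    macromodel: sigma_d**2 = (s_vth*sig_vth)**2 + (s_leff*sig_leff)**2."""
    d0, s_vth, s_leff = build_macromodel()
    sigma_d = math.hypot(s_vth * SIG_VTH, s_leff * SIG_LEFF)
    return d0 - 3 * sigma_d, d0 + 3 * sigma_d


if __name__ == "__main__":
    lo, hi = three_sigma_bounds()
    print(f"Macromodel 3-sigma delay bounds (normalized): [{lo:.3f}, {hi:.3f}]")
```

Such a macromodel evaluates in microseconds, so a top-level optimizer can query the performance bounds inside its inner loop while the statistical character of the device-level variations remains captured at the lower abstraction level.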