Mixed Integer Second Order Cone Optimization: Disjunctive Conic Cuts, Theory and Experiments

Mixed Integer Second Order Cone Optimization (MISOCO) problems allow practitioners to mathematically describe a wide variety of real-world engineering problems, including supply chain, finance, and network design. A MISOCO problem minimizes a linear function over the set of solutions of a system of linear equations and the Cartesian product of second order cones of various dimensions, where a subset of the variables is constrained to be integer. This thesis presents a technique to derive inequalities that yield a tighter mathematical description of the feasible set of a MISOCO problem. This improved description usually accelerates the search for an optimal solution. In this work we extend the ideas of disjunctive programming, originally developed for mixed integer linear optimization, to MISOCO problems. This extension results in a novel methodology that we call "disjunctive conic cuts" for MISOCO problems. The analysis developed in this thesis is divided into three parts. In the first part, we introduce the formal definition of disjunctive conic cuts. Additionally, we show that under some mild assumptions there is a necessary and sufficient condition that identifies a disjunctive conic cut for a given convex set. The main appeal of this condition is that it can be easily verified for MISOCO problems. In the second part, we study the geometry of sets defined by a single quadratic inequality. We show that for some of these sets it is possible to derive a closed form for building a disjunctive conic cut. In the third part, we show that the feasible set of a MISOCO problem with a single cone can be characterized using sets defined by a single quadratic inequality. We then present results that provide the criteria for the derivation of disjunctive conic cuts for MISOCO problems.
Preliminary numerical experiments with our disjunctive conic cuts in a branch-and-cut framework provide encouraging results, with this novel methodology helping to solve MISOCO problems more efficiently. We close the thesis by highlighting questions that we consider worth pursuing in future research.
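To make the MISOCO structure concrete, here is a minimal sketch (the instance and all names are illustrative, not taken from the thesis): it checks second-order cone membership and brute-forces a tiny one-cone MISOCO instance, minimize t subject to ||(x1, x2)|| <= t, x1 + x2 = 2, with t integer. The continuous relaxation attains sqrt(2) at x1 = x2 = 1, so the smallest feasible integer t is 2.

```python
import math
import numpy as np

def in_soc(v):
    # second-order (Lorentz) cone membership: v[0] >= ||v[1:]||
    return v[0] >= np.linalg.norm(v[1:]) - 1e-9

# tiny MISOCO instance: min t  s.t.  ||(x1, x2)|| <= t,  x1 + x2 = 2,  t integer
# eliminate the equality constraint by parametrizing x1 = s, x2 = 2 - s
s = np.linspace(-2.0, 4.0, 601)             # grid over the free variable
norms = np.sqrt(s**2 + (2.0 - s)**2)        # ||(x1, x2)|| along the line
t_relax = norms.min()                       # continuous relaxation value, sqrt(2)
t_int = math.ceil(t_relax - 1e-9)           # smallest feasible integer t

print(t_relax, t_int)                       # ~1.4142, 2
assert in_soc(np.array([t_int, 1.0, 1.0]))  # (t, x1, x2) lies in the cone
```

The integrality gap between sqrt(2) and 2 is exactly the kind of slack that tighter descriptions of the feasible set, such as disjunctive conic cuts, aim to close.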
This book features a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA), held at Lehigh University in Bethlehem, Pennsylvania, USA, August 16-18, 2017. The conference brought together a diverse group of researchers and practitioners working on both theoretical and practical aspects of continuous and discrete optimization. Topics covered include algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, as well as the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, health, and other important fields. The selected contributions in this book illustrate the broad diversity of ideas discussed at the meeting.
Many engineering, operations, and scientific applications include a mixture of discrete and continuous decision variables and nonlinear relationships involving the decision variables that have a pronounced effect on the set of feasible and optimal solutions. Mixed-integer nonlinear programming (MINLP) problems combine the numerical difficulties of handling nonlinear functions with the challenge of optimizing in the context of nonconvex functions and discrete variables. MINLP is one of the most flexible modeling paradigms available for optimization; but because its scope is so broad, in the most general cases it is hopelessly intractable. Nonetheless, an expanding body of researchers and practitioners — including chemical engineers, operations researchers, industrial engineers, mechanical engineers, economists, statisticians, computer scientists, operations managers, and mathematical programmers — are interested in solving large-scale MINLP instances.
Disjunctive Programming is a technique and a discipline initiated by the author in the early 1970s, which has become a central tool for solving nonconvex optimization problems such as pure or mixed integer programs, through convexification (cutting plane) procedures combined with enumeration. It played a major role in the revolution in the state of the art of Integer Programming that took place roughly during the period 1990-2010. The main benefit the reader may acquire from this book is a deeper understanding of the theoretical underpinnings and the application potential of disjunctive programming, which range from more efficient problem formulation to enhanced modeling capability and improved solution methods for integer and combinatorial optimization. Egon Balas is University Professor and Lord Professor of Operations Research at Carnegie Mellon University's Tepper School of Business.
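As a toy illustration of the disjunctive idea (the polyhedron and numbers are invented for exposition, not taken from the book): take the LP relaxation P = {(x, y) >= 0 : y <= 1 + x, y <= 4 - x}, whose fractional vertex is (1.5, 2.5). The split disjunction x <= 1 or x >= 2 is valid for every integer point, and on each side of the split y is at most 2, so y <= 2 is a disjunctive cut that removes the fractional vertex while keeping every integer-feasible point. A brute-force check:

```python
# LP relaxation P = {(x, y) >= 0 : y <= 1 + x, y <= 4 - x}
def in_P(x, y):
    return x >= 0 and y >= 0 and y <= 1 + x and y <= 4 - x

frac = (1.5, 2.5)                   # fractional vertex of the relaxation
assert in_P(*frac)

# the split disjunction x <= 1 OR x >= 2 holds for every integer point, and
# the inequality y <= 2 is valid on both sides of the split:
#   if x <= 1 then y <= 1 + x <= 2;  if x >= 2 then y <= 4 - x <= 2
for x in range(0, 5):
    for y in range(0, 5):
        if in_P(x, y):
            assert x <= 1 or x >= 2  # disjunction valid for integer points
            assert y <= 2            # the disjunctive cut keeps them all

assert frac[1] > 2                   # ...but removes the fractional vertex
```

In disjunctive programming terms, y <= 2 is valid for the convex hull of the union of the two disjuncts, which is the convexification step that the cutting-plane procedures build on.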
In this thesis, we study mixed-integer convex optimization, or mixed-integer convex programming (MICP), the class of optimization problems where one seeks to minimize a convex objective function subject to convex constraints and integrality restrictions on a subset of the variables. We focus on two broad and complementary questions on MICP. The first question we address is, "what are efficient methods for solving MICP problems?" The methodology we develop is based on outer approximation, which allows us, for example, to reduce MICP to a sequence of mixed-integer linear programming (MILP) problems. By viewing MICP from the conic perspective of modern convex optimization as defined by Ben-Tal and Nemirovski, we obtain significant computational advances over the state of the art, e.g., by automating extended formulations by using disciplined convex programming. We develop the first finite-time outer approximation methods for problems in general mixed-integer conic form (which includes mixed-integer second-order-cone programming and mixed-integer semidefinite programming) and implement them in an open-source solver, Pajarito, obtaining competitive performance with the state of the art. The second question we address is, "which nonconvex constraints can be modeled with MICP?" This question is important for understanding both the modeling power gained in generalizing from MILP to MICP and the potential applicability of MICP to nonconvex optimization problems that may not be naturally represented with integer variables. Among our contributions, we completely characterize the case where the number of integer assignments is bounded (e.g., mixed-binary), and to address the more general case we develop the concept of "rationally unbounded" convex sets. We show that under this natural restriction, the projections of MICP feasible sets are well behaved and can be completely characterized in some settings.
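The outer-approximation idea mentioned above can be sketched in a few lines on a toy instance (invented for illustration, with the MILP oracle replaced by brute-force enumeration over a small integer box): minimize -x - y over nonnegative integers subject to the convex constraint x^2 + y^2 <= 4. Each round solves the current linear relaxation over the integers; if the winner violates the convex constraint, a gradient (linearization) cut is added and the round repeats:

```python
import itertools

def g(x, y):                      # convex constraint g(x, y) <= 0 (disc of radius 2)
    return x * x + y * y - 4.0

def grad_g(x, y):
    return (2.0 * x, 2.0 * y)

cuts = []                         # accumulated linear cuts: a*x + b*y <= c

def solve_relaxed_milp():
    # stand-in MILP oracle: brute force over the integer box [0, 3]^2
    best = None
    for x, y in itertools.product(range(4), repeat=2):
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cuts):
            val = -x - y          # objective: minimize -x - y
            if best is None or val < best[0]:
                best = (val, x, y)
    return best

for _ in range(20):               # outer-approximation loop
    val, x, y = solve_relaxed_milp()
    if g(x, y) <= 1e-9:           # MILP winner satisfies the convex constraint: done
        break
    gx, gy = grad_g(x, y)         # otherwise add the gradient (linearization) cut
    cuts.append((gx, gy, gx * x + gy * y - g(x, y)))

# terminates at an optimal integer point with objective value -2
```

Replacing the brute-force oracle with a real MILP solver, and the single smooth constraint with general conic constraints handled via extended formulations, gives the flavor of the finite-time methods implemented in Pajarito.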
Many real-world problems from application areas such as engineering, finance, and operations research can be cast as optimization problems. Generally, the goal is to optimize an objective function under a set of constraints. Traditionally, convex optimization problems are solved by interior point methods (IPMs), which achieve high accuracy for moderate-size problems. However, the per-iteration computational cost of these algorithms grows nonlinearly with the dimension of the problem. Although interior point methods are robust and theoretically sound, they do not scale well to very large conic optimization programs. Computational cost, memory requirements, and incompatibility with distributed platforms are among the major impediments to interior point methods in solving large-scale, practical conic optimization programs. The rapid growth of problem sizes in application areas such as power systems, finance, signal processing, and machine learning has motivated researchers to develop computationally efficient optimization solvers. In recent years, first-order methods have received particular attention for solving large convex optimization problems, and various solvers based on first-order numerical algorithms have been developed in the past decade. Although first-order methods provide lower-accuracy solutions, their inexpensive iterations and low computational cost make them attractive mathematical tools for handling large-scale problems. A major obstacle to achieving higher accuracy with first-order methods is their slow tail convergence. The first part of this work is an attempt to remedy this slow convergence by proposing an adaptive conditioning heuristic policy.
First, a parallelizable numerical algorithm is proposed that is capable of handling large-scale conic programs on distributed platforms such as graphics processing units (GPUs) with orders-of-magnitude time improvement. A mathematical proof of global convergence of the proposed algorithm is provided. In the past decade, several preconditioning methods have been applied to improve the condition number and convergence of first-order methods; diagonal preconditioning and matrix equilibration are the most commonly used techniques for this purpose. Contrary to existing techniques, in this work it is argued that the condition number of the problem data is not a reliable predictor of convergence speed. In light of this observation, an adaptive conditioning heuristic is proposed that enables higher accuracy compared to other first-order numerical algorithms. A wide range of experiments is conducted on a variety of large-scale linear programming and second-order cone programming problems to demonstrate the scalability and computational advantages of the proposed algorithm compared to commercial and open-source solvers. Solving a linear system is usually the most computationally expensive step in first-order methods. Existing methods rely on direct or indirect approaches to solving these linear systems of equations. Direct methods rely on factorization techniques, which usually destroy the sparsity structure of the original sparse problems and hence become computationally prohibitive. Alternatively, indirect methods are iterative, and various preconditioned variants of indirect methods have been studied in the literature to improve accuracy, but again the preconditioners do not necessarily retain the sparsity patterns of the original problems. In the second part of this work, a matrix-free first-order approach is proposed for solving large-scale sparse conic optimization problems.
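As a concrete example of the matrix equilibration mentioned above, here is a minimal sketch of a generic Ruiz-style scheme (an assumed textbook implementation, not the thesis's actual conditioning heuristic) that rescales a badly scaled matrix so that every row and column has unit infinity norm:

```python
import numpy as np

def ruiz_equilibrate(A, iters=50):
    """Iteratively rescale A -> diag(D) @ A @ diag(E) so that every row
    and column infinity-norm approaches 1 (Ruiz-style equilibration)."""
    m, n = A.shape
    D = np.ones(m)
    E = np.ones(n)
    M = np.array(A, dtype=float)
    for _ in range(iters):
        r = np.sqrt(np.max(np.abs(M), axis=1))   # sqrt of row inf-norms
        c = np.sqrt(np.max(np.abs(M), axis=0))   # sqrt of column inf-norms
        r[r == 0] = 1.0                          # leave all-zero rows/columns alone
        c[c == 0] = 1.0
        M = M / r[:, None] / c[None, :]
        D /= r
        E /= c
    return M, D, E

A = np.array([[1e4, 2.0], [3.0, 1e-3]])          # badly scaled problem data
M, D, E = ruiz_equilibrate(A)
# row and column inf-norms of M are now ~1, and M == diag(D) @ A @ diag(E)
```

The symmetric sqrt scaling, unlike one-shot row or column normalization, treats rows and columns evenly and preserves the sparsity pattern of A, which is the property the thesis emphasizes for large sparse problems.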
This method is based on an easy-to-compute decomposition of large sparse matrices into two factors. The proposed numerical algorithm combines this matrix-free decomposition with the alternating direction method of multipliers (ADMM). The iterations of the algorithm are computationally cheap, highly parallelizable, and admit closed-form solutions. The algorithm can easily be implemented on distributed platforms such as graphics processing units with orders-of-magnitude time improvements. The performance of the proposed algorithm is demonstrated on a variety of conic problems, and the performance gain is compared with competing first-order solvers.
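A minimal ADMM sketch (a generic textbook-style splitting, not the thesis's matrix-free algorithm) shows the flavor of cheap, closed-form iterations: project a point b onto the nonnegative orthant by splitting f(x) = (1/2)||x - b||^2 from the indicator of {z >= 0}, with the consensus constraint x = z:

```python
import numpy as np

# minimize (1/2)||x - b||^2 + indicator(z >= 0)  subject to  x = z
b = np.array([1.5, -2.0, 0.3, -0.1])
rho = 1.0                                   # ADMM penalty parameter
x = np.zeros_like(b)
z = np.zeros_like(b)
u = np.zeros_like(b)                        # scaled dual variable
for _ in range(200):
    x = (b + rho * (z - u)) / (1.0 + rho)   # closed-form prox of f
    z = np.maximum(x + u, 0.0)              # projection onto {z >= 0}
    u = u + x - z                           # scaled dual ascent step

# the iterates converge to the projection of b: z -> max(b, 0)
```

Every step is elementwise, so the same loop parallelizes trivially across coordinates, which is exactly the structure that makes GPU implementations of such first-order solvers attractive.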