
This book deals with parametric and nonparametric density estimation from the maximum (penalized) likelihood point of view, including estimation under constraints. The focal points are existence and uniqueness of the estimators, almost sure convergence rates for the L1 error, and data-driven smoothing parameter selection methods, including their practical performance. The reader will gain insight into technical tools from probability theory and applied mathematics.
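To make the maximum penalized likelihood idea concrete, here is a minimal sketch of a nonparametric density estimate: the log-density is represented on a grid, and a roughness penalty on second differences plays the role of the smoothing term. The grid size, binning step, and smoothing parameter are illustrative assumptions, not choices made in the book.

```python
# Sketch of nonparametric maximum penalized likelihood density estimation.
# The log-density is parameterized on a grid; a roughness penalty on its
# second differences is the smoothing term. All tuning values are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.normal(loc=0.0, scale=1.0, size=400)          # sample to be estimated

grid = np.linspace(x.min() - 1, x.max() + 1, 80)
dx = grid[1] - grid[0]
counts, _ = np.histogram(x, bins=np.append(grid - dx / 2, grid[-1] + dx / 2))

lam = 5.0                                              # smoothing parameter (assumed)

def penalized_negloglik(g):
    # log-likelihood of the discretized log-density g, normalized on the grid,
    # minus a roughness penalty on second differences
    lognorm = np.log(np.sum(np.exp(g)) * dx)
    loglik = np.sum(counts * (g - lognorm))
    rough = np.sum(np.diff(g, n=2) ** 2)
    return -(loglik - lam * rough)

g_hat = minimize(penalized_negloglik, np.zeros(grid.size), method="BFGS").x
f_hat = np.exp(g_hat) / (np.sum(np.exp(g_hat)) * dx)   # estimated density
print("integral of estimate:", np.sum(f_hat) * dx)     # ~1 by construction
```

Larger values of the smoothing parameter pull the estimate toward a smoother log-density; data-driven choices of that parameter are the kind of question the book addresses.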
Unique blend of asymptotic theory and small-sample practice through simulation experiments and data analysis. Novel reproducing kernel Hilbert space methods for the analysis of smoothing splines and local polynomials, leading to uniform error bounds and honest confidence bands for the mean function using smoothing splines. Exhaustive exposition of algorithms, including the Kalman filter, for the computation of smoothing splines of arbitrary order.
The multinomial logit model with random coefficients is widely used in applied research. This paper is concerned with estimating a random coefficients logit model in which the distribution of each coefficient is characterized by finitely many parameters. Some of these parameters may be zero. The paper gives conditions under which with probability approaching 1 as the sample size approaches infinity, penalized maximum likelihood (PML) estimation with the adaptive LASSO (AL) penalty function distinguishes correctly between zero and non-zero parameters in a random coefficients logit model. If one or more parameters are zero, then PML with the AL penalty function often reduces the asymptotic mean-square estimation error of any continuously differentiable function of the model’s parameters, such as a market share or an elasticity. The paper describes a method for computing the PML estimates of a random coefficients logit model. It also presents the results of Monte Carlo experiments that illustrate the numerical performance of the PML estimates. Finally, it presents the results of PML estimation of a random coefficients logit model of choice among brands of butter and margarine in the British groceries market.
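As a rough illustration of the estimation strategy described above, the sketch below applies an adaptive LASSO penalty to the likelihood of a plain binary logit, a simplified stand-in for the random coefficients multinomial logit. The simulated data, tuning constants, and the smooth surrogate for the L1 term are assumptions for the example, not the paper's implementation.

```python
# Adaptive-LASSO penalized maximum likelihood for a binary logit model.
# Step 1 fits the unpenalized MLE; step 2 uses it to build adaptive weights
# and re-estimates under the weighted L1 penalty. Illustrative sketch only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated data: two active coefficients, two truly zero ones.
n, beta_true = 2000, np.array([1.5, -2.0, 0.0, 0.0])
X = rng.normal(size=(n, beta_true.size))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

def negloglik(beta):
    """Negative log-likelihood of the binary logit model."""
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

# Step 1: unpenalized MLE, used to build the adaptive weights.
beta_mle = minimize(negloglik, np.zeros(beta_true.size), method="BFGS").x

# Step 2: adaptive LASSO weights w_j = 1 / |beta_mle_j|**gamma (assumed tuning).
gamma, lam = 1.0, 5.0
w = 1.0 / (np.abs(beta_mle) ** gamma + 1e-8)

def penalized(beta):
    # Smooth |.| surrogate so a quasi-Newton solver can be used; a proximal
    # or coordinate-descent solver would handle the exact L1 term.
    return negloglik(beta) + lam * np.sum(w * np.sqrt(beta ** 2 + 1e-10))

beta_pml = minimize(penalized, beta_mle, method="BFGS").x
print("MLE:               ", np.round(beta_mle, 3))
print("adaptive-LASSO PML:", np.round(beta_pml, 3))
```

Because the weights blow up for coefficients whose initial estimates are near zero, the penalty shrinks those coefficients aggressively while leaving the large ones almost unpenalized, which is the mechanism behind the correct zero/non-zero classification discussed in the paper.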
Contributed in honour of Lucien Le Cam on the occasion of his 70th birthday, the papers reflect the immense influence that his work has had on modern statistics. They include discussions of his seminal ideas, historical perspectives, and contributions to current research, spanning two centuries with a new translation of a paper by Daniel Bernoulli. The volume begins with a paper by Aalen, which describes Le Cam's role in the founding of the martingale analysis of point processes, and ends with one by Yu, exploring the position of just one of Le Cam's ideas in modern semiparametric theory. The other 27 papers touch on areas such as local asymptotic normality, contiguity, efficiency, admissibility, minimaxity, empirical process theory, and biological, medical, and meteorological applications, where Le Cam's insights have laid the foundations for new theories.
Chapter 2 proposes a penalized maximum likelihood approach with an adaptive Lasso penalty to estimate SARAR models. It allows for simultaneous model selection and parameter estimation. With an appropriately chosen tuning parameter, the resulting estimators enjoy the oracle properties; in other words, zero parameters are estimated as zero with probability approaching one, and nonzero parameters possess the same asymptotic distribution as if the true model were known. We extend the work of Zhu, Huang and Reyes (2010) to account for models with spatial lags. We also allow the number of parameters to grow with the sample size at a relatively slow rate. As maximum likelihood estimation is computationally demanding, we generalize the least squares approximation (LSA) algorithm (Wang and Leng, 2010) to spatial linear models and prove that the LSA estimators perform as efficiently as the oracle as long as a consistent initial estimator with a proper convergence rate is adopted in the algorithm. By using the LSA algorithm with a computationally simple initial estimator, we can perform penalized maximum likelihood estimation of SARAR models much faster than Zhu, Huang and Reyes (2010) without sacrificing efficiency.
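A minimal sketch of the LSA idea, using an ordinary linear regression instead of a SARAR model: the log-likelihood is replaced by a quadratic form centred at a consistent initial estimator, and the adaptive Lasso penalty is applied to that quadratic. The OLS initial estimator, the toy data, and the tuning constants are illustrative assumptions, not the chapter's implementation.

```python
# Least squares approximation (LSA) with an adaptive Lasso penalty:
# minimize (beta - beta_init)' Sigma^{-1} (beta - beta_init) + lam * sum_j w_j |beta_j|,
# where beta_init is a cheap consistent initial estimator and Sigma its covariance.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

n = 500
beta_true = np.array([2.0, 0.0, -1.0, 0.0])
X = rng.normal(size=(n, beta_true.size))
y = X @ beta_true + rng.normal(size=n)

# Consistent initial estimator (OLS) and its estimated covariance.
beta_init, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = np.mean((y - X @ beta_init) ** 2)
prec = (X.T @ X) / sigma2                      # inverse of the OLS covariance

# Adaptive Lasso weights built from the initial estimator (assumed tuning).
lam, gamma = 1.0, 1.0
w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)

def lsa_objective(beta):
    d = beta - beta_init
    quad = d @ prec @ d                        # quadratic (least squares) approximation
    # smooth surrogate for |beta_j| so BFGS can be used
    return quad + lam * np.sum(w * np.sqrt(beta ** 2 + 1e-10))

beta_lsa = minimize(lsa_objective, beta_init, method="BFGS").x
print("initial estimator:  ", np.round(beta_init, 3))
print("LSA adaptive Lasso: ", np.round(beta_lsa, 3))
```

The appeal of the approximation is that the expensive likelihood is evaluated only once, to obtain the initial estimator and its covariance; the penalized step then works entirely with the quadratic surrogate, which is why the chapter can report large speed-ups without losing efficiency.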