
Error Bounds for Convex Parameter Estimation

J.S. Picard and A.J. Weiss
Signal Processing 92 (5) (2012) 1328–1337


We consider the case of estimating a single parameter vector and provide upper bounds on the achievable accuracy. Denote by y the vectorization (concatenation of the columns) of the sample covariance matrix, and denote by B the matrix of manifold samples. Denote by $h_c$ the element of the grid H that is closest to the true parameter $h_0$.
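As a rough illustration of these definitions, the sketch below builds y and B for a direction-finding setup. The `manifold` function and the rank-one covariance construction of the dictionary columns are assumptions for illustration only; the excerpt truncates the exact definition of B.

```python
import numpy as np

def manifold(h, m=8):
    """Hypothetical array-manifold vector c(h) for an m-sensor uniform
    linear array with half-wavelength spacing; h is the AOA in degrees."""
    k = np.arange(m)
    return np.exp(1j * np.pi * k * np.cos(np.deg2rad(h)))

m, T = 8, 200                      # sensors, snapshots
H = np.arange(0.0, 180.0, 2.0)     # grid H = {h_1, ..., h_L}, here L = 90

# y: vectorization (column stacking) of the sample covariance matrix.
X = (np.random.randn(m, T) + 1j * np.random.randn(m, T)) / np.sqrt(2)
R_hat = (X @ X.conj().T) / T
y = R_hat.flatten(order='F')

# B: one column per grid point. Each column is taken here to be the
# vectorized rank-one contribution vec(c(h) c(h)^H) -- an assumption,
# since the excerpt truncates the exact definition of B.
B = np.column_stack(
    [np.outer(manifold(h), manifold(h).conj()).flatten(order='F') for h in H])
```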




Observations subject to additive noise

So far, neither the structure of the vector $c(h)$ nor the observation noise distribution has been specified.

We show that despite some similarity, the problem we address differs from the problem solved by LASSO. A proposal for the cost function to minimize is
$$f\big(h^{(1)},\dots,h^{(Q)},\,g^{(1)},\dots,g^{(Q)}\big) \triangleq \Big\| y - \sum_{q=1}^{Q} g^{(q)}\, c\big(h^{(q)}\big) \Big\|_p,$$
where $p \ge 0$.
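A minimal sketch of evaluating this cost, assuming a user-supplied `manifold` function mapping h to c(h). NumPy's vector norm covers $p \ge 1$ directly, accepts fractional `ord` values, and counts nonzero entries for `ord=0`:

```python
import numpy as np

def cost(y, h_list, g_list, manifold, p=2):
    """f(h(1),...,h(Q), g(1),...,g(Q)) = || y - sum_q g(q) c(h(q)) ||_p."""
    residual = y - sum(g * manifold(h) for h, g in zip(h_list, g_list))
    return np.linalg.norm(residual, ord=p)
```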

The sparse representation is obtained by considering N possible locations for the transmitter within the AOA interval of interest.

Suppose $\hat{p}$ is a vector whose c-th entry is strictly positive and whose other entries are all zero. Define the projection matrix onto the space spanned by the columns of $C(h)$,
$$P(h) = C(h)\big[C^{T}(h)\,C(h)\big]^{-1} C^{T}(h). \qquad (19)$$
In the absence of noise, $y = C(h_0)\,s_0 = P(h_0)\,y$.
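A small sketch of Eq. (19) and the noiseless identity $P(h_0)y = y$, assuming a real-valued $C(h)$ (hence the plain transpose) and using a pseudo-inverse for numerical robustness:

```python
import numpy as np

def projection(C):
    """Orthogonal projector P = C (C^T C)^{-1} C^T onto range(C), Eq. (19)."""
    return C @ np.linalg.pinv(C.T @ C) @ C.T

# In the absence of noise, y = C(h0) s0 lies in range(C(h0)), so P(h0) y = y.
C0 = np.random.randn(6, 2)   # stand-in for C(h0)
s0 = np.random.randn(2)
y = C0 @ s0
assert np.allclose(projection(C0) @ y, y)
```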

morefromWikipedia Maxima and minima In mathematics, the maximum and minimum of a function, known collectively as extrema, are the largest and smallest value that the function takes at a point either navigate here The system returned: (22) Invalid argument The remote host or network may be down. Eq. (9) can be equivalently formulated as PrfJh^�h0J24dgr1� max Dr2d= ffiffi T p fpDðh0Þg ðC:1Þ According to (4.3.24) in [28], the first and the second order moment of a random variable Using vector notations, we have B9½cðh1Þ, . . . ,cðhLÞ�.

Conditions ensuring uniqueness of the sparsest solution were established in [8]. The grid consists of L points, $H = \{h_1, \dots, h_L\}$. We avoid significantly smaller values of the grid spacing $d_s$, since this results in a large matrix B that cannot be handled by the memory of the computer we used. We introduce a corollary to Theorem 2.

The scalars $g^{(q)}_0 > 0$ may be unknown, depending on the problem at hand.


An estimator attempts to approximate the unknown parameters using the measurements. The second-order moment of the estimation error is bounded by
$$B_2 \triangleq 2\int_0^{\delta_0} \delta\,\Big(1 - \max_{\Delta \le 2\delta/\sqrt{T}} \{p_\Delta(h_0)\}\Big)\,\mathrm{d}\delta,$$
where $p_\Delta(h_0)$ is maximized with respect to $\Delta$ over $\Delta \le 2\delta/\sqrt{T}$. Since the problem we address differs from the one solved by LASSO, the error bounds derived for LASSO are not appropriate for the problem at hand.

It suffices to show that the following inequality holds:
$$\tfrac{1}{2}\|y - Bp\|_2^2 + \lambda\|p\|_1 \;>\; \tfrac{1}{2}\|y - B\hat{p}\|_2^2 + \lambda\|\hat{p}\|_1 \;=\; \tfrac{1}{2}\big\|y - b(h_c)\{\hat{p}\}_c\big\|_2^2 + \lambda\{\hat{p}\}_c, \qquad (A.9)$$
where $\{\hat{p}\}_c$ denotes the c-th entry of $\hat{p}$. We will find a condition ensuring that (A.9) holds for any p in a close neighborhood of $\hat{p}$. The minimizer in this case is the well-known maximum likelihood estimator of $h_0$. A question of great interest, which motivated the research work described in this paper, is the evaluation of the effects of the convex relaxation in terms of estimation accuracy.
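The convex program in question is an $\ell_1$-penalized least-squares (LASSO-type) objective. A minimal proximal-gradient (ISTA) sketch for minimizing $\tfrac{1}{2}\|y-Bp\|_2^2 + \lambda\|p\|_1$, assuming a real-valued dictionary B, is:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(B, y, lam, n_iter=500):
    """Minimize (1/2)||y - B p||_2^2 + lam * ||p||_1 by proximal gradient."""
    step = 1.0 / np.linalg.norm(B, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    p = np.zeros(B.shape[1])
    for _ in range(n_iter):
        grad = B.T @ (B @ p - y)             # gradient of the quadratic term
        p = soft_threshold(p - step * grad, step * lam)
    return p
```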

The theory of sparse signal representation has been used to solve the NP-hard linear decoding problem in [14]. Unfortunately, the advantages come at the expense of increased estimation error. (The proof of Theorem 2 is deferred to the appendix.)

Initially, $h_0$ is searched within $[0, 180]^\circ$ using $L = 90$ grid points whose spacing is $\Delta = 2^\circ$.

Let $t$ be a small positive scalar and $t\tilde{p}$ a perturbation on $\hat{p}$.

Denote by $\hat{h}$ the estimator of $h_0$ obtained by solving (P), and define
$$B_1 \triangleq \frac{\sqrt{T}\,R}{2} - \int_0^{\sqrt{T}R/2} \max_{\Delta \le 2\delta/\sqrt{T}} \{p_\Delta(h_0)\}\,\mathrm{d}\delta.$$

[Figure: the bias of the proposed AOA estimator (dotted line), the first-order moment of the estimation error (dash-dotted line), and the proposed upper bound $B_1$ on the first-order moment (solid line).]

The theoretical results are corroborated by simulations.
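Given a routine that evaluates the inner maximum $\max_{\Delta \le 2\delta/\sqrt{T}}\{p_\Delta(h_0)\}$ (not specified in this excerpt, so it is passed in as a hypothetical callable), the bounds $B_1$ and $B_2$ can be approximated by simple quadrature:

```python
import numpy as np

def moment_bounds(p_max, T, R, delta0, n=1000):
    """Trapezoidal-rule approximations of the moment bounds.
    p_max(delta) must return max_{Delta <= 2*delta/sqrt(T)} p_Delta(h0),
    vectorized over an array of delta values."""
    # B1 = sqrt(T)*R/2 - integral of p_max, i.e. the integral of (1 - p_max).
    d1 = np.linspace(0.0, np.sqrt(T) * R / 2.0, n)
    B1 = np.trapz(1.0 - p_max(d1), d1)
    # B2 = 2 * integral of delta * (1 - p_max) over [0, delta0].
    d2 = np.linspace(0.0, delta0, n)
    B2 = 2.0 * np.trapz(d2 * (1.0 - p_max(d2)), d2)
    return B1, B2
```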

More specifically, we are interested in the distribution of the estimation error when the estimate of $h_0$ is obtained from the solution of (P). This convexification of course eliminates the problem of local minima and makes the problem dimensions tractable, at the expense of possibly increased error.

Then, the evaluation of $p'_\Delta(h_0, e)$ is simple, and so is the evaluation of $B'_1(e)$ and $B'_2(e)$. Typical estimation approaches consist of minimizing a non-convex cost function that exhibits local minima, and they require excessive computational resources. Ideally, the minimizer of f has a closed-form expression that can be derived analytically.

References

[16] J. Huang, S. Ma, C.-H. Zhang, The sparsity and bias of the LASSO selection in high-dimensional linear regression, The Annals of Statistics 36 (4) (2008) 1567–1594.
[21] I.F. Gorodnitsky, B.D. Rao, Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm, IEEE Transactions on Signal Processing 45 (3) (1997) 600–616.
[23] T. Yardibi, J. Li, P. Stoica, L.N. Cattafesta, Sparsity constrained deconvolution approaches for acoustic source mapping, Journal of the Acoustical Society of America 123 (5) (2008) 2631–2642.