Compressive sampling
Emmanuel J. Candès∗

∗The author is partially supported by an NSF grant CCF–515362.
Abstract. Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the image, i.e., the number of pixels in the image. This paper surveys an emerging theory which goes by the name of “compressive sampling” or “compressed sensing,” and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct images or signals of scientific interest accurately and sometimes even exactly from a number of samples which is far smaller than the desired resolution of the image, i.e., the number of pixels in the image.
It is believed that compressive sampling has far-reaching implications. For example, it suggests the possibility of new data acquisition protocols that translate analog information into digital form with fewer sensors than what was considered necessary. This new sampling theory may come to underlie procedures for sampling and compressing data simultaneously.
In this short survey, we provide some of the key mathematical insights underlying this new theory, and explain some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.

Mathematics Subject Classification (2000). Primary 00A69, 41-02, 68P30; Secondary 62C65.
Keywords. Compressive sampling, sparsity, uniform uncertainty principle, underdetermined systems of linear equations, ℓ1-minimization, linear programming, signal recovery, error correction.
1. Introduction
One of the central tenets of signal processing is the Nyquist/Shannon sampling theory: the number of samples needed to reconstruct a signal without error is dictated by its bandwidth, the length of the shortest interval which contains the support of the spectrum of the signal under study. In the last two years or so, an alternative theory of “compressive sampling” has emerged which shows that super-resolved signals and images can be reconstructed from far fewer data/measurements than what is usually considered necessary. The purpose of this paper is to survey and provide some of the key mathematical insights underlying this new theory. An enchanting aspect of compressive sampling is that it has significant interactions and bearings on some fields in the applied sciences and engineering such as statistics, information theory, coding
theory, theoretical computer science, and others as well. We will try to explain these connections via a few selected examples.
From a general viewpoint, sparsity and, more generally, compressibility has played and continues to play a fundamental role in many fields of science. Sparsity leads to efficient estimations; for example, the quality of estimation by thresholding or shrinkage algorithms depends on the sparsity of the signal we wish to estimate. Sparsity leads to efficient compression; for example, the precision of a transform coder depends on the sparsity of the signal we wish to encode [24]. Sparsity leads to dimensionality reduction and efficient modeling. The novelty here is that sparsity has bearings on the data acquisition process itself, and leads to efficient data acquisition protocols.
In fact, compressive sampling suggests ways to economically translate analog data into already compressed digital form [20], [7]. The key word here is “economically.” Everybody knows that because typical signals have some structure, they can be compressed efficiently without much perceptual loss. For instance, modern transform coders such as JPEG2000 exploit the fact that many signals have a sparse representation in a fixed basis, meaning that one can store or transmit only a small number of adaptively chosen transform coefficients rather than all the signal samples. The way this typically works is that one acquires the full signal, computes the complete set of transform coefficients, encodes the largest coefficients and discards all the others. This process of massive data acquisition followed by compression is extremely wasteful (one can think about a digital camera which has millions of imaging sensors, the pixels, but eventually encodes the picture on a few hundred kilobytes). This raises a fundamental question: because most signals are compressible, why spend so much effort acquiring all the data when we know that most of it will be discarded? Wouldn’t it be possible to acquire the data in already compressed form so that one does not need to throw away anything? “Compressive sampling,” also known as “compressed sensing” [20], shows that this is indeed possible.
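To make the wastefulness concrete, here is a minimal sketch of the acquire-then-compress pipeline just described (Python/NumPy is our choice of language; the test signal, the DFT basis, and the 2% retention rate are illustrative assumptions, not prescriptions from the text):

    import numpy as np

    N = 1024
    t = np.arange(N)
    rng = np.random.default_rng(0)

    # A signal that is sparse in the Fourier basis: a few random sinusoids.
    freqs = rng.choice(N // 2, size=5, replace=False)
    f = sum(np.cos(2 * np.pi * k * t / N) for k in freqs)

    # Acquire-then-compress: measure all N samples, compute the complete set
    # of transform coefficients, keep the largest 2%, discard the rest.
    coeffs = np.fft.fft(f) / np.sqrt(N)
    keep = int(0.02 * N)
    idx = np.argsort(np.abs(coeffs))[-keep:]   # locations of the largest coefficients
    compressed = np.zeros_like(coeffs)
    compressed[idx] = coeffs[idx]

    # Reconstruction from the few retained coefficients is near-perfect,
    # even though 98% of the acquired data was thrown away.
    f_hat = np.real(np.fft.ifft(compressed * np.sqrt(N)))
    print(np.max(np.abs(f - f_hat)))

Compressive sampling asks whether the first, expensive acquisition step can be shortened so that only roughly as many measurements as retained coefficients are ever taken.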
This paper is by no means an exhaustive survey of the literature on compressive sampling. Rather, this is merely an account of the author’s own work and thinking in this area, which also includes a fairly large number of references to other people’s work and occasionally discusses connections with these works. We have done our best to organize the ideas into a logical progression starting with the early papers which launched this subject. Before we begin, we would like to invite the interested reader to also check the article [17] by Ronald DeVore, also in these proceedings, for a complementary survey of the field (Section 5).
2. Undersampled measurements
Consider the general problem of reconstructing a vector x ∈ R^N from linear measurements y about x of the form

$$y_k = \langle x, \varphi_k \rangle, \quad k = 1, \ldots, K, \qquad \text{or} \qquad y = \Phi x. \tag{2.1}$$
That is, we acquire information about the unknown signal by sensing x against K vectors ϕ_k ∈ R^N. We are interested in the “underdetermined” case K ≪ N, where we have many fewer measurements than unknown signal values. Problems of this type arise in a countless number of applications. In radiology and biomedical imaging for instance, one is typically able to collect far fewer measurements about an image of interest than the number of unknown pixels. In wideband radio frequency signal analysis, one may only be able to acquire a signal at a rate which is far lower than the Nyquist rate because of current limitations in Analog-to-Digital Converter technology. Finally, gene expression studies also provide examples of this kind. Here, one would like to infer the gene expression level of thousands of genes, that is, the dimension N of the vector x is in the thousands, from a low number of observations, typically in the tens.
At first glance, solving the underdetermined system of equations appears hopeless, as it is easy to make up examples for which it clearly cannot be done. But suppose now that the signal x is compressible, meaning that it essentially depends on a number of degrees of freedom which is smaller than N. For instance, suppose our signal is sparse in the sense that it can be written either exactly or accurately as a superposition of a small number of vectors in some fixed basis. Then this premise radically changes the problem, making the search for solutions feasible. In fact, accurate and sometimes exact recovery is possible by solving a simple convex optimization problem.
2.1. A nonlinear sampling theorem. It might be best to consider a concrete example first. Suppose here that one collects an incomplete set of frequency samples of a discrete signal x of length N. (To ease the exposition, we consider a model problem in one dimension. The theory extends easily to higher dimensions. For instance, we could be equally interested in the reconstruction of 2- or 3-dimensional objects from undersampled Fourier data.) The goal is to reconstruct the full signal x given only K samples in the Fourier domain

$$y_k = \frac{1}{\sqrt{N}} \sum_{t=0}^{N-1} x_t \, e^{-i 2\pi \omega_k t / N}, \tag{2.2}$$
where the “visible” frequencies ω_k are a subset (of size K) of the set of all frequencies {0, ..., N − 1}. Sensing an object by measuring selected frequency coefficients is the principle underlying Magnetic Resonance Imaging, and is common in many fields of science, including Astrophysics. In the language of the general problem (2.1), the sensing matrix Φ is obtained by sampling K rows of the N by N discrete Fourier transform matrix.
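This sensing mechanism is easy to simulate. The sketch below (our illustration, not code from the paper; the sizes N, K, S and the random seed are arbitrary) builds Φ from K randomly selected rows of the N × N DFT matrix and measures an S-sparse spike train, exactly the setup of the theorem that follows:

    import numpy as np

    N, K, S = 256, 60, 8
    rng = np.random.default_rng(1)

    # An S-sparse signal: S spikes at random locations with random amplitudes.
    x = np.zeros(N)
    support = rng.choice(N, size=S, replace=False)
    x[support] = rng.standard_normal(S)

    # Sensing matrix: K rows of the unitary N-by-N DFT matrix, with the
    # frequencies omega_k chosen uniformly at random, as in (2.2).
    F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
    omega = rng.choice(N, size=K, replace=False)
    Phi = F[omega, :]
    y = Phi @ x   # the K measured Fourier samples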
We will say that a vector x is S-sparse if its support {i : x_i ≠ 0} is of cardinality less or equal to S. Then Candès, Romberg and Tao [6] showed that one could almost always recover the signal x exactly by solving the convex program¹ (here $\|\tilde{x}\|_1 := \sum_{i=1}^{N} |\tilde{x}_i|$)

$$(P_1) \qquad \min_{\tilde{x} \in \mathbb{R}^N} \|\tilde{x}\|_1 \quad \text{subject to} \quad \Phi \tilde{x} = y. \tag{2.3}$$

¹(P1) can even be recast as a linear program [3], [15].
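Footnote 1 is worth making concrete: writing x̃ = u − v with u, v ≥ 0 turns the objective ‖x̃‖₁ into the linear function Σᵢ(uᵢ + vᵢ), so (P1) is a linear program. A minimal sketch continuing the previous snippet (SciPy’s linprog is our solver choice, not that of [3], [15]; the complex system Φx̃ = y is split into its real and imaginary parts):

    import numpy as np
    from scipy.optimize import linprog

    # Real-valued reformulation of the complex constraint Phi @ x = y.
    A = np.vstack([Phi.real, Phi.imag])
    b = np.concatenate([y.real, y.imag])

    # Variables [u; v] with x = u - v and u, v >= 0, so ||x||_1 = sum(u + v).
    c = np.ones(2 * N)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")

    x_hat = res.x[:N] - res.x[N:]
    print(np.max(np.abs(x - x_hat)))   # exact recovery up to solver tolerance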
Theorem 2.1 ([6]). Assume that x is S-sparse and that we are given K Fourier coefficients with frequencies selected uniformly at random. Suppose that the number of observations obeys

$$K \geq C \cdot S \cdot \log N. \tag{2.4}$$

Then minimizing ℓ1 reconstructs x exactly with overwhelming probability. In detail, if the constant C is of the form 22(δ + 1) in (2.4), then the probability of success exceeds 1 − O(N^{−δ}).
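To get a feel for the numbers (an illustrative back-of-the-envelope calculation, with the natural logarithm and values of our choosing): take δ = 1, so C = 22(δ + 1) = 44, and suppose S = 10 and N = 10⁶. Then (2.4) asks for K ≥ 44 · 10 · log(10⁶) ≈ 44 · 10 · 13.8 ≈ 6,100 randomly chosen Fourier coefficients, instead of the 10⁶ samples a resolution-matched acquisition would collect, and the probability of failure is O(N⁻¹).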
The first conclusion is that one suffers no information loss by measuring just about any set of K frequency coefficients. The second is that the signal x can be exactly recovered by minimizing a convex functional which does not assume any knowledge about the number of nonzero coordinates of x, their locations, or their amplitudes, which we assume are all completely unknown a priori.
While this seems to be a great feat, one could still ask whether this is optimal, or whether one could do with even fewer samples. The answer is that in general, we cannot reconstruct S-sparse signals with fewer samples. There are examples for which the minimum number of samples needed for exact reconstruction by any method, no matter how intractable, must be about S log N. Hence, the theorem is tight and ℓ1-minimization succeeds nearly as soon as there is any hope to succeed by any algorithm.
The reader is certainly familiar with the Nyquist/Shannon sampling theory and one can reformulate our result to establish simple connections. By reversing the roles of time and frequency in the above example, we can recast Theorem 2.1 as a new nonlinear sampling theorem. Suppose that a signal x has support Ω in the frequency domain with B = |Ω|. If Ω is a connected set, we can think of B as the bandwidth of x. If in addition the set Ω is known, then the classical Nyquist/Shannon sampling theorem states that x can be reconstructed perfectly from B equally spaced samples in the time domain². The reconstruction is simply a linear interpolation with a “sinc” kernel.

²For the sake of convenience, we make the assumption that the bandwidth B divides the signal length N evenly.
Now suppose that the set Ω, still of size B, is unknown and not necessarily connected. In this situation, the Nyquist/Shannon theory is unhelpful; we can only assume that the connected frequency support is the entire domain, suggesting that all N time-domain samples are needed for exact reconstruction. However, Theorem 2.1 asserts that far fewer samples are necessary. Solving (P1) will recover x perfectly from about B log N time samples. What is more, these samples do not have to be carefully chosen; almost any sample set of this size will work. Thus we have a nonlinear analog (described as such since the reconstruction procedure (P1) is nonlinear) to Nyquist/Shannon: we can reconstruct a signal with arbitrary and unknown frequency support of size B from about B log N arbitrarily chosen samples in the time domain.
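The reversed setup can be tested with the same machinery as before (again an illustrative sketch: the spectrum is taken real-valued so that the real-variable linear program above applies unchanged, and the factor 4 in the sample count is arbitrary):

    import numpy as np

    N, B = 256, 8
    rng = np.random.default_rng(2)

    # Spectrum supported on an unknown, not necessarily connected set of size B.
    xf = np.zeros(N)
    supp = rng.choice(N, size=B, replace=False)
    xf[supp] = rng.standard_normal(B)

    # Time-domain signal, sampled at about B log N arbitrary instants.
    x_time = np.sqrt(N) * np.fft.ifft(xf)   # unitary inverse DFT of the spectrum
    K = int(4 * B * np.log(N))
    times = rng.choice(N, size=K, replace=False)

    # Sensing matrix: the selected rows of the inverse DFT matrix, so that
    # Phi @ xf matches the sampled time-domain values y.
    Finv = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
    Phi = Finv[times, :]
    y = x_time[times]
    # Solving (P1) with this Phi and y, e.g. via the linear program above,
    # recovers the B-sparse spectrum xf.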
Finally, we would like to emphasize that our Fourier sampling theorem is only a special instance of much more general statements. As a matter of fact, the results extend to a variety of other setups and higher dimensions. For instance, [6] shows how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation.
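For the piecewise constant case, the program analogous to (P1) minimizes the total variation subject to the frequency constraints. A sketch of this variant (using CVXPY as the solver, which is our choice and not part of [6]; the signal, jump count, and sample size are arbitrary):

    import numpy as np
    import cvxpy as cp

    N, K = 128, 40
    rng = np.random.default_rng(3)

    # Piecewise constant signal with a handful of jumps.
    x = np.zeros(N)
    for pos in rng.choice(np.arange(1, N), size=5, replace=False):
        x[pos:] += rng.standard_normal()

    # Incomplete frequency samples, as in Section 2.1.
    F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
    omega = rng.choice(N, size=K, replace=False)
    y = F[omega, :] @ x

    # Minimize the total variation subject to the observed Fourier data.
    xt = cp.Variable(N)
    constraints = [F[omega, :].real @ xt == y.real,
                   F[omega, :].imag @ xt == y.imag]
    cp.Problem(cp.Minimize(cp.tv(xt)), constraints).solve()
    print(np.max(np.abs(x - xt.value)))   # typically near-exact recovery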
2.2. Background. Now for some background. In the mid-eighties, Santosa and Symes [44] had suggested the minimization of ℓ1-norms to recover sparse spike trains; see also [25], [22] for early results. In the last four years or so, a series of papers [26], [27], [28], [29], [33], [30] explained why ℓ1 could recover sparse signals in some special setups. We note though that the results in this body of work are very different from the sampling theorem we just introduced. Finally, we would like to point out important connections with the literature of theoretical computer science. Inspired by [37], Gilbert and her colleagues have shown that one could recover an S-sparse signal with probability exceeding 1 − δ from S · poly(log N, log δ) frequency samples placed on special equispaced grids [32]. The algorithms they use are not based on optimization but rather on ideas from the theory of computer science such as isolation and group testing. Other points of connection include situations in which the set of spikes is spread out in a somewhat even manner in the time domain [22], [51].
2.3. Undersampling structured signals. The previous example showed that the structural content of the signal allows a drastic “undersampling” of the Fourier transform while still retaining enough information for exact recovery. In other words, if one wanted to sense a sparse object by taking as few measurements as possible, then one would be well-advised to measure randomly selected frequency coefficients. In truth, this observation triggered a massive literature. To what extent can we recover a compressible signal from just a few measurements? What are good sensing mechanisms? Does all this extend to objects that are perhaps not sparse but well-approximated by sparse signals? In the remainder of this paper, we will provide some answers to these fundamental questions.
3. The Mathematics of compressive sampling
3.1. Sparsity and incoherence. In all that follows, we will adopt an abstract and general point of view when discussing the recovery of a vector x ∈ R^N. In practical instances, the vector x may be the coefficients of a signal f ∈ R^N in an orthonormal basis

$$f(t) = \sum_{i=1}^{N} x_i \psi_i(t), \qquad t = 1, \ldots, N. \tag{3.1}$$

For example, we might choose to expand the signal as a superposition of spikes (the canonical basis of R^N), sinusoids, B-splines, wavelets [36], and so on. As a side