# Nyquist–Shannon sampling theorem

*Figure: Example of the magnitude of the Fourier transform of a bandlimited function.*

The Nyquist–Shannon sampling theorem is a theorem in the field of signal processing which serves as a fundamental bridge between continuous-time signals and discrete-time signals. It establishes a sufficient condition for a sample rate that permits a discrete sequence of samples to capture all the information from a continuous-time signal of finite bandwidth.

Strictly speaking, the theorem only applies to a class of mathematical functions having a Fourier transform that is zero outside of a finite region of frequencies. Intuitively, we expect that when one reduces a continuous function to a discrete sequence and interpolates back to a continuous function, the fidelity of the result depends on the density (or sample rate) of the original samples. The sampling theorem introduces the concept of a sample rate that is sufficient for perfect fidelity for the class of functions that are bandlimited to a given bandwidth, such that no actual information is lost in the sampling process. It expresses the sufficient sample rate in terms of the bandwidth for the class of functions. The theorem also leads to a formula for perfectly reconstructing the original continuous-time function from the samples.

Perfect reconstruction may still be possible when the sample-rate criterion is not satisfied, provided other constraints on the signal are known (see § Sampling of non-baseband signals below, and compressed sensing). In some cases (when the sample-rate criterion is not satisfied), utilizing additional constraints allows for approximate reconstructions, whose fidelity can be verified and quantified utilizing Bochner's theorem.[1]

The name Nyquist–Shannon sampling theorem honours Harry Nyquist and Claude Shannon, but the theorem had also been discovered previously by E. T. Whittaker (published in 1915), and Shannon cited Whittaker's paper in his work. The theorem is thus also known as the Whittaker–Shannon sampling theorem or the Whittaker–Nyquist–Shannon sampling theorem, and may also be referred to as the cardinal theorem of interpolation.

## Contents

1. Introduction
2. Aliasing
3. Derivation as a special case of Poisson summation
4. Shannon's original proof
   1. Notes
5. Application to multivariable signals and images
6. Critical frequency
7. Sampling of non-baseband signals
8. Nonuniform sampling
9. Sampling below the Nyquist rate under additional restrictions
10. Historical background
    1. Other discoverers
    2. Why Nyquist?
11. See also
12. Notes
13. References
14. Further reading
15. External links

## Introduction

Sampling is a process of converting a signal (for example, a function of continuous time or space) into a sequence of values (a function of discrete time or space). Shannon's version of the theorem states:[2]

> If a function $x(t)$ contains no frequencies higher than $B$ hertz, it is completely determined by giving its ordinates at a series of points spaced $1/(2B)$ seconds apart.
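As a concrete numerical illustration of the criterion (a hypothetical sketch, not from the article; the rates and frequencies are chosen arbitrarily), the following Python snippet shows what goes wrong when a tone violates the condition: sampled at 1000 Hz, a 900 Hz cosine produces exactly the same samples as a 100 Hz cosine, so the samples alone cannot determine which signal was present.

```python
import numpy as np

fs = 1000.0            # sample rate in Hz (illustrative assumption)
f_tone = 900.0         # tone frequency above fs/2, so the criterion is violated
f_alias = fs - f_tone  # 100 Hz: the frequency the samples actually "look like"

n = np.arange(32)      # sample indices
t = n / fs             # sampling instants

x_tone = np.cos(2 * np.pi * f_tone * t)
x_alias = np.cos(2 * np.pi * f_alias * t)

# Two distinct continuous-time signals yield identical samples:
assert np.allclose(x_tone, x_alias)
```

Had the tone been below $f_s/2 = 500$ Hz, no lower-frequency cosine could match its samples, which is the content of the theorem's sufficient condition.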

A sufficient sample rate is therefore anything larger than $2B$ samples per second. Equivalently, for a given sample rate $f_s$, perfect reconstruction is guaranteed possible for a bandlimit $B < f_s/2$.

## Derivation as a special case of Poisson summation

When the bandlimit $B$ is less than $f_s/2$, the copies of $X(f)$ in the periodic summation $X_s(f)$ do not overlap, and $X(f)$ can be recovered from $X_s(f)$ by a filter $H(f)$ satisfying:

$$H(f)=\begin{cases}1 & |f| < B \\ 0 & |f| > f_s - B.\end{cases}$$

The sampling theorem is proved since $X(f)$ uniquely determines $x(t).$ All that remains is to derive the formula for reconstruction. $H(f)$ need not be precisely defined in the region $[B,\ f_s - B]$ because $X_s(f)$ is zero in that region. However, the worst case is when $B = f_s/2,$ the Nyquist frequency. A function that is sufficient for that and all less severe cases is:

$$H(f)=\operatorname{rect}\left(\frac{f}{f_s}\right)=\begin{cases}1 & |f| < \frac{f_s}{2} \\ 0 & |f| > \frac{f_s}{2},\end{cases}$$

where $\operatorname{rect}(\cdot)$ is the rectangular function. Therefore:

$$X(f)=\operatorname{rect}\left(\frac{f}{f_s}\right)\cdot X_s(f)$$

$$=\operatorname{rect}(Tf)\cdot \sum_{n=-\infty}^{\infty} T\cdot x(nT)\, e^{-i 2\pi nTf} \qquad \text{(from Eq. 1, above)}$$

$$=\sum_{n=-\infty}^{\infty} x(nT)\cdot \underbrace{T\cdot \operatorname{rect}(Tf)\cdot e^{-i 2\pi nTf}}_{\mathcal{F}\left\{\operatorname{sinc}\left(\frac{t-nT}{T}\right)\right\}}. \qquad \text{[A]}$$

The inverse transform of both sides produces the Whittaker–Shannon interpolation formula:

$$x(t)=\sum_{n=-\infty}^{\infty} x(nT)\cdot \operatorname{sinc}\left(\frac{t-nT}{T}\right),$$

which shows how the samples $x(nT)$ can be combined to reconstruct $x(t).$

Larger-than-necessary values of $f_s$ (smaller values of $T$), called oversampling, have no effect on the outcome of the reconstruction and have the benefit of leaving room for a transition band in which $H(f)$ is free to take intermediate values. Undersampling, which causes aliasing, is not in general a reversible operation.
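The Whittaker–Shannon interpolation formula can be sketched directly in code. The following Python snippet is an illustrative finite-sum approximation (the true formula is an infinite sum, so a truncation error remains); the sample rate, bandwidth, and record length are assumptions chosen for the demonstration:

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Whittaker-Shannon interpolation: x(t) = sum_n x(nT) * sinc((t - nT)/T).

    Finite-sum approximation using only the available samples.
    np.sinc(u) computes sin(pi*u)/(pi*u), the normalized sinc used here.
    """
    n = np.arange(len(samples))
    # Broadcasting: one row of sinc weights per reconstruction instant t
    return np.sum(samples * np.sinc((t[:, None] - n * T) / T), axis=1)

fs = 100.0             # sample rate in Hz (illustrative)
T = 1.0 / fs
B = 5.0                # test-signal frequency, well below fs/2
n = np.arange(200)
samples = np.cos(2 * np.pi * B * n * T)   # samples of a 5 Hz cosine

# Reconstruct at off-grid instants, away from the edges of the finite record
t = np.linspace(0.5, 1.5, 101)
x_hat = sinc_reconstruct(samples, T, t)
x_true = np.cos(2 * np.pi * B * t)

# Error comes only from truncating the infinite sum; it is small here
max_err = np.max(np.abs(x_hat - x_true))
```

Keeping the reconstruction instants away from the ends of the sample record matters: the omitted sinc tails are what produce the residual error of the truncated sum.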
Theoretically, the interpolation formula can be implemented as a low-pass filter, whose impulse response is $\operatorname{sinc}(t/T)$ and whose input is $\sum_{n=-\infty}^{\infty} x(nT)\cdot \delta(t-nT),$ which is a Dirac comb function modulated by the signal samples. Practical digital-to-analog converters (DACs) implement an approximation like the zero-order hold. In that case, oversampling can reduce the approximation error.

## Shannon's original proof

Poisson shows that the Fourier series in Eq. 1 produces the periodic summation of $X(f)$, regardless of $f_s$ and $B$. Shannon, however, only derives the series coefficients for the case $f_s = 2B$. Virtually quoting Shannon's original paper:

> Let $X(\omega)$ be the spectrum of $x(t).$ Then
>
> $$x(t)={1 \over 2\pi}\int_{-\infty}^{\infty} X(\omega) e^{i\omega t}\,\mathrm{d}\omega = {1 \over 2\pi}\int_{-2\pi B}^{2\pi B} X(\omega) e^{i\omega t}\,\mathrm{d}\omega,$$
>
> because $X(\omega)$ is assumed to be zero outside the band $\left|\tfrac{\omega}{2\pi}\right| < B.$

## Critical frequency

To illustrate the necessity of $f_s > 2B$, consider the family of sinusoids generated by different values of $\theta$ in this formula:

$$x(t)={\frac {\cos(2\pi Bt+\theta)}{\cos(\theta)}} = \cos(2\pi Bt) - \sin(2\pi Bt)\tan(\theta), \qquad -\pi/2 < \theta < \pi/2.$$

With $f_s = 2B$, i.e. $T = 1/(2B)$, every member of this family has the same sample values, $x(nT) = \cos(\pi n) = (-1)^n$, regardless of $\theta$; this ambiguity is why the strict inequality $f_s > 2B$ is required.
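The ambiguity of this family at exactly the critical rate can be checked numerically. A minimal Python sketch (the bandlimit and the particular $\theta$ values are illustrative assumptions):

```python
import numpy as np

B = 10.0        # bandlimit in Hz (illustrative)
fs = 2 * B      # sampling at exactly the critical rate
T = 1.0 / fs
n = np.arange(16)

def family(theta, t):
    # x(t) = cos(2*pi*B*t + theta) / cos(theta), for -pi/2 < theta < pi/2
    return np.cos(2 * np.pi * B * t + theta) / np.cos(theta)

# Every member of the family takes the values (-1)^n at the sample instants,
# so samples taken at fs = 2B cannot distinguish different values of theta.
for theta in (0.0, 0.4, -1.2):
    assert np.allclose(family(theta, n * T), (-1.0) ** n)
```

The continuous-time members of the family differ (their amplitudes grow without bound as $\theta \to \pm\pi/2$), yet their samples coincide, which is why sampling at exactly $2B$ is not sufficient.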
