Nyquist–Shannon sampling theorem

[Figure: magnitude of the Fourier transform of a bandlimited function]

The Nyquist–Shannon sampling theorem is a theorem in the field of signal processing which serves as a fundamental bridge between continuous-time signals and discrete-time signals. It establishes a sufficient condition for a sample rate that permits a discrete sequence of samples to capture all the information from a continuous-time signal of finite bandwidth.

Strictly speaking, the theorem only applies to a class of mathematical functions having a Fourier transform that is zero outside of a finite region of frequencies. Intuitively we expect that when one reduces a continuous function to a discrete sequence and interpolates back to a continuous function, the fidelity of the result depends on the density (or sample rate) of the original samples. The sampling theorem introduces the concept of a sample rate that is sufficient for perfect fidelity for the class of functions that are band-limited to a given bandwidth, such that no actual information is lost in the sampling process. It expresses the sufficient sample rate in terms of the bandwidth for the class of functions. The theorem also leads to a formula for perfectly reconstructing the original continuous-time function from the samples.

Perfect reconstruction may still be possible when the sample-rate criterion is not satisfied, provided other constraints on the signal are known (see § Sampling of non-baseband signals below and compressed sensing). In some cases (when the sample-rate criterion is not satisfied), utilizing additional constraints allows for approximate reconstructions; the fidelity of these reconstructions can be verified and quantified utilizing Bochner's theorem.[1]

The name Nyquist–Shannon sampling theorem honours Harry Nyquist and Claude Shannon, but the theorem had also been discovered previously by E. T. Whittaker (published in 1915), and Shannon cited Whittaker's paper in his work. The theorem is thus also known as the Whittaker–Shannon sampling theorem or the Whittaker–Nyquist–Shannon sampling theorem, and may also be referred to as the cardinal theorem of interpolation.

Introduction

Sampling is a process of converting a signal (for example, a function of continuous time or space) into a sequence of values (a function of discrete time or space). Shannon's version of the theorem states:[2]

If a function \(x(t)\) contains no frequencies higher than \(B\) hertz, it is completely determined by giving its ordinates at a series of points spaced \(1/(2B)\) seconds apart.
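As an illustration of the statement, samples taken at rate \(f_s\) preserve any tone below \(f_s/2\) exactly. A minimal numerical sketch (the sample rate and tone frequencies here are arbitrary choices for illustration, not values from the text):

```python
import numpy as np

fs = 100.0                 # sample rate (Hz); Nyquist frequency fs/2 = 50 Hz
N = 1000                   # number of samples (10 s of signal)
t = np.arange(N) / fs
f1, f2 = 5.0, 12.0         # two tones, both below fs/2
sig = np.cos(2 * np.pi * f1 * t) + 0.5 * np.cos(2 * np.pi * f2 * t)

# Discrete spectrum of the samples: the tones appear at their true frequencies
spectrum = np.abs(np.fft.rfft(sig)) / (N / 2)
freqs = np.fft.rfftfreq(N, d=1 / fs)
peaks = freqs[spectrum > 0.2]   # frequencies carrying significant energy
```

Because both tones lie below \(f_s/2\), the sample sequence identifies them unambiguously; `peaks` recovers exactly 5 Hz and 12 Hz.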

A sufficient sample rate is therefore anything larger than \(2B\) samples per second. Equivalently, for a given sample rate \(f_s\), perfect reconstruction is guaranteed possible for a bandlimit \(B < f_s/2\).

Sampling at rate \(f_s\) produces \(X_s(f)\), the periodic summation of \(X(f)\) (Eq. 1). When \(B < f_s/2\), the copies of \(X(f)\) do not overlap, and \(X(f)\) can be recovered by multiplying \(X_s(f)\) by a filter \(H(f)\) that equals 1 for \(|f| < B\) and 0 for \(|f| > f_s - B\). The sampling theorem is proved, since \(X(f)\) uniquely determines \(x(t)\). All that remains is to derive the formula for reconstruction. \(H(f)\) need not be precisely defined in the region \([B,\ f_s - B]\) because \(X_s(f)\) is zero in that region. However, the worst case is when \(B = f_s/2\), the Nyquist frequency. A function that is sufficient for that and all less severe cases is:

\[ H(f) = \mathrm{rect}\!\left(\frac{f}{f_s}\right) = \begin{cases} 1 & |f| < \frac{f_s}{2} \\ 0 & |f| > \frac{f_s}{2}, \end{cases} \]

where \(\mathrm{rect}()\) is the rectangular function. Therefore:

\[ X(f) = \mathrm{rect}\!\left(\frac{f}{f_s}\right)\cdot X_s(f) = \mathrm{rect}(Tf)\cdot \sum_{n=-\infty}^{\infty} T\cdot x(nT)\, e^{-i 2\pi n T f} \quad \text{(from Eq. 1, above)}. \]
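The impulse response of this ideal rectangular filter is a sinc function, so reconstruction amounts to sinc-interpolating the samples. A minimal sketch of that reconstruction, using the exactly bandlimited test signal \(x(t) = \mathrm{sinc}(2Bt)\); the parameters are hypothetical and the infinite sum is truncated to a finite window:

```python
import numpy as np

B = 1.0           # bandlimit (Hz): x(t) = sinc(2Bt) has spectrum confined to |f| <= B
fs = 4.0 * B      # sample rate, comfortably above the Nyquist rate 2B
T = 1.0 / fs

def x(t):
    # np.sinc is the normalized sinc: sin(pi*t)/(pi*t)
    return np.sinc(2.0 * B * t)

# Finite window of samples x(nT); the exact formula sums over all integers n
n = np.arange(-4000, 4001)
samples = x(n * T)

def reconstruct(t):
    # sinc interpolation: sum_n x(nT) * sinc((t - nT)/T)
    return np.sum(samples * np.sinc((t - n * T) / T))

t0 = 0.1234                       # an off-grid instant
err = abs(reconstruct(t0) - x(t0))
```

With the window of 8001 samples, the interpolated value at the off-grid instant matches the true signal to within the small truncation error of the finite sum.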
\[ = \sum_{n=-\infty}^{\infty} x(nT)\cdot \underbrace{T\cdot \mathrm{rect}(Tf)\cdot e^{-i 2\pi n T f}}_{\mathcal{F}\left\{\mathrm{sinc}\left(\frac{t-nT}{T}\right)\right\}}. \] [A]

The inverse transform of both sides produces the Whittaker–Shannon interpolation formula:

\[ x(t) = \sum_{n=-\infty}^{\infty} x(nT)\cdot \mathrm{sinc}\!\left(\frac{t-nT}{T}\right), \]

which shows how the samples \(x(nT)\) can be combined to reconstruct \(x(t)\).

Larger-than-necessary values of \(f_s\) (smaller values of \(T\)), called oversampling, have no effect on the outcome of the reconstruction and have the benefit of leaving room for a transition band in which \(H(f)\) is free to take intermediate values. Undersampling, which causes aliasing, is not in general a reversible operation.

Theoretically, the interpolation formula can be implemented as a low-pass filter whose impulse response is \(\mathrm{sinc}(t/T)\) and whose input is \(\textstyle\sum_{n=-\infty}^{\infty} x(nT)\cdot \delta(t-nT)\), which is a Dirac comb modulated by the signal samples. Practical digital-to-analog converters (DACs) implement an approximation like the zero-order hold. In that case, oversampling can reduce the approximation error.

Shannon's original proof

Poisson shows that the Fourier series in Eq. 1 produces the periodic summation of \(X(f)\), regardless of \(f_s\) and \(B\). Shannon, however, only derives the series coefficients for the case \(f_s = 2B\).
Virtually quoting Shannon's original paper: Let \(X(\omega)\) be the spectrum of \(x(t)\). Then

\[ x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\omega)\, e^{i\omega t}\, \mathrm{d}\omega = \frac{1}{2\pi} \int_{-2\pi B}^{2\pi B} X(\omega)\, e^{i\omega t}\, \mathrm{d}\omega, \]

because \(X(\omega)\) is assumed to be zero outside the band \(\left|\tfrac{\omega}{2\pi}\right| < B\).

Critical frequency

To illustrate the necessity of \(f_s > 2B\), consider the family of sinusoids generated by different values of \(\theta\) in this formula:

\[ x(t) = \frac{\cos(2\pi B t + \theta)}{\cos(\theta)} = \cos(2\pi B t) - \sin(2\pi B t)\tan(\theta), \quad -\pi/2 < \theta < \pi/2. \]
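The members of this family can be compared numerically: sampled at exactly \(f_s = 2B\), every value of \(\theta\) yields the identical sample sequence \((-1)^n\), so the samples cannot distinguish one member from another. A small sketch (all parameter values hypothetical):

```python
import numpy as np

B = 1.0               # bandlimit of the family (Hz)
fs = 2.0 * B          # sampling exactly at the critical rate
T = 1.0 / fs
n = np.arange(16)
t = n * T             # sample instants t = n/(2B)

def member(theta):
    # x(t) = cos(2*pi*B*t + theta) / cos(theta),  -pi/2 < theta < pi/2
    return np.cos(2 * np.pi * B * t + theta) / np.cos(theta)

s1 = member(0.0)      # the plain cosine cos(2*pi*B*t)
s2 = member(1.0)      # a different member of the family
agree = np.allclose(s1, s2)   # identical samples despite different waveforms
```

Since all members agree at every sample instant, no reconstruction method can recover a unique \(x(t)\) from the samples alone, which is why the strict inequality \(f_s > 2B\) is needed.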
