# Nyquist–Shannon sampling theorem

*Figure: example of the magnitude of the Fourier transform of a bandlimited function.*

The Nyquist–Shannon sampling theorem is a theorem in the field of signal processing which serves as a fundamental bridge between continuous-time signals and discrete-time signals. It establishes a sufficient condition for a sample rate that permits a discrete sequence of samples to capture all the information from a continuous-time signal of finite bandwidth.

Strictly speaking, the theorem only applies to a class of mathematical functions having a Fourier transform that is zero outside of a finite region of frequencies. Intuitively we expect that when one reduces a continuous function to a discrete sequence and interpolates back to a continuous function, the fidelity of the result depends on the density (or sample rate) of the original samples. The sampling theorem introduces the concept of a sample rate that is sufficient for perfect fidelity for the class of functions that are band-limited to a given bandwidth, such that no actual information is lost in the sampling process. It expresses the sufficient sample rate in terms of the bandwidth for the class of functions. The theorem also leads to a formula for perfectly reconstructing the original continuous-time function from the samples.

Perfect reconstruction may still be possible when the sample-rate criterion is not satisfied, provided other constraints on the signal are known (see § Sampling of non-baseband signals below and compressed sensing). In other such cases, additional constraints allow for approximate reconstruction, whose fidelity can be verified and quantified using Bochner's theorem.[1] The name Nyquist–Shannon sampling theorem honours Harry Nyquist and Claude Shannon, but the theorem was also previously discovered by E. T. Whittaker (published in 1915), and Shannon cited Whittaker's paper in his work. The theorem is thus also known as the Whittaker–Shannon sampling theorem or the Whittaker–Nyquist–Shannon sampling theorem, and is sometimes referred to as the cardinal theorem of interpolation.

## Contents

1. Introduction
2. Aliasing
3. Derivation as a special case of Poisson summation
4. Shannon's original proof
   1. Notes
5. Application to multivariable signals and images
6. Critical frequency
7. Sampling of non-baseband signals
8. Nonuniform sampling
9. Sampling below the Nyquist rate under additional restrictions
10. Historical background
    1. Other discoverers
    2. Why Nyquist?
11. See also
12. Notes
13. References
14. Further reading
15. External links

## Introduction

Sampling is a process of converting a signal (for example, a function of continuous time or space) into a sequence of values (a function of discrete time or space). Shannon's version of the theorem states:[2]

> If a function $x(t)$ contains no frequencies higher than $B$ hertz, it is completely determined by giving its ordinates at a series of points spaced $1/(2B)$ seconds apart.
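As a sketch of Shannon's statement, the snippet below samples a bandlimited signal above twice its bandwidth and rebuilds intermediate values with the Whittaker–Shannon interpolation formula, $x(t) = \sum_n x(nT)\,\mathrm{sinc}\!\left(\frac{t - nT}{T}\right)$. The specific signal and rates are illustrative choices, not from the article, and the infinite sum is truncated to a finite window, so the reconstruction is approximate:

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not from the article):
# a 3 Hz sine is bandlimited to B = 3 Hz, so any rate above 2B = 6 Hz suffices.
B = 3.0            # signal bandwidth in Hz
fs = 8.0           # sample rate in Hz, chosen > 2B
T = 1.0 / fs       # sampling interval in seconds

# Finite window of samples standing in for the theorem's infinite sequence.
n = np.arange(-200, 201)
samples = np.sin(2 * np.pi * B * n * T)

def reconstruct(t):
    """Truncated Whittaker–Shannon interpolation at time t.

    np.sinc(x) computes sin(pi*x)/(pi*x), matching the formula's kernel.
    """
    return np.sum(samples * np.sinc((t - n * T) / T))

# Evaluate between sample points; the truncation error shrinks as the
# sample window grows, so the match is close but not exact here.
t0 = 0.0137
exact = np.sin(2 * np.pi * B * t0)
approx = reconstruct(t0)
print(abs(exact - approx))  # small residual from truncating the sum
```

With the infinite sum (and a truly bandlimited signal), the reconstruction would be exact at every $t$, which is the content of the theorem's reconstruction formula.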

A sufficient sample rate is therefore anything larger than $2B$ samples per second. Equivalently, for a given sample rate $f_s$, perfect reconstruction is guaranteed possible for a bandlimit $B < f_s/2$.
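Conversely, when a frequency exceeds half the sample rate, its samples coincide with those of a lower frequency, which is the aliasing failure the criterion rules out. A minimal demonstration, with illustrative frequencies not taken from the article:

```python
import numpy as np

# Sampled at fs = 10 Hz, a 7 Hz cosine is indistinguishable from a 3 Hz one:
# 7 Hz exceeds the Nyquist frequency fs/2 = 5 Hz and folds back to 10 - 7 = 3 Hz.
fs = 10.0
n = np.arange(50)       # sample indices
t = n / fs              # sample times in seconds

x_high = np.cos(2 * np.pi * 7 * t)   # undersampled: above fs/2
x_low = np.cos(2 * np.pi * 3 * t)    # its alias below fs/2

print(np.allclose(x_high, x_low))    # True: the two sample sequences coincide
```

Because the samples are identical, no reconstruction method can tell the two signals apart from the samples alone; the bandlimit assumption is what removes this ambiguity.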
