
Spectral power. Examples of determining the spectral density of signals

By a random process we mean a set (ensemble) of functions of time, and it should be borne in mind that functions of different shape correspond to different spectral characteristics. Averaging the complex spectral density determined by (1.47) over all functions leads to a zero spectrum of the process (for M[x(t)] = 0), because the phases of the spectral components in different realizations are random and independent.

However, one can introduce the concept of the spectral density of the mean square of a random function, since the mean-square value does not depend on the phase relations of the summed harmonics. If the random function x(t) represents an electric voltage or current, then the mean square of this function can be regarded as the average power dissipated in a resistance of 1 ohm. This power is distributed over frequencies within a certain band, depending on the mechanism by which the random process is formed.

The average power spectral density is the average power per 1 Hz of bandwidth at a given frequency ω. The function W(ω), being a ratio of power to bandwidth, therefore has the dimension of power divided by frequency (W/Hz).

The spectral density of a random process can be found if the mechanism of its formation is known. For the noise associated with the atomic structure of matter and electricity, this problem is treated later. Here we restrict ourselves to a few general definitions.

Having selected some realization x_k(t) from the ensemble and limited its duration to a finite interval T, we can apply the usual Fourier transform to it and find the spectral density X_kT(ω). Then the energy of the considered segment of the realization can be calculated using the formula:

E_kT = ∫_0^T x_k²(t) dt = (1/2π) ∫_{−∞}^{∞} |X_kT(ω)|² dω. (1.152)

Dividing this energy by T, we obtain the average power of the k-th realization on the interval T:

P_kT = E_kT/T = (1/2πT) ∫_{−∞}^{∞} |X_kT(ω)|² dω. (1.153)

As T increases, the energy E_kT grows, but the ratio E_kT/T tends to a certain limit. Passing to the limit, we obtain

W_k(ω) = lim_{T→∞} |X_kT(ω)|²/T, (1.154)

where W_k(ω) represents the average power spectral density of the k-th realization under consideration.
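This limit definition translates directly into a numeric estimate: for a sampled realization, |X_kT(ω)|²/T can be approximated from an FFT. Below is a minimal sketch in Python (NumPy); the white-noise test realization and sampling parameters are assumptions for illustration only.

```python
import numpy as np

# A minimal sketch: estimate W(omega) ~ |X_kT(omega)|^2 / T for one realization
# of a random process (here: assumed unit-variance white Gaussian noise).
fs = 1000.0          # sampling frequency, Hz (assumed)
T = 4.0              # realization length, s (assumed)
n = int(fs * T)
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, n)          # sampled k-th realization x_k(t)

# Discrete approximation of the finite-interval Fourier transform X_kT(f):
# X(f) ~ dt * FFT(x). Then W_k(f) ~ |X_kT(f)|^2 / T.
dt = 1.0 / fs
X = dt * np.fft.rfft(x)
W = np.abs(X) ** 2 / T               # density over non-negative frequencies
f = np.fft.rfftfreq(n, dt)

# Check: total power recovered from the density should be close to the
# mean square of the realization. The factor 2 folds in negative frequencies.
power_freq = 2.0 * np.sum(W) * (f[1] - f[0])
power_time = np.mean(x ** 2)
print(power_freq, power_time)        # both close to 1.0 for unit-variance noise
```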

In the general case, the quantity W_k(ω) should be averaged over the set of realizations. Restricting ourselves here to a stationary and ergodic process, we can assume that the function W_k(ω) found by averaging over one realization characterizes the process as a whole. Omitting the index k, we obtain the final expression for the average power of the random process:

M[x²(t)] = (1/2π) ∫_{−∞}^{∞} W_x(ω) dω. (1.155)

For a process with zero mean,

σ_x² = (1/2π) ∫_{−∞}^{∞} W_x(ω) dω. (1.156)

It is obvious from the definition of spectral density (1.155) that W_x(ω) is an even and non-negative function of ω.

1.5.3 Relationship between spectral density and covariance function of a random process

On the one hand, the rate of change of x(t) in time determines the width of the spectrum. On the other hand, the rate of change of x(t) determines the behavior of the covariance function. It is evident that a close relationship exists between W_x(ω) and K_x(τ).

The Wiener-Khinchin theorem states that K_x(τ) and W_x(ω) are related by the Fourier transforms:

W_x(ω) = ∫_{−∞}^{∞} K_x(τ) e^{−jωτ} dτ, (1.157)

K_x(τ) = (1/2π) ∫_{−∞}^{∞} W_x(ω) e^{jωτ} dω. (1.158)

For random processes with zero mean, the same expressions hold with the covariance function K_x(τ) replaced by the correlation function R_x(τ):

W_x(ω) = ∫_{−∞}^{∞} R_x(τ) e^{−jωτ} dτ, R_x(τ) = (1/2π) ∫_{−∞}^{∞} W_x(ω) e^{jωτ} dω.

From these expressions follows a property similar to that of the Fourier transforms of deterministic signals: the wider the spectrum of a random process, the smaller its correlation interval, and, correspondingly, the larger the correlation interval, the narrower the spectrum of the process (see Figure 1.20).

Figure 1.20. Broadband and narrowband spectra of a random process; the boundaries of the central band: ±F_1
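This trade-off is easy to check numerically: filtering white noise to narrow its spectrum lengthens its correlation interval. A minimal sketch (Python with NumPy/SciPy; the filter cutoff, record length and the 1/e criterion are illustrative assumptions):

```python
import numpy as np
from scipy import signal

# Broadband noise vs. the same noise narrowed by a low-pass filter.
fs = 1000.0
rng = np.random.default_rng(1)
broad = rng.normal(size=100_000)            # wide spectrum
b, a = signal.butter(4, 0.02)               # narrow low-pass, ~10 Hz cutoff
narrow = signal.lfilter(b, a, broad)        # narrow spectrum

def corr_interval(x, fs):
    """Lag (seconds) at which the normalized autocorrelation drops below 1/e."""
    x = x - x.mean()
    r = signal.correlate(x, x, mode="full", method="fft")[x.size - 1:]
    r /= r[0]
    return np.argmax(r < 1.0 / np.e) / fs

print(corr_interval(broad, fs))    # ~0.001 s: wide spectrum, short correlation
print(corr_interval(narrow, fs))   # tens of ms: narrow spectrum, long correlation
```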

Of great interest is white noise, a process whose spectrum is uniform at all frequencies.

Substituting W_x(ω) = W_0 = const into expression (1.158), we get

K_x(τ) = (W_0/2π) ∫_{−∞}^{∞} e^{jωτ} dω = W_0 δ(τ),

where δ(τ) is the delta function.

For white noise with an infinite and uniform spectrum, the correlation function is zero for all values of τ except τ = 0, at which R_x(0) becomes infinite. Such noise, which has a needle-like structure of infinitely narrow random spikes, is sometimes called a delta-correlated process. The variance of white noise is infinitely large.
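A minimal numeric illustration (Python/NumPy; the unit-variance Gaussian test sequence is an assumption): the sample autocorrelation of discrete white noise is close to 1 at lag 0 and near zero at all other lags, i.e. the sequence is delta-correlated.

```python
import numpy as np

# Sample autocorrelation of white noise at several lags.
rng = np.random.default_rng(2)
x = rng.normal(size=100_000)
for lag in (0, 1, 5, 50):
    r = np.mean(x[:x.size - lag] * x[lag:])
    print(lag, round(r, 4))   # lag 0 -> ~1.0, all other lags -> ~0.0
```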

Self-test questions

    What are the main characteristics of a random signal?

    How are the correlation function and the energy spectrum of a random signal related mathematically?

    Which random process is called stationary?

    Which random process is called ergodic?

    How are the envelope, phase and frequency of a narrowband signal determined?

    What signal is called analytic?

The cross power spectral density (cross power spectrum) of two realizations x(t) and y(t) of stationary ergodic random processes is defined as the direct Fourier transform of their cross-covariance function:

S_xy(ω) = ∫_{−∞}^{∞} K_xy(τ) e^{−jωτ} dτ,

or, taking into account the relation ω = 2πf between the circular and cyclic frequencies,

S_xy(f) = ∫_{−∞}^{∞} K_xy(τ) e^{−j2πfτ} dτ.

The inverse Fourier transform relates the cross-covariance function and the power spectral density:

K_xy(τ) = ∫_{−∞}^{∞} S_xy(f) e^{j2πfτ} df.

Similarly to (1.32), (1.33), we introduce the power spectral density (power spectrum) of a random process:

S_x(f) = ∫_{−∞}^{∞} K_x(τ) e^{−j2πfτ} dτ, K_x(τ) = ∫_{−∞}^{∞} S_x(f) e^{j2πfτ} df.

This function has the parity property:

S_x(−f) = S_x(f).

For the cross spectral density, the following relation is valid:

S_yx(f) = S*_xy(f),

where S*_xy(f) is the function complex conjugate to S_xy(f).

The formulas above define spectral densities for both positive and negative frequencies and are called two-sided spectral densities. They are useful for the analytical study of systems and signals. In practice, one uses spectral densities defined only for non-negative frequencies, called one-sided (Figure 1.14):

G_x(f) = 2S_x(f), f ≥ 0.

Figure 1.14 - One-sided and two-sided spectral densities

Let us derive an expression that connects the one-sided spectral density of a stationary random process with its covariance function:

G_x(f) = 2 ∫_{−∞}^{∞} K_x(τ) e^{−j2πfτ} dτ = 2 ∫_{−∞}^{∞} K_x(τ) cos(2πfτ) dτ − 2j ∫_{−∞}^{∞} K_x(τ) sin(2πfτ) dτ.

Let us take into account that the covariance function of a stationary random process and the cosine are even functions, that the sine is an odd function, and that the limits of integration are symmetric. As a result, the second integral in the expression obtained above vanishes, and in the first integral the limits of integration can be halved while doubling the coefficient:

G_x(f) = 4 ∫_0^{∞} K_x(τ) cos(2πfτ) dτ.

Obviously, the power spectral density of a random process is a real function.

Similarly, one can obtain the inverse relation:

K_x(τ) = ∫_0^{∞} G_x(f) cos(2πfτ) df.

From expression (1.42) at τ = 0 it follows that

K_x(0) = ∫_0^{∞} G_x(f) df.

This means that the total area under the one-sided spectral density plot is equal to the mean square of the random process. In other words, the one-sided spectral density is interpreted as the distribution of the mean square of the process over frequency.

The area under the one-sided density plot enclosed between two arbitrary frequencies f_1 and f_2, namely ∫_{f_1}^{f_2} G_x(f) df, is equal to the mean square of the process within this frequency band of the spectrum (Figure 1.15).

Figure 1.15 - Spectral density property
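This property is easy to verify numerically: integrating a one-sided PSD estimate over frequency should recover the mean square. A minimal sketch (Python with NumPy/SciPy; the test signal and the Welch estimator parameters are assumptions):

```python
import numpy as np
from scipy import signal

# Check that the area under the one-sided density G_x(f) equals the
# mean square of the process (assumed Gaussian noise, mean square ~4).
fs = 2000.0
rng = np.random.default_rng(3)
x = rng.normal(scale=2.0, size=400_000)

# scipy's Welch estimate with scaling="density" returns a one-sided PSD.
f, Gx = signal.welch(x, fs=fs, nperseg=2048, scaling="density")

area = np.trapz(Gx, f)          # integral of G_x(f) over f >= 0
print(area, np.mean(x ** 2))    # both should be close to 4.0
```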

The cross power spectral density is a complex quantity, therefore it can be represented in exponential notation through a modulus and a phase angle:

S_xy(f) = |S_xy(f)| e^{jφ_xy(f)},

where |S_xy(f)| = sqrt(Re² S_xy(f) + Im² S_xy(f)) is the modulus;

φ_xy(f) = arctan(Im S_xy(f)/Re S_xy(f)) is the phase angle;

Re S_xy(f), Im S_xy(f) are the real and imaginary parts of the function, respectively.

The modulus of the cross spectral density enters the important inequality

|S_xy(f)|² ≤ S_x(f) S_y(f).

This inequality allows us to define the coherence function (squared coherence), which is analogous to the square of the normalized correlation function:

γ²_xy(f) = |S_xy(f)|²/(S_x(f) S_y(f)), 0 ≤ γ²_xy(f) ≤ 1.
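SciPy exposes this quantity directly. A minimal sketch (the shared-component test signals, the noise levels and the segment length are assumptions): two processes that share a common component show coherence well above zero, and the estimate always stays within [0, 1].

```python
import numpy as np
from scipy import signal

# Two processes sharing a common component plus independent noise.
fs = 1000.0
rng = np.random.default_rng(4)
common = rng.normal(size=100_000)
x = common + 0.5 * rng.normal(size=100_000)
y = np.roll(common, 10) + 0.5 * rng.normal(size=100_000)   # delayed copy + noise

# gamma^2(f) = |S_xy(f)|^2 / (S_x(f) * S_y(f)); always between 0 and 1.
f, coh = signal.coherence(x, y, fs=fs, nperseg=1024)
print(coh.min() >= 0.0, coh.max() <= 1.0)   # True True
print(np.median(coh))                       # well above 0: processes are related
```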

The second way of introducing spectral densities is through the direct Fourier transform of the random processes themselves.

Let x(t) and y(t) be two stationary ergodic random processes, for which the finite Fourier transforms of realizations of length T are defined as

X(f, T) = ∫_0^T x(t) e^{−j2πft} dt, Y(f, T) = ∫_0^T y(t) e^{−j2πft} dt.

The two-sided cross spectral density of these random processes is introduced through the product X*(f, T)·Y(f, T) by the relation

S_xy(f) = lim_{T→∞} (1/T) M[X*(f, T) Y(f, T)], (1.49)

where the expectation operator M[·] denotes the operation of averaging over the index k (over the ensemble of realizations).

The two-sided spectral density of a single random process is calculated according to the relation

S_x(f) = lim_{T→∞} (1/T) M[|X(f, T)|²].

One-sided spectral densities are introduced similarly:

G_x(f) = 2S_x(f), G_xy(f) = 2S_xy(f), f ≥ 0. (1.50)

The functions defined by formulas (1.49), (1.50) are identical to the corresponding functions defined by relations (1.32), (1.33) as Fourier transforms of covariance functions. This statement is called the Wiener-Khinchin theorem.
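A minimal sketch of this "direct" definition (Python/NumPy; the ensemble size, segment length and the shared white component are assumptions): the cross spectral density is estimated by averaging (1/T)·X*(f, T)·Y(f, T) over an ensemble of realizations.

```python
import numpy as np

# Direct estimate of the cross spectral density from finite Fourier transforms.
fs, n_seg, n_fft = 1000.0, 200, 1024
dt, T = 1.0 / fs, 1024 / 1000.0
rng = np.random.default_rng(5)

acc = np.zeros(n_fft // 2 + 1, dtype=complex)
for _ in range(n_seg):                     # ensemble of realizations (index k)
    s = rng.normal(size=n_fft)             # shared white component
    x = s + 0.3 * rng.normal(size=n_fft)
    y = s + 0.3 * rng.normal(size=n_fft)
    X = dt * np.fft.rfft(x)                # finite Fourier transform X(f, T)
    Y = dt * np.fft.rfft(y)
    acc += np.conj(X) * Y / T              # (1/T) * X*(f,T) Y(f,T)

S_xy = acc / n_seg                         # expectation ~ average over k
print(S_xy[1:6].real)                      # ~ sigma^2 * dt = 1e-3 (shared part)
```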

Control questions

1. Give the classification of deterministic processes.

2. What is the difference between polyharmonic and almost periodic processes?

3. Formulate the definition of a stationary stochastic process.

4. What method of averaging the characteristics of an ergodic random process is preferable - averaging over an ensemble of sample functions or averaging over the observation time of one realization?

5. Formulate the definition of the probability density of a random process.

6. Write down the expression connecting the correlation and covariance functions of a stationary random process.

7. In what case are two random processes considered uncorrelated?

8. Indicate the methods for calculating the mean square of a stationary random process.

9. What transformation relates the spectral density and the covariance function of a random process?

10. Within what limits do the values of the coherence function of two random processes lie?

Literature

1. Sergienko, A.B. Digital Signal Processing / A.B. Sergienko. - M.: Piter, 2002. - 604 p.

2. Sadovsky, G.A. Theoretical Foundations of Information and Measuring Technology / G.A. Sadovsky. - M.: Higher School, 2008. - 480 p.

3. Bendat, J. Application of Correlation and Spectral Analysis / J. Bendat, A. Piersol. - M.: Mir, 1983. - 312 p.

4. Bendat, J. Measurement and Analysis of Random Processes / J. Bendat, A. Piersol. - M.: Mir, 1974. - 464 p.

International educational corporation

Faculty of Applied Sciences

Abstract

on the topic "Power density spectrum and its relationship with the correlation function"

Discipline: "Theory of Electrical Communication"

Performed by: student of group

FPN-REiT (s) -4S*

Dzhumageldin D.

Checked by: Glukhova N.V.

Almaty, 2015

I Introduction

II Main part

1. Power spectral density

1.1 Random variables

1.2 Probability density of a function of a random variable

2. Random process

3. Method for determining the spectral power density by the correlation function

III Conclusion

IV List of used literature

Introduction

Probability theory considers random variables and their characteristics in "statics". The problems of describing and studying random signals "in dynamics", as displays of random phenomena developing in time or in any other variable, are solved by the theory of random processes.

As a universal coordinate for the distribution of random variables over the independent variable we will use, as a rule, the variable t, treating it, purely for convenience, as a time coordinate. Distributions of random variables in time, as well as the signals that display them in any mathematical form, are usually called random processes. In the technical literature, the terms "random signal" and "random process" are used synonymously.

In processing and analyzing physical and technical data, one usually deals with three types of signals described by statistical methods. First, there are information signals that reflect physical processes probabilistic by nature, such as the registration of ionizing-radiation particles during the decay of radionuclides. Second, there are information signals that depend on certain parameters of physical processes or objects whose values are not known in advance and usually have to be determined from these same signals. And third, there are noise and interference, chaotically varying in time, which accompany the information signals but, as a rule, are statistically independent of them both in their values and in their variation in time.



Power spectral density

The power spectral density makes it possible to judge the frequency properties of a random process. It characterizes its intensity at different frequencies, or, in other words, the average power per unit of frequency band.

The pattern of the distribution of average power over frequencies is called the power spectrum. The instrument that measures the power spectrum is called a spectrum analyzer. The spectrum found as a result of measurements is called the instrumental spectrum.

The spectrum analyzer is based on the following measurement methods:

· Filtration method;

· Transformation method based on the Wiener-Khinchin theorem;

· Method of Fourier transform;

· Method using sign functions;

· The method of hardware application of orthogonal functions.

A peculiarity of measuring the power spectrum is the considerable duration of the experiment. It often exceeds the duration of the realization, or the time during which the stationarity of the process under study is maintained. The power spectrum estimates obtained from one realization of a stationary ergodic process are not always acceptable. It is often necessary to perform numerous measurements, since the realizations have to be averaged both over time and over the ensemble. In many cases, the realizations of the investigated random processes are recorded in advance, which allows the experiment to be repeated many times with a change in the duration of the analysis, using various processing algorithms and equipment.

When realizations of a random process are recorded in advance, the hardware errors can be reduced to values determined by the finite duration of the realization and by nonstationarity.

Storing the analyzed realizations makes it possible to speed up the instrumental analysis and to automate it.

Random variables

A random variable is described by probabilistic laws. The probability that a continuous quantity x, when measured, falls into an interval x_1 < x < x_2 is defined by the expression

P(x_1 < x < x_2) = ∫_{x_1}^{x_2} p(x) dx,

where p(x) is the probability density, and ∫_{−∞}^{∞} p(x) dx = 1. For a discrete random variable x_i, P(x = x_i) = P_i, where P_i is the probability corresponding to the i-th level of the quantity x.
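As a minimal numeric illustration (Python/SciPy; the standard normal density is an assumed example), the interval probability is the integral of the density, and the density integrates to 1 over the whole axis:

```python
import numpy as np
from scipy import stats, integrate

# Probability that x falls into (x1, x2) = integral of the density p(x).
x1, x2 = -1.0, 2.0
p = stats.norm(loc=0.0, scale=1.0)   # assumed standard normal density

# Integral of the density = difference of the distribution function.
prob = p.cdf(x2) - p.cdf(x1)
print(prob)                                  # ~0.8186

# Normalization: the density integrates to 1 over the whole axis.
total, _ = integrate.quad(p.pdf, -np.inf, np.inf)
print(total)                                 # ~1.0
```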

The most important characteristic of stationary random processes is the power spectral density, which describes the distribution of noise power over the frequency spectrum. Consider a stationary random process, which can be represented by a random sequence of voltage or current pulses following each other at random intervals. The process with a random sequence of pulses is non-periodic. Nevertheless, we can talk about the spectrum of such a process, understanding in this case the spectrum as the power distribution over frequencies.

To describe noise, the concept of the noise power spectral density (PSD) is introduced, also called, in the general case, the noise spectral density, which is determined by the relation:

S(f) = ΔP(f)/Δf, (2.10)

where ΔP(f) is the time-averaged noise power within the frequency band Δf at the measurement frequency f.

As follows from relation (2.10), the noise PSD has the dimension of W/Hz. In general, the PSD is a function of frequency. The dependence of the noise PSD on frequency is called the energy spectrum, which carries information about the dynamic characteristics of the system.

If a random process is ergodic, then the energy spectrum of such a process can be found from its single realization, which is widely used in practice.

When considering the spectral characteristics of a stationary random process, it often turns out to be necessary to use the concept of the noise spectrum width. The area under the curve of the energy spectrum of the random process, divided by the noise PSD at a certain characteristic frequency f_0, is called the effective spectrum width, which is determined by the formula:

Δf_eff = (1/S(f_0)) ∫_0^{∞} S(f) df. (2.11)

This value can be interpreted as the width of a uniform energy spectrum of a random process in a band Δf_eff equivalent in average power to the process under consideration.
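A minimal numeric sketch of formula (2.11) (Python/NumPy; the Lorentzian spectrum shape, plateau level and corner frequency are assumptions): the numeric integral matches the analytic value π·f_c/2 for this shape.

```python
import numpy as np

# Effective spectrum width for an assumed Lorentzian noise spectrum
# S(f) = S0 / (1 + (f/fc)^2), with the characteristic frequency f0 = 0.
S0, fc = 1.0, 100.0                     # plateau level and corner frequency
f = np.linspace(0.0, 1e5, 2_000_000)
S = S0 / (1.0 + (f / fc) ** 2)

# Delta_f_eff = (1/S(f0)) * integral of S(f) df.
df_eff = np.trapz(S, f) / S[0]
print(df_eff, np.pi * fc / 2)           # numeric vs analytic: both ~157.1 Hz
```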

The noise power P contained in the frequency band f_1 ... f_2 is equal to

P = ∫_{f_1}^{f_2} S(f) df. (2.12)

If the noise PSD in the frequency band f_1 ... f_2 is constant and equal to S_0, then for the noise power in the given frequency band we have

P = S_0 Δf,

where Δf = f_2 − f_1 is the frequency band passed by the circuit or measuring device.

An important case of a stationary random process is white noise, whose spectral density does not depend on frequency over a wide frequency range (theoretically, over an infinite range). The energy spectrum of white noise in the frequency range −∞ < f < +∞ is given by

S(f) = 2S_0 = const. (2.13)

The white noise model describes a random process without memory (without aftereffect). White noise arises in systems with a large number of simple homogeneous elements and is characterized by a distribution of the fluctuation amplitudes according to the normal law. The properties of white noise are determined by the statistics of independent single events (for example, the thermal motion of charge carriers in a conductor or semiconductor). However, true white noise with infinite bandwidth does not exist, because it would have infinite power.

Fig. 2.3 shows a typical oscillogram of white noise (the dependence of instantaneous voltage values on time) (Fig. 2.3a) and the probability distribution function of the instantaneous voltage values e, which is a normal distribution (Fig. 2.3b). The shaded area under the curve corresponds to the probability of occurrence of instantaneous voltage values e exceeding the value e_1.

Fig. 2.3. A typical oscillogram of white noise (a) and the probability density function of the instantaneous values of the noise voltage amplitude (b).

In practice, when evaluating the noise of an element or semiconductor device, one usually measures the mean-square noise voltage in units of V² or the mean-square current in units of A². In this case, the noise PSD is expressed in units of V²/Hz or A²/Hz, and the spectral densities of voltage fluctuations S_u(f) and current fluctuations S_I(f) are calculated by the following formulas:

S_u(f) = ū²/Δf, S_I(f) = ī²/Δf, (2.14)

where ū² and ī² are the time-averaged squares of the noise voltage and current in the frequency band Δf, respectively. The bar denotes averaging over time.

In practical problems, when considering fluctuations of various physical quantities, the concept of a generalized spectral density of fluctuations is introduced. In this case the fluctuation PSD, for example of a resistance R, is expressed in units of Ohm²/Hz; fluctuations of magnetic induction are measured in units of T²/Hz, and fluctuations of an oscillator frequency in units of Hz²/Hz = Hz.

When comparing noise levels in linear two-port networks of the same type, it is convenient to use the relative spectral noise density, which is defined as

S(f) = S_u(f)/u², (2.15)

where u is the DC voltage drop across the linear two-port network.

As can be seen from expression (2.15), the relative spectral noise density S(f) is expressed in units of Hz⁻¹.

Estimating the power spectral density is a well-known problem for random processes. Examples of random processes are noise and information-carrying signals. One usually wants to find a statistically robust estimate. Signal analysis is discussed in detail in the Digital Signal Processing course. Initial information is provided in [...].

For signals with known statistical characteristics, the spectral content can be determined from a finite interval of the signal. If the statistical characteristics of the signal are unknown, only an estimate of its spectrum can be obtained from a segment of the signal. Different methods use different assumptions and therefore give different estimates.

When choosing an estimate, it is assumed that, in the general case, the analyzed signal is a random process, and an unbiased estimate with low variance is required, which allows the signal spectrum to be averaged. Bias is the difference between the mean of the estimate and the true value of the quantity; an unbiased estimate is one with zero bias. An estimate with low variance localizes the sought values well, i.e. the probability density is concentrated around the mean. It is desirable to have a consistent estimate, i.e. one that tends to the true value as the sample size grows (bias and variance tend to zero). One distinguishes nonparametric estimates, which use only information about the signal itself, from parametric ones, which assume a statistical model of the random signal and select the parameters of that model.

When evaluating random processes, the use of correlation functions is widespread.

For an ergodic process, it is possible to determine the statistical parameters of the process by averaging over one implementation.

For a stationary random process, the correlation function R_x(τ) depends on the time interval τ for which it is determined. This quantity characterizes the relationship between the values of x(t) separated by the interval τ. The more slowly R_x(τ) decreases, the longer the interval during which a statistical relationship exists between the values of the random process:

R_x(τ) = M[(x(t) − m_x)(x(t + τ) − m_x)],

where m_x is the mathematical expectation of x(t).

The relationship between the correlation function R(τ) and the power spectral density W(ω) of a random process is determined by the Wiener-Khinchin theorem:

W(ω) = ∫_{−∞}^{∞} R(τ) e^{−jωτ} dτ.

For discrete processes, the Wiener-Khinchin theorem establishes a connection between the spectrum of a discrete random process W(ω) and its correlation function R_x(n):

W(ω) = Σ_{n=−∞}^{∞} R_x(n) exp(−jωnT).

To estimate the signal energy in the time and frequency domains, Parseval's equality is used:

Σ_{n=0}^{N−1} |x(n)|² = (1/N) Σ_{k=0}^{N−1} |X(k)|².

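A minimal numeric check of this equality (Python/NumPy; the random test vector is an assumption):

```python
import numpy as np

# Discrete Parseval equality: sum |x(n)|^2 == (1/N) * sum |X(k)|^2.
rng = np.random.default_rng(6)
x = rng.normal(size=1024)
X = np.fft.fft(x)
lhs = np.sum(np.abs(x) ** 2)
rhs = np.sum(np.abs(X) ** 2) / x.size
print(lhs, rhs)   # identical up to rounding
```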
One of the most common ways to obtain an estimate of spectral density is to use the periodogram method.

Periodogram. In this method, a discrete Fourier transform of the signal x(n), specified at N discrete sampling points, is performed, followed by statistical averaging. The actual calculation of the spectrum X(k) is performed only at a finite number N of frequency points, using the Fast Fourier Transform (FFT). The power spectral density per sample record is calculated as:

P_xx(k) = |X(k)|²/N, X(k) = Σ_{n=0}^{N−1} x(n) e^{−j2πkn/N}, k = 0, 1, ..., N−1.
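A minimal periodogram sketch following the formula above (Python/NumPy; the test signal, a sine in noise, and its parameters are assumptions):

```python
import numpy as np

# Periodogram P_xx(k) = |X(k)|^2 / N, with X(k) computed by FFT.
fs, N = 1000.0, 1024
t = np.arange(N) / fs
rng = np.random.default_rng(7)
x = np.sin(2 * np.pi * 120.0 * t) + rng.normal(scale=0.5, size=N)

X = np.fft.fft(x)
Pxx = np.abs(X) ** 2 / N            # periodogram, k = 0 .. N-1
freqs = np.fft.fftfreq(N, 1.0 / fs)

k_peak = np.argmax(Pxx[:N // 2])    # look only at non-negative frequencies
print(freqs[k_peak])                # ~120 Hz, the embedded harmonic
```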

To obtain a statistically robust estimate, the available data are divided into overlapping segments, and the spectra obtained for each segment are then averaged. The number of samples per segment N and the shift N_t of the beginning of each subsequent segment relative to the beginning of the previous one are specified. The fewer the samples in a segment, the more segments there are and the lower the variance of the estimates. But since the segment length N determines the frequency resolution (2.4), a decrease in the segment length leads to a decrease in the frequency resolution.

Thus, the signal is viewed through a window, and the data that do not fall into the window are taken equal to zero. The finite signal x(n), consisting of N samples, is usually represented as the result of multiplying a time-infinite signal x̃(n) by a rectangular window of finite length w_R(n):

x(n) = x̃(n) · w_R(n),

and the continuous spectrum X_N(f) of the observed signal x(n) is defined as the convolution of the Fourier images X(f) and W_R(f) of the time-infinite signal x̃(n) and the window w_R(n):

X_N(f) = X(f) * W_R(f) = ∫ X(φ) W_R(f − φ) dφ.

The spectrum of a continuous rectangular window has the form of the function sinc(x) = sin(x)/x. It contains a main "lobe" and several side lobes, the largest of which is approximately 13 dB below the main peak (see Fig. 15).
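The −13 dB figure can be reproduced numerically. A minimal sketch (Python/NumPy; the window length and zero-padding factor are assumptions): zero-pad an N-point rectangular window, take its FFT, and measure the highest side lobe relative to the main lobe.

```python
import numpy as np

# Side-lobe level of the rectangular window's spectrum.
N, nfft = 64, 8192
w = np.ones(N)                               # rectangular window
W = np.abs(np.fft.rfft(w, nfft))
W_db = 20 * np.log10(W / W.max() + 1e-12)    # magnitude in dB re main peak

# Find the first null after the main lobe, then the peak beyond it.
first_null = np.argmax(W_db < -60)           # deep notch marks the null
sidelobe = W_db[first_null:].max()
print(round(sidelobe, 1))                    # ~ -13.3 dB
```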

The Fourier transform (spectrum) of a discrete sequence obtained by N-point sampling of a continuous rectangular window is shown in Fig. 32. It can be calculated by summing the shifted sinc functions (2.9), which results in the Dirichlet kernel.

Fig. 32. Spectrum of a discrete rectangular window

While an infinite signal concentrates its power exactly at the discrete frequency f_k, a rectangular sample of the signal has a distributed power spectrum. The shorter the sample, the more spread out the spectrum.

In spectral analysis, the data are weighted using window functions, thereby reducing the influence of side “lobes” on spectral estimates.

To detect two harmonics f_1 and f_2 with close frequencies, it is necessary that for the time window T the width of the main "lobe" Δf_{−3} ≈ Δf_{L=0} = 1/T, determined at the −3 dB level, be less than the difference of the sought frequencies:

Δf = f_1 − f_2 > Δf_{−3}.

The width of the time window T is related to the sampling frequency f_s and the number of samples by formula (2.4).
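The resolution condition can be demonstrated directly. A minimal sketch (Python/NumPy; the tone frequencies, window lengths and the peak-counting criterion are assumptions): with T < 1/Δf the two tones merge into one spectral peak, with T > 1/Δf they separate. Note that zero-padding refines the frequency grid but does not improve resolution.

```python
import numpy as np

# Two tones 3 Hz apart: resolvable only when the window length T > ~1/3 s.
fs, f1, f2 = 1000.0, 100.0, 103.0

def peak_count(T):
    n = int(fs * T)
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    P = np.abs(np.fft.rfft(x, 16384)) ** 2   # zero-padded periodogram
    f = np.fft.rfftfreq(16384, 1.0 / fs)
    band = P[(f > 90) & (f < 113)]
    # count local maxima above half the strongest peak
    peaks = (band[1:-1] > band[:-2]) & (band[1:-1] > band[2:]) \
            & (band[1:-1] > 0.5 * band.max())
    return int(peaks.sum())

print(peak_count(0.1))   # T < 1/df: the tones merge into one peak
print(peak_count(1.0))   # T > 1/df: two distinct peaks
```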

Harmonic analysis tools. To study signals, it is very convenient to use the MATLAB package, in particular its Signal Processing Toolbox.

Modified periodograms use non-rectangular window functions, which reduce the Gibbs effect. An example is the Hamming window. At the same time, however, the width of the main lobe of the spectrogram approximately doubles. The Kaiser window is somewhat better optimized in this respect. When designing low-pass filters, an increase in the width of the main lobe leads to an increase in the transition band (between the pass and stop bands).

Welch's method. The method consists of dividing the time data into sequential (possibly overlapping) segments, processing each segment, and then estimating the spectrum by averaging the results over the segments. Non-rectangular window functions, such as the Hamming window, can be used to improve the estimate. Increasing the number of segments decreases the variance, but the frequency resolution of the method decreases at the same time. The method gives good results even when the useful signal exceeds the noise only slightly, and it is often used in practice.
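A minimal Welch-method sketch (Python/SciPy; the test signal of two tones in strong noise and the segment parameters are assumptions): averaged, Hamming-windowed, overlapping segments pull weak tones out of the noise.

```python
import numpy as np
from scipy import signal

# Welch estimate: two tones buried in noise with twice their amplitude.
fs = 1000.0
rng = np.random.default_rng(8)
t = np.arange(200_000) / fs
x = (np.sin(2 * np.pi * 50.0 * t) + 0.5 * np.sin(2 * np.pi * 180.0 * t)
     + rng.normal(scale=2.0, size=t.size))

# Longer segments -> finer frequency resolution but fewer averages (more
# variance); shorter segments -> smoother but coarser estimates.
f, Pxx = signal.welch(x, fs=fs, window="hamming", nperseg=2048, noverlap=1024)
for target in (50.0, 180.0):
    k = np.argmin(np.abs(f - target))
    print(f[k], Pxx[k] > 10 * np.median(Pxx))   # tone bins stand out: True
```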

Figure 33 shows estimates of the harmonic composition of data containing narrow-band useful signals and white noise, for different sample lengths (N = 100, N = 67) and different methods.

Fig. 33. Estimation of signal harmonics for a 1024-point FFT

Parametric methods use autoregressive (AR) models. These methods build filter models and use them to estimate the signal spectra. All such methods give biased estimates in the presence of noise in the signal. They are intended for processing signals with harmonic components against a background of noise. The order of the method (filter) is set to twice the number of harmonics present in the signal. Several parametric methods have been proposed.

The Burg method gives high frequency resolution for short samples. With a large filter order, the spectral peaks split. The position of the spectral peaks depends on the initial phases of the harmonics.

The covariance method makes it possible to estimate the spectrum of a signal containing a sum of harmonic components.

The Yule-Walker method gives good results on long samples and is not recommended for short samples.

Correlation methods. The MUSIC (Multiple Signal Classification) and EV (eigenvector) methods produce results in the form of a pseudospectrum. The methods are based on the analysis of the eigenvectors of the signal correlation matrix. These methods give somewhat better frequency resolution than autocorrelation methods.