
Correlation analysis of two signals: a worked example. Correlation functions of deterministic signals

Correlation is a mathematical operation, similar to convolution, that produces a third signal from two signals. It comes in two forms: autocorrelation (the autocorrelation function) and cross-correlation (the cross-correlation function). Example:

[Cross correlation function]

[Autocorrelation function]

Correlation is a technique for detecting previously known signals against a background of noise, also called optimal (matched) filtering. Although correlation is very similar to convolution, the two are computed differently, and their areas of application also differ: c(t) = a(t) * b(t) is the convolution of two functions, while d(t) = a(t) * b(−t) is their cross-correlation.

Correlation is the same convolution, only with one of the signals flipped left to right. Autocorrelation (the autocorrelation function) characterizes the degree of connection between a signal and a copy of it shifted by τ. The cross-correlation function characterizes the degree of connection between two different signals.
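This relationship is easy to check numerically. A minimal NumPy sketch (the example arrays are arbitrary): cross-correlating a with b gives the same result as convolving a with the time-reversed copy of b.

```python
import numpy as np

# Two arbitrary finite example signals.
a = np.array([1.0, 2.0, 3.0, 0.0])
b = np.array([0.5, -1.0, 2.0, 1.5])

# Cross-correlation d(t) = a(t) * b(-t): convolving a with the
# time-reversed copy of b reproduces np.correlate's "full" output.
corr = np.correlate(a, b, mode="full")
conv_flipped = np.convolve(a, b[::-1], mode="full")
```

The identity holds for any pair of real finite sequences, which is exactly the "correlation = convolution with one signal flipped" statement above.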

Autocorrelation function properties:

  • 1) R(τ) = R(−τ). The function R(τ) is even.
  • 2) If x(t) is a sinusoidal function of time, then its autocorrelation function is a cosine of the same frequency; the initial-phase information is lost. If x(t) = A·sin(ωt + φ), then R(τ) = (A²/2)·cos(ωτ).
  • 3) The autocorrelation function and the power spectrum are related by the Fourier transform.
  • 4) If x(t) is any periodic function, then its R(τ) can be represented as the sum of the autocorrelation functions of its constant component and of its sinusoidally varying component.
  • 5) The function R(τ) carries no information about the initial phases of the harmonic components of the signal.
  • 6) For a random function of time, R(τ) decreases rapidly with increasing τ. The time interval after which R(τ) becomes equal to 0 is called the autocorrelation interval.
  • 7) A given x(t) corresponds to a well-defined R(τ), but different functions x(t) may correspond to the same R(τ).
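Property 2 can be verified numerically. A sketch assuming illustrative values for the amplitude, frequency and lag: the time-averaged ACF of A·sin(ωt + φ) matches (A²/2)·cos(ωτ) regardless of the initial phase φ.

```python
import numpy as np

A, f = 2.0, 5.0                                    # amplitude, frequency in Hz
t = np.linspace(0.0, 1.0, 5000, endpoint=False)    # window of 5 full periods
dt = t[1] - t[0]

def acf_periodic(x, k):
    """R(tau) for a periodic signal: time average of x(t)*x(t+tau)."""
    return np.mean(x * np.roll(x, -k))

k = 150                                            # lag of 150 samples = 0.03 s
tau = k * dt
# Same lag, three different initial phases:
results = [acf_periodic(A * np.sin(2*np.pi*f*t + phi), k)
           for phi in (0.0, 1.0, 2.5)]
expected = A**2 / 2 * np.cos(2*np.pi*f*tau)        # (A^2/2)*cos(w*tau)
```

All three phases give the same ACF value, illustrating properties 2 and 5 at once.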

Original signal with noise:

Autocorrelation function of the original signal:

Cross-correlation function (CCF) properties:

  • 1) The CCF is in general neither an even nor an odd function, i.e. R_xy(τ) is not equal to R_xy(−τ).
  • 2) The CCF remains unchanged when the order of the functions is swapped together with the sign of the argument, i.e. R_xy(τ) = R_yx(−τ).
  • 3) If the random functions x(t) and y(t) contain no constant components and are produced by independent sources, then R_xy(τ) for them tends to 0. Such functions are called uncorrelated.

Original signal with noise:

A square wave (meander) of the same frequency:

Correlation of the original signal and the square wave:




3 Correlation analysis of signals

The meaning of spectral analysis of signals is to study how a signal can be represented as a sum (or integral) of simple harmonic oscillations and how the waveform determines the structure of the frequency distribution of the amplitudes and phases of these oscillations. In contrast, the task of correlation analysis of signals is to determine a measure of the degree of similarity and difference between signals or time-shifted copies of one signal. The introduction of a measure opens the way to quantitative measurements of the degree of similarity of signals. It will be shown that there is a certain relationship between the spectral and correlation characteristics of the signals.

3.1 Autocorrelation function (ACF)

The autocorrelation function of a signal with finite energy is the integral of the product of two copies of the signal, shifted relative to each other by the time τ, considered as a function of this time shift τ:

B(τ) = ∫ s(t)·s(t − τ) dt, the integral taken over the entire time axis.

If the signal is defined over a finite time interval, then its ACF is found as

B(τ) = ∫ s(t)·s(t − τ) dt, the integral taken over T,

where T is the overlap interval of the shifted copies of the signal.

The greater the value of the autocorrelation function at a given shift, the more similar the two copies of the signal shifted by that time interval are to each other. The correlation function is therefore a measure of similarity for shifted copies of the signal.

The similarity measure introduced in this way for signals having the form of random oscillations around zero has the following characteristic properties.

If the shifted copies of the signal oscillate approximately in phase with each other, this is a sign of their similarity and the ACF takes large positive values (large positive correlation). If the copies oscillate almost in antiphase, the ACF takes large negative values (anti-similarity of the signal copies, large negative correlation).

The maximum of the ACF is achieved when the copies coincide, that is, when there is no shift. Zero ACF values occur at shifts for which neither similarity nor anti-similarity of the signal copies is noticeable (zero correlation, no correlation).

Figure 3.1 shows a fragment of the realization of a certain signal in the time interval from 0 to 1 s. The signal fluctuates randomly around zero. Since the signal exists over a finite interval, its energy is also finite. Its ACF can be calculated by integrating the product of the signal and its shifted copy over the interval where the two copies overlap.

The autocorrelation function of the signal, calculated in MathCad in accordance with this equation, is shown in Fig. 3.2. The correlation function shows not only that the signal is similar to itself (shift τ = 0), but also that the copies of the signal that are shifted relative to each other by about 0.063 s (lateral maximum of the autocorrelation function) also have some similarity. In contrast to this, the copies of the signal, shifted by 0.032 s, should be anti-similar to each other, that is, in a sense, they should be opposite to each other.
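The same kind of computation is easy to reproduce outside MathCad. A minimal NumPy sketch (the random test signal and lag range are illustrative, not the signal of Fig. 3.1): the ACF is evaluated over the overlap of the shifted copies, peaks at zero shift at a value equal to the signal energy, and is even in τ.

```python
import numpy as np

rng = np.random.default_rng(0)        # illustrative random signal on 0..1 s
N = 1000
dt = 1.0 / N
x = rng.standard_normal(N)

def acf_finite(x, k, dt):
    """B(tau) = integral over the overlap of x(t)*x(t+tau) dt; k = lag in samples."""
    k = abs(k)
    return np.sum(x[:N-k] * x[k:]) * dt

lags = range(-50, 51)
B = [acf_finite(x, k, dt) for k in lags]
B0 = acf_finite(x, 0, dt)             # equals the signal energy sum(x^2)*dt
```

For this noise-like signal the ACF drops off quickly away from τ = 0, in line with property 6 of the ACF listed earlier.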

Figure 3.3 shows the pairs of these two copies. The figure illustrates what is meant by the similarity and anti-similarity of the signal copies.

The correlation function has the following properties:

1. At τ = 0 the autocorrelation function takes its largest value, equal to the signal energy.

2. The autocorrelation function is an even function of the time shift: B(τ) = B(−τ).

3. With increasing τ, the autocorrelation function decreases to zero.

4. If the signal contains no discontinuities of the δ-function type, then B(τ) is a continuous function.

5. If the signal is an electrical voltage, the correlation function has the dimension V²·s.

For periodic signals, the same integral in the definition of the autocorrelation function is divided by the signal repetition period:

B(τ) = (1/T) ∫ over T of s(t)·s(t − τ) dt.

The introduced correlation function has the following properties:

The value of the correlation function at zero is equal to the signal power.

The dimension of the correlation function is the square of the dimension of the signal, for example V² for a voltage signal.

For example, let us calculate the correlation function of the harmonic oscillation s(t) = A·cos(ωt + φ). Using a series of trigonometric transformations, we finally get B(τ) = (A²/2)·cos(ωτ).

Thus, the autocorrelation function of a harmonic oscillation is a cosine with the same period of variation as the signal itself. At shifts that are multiples of the oscillation period, the harmonic maps onto itself and the ACF takes its largest value, equal to half the square of the amplitude. Time shifts that are multiples of half the period are equivalent to a phase shift by the angle π: the sign of the oscillation changes, and the ACF takes its minimum value, negative and equal to half the square of the amplitude. Shifts that are multiples of a quarter period turn a sine oscillation into a cosine one and vice versa; in this case the ACF vanishes. Such signals, which are in quadrature with respect to each other, turn out to be completely dissimilar from the point of view of the autocorrelation function.

It is important that the expression for the signal correlation function does not include its initial phase. The phase information has been lost, which means that the signal itself cannot be reconstructed from its correlation function: the mapping from signal to ACF, unlike the mapping from signal to spectrum, is not one-to-one.

If we understand the signal generation mechanism as a certain demiurge creating a signal according to the correlation function he has chosen, then he could create a whole set of signals (an ensemble of signals) that actually have the same correlation function, but differ from each other in phase relationships.

The differing phase relationships could represent, for example:

  • an act of the signal manifesting its own free will, independent of the will of its creator (the emergence of individual realizations of some random process);
  • the result of external action upon the signal (the introduction into the signal of measurement information obtained while measuring some physical quantity).

The situation is similar for any periodic signal. If a periodic signal with fundamental period T has amplitude spectrum Aₙ and phase spectrum φₙ, then its correlation function takes the form of a sum of cosines at the harmonic frequencies with coefficients Aₙ²/2; the phase spectrum φₙ does not enter the result.

Already in these examples, a certain connection between the correlation function and the spectral properties of the signal is manifested. These ratios will be discussed in more detail later.

3.2 Cross-correlation function (CCF).

In contrast to the autocorrelation function, the cross-correlation function determines the degree of similarity between copies of two different signals x(t) and y(t), shifted by the time τ relative to each other:

B_xy(τ) = ∫ x(t)·y(t − τ) dt.

The cross-correlation function has the following properties:

1. At τ = 0 the cross-correlation function takes a value equal to the mutual energy of the signals, that is, the energy of their interaction:

B_xy(0) = ∫ x(t)·y(t) dt.

2. For any τ the following relation holds:

|B_xy(τ)| ≤ √(E_x·E_y),

where E_x and E_y are the signal energies.

3. Changing the sign of the time shift is equivalent to interchanging the signals:

B_xy(τ) = B_yx(−τ).

4. With increasing τ, the cross-correlation function decreases to zero, although not monotonically.

5. The value of the cross-correlation function at τ = 0 is not distinguished among its other values.
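Properties 1 and 3 can be checked numerically. A sketch with two arbitrary finite random signals; ccf here is a discrete stand-in for the overlap integral.

```python
import numpy as np

rng = np.random.default_rng(1)        # two illustrative finite signals
x = rng.standard_normal(256)
y = rng.standard_normal(256)
dt = 1.0

def ccf(x, y, k):
    """B_xy(tau) = sum of x(t)*y(t - tau) over the overlap; k = lag in samples."""
    if k >= 0:
        return np.sum(x[k:] * y[:len(y)-k]) * dt
    return np.sum(x[:len(x)+k] * y[-k:]) * dt

# Property 3: changing the sign of the shift swaps the signals.
lhs = [ccf(x, y, k) for k in range(-5, 6)]
rhs = [ccf(y, x, -k) for k in range(-5, 6)]

mutual_energy = ccf(x, y, 0)          # property 1: value at zero shift
```

For these two unrelated noise signals the CCF values are small and irregular, and, unlike an ACF, the value at zero shift is not a maximum in general.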

For periodic signals, the concept of cross-correlation function, as a rule, is not used at all.

Instruments for measuring the values of autocorrelation and cross-correlation functions are called correlometers (correlators). Correlometers are used, for example, to solve the following information and measurement tasks:

  • statistical analysis of electroencephalograms and other recordings of biopotentials;
  • determination of the spatial coordinates of a signal source from the time shift at which the CCF reaches its maximum;
  • isolation of a weak signal against a background of strong, statistically unrelated interference;
  • detection and localization of information-leakage channels by determining the correlation between radio signals inside and outside a room;
  • automated near-field detection, recognition and search for operating radio-emitting eavesdropping devices, including mobile phones used as eavesdropping devices;
  • localization of pipeline leaks from the CCF of two acoustic noise signals caused by the leak, measured at two points where sensors are mounted on the pipe.
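The source-location applications above all reduce to time-delay estimation by the CCF maximum, which can be sketched as follows (all signal parameters are illustrative): a common noise source reaches two sensors with a relative delay, and the lag of the CCF peak recovers that delay.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1000.0                            # sampling rate, Hz
n = 4000
s = rng.standard_normal(n)             # broadband "source" noise
true_delay = 37                        # delay between sensors, in samples

x = s + 0.1 * rng.standard_normal(n)                        # sensor 1
y = np.roll(s, true_delay) + 0.1 * rng.standard_normal(n)   # sensor 2, delayed

ccf = np.correlate(y, x, mode="full")  # lags run from -(n-1) to n-1
lags = np.arange(-(n - 1), n)
est_delay = lags[np.argmax(ccf)]       # lag of the CCF maximum
delay_seconds = est_delay / fs
```

With the delay in hand and a known propagation speed (sound in the pipe wall, radio waves, etc.), the distance from each sensor to the source follows directly.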

3.3 Relationships between correlation and spectral functions.

Both correlation and spectral functions describe the internal structure of signals. It can therefore be expected that some interdependence exists between these two ways of describing signals. You have already seen such a connection in the example of periodic signals.

The cross-correlation function, like any other function of time, can be subjected to the Fourier transform:

F{B_xy} = ∫ B_xy(τ)·e^(−jωτ) dτ = ∫∫ x(t)·y(t − τ)·e^(−jωτ) dt dτ.

Let us change the order of integration:

F{B_xy} = ∫ x(t) [∫ y(t − τ)·e^(−jωτ) dτ] dt.

The expression in square brackets could be taken for the Fourier transform of the signal y(t), but the minus sign in the exponent is missing. Indeed, the substitution s = t − τ shows that the inner integral equals e^(−jωt)·Y*(ω), an expression containing the complex conjugate of the spectral function of y(t).

The factor Y*(ω) does not depend on time, so it can be taken outside the outer integral; the outer integral then simply gives the spectral function X(ω) of the signal x(t). Finally, we have:

F{B_xy} = X(ω)·Y*(ω).

This means that the Fourier transform of the cross-correlation function of two signals equals the product of their spectral functions, one of which is complex-conjugated. This product is called the cross-spectrum of the signals: W_xy(ω) = X(ω)·Y*(ω).

An important conclusion follows from the obtained expression: if the spectra of signals x (t) and y (t) do not overlap each other, that is, they are located in different frequency ranges, then such signals are uncorrelated, independent of each other.

If we set x(t) = y(t) in the formulas above, we obtain the expression for the Fourier transform of the autocorrelation function: F{B} = X(ω)·X*(ω) = |X(ω)|².

This means that the autocorrelation function of the signal and the square of the modulus of its spectral function are related to each other through the Fourier transform.

The function |X(ω)|² is called the energy spectrum of the signal. The energy spectrum shows how the total energy of the signal is distributed over the frequencies of its individual harmonic components.
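In discrete form, this connection between the ACF and the energy spectrum (the Wiener–Khinchin relation) can be verified directly; the test signal below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(128)

X = np.fft.fft(x)
energy_spectrum = np.abs(X) ** 2       # |X(w)|^2

# Circular autocorrelation computed directly in the time domain:
acf = np.array([np.sum(x * np.roll(x, -k)) for k in range(len(x))])
acf_spectrum = np.fft.fft(acf)         # DFT of the ACF
```

The DFT of the circular ACF coincides, up to floating-point error, with the squared magnitude of the signal's DFT.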

3.4 Energy characteristics of signals from the frequency domain

The cross-correlation function of two signals is related by the Fourier transform to the cross-spectrum of the signals, and can therefore be expressed as the inverse Fourier transform of the cross-spectrum:

B_xy(τ) = (1/2π) ∫ X(ω)·Y*(ω)·e^(jωτ) dω.

Now let us substitute the time shift τ = 0 into this chain of equalities. The result is a relation known as the generalized Rayleigh equality:

∫ x(t)·y(t) dt = (1/2π) ∫ X(ω)·Y*(ω) dω,

that is, the integral of the product of two signals equals (up to the factor 1/2π) the integral of the product of their spectra, one of which is complex-conjugated. Setting y(t) = x(t) gives

∫ x²(t) dt = (1/2π) ∫ |X(ω)|² dω.

This relation is called Parseval's equality.

Periodic signals have infinite energy but finite power. When considering them, we have already encountered the possibility of computing the power of a periodic signal as the sum of the squared moduli of the coefficients of its complex spectrum:

P = Σ |Cₙ|², the sum running over all n.

This relation is completely analogous to Parseval's equality.
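The discrete analogue of Parseval's equality is easy to confirm with the FFT (for the DFT the factor 1/2π becomes 1/N).

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(64)
X = np.fft.fft(x)

# Energy computed in the time domain and in the frequency domain:
time_energy = np.sum(x ** 2)
freq_energy = np.sum(np.abs(X) ** 2) / len(x)
```

The two energies agree to machine precision for any real input sequence.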


In communication theory, correlation theory is used in the study of random processes, making it possible to establish a relationship between the correlation and spectral properties of random signals. The problem often arises of detecting one transmitted signal within another signal or within interference. For reliable signal detection, the correlation method, based on correlation theory, is applied. In practice it also proves useful to analyze characteristics that give an idea of the rate of change of a signal in time, as well as of its duration, without decomposing the signal into harmonic components.

Let the copy u(t − τ) of a signal be shifted from its original u(t) by the time interval τ. To quantify the degree of difference (connection) between the signal u(t) and its shifted copy u(t − τ), the autocorrelation function (ACF) is used. The ACF shows the degree of similarity between a signal and its shifted copy: the larger the ACF value, the stronger this similarity.

For a deterministic signal of finite duration (a finite signal), the ACF is written analytically as an integral of the form

B(τ) = ∫ u(t)·u(t − τ) dt.   (2.56)

Formula (2.56) shows that with no shift of the copy relative to the signal (τ = 0) the ACF is positive, maximal and equal to the signal energy:

B(0) = ∫ u²(t) dt = E.

This is the energy [J] released in a resistor of 1 Ohm when the voltage u(t) [V] is applied to its terminals.

One of the most important properties of the ACF is its evenness: B(τ) = B(−τ). Indeed, if in expression (2.56) we change the variable to x = t − τ, the value of the integral does not change. Therefore, the integral (2.56) can also be represented in another form:

B(τ) = ∫ u(x)·u(x + τ) dx.

For a periodic signal with period T, whose energy is infinitely large (since the signal exists for an infinite time), calculating the ACF by formula (2.56) is not acceptable. In this case the ACF is defined over the period:

B(τ) = (1/T) ∫ over T of u(t)·u(t − τ) dt.   (2.57)

Example 2.3

Let us determine the ACF of a rectangular pulse of amplitude E and duration τ_p (Fig. 2.24).

Solution

For a pulse it is convenient to compute the ACF graphically. The construction is shown in Fig. 2.24, a–d, which gives the original pulse u(t), its copy u(t − τ) and their product u(t)·u(t − τ). Consider the graphical computation of the integral (2.56). The product u(t)·u(t − τ) is nonzero only during the time interval in which the signal and its copy overlap. As follows from Fig. 2.24, this interval equals τ_p − |τ| when the time shift of the copy is smaller than the pulse duration. In such cases the ACF of the pulse is B(τ) = E²·(τ_p − |τ|); in particular, with no shift B(0) = E²·τ_p, the pulse energy (see Fig. 2.24, d).

Fig. 2.24: a – pulse; b – copy; c – product of signal and copy; d – ACF

A numerical parameter convenient for analyzing and comparing signals is often introduced: the correlation interval τ_k, equal analytically and graphically to the width of the base of the ACF. For this example, the correlation interval is τ_k = 2τ_p.
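Example 2.3 can be reproduced numerically. A sketch with illustrative amplitude, duration and sampling rate: the computed ACF follows the triangle law B(τ) = E²·(τ_p − |τ|).

```python
import numpy as np

E, tau_p = 2.0, 0.5                    # pulse amplitude and duration (illustrative)
fs = 10000.0                           # sampling rate
dt = 1.0 / fs
n = round(tau_p * fs)
x = np.full(n, E)                      # the rectangular pulse as samples

def acf(x, k):
    """Discrete version of B(tau) over the overlap; k = lag in samples."""
    k = abs(k)
    return np.sum(x[:len(x)-k] * x[k:]) * dt

tau = 0.2                              # a shift inside the pulse duration
k = round(tau * fs)
B_tau = acf(x, k)
B_expected = E**2 * (tau_p - tau)      # triangle law
B_zero = acf(x, 0)                     # pulse energy E^2 * tau_p
```

The base of the triangular ACF spans shifts from −τ_p to +τ_p, so the correlation interval is indeed 2τ_p.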

Example 2.4

Determine the ACF of the harmonic (cosine) signal u(t) = U_m·cos(ωt + φ).


Fig. 2.25: a – harmonic signal; b – ACF of the harmonic signal

Solution

Using formula (2.57), we find B(τ) = (U_m²/2)·cos(ωτ).

It follows from this formula that the ACF of a harmonic signal is itself a harmonic function (Fig. 2.25, b) and has the dimension of power (V²). Note one more very important fact: the computed ACF does not depend on the initial phase of the harmonic signal (the parameter φ).

An important conclusion follows from the analysis: ACF of almost any signal does not depend on its phase spectrum. Consequently, the signals, the amplitude spectra of which completely coincide, and the phase spectra differ, will have the same ACF. Another remark is that the original signal cannot be restored from the ACF (again, due to the loss of phase information).
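The conclusion that signals with equal amplitude spectra share one ACF can be illustrated with a circular time shift, which alters only the phase spectrum (signal length and shift below are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(128)
y = np.roll(x, 17)                     # same |spectrum|, different phase spectrum

def circ_acf(s):
    """Circular autocorrelation of a sampled signal."""
    return np.array([np.sum(s * np.roll(s, -k)) for k in range(len(s))])

acf_x = circ_acf(x)
acf_y = circ_acf(y)
amp_x = np.abs(np.fft.fft(x))
amp_y = np.abs(np.fft.fft(y))
```

The amplitude spectra coincide and so do the ACFs, even though x and y are visibly different signals; conversely, neither x nor y could be recovered from the shared ACF.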

Relationship between the ACF and the signal energy spectrum. Let a pulse signal u(t) have the spectral density S(ω). We determine the ACF from formula (2.56), writing u(t) as the inverse Fourier transform (2.30); this gives relation (2.58).

Introducing the new variable x = t − τ, we find that the inner integral in (2.58) is the complex conjugate S*(ω) of the spectral density of the signal; this is relation (2.59).

Taking relation (2.59) into account, formula (2.58) takes a form containing the function

W(ω) = |S(ω)|²,   (2.60)

called the energy spectrum (spectral energy density) of the signal, which shows the distribution of the signal energy over frequency. For a voltage signal, the energy spectrum has the dimension [(V²·s)/Hz].

Taking relation (2.60) into account, we finally obtain the ACF as the inverse Fourier transform of the energy spectrum:

B(τ) = (1/2π) ∫ W(ω)·e^(jωτ) dω,   (2.61)

while the direct Fourier transform of the ACF gives the energy spectrum:

W(ω) = ∫ B(τ)·e^(−jωτ) dτ.   (2.62)

So, the direct Fourier transform (2.62) of the ACF determines the energy spectrum, while the inverse Fourier transform of the energy spectrum (2.61) gives the ACF of a deterministic signal. These results are important for two reasons. First, the correlation properties of signals can be estimated from the distribution of energy in the spectrum: the wider the energy spectrum of the signal, the smaller the correlation interval; correspondingly, the larger the correlation interval, the narrower the energy spectrum. Second, relations (2.61) and (2.62) make it possible to determine one of the functions experimentally from the other. It is often more convenient first to obtain the ACF and then to compute the energy spectrum by the direct Fourier transform. This technique is widely used in real-time analysis of signal properties, i.e., without delay in processing.
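The first observation, that a wider spectrum means a shorter correlation interval, can be illustrated by comparing white noise with its low-pass-filtered version (the moving-average filter and lag are illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(6)
white = rng.standard_normal(8192)                           # broadband signal
smooth = np.convolve(white, np.ones(16) / 16, mode="same")  # narrower spectrum

def acf_norm(s, k):
    """ACF at lag k, normalized by the value at lag 0."""
    return float(np.dot(s[:len(s)-k], s[k:]) / np.dot(s, s))

# Normalized ACF at a small lag: near zero for white noise,
# large for the smoothed (narrowband) signal.
r_white = acf_norm(white, 4)
r_smooth = acf_norm(smooth, 4)
```

The broadband signal decorrelates within a few samples, while the filtered signal stays correlated over roughly the filter length, in line with relations (2.61) and (2.62).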

Cross-correlation function of two signals. If it is necessary to assess the degree of connection between signals u₁(t) and u₂(t), the cross-correlation function (CCF) is used:

B₁₂(τ) = ∫ u₁(t)·u₂(t − τ) dt.

For τ = 0, the CCF equals the so-called mutual energy of the two signals.

The value of the CCF does not change if, instead of delaying the second signal u₂(t), we consider an advance of the first signal u₁(t); therefore

B₁₂(τ) = B₂₁(−τ).

The ACF is a special case of the CCF for identical signals, i.e. u₁(t) = u₂(t) = u(t). Unlike the ACF, the CCF of two signals B₁₂(τ) is not even and is not necessarily maximal at τ = 0, i.e. in the absence of a time shift between the signals.

In the early stages of the development of radio engineering, the question of choosing the best signals for certain specific applications was not very acute. This was due, on the one hand, to the relatively simple structure of transmitted messages (telegraphic messages, radio broadcasting); on the other hand, the practical implementation of signals of complex shape in combination with equipment for their coding, modulation and reverse transformation into a message turned out to be difficult to implement.

Currently, the situation has changed radically. In modern radio-electronic complexes, the choice of signals is dictated primarily not by the technical convenience of their generation, conversion and reception, but by the possibility of optimal solution of the problems envisaged in the design of the system. In order to understand how the need for signals with specially selected properties arises, consider the following example.

Comparison of time-shifted signals.

Let us turn to a simplified picture of the operation of a pulsed radar designed to measure the range to a target. Here the information about the measured object is contained in the value τ, the time delay between the probing and received signals. The shapes of the probing signal u(t) and the received signal u(t − τ) are identical for any delay.

The block diagram of a radar signal processing device intended for measuring range may look as shown in Fig. 3.3.

The system consists of a set of elements that delay the "reference" transmitted signal for some fixed time intervals

Fig. 3.3. Signal delay time measuring device

The delayed signals, together with the received signal, are fed to the comparison devices, operating in accordance with the principle: the output signal appears only if both input oscillations are "copies" of each other. Knowing the number of the channel in which the specified event occurs, it is possible to measure the delay, and hence the range to the target.

Such a device will work the more accurately, the more the signal and its "copy", shifted in time, differ from each other.

This gives us a good idea of which signals are "good" for a given application.

Let's move on to the exact mathematical formulation of the problem posed and show that this range of issues is directly related to the theory of energy spectra of signals.

Autocorrelation function of the signal.

To quantify the degree of difference between a signal u(t) and its time-shifted copy, it is customary to introduce the autocorrelation function (ACF) of the signal, equal to the scalar product of the signal and its copy:

B(τ) = ∫ u(t)·u(t − τ) dt.   (3.15)

In what follows, we will assume that the signal under study has an impulsive character localized in time, so that an integral of the form (3.15) certainly exists.

It is seen directly that at τ = 0 the autocorrelation function becomes equal to the signal energy:

B(0) = ∫ u²(t) dt = E.

One of the simplest properties of the ACF is its evenness: B(τ) = B(−τ). Indeed, if we make the change of variables x = t − τ in the integral (3.15), the value of the integral is unchanged.

Finally, an important property of the autocorrelation function is the following: for any value of the time shift τ, the modulus of the ACF does not exceed the signal energy: |B(τ)| ≤ B(0) = E.

This fact follows directly from the Cauchy - Bunyakovsky inequality (see Ch. 1):

So, the ACF appears to be a symmetrical curve with a central maximum, which is always positive. In this case, depending on the type of signal, the autocorrelation function can have both monotonically decreasing and oscillating character.

Example 3.3. Find the ACF of a rectangular video pulse.

Fig. 3.4, a shows a rectangular video pulse of amplitude U and duration τ_p, together with its copy shifted toward delay by τ. The integral (3.15) is computed elementarily in this case from a graphical construction: the product u(t)·u(t − τ) is nonzero only within the time interval in which the two signals overlap. From Fig. 3.4 it can be seen that this interval equals τ_p − τ if the shift does not exceed the pulse duration. Thus, for the signal considered,

B(τ) = U²·(τ_p − |τ|) for |τ| ≤ τ_p.

The graph of such a function is a triangle shown in Fig. 3.4, b. The width of the base of the triangle is twice the pulse width.

Fig. 3.4. Finding the ACF of a rectangular video pulse

Example 3.4. Find the ACF of a rectangular radio pulse.

We will consider a radio signal of the form

Knowing in advance that the ACF is even, we calculate the integral (3.15) by setting τ > 0. We then have

whence we easily get

Naturally, at τ = 0 this value becomes equal to the energy of the pulse (see Example 1.9). Formula (3.21) describes the ACF of a rectangular radio pulse for all shifts not exceeding the pulse duration; if the absolute value of the shift exceeds the pulse duration, the autocorrelation function vanishes identically.

Example 3.5. Determine the ACF of a sequence of rectangular video pulses.

In radar, signals are widely used, which are packets of pulses of the same shape, following each other at the same time interval. To detect such a burst, as well as to measure its parameters, for example, position in time, devices are created that hardware implement algorithms for calculating the ACF.

Fig. 3.5. ACF of a burst of three identical video pulses: a – burst of pulses; b – ACF graph

Fig. 3.5, a shows a burst of three identical rectangular video pulses; its autocorrelation function, computed by formula (3.15), is shown in Fig. 3.5, b.

It is clearly seen that the maximum of the ACF is reached at τ = 0. However, if the delay is a multiple of the repetition period of the sequence, side lobes of the ACF appear that are comparable in height with the main lobe. One can therefore speak of a certain imperfection of the correlation structure of this signal.
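Example 3.5 is easy to reproduce numerically; the pulse width and repetition period below are illustrative. The ACF side lobe at a shift of one period is comparable in height with the main lobe.

```python
import numpy as np

pulse = np.ones(10)                     # one rectangular video pulse (10 samples)
period = 30                             # repetition period in samples
burst = np.zeros(3 * period)
for i in range(3):                      # burst of three identical pulses
    burst[i*period : i*period + len(pulse)] = pulse

acf = np.correlate(burst, burst, mode="full")
lags = np.arange(-(len(burst) - 1), len(burst))
main_lobe = acf[lags == 0][0]           # all three pulses overlap
side_lobe = acf[lags == period][0]      # two of three pulses overlap
```

Here the main lobe collects contributions from all three pulses, while the side lobe at one period still collects two of them, i.e. two thirds of the main-lobe height.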

Autocorrelation function of an infinitely extended signal.

If it is required to consider periodic sequences infinitely extended in time, then the approach to studying the correlation properties of signals should be somewhat modified.

We will assume that such a sequence is obtained from some localized in time, i.e., impulse, signal, when the duration of the latter tends to infinity. In order to avoid the divergence of the obtained expressions, we define the new ACF as the average value of the scalar product of the signal and its copy:

With this approach, the autocorrelation function becomes equal to the average mutual power of these two signals.

For example, to find the ACF of an infinite cosine wave, one can use formula (3.21), obtained for a radio pulse of duration τ_p, and then pass to the limit τ_p → ∞ in accordance with definition (3.22). The resulting ACF is itself a periodic function of τ; its value at τ = 0 equals the average signal power.

The relationship between the energy spectrum of a signal and its autocorrelation function.

When studying the material of this chapter, the reader may think that the methods of correlation analysis act as some special techniques that have no connection with the principles of spectral decomposition. However, it is not. It is easy to show that there is a close relationship between the ACF and the energy spectrum of the signal.

Indeed, in accordance with formula (3.15), the ACF is the scalar product B(τ) = (u, u_τ), where u_τ denotes the copy of the signal u(t) shifted in time by τ.

Turning to the generalized Rayleigh formula (2.42), we can write the equality

The spectral density of the time-shifted copy equals the spectral density of the original signal multiplied by e^(−jωτ).

Thus, we come to the result:

The square of the modulus of the spectral density is known to represent the energy spectrum of the signal. So, the energy spectrum and the autocorrelation function are related by the Fourier transform:

It is clear that there is also an inverse relationship:

These results are fundamentally important for two reasons. First, it turns out to be possible to estimate the correlation properties of signals based on the distribution of their energy over the spectrum. The wider the signal bandwidth, the narrower the main lobe of the autocorrelation function and the more perfect the signal from the point of view of the possibility of accurately measuring the moment of its beginning.

Second, formulas (3.24) and (3.26) indicate the way to experimentally determine the energy spectrum. It is often more convenient to first obtain the autocorrelation function, and then, using the Fourier transform, find the energy spectrum of the signal. This technique has become widespread in the study of the properties of signals using high-speed computers in real time.

From this relation it follows that the correlation interval τ_k turns out to be smaller the higher the upper cutoff frequency of the signal spectrum.

Restrictions imposed on the type of signal autocorrelation function.

The found connection between the autocorrelation function and the energy spectrum makes it possible to establish an interesting and at first glance unobvious criterion for the existence of a signal with given correlation properties. The fact is that the energy spectrum of any signal, by definition, must be positive [see. formula (3.25)]. This condition will not be fulfilled for any choice of the ACF. For example, if you take

and calculate the corresponding Fourier transform, then

This alternating function cannot represent the energy spectrum of any signal.
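The same conclusion can be checked for a concrete candidate: a rectangular B(τ), equal to 1 for |τ| ≤ τ₀ (an illustrative choice, not necessarily the function elided above), has the Fourier transform 2·sin(ωτ₀)/ω, which changes sign and therefore cannot be an energy spectrum.

```python
import numpy as np

# Candidate "ACF": B(tau) = 1 for |tau| <= tau0, 0 otherwise.
# Its Fourier transform is 2*sin(w*tau0)/w, an alternating function.
tau0 = 1.0
w = 4.0                                # a frequency where sin(w*tau0) < 0
spectrum_value = 2 * np.sin(w * tau0) / w
```

Since the transform takes negative values at some frequencies, the positivity requirement on the energy spectrum is violated: no signal has this rectangular function as its autocorrelation function.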