
Lecture 9

"Quantization Effects and Noise in Digital Filters"

In real devices implementing digital signal processing algorithms, the effects caused by quantization of the input signals and by the finite bit width of all registers must be taken into account. The sources of error in signal processing are rounding (or truncation) of the results of arithmetic operations, quantization noise associated with the analog-to-digital conversion of the input analog signals, and inaccuracy in realizing the digital filter characteristics due to rounding of their coefficients.

To analyze the effects associated with the finite bit depth of data representation, it is necessary to make some assumptions regarding the statistical independence of various noise sources arising in the digital filter. A statistical model is adopted based on the following assumptions:

1. Any two noise samples from the same source are not correlated.

2. Any two noise sources create uncorrelated noise.

3. The noise of each source is not correlated with the input signal.

These assumptions greatly simplify the analysis of quantization noise in digital filters, since they make the individual noise sources statistically independent of each other and allow each of them to be analyzed separately. However, the assumptions do not always hold. For example, if the input signal is constant, or is a sinusoid whose frequency is a submultiple of the sampling rate, they fail: in the first case all samples of the quantization error are identical, and in the second they form a periodic sequence. In both cases the assumptions above are violated.
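The breakdown of these assumptions is easy to demonstrate numerically. The following sketch (the bit width and signal values are arbitrary illustrative choices) shows that for a broadband random input the rounding errors behave like uncorrelated noise, while for a constant input every error sample is identical.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = 2.0 ** -8  # quantization step for an assumed 8-bit fractional format

def quantize(x, step):
    """Round each sample to the nearest quantization level."""
    return np.round(x / step) * step

# Broadband input: quantization errors look like uncorrelated noise.
x_noise = rng.uniform(-1, 1, 10_000)
e_noise = quantize(x_noise, Q) - x_noise

# Constant input: every error sample is identical, so the
# "uncorrelated noise" assumption fails completely.
x_const = np.full(10_000, 0.3)
e_const = quantize(x_const, Q) - x_const

print(float(np.ptp(e_const)))   # spread of the constant-input errors
print(float(np.corrcoef(e_noise[:-1], e_noise[1:])[0, 1]))
```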

Quantization effects ultimately lead to errors in the output signals of digital filters, and in some cases to unstable operating modes. By virtue of the assumptions made, the output error of the digital filter is calculated as a superposition of errors due to each independent source.

If a signal x(n) arrives at the input of a digital filter with impulse response h(n), then the output signal of the filter is determined by the convolution

y(n) = Σ_k h(k)·x(n − k) (9.1).

Quantization of the input signal produces quantization noise e_in(n), which is superimposed on the input signal and acts on the filter. Due to the linearity of the filter, the filter's response e_out(n) to the input noise can be calculated as

e_out(n) = Σ_k h(k)·e_in(n − k) (9.2).

This assumes that all computing devices and filter memories are of infinite bit depth.

Similarly, one can find the error of the signal at any point of the filter block diagram caused by the quantization noise e_in(n) of the input signal:

e_i(n) = Σ_k h_i(k)·e_in(n − k) (9.3),

where h_i(n) is the impulse response of the part of the filter from its input to the point at which the error is estimated.

If the filter input signal is quantized with bit width b_in, then the quantization error of the input signal when rounding is used is bounded by

|e_in(n)| ≤ Q_in/2 = 2^(−b_in)/2 (9.4),

and the error in the filter output signal caused by quantization of the input signal can be estimated as

|e_out(n)| ≤ (Q_in/2) Σ_n |h(n)| (9.5).

Thus, the upper bound on the error caused by quantization of the input signal depends on the quantization bit width and on the sum of the absolute values of the samples of the filter impulse response.
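This bound is easy to check numerically. The sketch below (the coefficients b0, a1 and the bit width are illustrative assumptions, not values from the lecture) computes the bound (Q_in/2)·Σ|h(n)| for a first-order impulse response h(n) = b0·a1^n and verifies that random rounding errors never drive the output error past it.

```python
import numpy as np

b0, a1 = 1.0, 0.9
b_in = 12
Q = 2.0 ** -b_in                     # input quantization step
h = b0 * a1 ** np.arange(2000)       # truncated impulse response

bound = (Q / 2) * np.sum(np.abs(h))  # upper bound (9.5) on |e_out(n)|

# Random rounding errors, each bounded by Q/2, never push the
# accumulated output error past the bound.
rng = np.random.default_rng(1)
e_in = rng.uniform(-Q / 2, Q / 2, 5000)
e_out = np.convolve(h, e_in)[: len(e_in)]
peak = float(np.max(np.abs(e_out)))
print(peak, bound)
```

For this geometric impulse response the bound equals (Q/2)·b0/(1 − a1), i.e. the noise gain is 1/(1 − a1) = 10.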

The variance of the input round-off noise is

σ_in² = Q_in²/12 = 2^(−2·b_in)/12 (9.6),

therefore the variance of the quantization noise at the filter output, in accordance with (9.3), is

σ_out² = σ_in² Σ_n h²(n) (9.7).

According to Parseval's equality,

Σ_n h²(n) = (1/ω_s) ∫_0^{ω_s} |H(e^{jωT})|² dω (9.8),

expression (9.7) can be written in the form

σ_out² = (σ_in²/ω_s) ∫_0^{ω_s} |H(e^{jωT})|² dω (9.9),

where |H(e^{jωT})| is the amplitude-frequency characteristic of the digital filter.

Thus, given the permissible value of σ_out² and the known frequency response or impulse response of the filter, one can determine the permissible variance σ_in² of the input signal error, which in turn determines the required bit width b_in for quantizing the input signal.

The signal-to-noise ratio at the filter output, defined as the ratio of signal power to noise power on a logarithmic scale, is

SNR = 10 lg(σ_s²/σ_in²) = 10 lg(12σ_s²) + 6.02·b_in (9.10),

where σ_s² is the variance of the useful input signal and b_in is the bit width of the input signal quantization. Hence an increase of the quantization bit width by one bit raises the signal-to-noise ratio by about 6 dB.
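The 6 dB-per-bit rule can be verified empirically. The following sketch (a uniformly distributed test signal and arbitrary bit widths, chosen only for illustration) measures the quantization SNR for several bit widths and checks that each extra bit adds about 6 dB.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 200_000)  # broadband test signal

def quantization_snr_db(signal, b):
    Q = 2.0 ** -b                          # step for b fractional bits
    e = np.round(signal / Q) * Q - signal  # rounding error
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(e ** 2))

snrs = [quantization_snr_db(x, b) for b in (8, 9, 10)]
steps = np.diff(snrs)  # expected to be close to 20*lg(2) = 6.02 dB per bit
print(steps)
```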

As an example, consider a first-order digital filter described by the equation

y(n) = b0·x(n) + a1·y(n − 1) (9.11).

Its structural diagram is shown in Figure 9.1.

Let the quantization noise of the input signal have variance σ_in². The impulse response of such a filter has the form

h(n) = b0·a1^n, n ≥ 0 (9.12).

According to (9.7), the variance of the output noise of such a filter due to quantization of the input signal is

σ_out² = σ_in² Σ_n b0²·a1^(2n) = σ_in²·b0²/(1 − a1²) (9.13).

For the filter to be stable, the condition |a1| < 1 must be met, and therefore σ_out² > b0²·σ_in², i.e., the output noise power exceeds the input noise power. The closer |a1| is to one, the greater the filter's amplification of the input noise.
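A quick Monte-Carlo experiment confirms this noise gain. The sketch below (parameter values are illustrative assumptions) feeds white noise of variance 1/12 through the recursion y(n) = b0·e(n) + a1·y(n − 1) and compares the measured variance ratio with b0²/(1 − a1²).

```python
import numpy as np

b0, a1 = 1.0, 0.9
rng = np.random.default_rng(3)
e_in = rng.uniform(-0.5, 0.5, 200_000)  # white noise, variance 1/12

# Run the first-order recursion sample by sample.
y = np.empty_like(e_in)
acc = 0.0
for n, e in enumerate(e_in):
    acc = b0 * e + a1 * acc
    y[n] = acc

gain = float(np.var(y[1000:]) / np.var(e_in))  # skip start-up transient
theory = b0 ** 2 / (1 - a1 ** 2)               # noise gain from (9.13)
print(gain, theory)
```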

Using Parseval's theorem, it is possible to determine the variance of the filter's output noise from its frequency response. Let a filter be given, the frequency response of which is shown in Figure 9.2.


Then, according to (9.9), the variance of the filter output noise caused by quantizing the input signal will be equal to

(9.14).

The choice of the optimal bit width for quantizing the input signal is determined by the required accuracy of representing the information carried by the signal, by the noise already present in it, and by the processing procedure applied to the signal.

The noise present in the signal determines the upper bound on the number of quantization levels. Obviously, it makes no sense to use a large number of bits when the signal contains a lot of noise, since in that case it is the noise, not the signal, that will be quantized with great accuracy. It is sufficient to choose enough quantization levels that the contribution of the quantization noise is small compared with the noise contained in the signal.

On the other hand, the minimum allowable number of quantization levels must still provide the desired quality of the output signal. Deterioration of the input signal quality can be caused by imperfections in the signal preprocessing stage (noise and limited frequency response of the preamplifiers and analog filters).

Until now, it has been assumed that the coefficients of the filter difference equation are given with infinite precision. In the physical implementation of the filter, these coefficients are stored in electronic memory elements (storage cells), which have a limited capacity. This means that the filter coefficients are quantized as well as the input signal.

Quantization of the filter coefficients obeys the same laws as quantization of the input signal. As a result of quantizing the coefficients, the positions of the poles and zeros of the filter transfer function change to a greater or lesser extent, which in turn leads to a corresponding change in the frequency characteristics of the filter. Thus, quantization of the filter coefficients leads to the appearance of an error

δ(ω) = A(ω) − A_d(ω) (9.15),

where A(ω) is the frequency response of the filter with unquantized coefficients and A_d(ω) is the frequency response of the filter with quantized coefficients. The value δ(ω) must not exceed a permissible value, usually determined from the condition that the deviations of the real frequency response from the ideal one remain within acceptable limits.

Different filter structures have different sensitivity to changes in individual coefficients. Therefore, no universal method can be proposed for determining the required number of coefficient quantization bits for all types of filters. The required number of bits in the quantized filter coefficients can be determined by repeating the calculation for a successively increasing number of bits in the coefficient codes until the error (9.15) no longer exceeds its permissible value.
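The iterative procedure just described can be sketched as follows. The biquad coefficients, tolerance, and quantization scheme here are hypothetical choices made for illustration, not values from the lecture.

```python
import numpy as np

def freq_response(b, a, w):
    """H(e^{jw}) for numerator coefficients b and denominator coefficients a."""
    z = np.exp(-1j * w)
    num = sum(bk * z ** k for k, bk in enumerate(b))
    den = sum(ak * z ** k for k, ak in enumerate(a))
    return num / den

def quantize(c, bits):
    """Round coefficients to `bits` bits (one sign bit, bits-1 fractional)."""
    q = 2.0 ** -(bits - 1)
    return np.round(np.asarray(c) / q) * q

# Hypothetical band-pass biquad with poles near the unit circle.
b = [0.05, 0.0, -0.05]
a = [1.0, -1.6, 0.85]
w = np.linspace(0, np.pi, 512)
A = np.abs(freq_response(b, a, w))  # reference magnitude response

tol = 0.01          # assumed permissible response deviation
bits = 4
while True:
    Aq = np.abs(freq_response(quantize(b, bits), quantize(a, bits), w))
    if np.max(np.abs(A - Aq)) <= tol:
        break
    bits += 1
print(bits)         # smallest coefficient word length meeting the tolerance
```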

Other methods are possible and practically applied, in particular, methods based on a preliminary study of the sensitivity of the characteristics of a particular type of filter to changes in its coefficients.

As an example, consider a biquadratic block described by the transfer function

(9.16),

the structural diagram of which is shown in Figure 9.3.

If we denote the poles of the transfer function (9.16) by z_{1,2} = r·e^{±jθ}, then it is easy to see that

a1 = 2r·cos θ, a2 = −r² (9.17).

Then, for small changes of a1 and a2, the coordinates of the poles change by the amounts

Δr = −Δa2/(2r) (9.18),

and similarly

Δθ = (2 cos θ·Δr − Δa1)/(2r·sin θ) (9.19).

It can be seen that Δr remains well-behaved for values of r close to one, while Δθ changes sharply at values of θ close to zero.

The sensitivity of the frequency characteristics of the filters to changes in the values ​​of the coefficients strongly depends on the structure chosen for the implementation of the filter.

When implementing a digital filter algorithm, the operations of addition and multiplication by coefficients are performed. Fixed-point addition, when the bit width of the adder is no less than the bit width of the addends, does not lead to rounding errors in the representation of the sum.

The multiplication operation is associated with round-off errors. The product of two fixed-point numbers with b1 and b2 bits, respectively, can contain up to b1 + b2 bits. When successive multiplications are performed, the bit width of the products must be limited; otherwise the bit width of subsequent products would grow without bound. Therefore, the storage cells allocated for products usually have a capacity less than b1 + b2 bits, and the result of a multiplication is subject to rounding. Because of product rounding, the filter algorithm is not implemented exactly, and the output signal is computed with an error.

The model of a multiplier with a finite number of bits is represented as a series connection of an ideal multiplier (with an unlimited number of bits) and an adder, to whose input, along with the exact value of the product, quantization noise is applied. At the output of the adder, the quantized value of the product with b_mul bits is obtained (Figure 9.4).

The rounding error of a single product can be estimated by its upper bound

|e_mul(n)| ≤ Q_mul/2 = 2^(−b_mul)/2 (9.20),

where Q_mul is the quantization step of the product. This error can be considered a discrete stationary random process with a uniform probability density, zero mean, and variance equal to

σ_mul² = Q_mul²/12 = 2^(−2·b_mul)/12 (9.21).
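The uniform-error model for product rounding is easy to check numerically. In the sketch below the coefficient, bit width, and input distribution are arbitrary illustrative choices.

```python
import numpy as np

b_mul = 10
Q = 2.0 ** -b_mul          # product quantization step
a = 0.731                  # arbitrary filter coefficient
rng = np.random.default_rng(4)

x = rng.uniform(-1, 1, 100_000)
exact = a * x                          # ideal (unlimited-width) multiplier
rounded = np.round(exact / Q) * Q      # b_mul-bit quantized product
e = rounded - exact                    # product rounding error

var_e = float(np.var(e))
theory = Q ** 2 / 12                   # variance predicted by (9.21)
print(var_e, theory)
```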

By adopting such a linear model for each multiplier node in the filter block diagram, the error in the filter output signal can be computed as a superposition of the errors due to all sources of round-off noise. For this purpose it is only necessary to determine the impulse responses g_i(n) of the parts of the filter structure from each i-th noise source (i.e., the output of the i-th multiplier) to the filter output, and to calculate the component of the filter output noise due to the i-th noise source as

e_i(n) = Σ_k g_i(k)·e_mul,i(n − k) (9.22).

Then the rounding noise at the output due to all L noise sources can be calculated as

e_Σ(n) = Σ_{i=1}^{L} e_i(n) (9.23).

Thus, the filter output noise due to the i-th rounding source does not exceed

|e_i(n)| ≤ (Q_mul/2) Σ_n |g_i(n)| (9.24).

Then the maximum value of the output noise caused by all L rounding sources (provided that the bit width of all multipliers is the same) equals

|e_Σ(n)| ≤ (Q_mul/2) Σ_{i=1}^{L} Σ_n |g_i(n)| (9.25).

Based on (9.7), the variance of the resulting round-off noise from all sources can be estimated as

σ_Σ² = σ_mul² Σ_{i=1}^{L} Σ_n g_i²(n) (9.26).

The output noise level of the filter, due to the quantization of the products, strongly depends on the features of the structure chosen to implement the filter. This is because the impulse response of the filter section from the output of a particular multiplier to the output of the filter depends on the structure used. When choosing a filter structure, it is necessary to take into account the effect of product quantization errors along with coefficient quantization errors.

All quantization noise sources of products make a different contribution to the resulting output noise.

As an example, consider the estimate of the output quantization noise of products in a biquadratic block having an impulse response h (n ). The noise model of the structure under consideration is shown in Figure 9.5.

It can be seen from the model that the filter structure contains five sources of product quantization noise. Sources e_mul,4 and e_mul,5 pass through the same circuit as the input signal, which means that their impulse responses g_4(n) and g_5(n) coincide with the overall impulse response h(n) of the filter. Sources e_mul,1, e_mul,2, and e_mul,3 add their errors directly at the filter output and therefore cannot be amplified by the filter; their impulse responses are equal to δ(n). In accordance with (9.7) and (9.26), the contributions of the individual noise sources can be estimated as

σ_i² = σ_mul², i = 1, 2, 3;  σ_i² = σ_mul² Σ_n h²(n), i = 4, 5 (9.27).

The variance of the total quantization noise at the filter output, in accordance with (9.26), is then

σ_Σ² = σ_mul² (3 + 2 Σ_n h²(n)) (9.28).

The total quantization error caused by quantizing the input signal and quantizing the products is determined by the sum of the estimates of the corresponding errors.

When summing fixed-point numbers, a round-off error does not occur (provided the adder capacity is no less than the word length of the addends). However, when summing numbers of fixed bit width, overflow can occur: the result may not fit into the number of bits allotted to the addends. When an overflow occurs, in order not to violate the filter algorithm, the sum should be limited, taking its sign into account, at the maximum value that fits into the given number of result bits. In a software implementation of the filter this is done by appropriate branching of the algorithm; in a hardware implementation it requires adding special devices to the filter circuit that detect overflow and limit the sum with regard to its sign. However, even these measures do not solve all the problems associated with overflow, since in the presence of overflow the filter becomes a substantially nonlinear device, with all the ensuing consequences. Therefore, for normal filter operation, special measures must be taken to prevent overflow situations from arising at all.

One means of preventing overflow is scaling, which reduces to shifting the binary codes of the addends to the right (equivalent to division) at all inputs of the adders. If the original addends are normalized to the level 1.0, then when summing two numbers, to eliminate the possibility of overflow, each addend must be shifted one bit to the right, which is equivalent to dividing it by 2. After that each addend will not exceed 0.5 in absolute value, and hence their sum will not exceed 1.0. If the adder has more than two inputs, the addends must be shifted by more bits. This method is called automatic scaling.
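The overflow-and-scaling mechanism can be illustrated with plain integers; the 8-bit word width and the specific addends below are assumptions made only for the example.

```python
def to_int8(v):
    """Wrap an integer into the range of an 8-bit two's-complement word."""
    return (v + 128) % 256 - 128

a, b = 100, 90          # both fit in 8 bits, but their sum (190) does not
wrapped = to_int8(a + b)
print(wrapped)          # overflow wraps the sum around to a negative value

# Automatic scaling: shift each addend one bit to the right (divide by 2)
# before adding, so the result represents (a + b)/2 and cannot overflow.
scaled_sum = (a >> 1) + (b >> 1)
print(to_int8(scaled_sum))
```

Note that for odd addends the right shift discards the least significant bit, which is exactly the scaling error discussed next.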

As a result of such scaling, a scaling error arises because the least significant bit (or several bits, for shifts of more than one bit) of the shifted addends is lost, increasing the error of their representation. Thus, when the addends are shifted by one bit, the maximum value of the scaling error is

|e_sc| ≤ 2^(−b) (9.29),

where b is the number of bits in the representation of the addend. If the shifted addend is a signed number in sign-magnitude code, then the possible values of this error are 2^(−b), −2^(−b), and 0. If we take their probabilities to be

P(2^(−b)) = P(−2^(−b)) = 0.25, P(0) = 0.5 (9.30),

then this error can be represented as random noise with mean value 0 and variance

σ_sc² = 2^(−2b)/2 (9.31).

If the addend is a two's-complement number, then the scaling error takes the values −2^(−b) or 0 with equal probability 0.5. In this case the scaling noise has mean value −2^(−b)/2 and variance

σ_sc² = 2^(−2b)/4 (9.32).

In this way, scaling errors can be accounted for in the filter model in a similar way to quantization errors.

Another way to prevent overflow is to scale the input signals of the filter or of its constituent parts. If the impulse response of the filter (or of some part of it) is h_i(n), then the output signal y_i(n) of the filter (or of that part) is bounded by

|y_i(n)| ≤ X_max Σ_n |h_i(n)| (9.33),

where X_max is the upper bound of the modulus of the filter input signal. If |x(n)| ≤ 1, then a necessary condition for the absence of overflow is

Σ_n |h_i(n)| ≤ 1 (9.34).

If the filter coefficients are given (i.e., the h_i(n) are fixed), then, so that no overflow occurs, i.e., so that the output signal of any adder does not exceed one, the magnitudes of the input signal and of the multiplier output signals must be limited appropriately. For this purpose scaling is introduced such that the signals satisfy

(9.35),

where γ_i are the scaling factors.

Scaling multipliers are included at the filter inputs or at the multiplier outputs. If |x(n)| ≤ 1, then a sufficient condition for the absence of overflow is, according to (9.35), the choice of the scaling coefficients from the condition

γ_i ≤ 1 / Σ_n |h_i(n)| (9.36).

The coefficients γ_i are chosen, as in the case of automatic scaling, usually equal to powers of two, so that the scaling multiplication reduces to shifts. In this case, as with automatic scaling, scaling noise arises, which lowers the signal-to-noise ratio at the filter output.

A significant reduction of the amplitudes of the signals passing through the filter lowers the signal-to-noise ratio at the filter output. Calculating the scaling factors by formula (9.36) often gives overly conservative results and hence reduces filter efficiency. In addition, for complex filter structures, computing the sum of an infinite number of impulse response samples can be difficult. Therefore the scale factors are often calculated by a different technique, based on analysis of the input signal spectrum and the frequency properties of the filter.

If the filter structure contains m adders, the output signal v_i(n) of the i-th adder can be represented as

v_i(n) = Σ_k h_i(k)·x(n − k) (9.37),

where x(n) is the filter input signal and h_i(n) is the impulse response of the part of the filter from the input to the output of the i-th adder.

The z-transform of the signal v_i(n) can be written as

V_i(z) = H_i(z)·X(z) (9.38),

where H_i(z) is the transfer function of the part of the filter from the input to the output of the i-th adder.

The spectrum of the signal v_i(n) (for a stable filter) can be obtained by the substitution z = e^{jωT} in expression (9.38):

V_i(e^{jωT}) = H_i(e^{jωT})·X(e^{jωT}) (9.40).

Then the output signal v_i(n) of the adder itself can be defined as the inverse Fourier transform of V_i(e^{jωT}):

v_i(n) = (1/ω_s) ∫_0^{ω_s} V_i(e^{jωT}) e^{jωnT} dω (9.41).

If the modulus of the spectrum of the input signal x(n) is assumed to be bounded by a value C, then the maximum value of the modulus of the adder output signal can be estimated as

|v_i(n)| ≤ (C/ω_s) ∫_0^{ω_s} |H_i(e^{jωT})| dω (9.42).

If the filter input x(n) is prescaled by a factor λ_i, the last expression takes the form

|v_i(n)| ≤ (λ_i·C/ω_s) ∫_0^{ω_s} |H_i(e^{jωT})| dω (9.43).

To avoid overflow at the adder output, i.e., to satisfy the condition |v_i(n)| ≤ 1, it is sufficient to choose the normalizing factor λ_i such that

λ_i ≤ ω_s / (C ∫_0^{ω_s} |H_i(e^{jωT})| dω) (9.44).

If instead the modulus of the frequency response H_i(e^{jωT}) is assumed to be bounded by some value D, then the maximum modulus of the adder output signal can be estimated in another way, namely

|v_i(n)| ≤ (D/ω_s) ∫_0^{ω_s} |X(e^{jωT})| dω (9.45).

In this case the normalizing factor λ_i that eliminates overflow at the adder output can be chosen such that

λ_i ≤ ω_s / (D ∫_0^{ω_s} |X(e^{jωT})| dω) (9.46).

Finally, applying the Cauchy-Bunyakovsky (Cauchy-Schwarz) inequality to expression (9.41), one can obtain the inequality

|v_i(n)| ≤ sqrt((1/ω_s) ∫_0^{ω_s} |H_i(e^{jωT})|² dω) · sqrt((1/ω_s) ∫_0^{ω_s} |X(e^{jωT})|² dω) (9.47).

If we assume that the energy of the input signal spectrum (the second radicand in inequality (9.47)) is bounded by some value E, then the normalizing factor λ_i can be chosen from the expression

λ_i ≤ 1 / sqrt(E·(1/ω_s) ∫_0^{ω_s} |H_i(e^{jωT})|² dω) (9.48).

All three options for choosing the scaling factor are based on the availability of reliable information about the spectral characteristics of the filter input signal. If this information is not absolutely reliable, then the probability of an overflow at the output of the adder is not zero.

To exclude overflow at the outputs of all adders in the filter block diagram, the coefficients λ_i must be evaluated for each of the adders and the final value of the normalizing factor at the filter input chosen as

λ = min_i λ_i (9.49).

As in the case of automatic scaling, the coefficient λ is usually chosen equal to a power of 2, which turns the scaling multiplication into a shift of the input signal code to the right by the corresponding number of bits.

The scaling multiplier, like any other multiplier in the filter structure, is a source of quantization error noise, the effect of which on the output signal can be taken into account similar to the noise of other multipliers.

Obviously, when some adder in the filter block diagram sums more than two terms, overflow can occur in intermediate partial sums even when the final sum does not overflow. This was not taken into account in the preceding reasoning. However, if the input and intermediate digital signals of the filter are represented in two's-complement code, all the normalization methods above remain valid, since in two's-complement summation the final result remains correct (provided it itself does not overflow) even if overflow occurs in the partial sums.

The previous analysis was based on the assumption that the noise signals are statistically independent from sample to sample and from source to source. This holds if the difference between two adjacent samples of the input signal is much larger than the quantization step. Clearly, in many cases (in particular, when the input signal is constant or zero) this assumption is not valid, and the quantization errors can be strongly correlated. This can disrupt the filter's operation: the filter becomes unstable, and steady periodic oscillations appear at its output. This phenomenon is called the dead-zone effect, and the periodic output oscillations are called limit-cycle oscillations. A general analysis of this nonlinear effect is rather complex, so we will study the phenomenon for the simplest digital filters.

Consider a first-order filter described by the difference equation

y(n) = b0·x(n) + a1·y(n − 1) (9.50).

The transfer function of such a filter has the form

H(z) = b0 / (1 − a1·z^(−1)) (9.51).

The block diagram of the filter is shown in Figure 9.6.

The impulse response of such a filter is

h(n) = b0·a1^n (9.52).

If the coefficient a1 is equal to 1 or −1, the filter becomes unstable and has the impulse response

h(n) = b0·(±1)^n (9.53).

Table 9.1 shows the exact values of the impulse response samples (9.52) for b0 = 10, a1 = 0.9.

Table 9.1

n      h(n)           h_Q(n)
0      10             10
1      9              9
2      8.1            8
3      7.29           7
4      6.561          6
5      5.9049         5
6      5.31441        5
...    ...            ...
100    2.65614·10⁻⁴   5

Now suppose the filter contains a fixed-point multiplier in which each product a1·y(n − 1) is rounded to the nearest integer according to the condition

(9.54).

The third column of Table 9.1 shows the impulse response samples h_Q(n) of such a filter. As can be seen, from n = 5 onward the response of the quantized filter becomes constant: quantization has made the filter unstable.

If we assume that the difference equation (9.50) remains valid for the unstable filter, then within the dead zone the effective value of the coefficient a1 equals ±1. If the output exceeds k in modulus, the filter response decays in the absence of an input signal until the output signal reaches the zone [−k, k], called the dead zone. Once this happens, the filter mode becomes unstable. Any cause that makes the output exceed k in modulus restores stability, but in the absence of an input signal the response again decays to a value corresponding to the dead zone.

Thus, the filter will operate in a limit-cycle mode with output amplitude equal to k. Since the effective value of a1 equals 1 for a1 > 0 and −1 for a1 < 0, the frequency of such a limit cycle is 0 or ω_s/2, respectively.

(9.60).

This expression can be used to select the minimum number of bits of the arithmetic unit from the condition of limiting the amplitude of the limit-cycle oscillations to a given level.
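The dead-zone behaviour of the first-order example is easy to reproduce in a few lines. The rounding rule below (round to nearest integer, ties away from zero) is an assumption chosen to be consistent with Table 9.1.

```python
import math

def quantized_impulse_response(b0, a1, n_max):
    """Impulse response of y(n) = b0*x(n) + a1*y(n-1) when the product
    a1*y(n-1) is rounded to the nearest integer (ties away from zero)."""
    def rnd(v):
        s = 1 if v >= 0 else -1
        return s * math.floor(abs(v) + 0.5)
    y, out = 0, []
    for n in range(n_max):
        x = b0 if n == 0 else 0   # unit impulse scaled by b0
        y = x + rnd(a1 * y)
        out.append(y)
    return out

h_q = quantized_impulse_response(10, 0.9, 30)
print(h_q[:8])   # the response decays and then sticks in the dead zone
```

The exact response 10·0.9^n decays to zero, while the quantized response gets stuck at 5, the boundary of the dead zone for a1 = 0.9.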

Let us analyze the dead-zone effect for a second-order filter described by the corresponding difference equation. For limit-cycle oscillations the effective value of the coefficient a2 must be equal to 1. In this case

(9.66).

Therefore, as before, the condition for unstable filter operation can be defined as

(9.67).

If k is an integer, then values of a2 lying in the ranges

(9.68)

will lead to the appearance of the dead zones [−1, 1], [−2, 2], ..., [−k, k], respectively.

If the filter uses a binary multiplier with a result quantization step equal to q, then the condition for the appearance of limit-cycle oscillations has the form

Mayorov V.P.
Semin M.S.

The purpose of this article is to show how images look at different signal-to-noise ratios. This ratio is crucial for evaluating the image quality and camera sensitivity.

Quantum noise as it is

Below are examples to illustrate how images look in different lighting conditions. The brightness of an object is expressed in terms of the number of electrons that are generated in a CCD cell as a result of exposure to light. The image quality is assessed by the signal-to-noise ratio (S / N) measured on the light portion of the image.

A VS-CTT-085-60 system based on a SONY ICX085AL CCD matrix was used as the television input system. In the calculations, a read noise value of 25 electrons was assumed (read noise is discussed below).

The original image is the central fragment of a television test chart. The signal-to-noise ratio is about 80. The size of this image is 256×256 pixels.

Fig 1. Original image

The left-hand images take into account the matrix read noise (25 electrons); the right-hand images correspond to the same illumination level but with no read noise at all. The right column is thus an ideal case that can be approached ever more closely but never surpassed, because there everything ultimately runs into "quantum noise".

Signal            S/N
25 electrons      1
52 electrons      2
108 electrons     4
234 electrons     8
547 electrons     16
1400 electrons    32

(For each signal level the article shows two images: one including the 25-electron read noise and one without it.)

Let's try to explain all this.

The noise in the image obtained from a CCD matrix can, in simplified form, be reduced to two main components (in reality there are more, but the others can be neglected here):

  • matrix reading noise;
  • quantum noise of photons.

Matrix read noise is constant and is determined only by the CCD circuitry. Unfortunately, SONY, on whose CCDs we carried out all our experiments, does not report this parameter. We simply measured it on our specific VS-CTT-085-60 camera, and it turned out to be 20-25 electrons. We have seen similar figures on the websites of foreign manufacturers of cameras based on this matrix.

Quantum noise stems from the fundamental properties of nature, and of light in particular. Light quanta are randomly distributed in space and time, so the number of electrons accumulated in a cell is determined only to within the square root of that number (Poisson statistics).

At a low brightness level of the object, the largest contribution to the noise is made by the matrix reading noise. This noise determines the lowest possible signal level that can be seen.

In an image formed by 400-625 electrons per cell, the quantum noise becomes comparable to the read noise. When the signal exceeds this level, the dominant contribution to the total noise comes from the "quantum noise of photons". The two images in the last row are nearly identical, yet the signal there is only 7% (!) of the full-well capacity of an ICX085 pixel (20,000 e⁻).
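The noise model behind this comparison can be written down directly; the 25-electron read noise is the value assumed above, and the quadrature combination of read noise and shot noise is the standard simplification.

```python
import math

read_noise = 25.0  # electrons; the read-noise value assumed in the article

def total_noise(signal_e):
    """Quadrature sum of read noise and photon shot noise sqrt(N)."""
    return math.sqrt(read_noise ** 2 + signal_e)

def snr(signal_e):
    """Signal-to-noise ratio for a mean signal of signal_e electrons."""
    return signal_e / total_noise(signal_e)

# Shot noise alone equals the 25 e- read noise at 625 e- of signal,
# which is why the two sources are comparable in the 400-625 e- range.
print(round(snr(25), 2), round(snr(1400), 1))
```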

Conclusion

If a seller tells you that his super-duper camera has a sensitivity of 0.0xxxx1 lux, do not forget to ask at what signal-to-noise ratio this was measured.

Look at the images and draw your own conclusions! We repeat once more: miracles in increasing the sensitivity of television cameras should not be expected.

If you obtained a "noisy" image at illumination close to matrix saturation, there is no sense in looking for the cause of the noise in the camera.

A key aspect of engineering calculation and design is the need for analytical characteristics of the quality of system operation. Only with such characteristics can a system be objectively assessed and its cost effectively compared with the cost of alternative designs. One of the characteristics required by telephony engineers is the quality of the speech delivered to the listener. Speech-quality measurements are complicated by the subjective nature of speech perception by a typical listener. One feature of the subjective perception of noise or distortion in a speech signal is associated with the frequency composition, or spectrum, of the interfering effects in combination with their power level. These effects of noise as a function of frequency were considered in Chapter 1, where the concepts of C-message weighting and psophometric weighting were introduced.

Successive quantization errors in a PCM encoder are generally assumed to be randomly distributed and uncorrelated with each other. Thus, the cumulative effect of quantization errors in PCM systems can be treated as additive noise whose subjective effect is similar to that of band-limited white noise. Fig. 3.9 shows the dependence of the quantization error on signal amplitude for an encoder with uniform quantization steps. Note that if the signal changes in amplitude by several quantization steps from sample to sample, the quantization errors are independent. If the signal is sampled at a rate much higher than necessary, successive samples will often fall on the same step, which leads to a loss of independence of the quantization errors.

Quantization errors, or quantization noise, arising when an analog signal is converted to digital form, are usually expressed as the average noise power relative to the average signal power. Accordingly, the signal-to-quantization-noise ratio can be defined as

SNR = E{x²(t)} / E{[y(t) − x(t)]²}, (3.1)

where E{·} denotes the mathematical expectation (average value), x(t) is the analog input signal, and y(t) is the decoded output signal.

Three observations should be made in determining the average quantization noise power.

    The error y(t) − x(t) is limited in amplitude to q/2, where q is the quantization step. (Decoded output samples lie exactly in the middle of a quantization step.)

    A sample value is assumed to be equally likely to fall anywhere within a quantization step (a uniform probability density of 1/q is assumed).

    Signal amplitudes are assumed to be confined to the operating range of the encoder. If a sample value exceeds the limit of the highest quantization step, overload distortion occurs.

If, for convenience, we assume a load resistance of 1 Ohm, then the average quantization noise power (derived in Appendix A) is

quantization noise power = q²/12. (3.2)

If all quantization steps are of equal size (uniform quantization) and the quantization noise is independent of the sample values, the quantization signal-to-noise ratio (in decibels) is defined as

SNR = 10 lg[v²/(q²/12)] = 10.8 + 20 lg(v/q), (3.3)

where v is the root-mean-square amplitude of the input signal. In particular, for a sinusoidal input signal the quantization signal-to-noise ratio (in decibels) with uniform quantization is

SNR = 10 lg[(A²/2)/(q²/12)] = 7.78 + 20 lg(A/q), (3.4)

where A is the amplitude of the sinusoid.
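Formula (3.4) is easy to confirm by direct simulation; the amplitude, step size, and tone frequency below are arbitrary illustrative choices.

```python
import numpy as np

A_amp, q = 1.0, 1 / 64                          # tone amplitude and quantization step
n = np.arange(200_000)
x = A_amp * np.sin(2 * np.pi * 0.123456 * n)    # test tone
e = np.round(x / q) * q - x                     # rounding error
snr_meas = 10 * np.log10(np.mean(x ** 2) / np.mean(e ** 2))
snr_theory = 7.78 + 20 * np.log10(A_amp / q)    # prediction of (3.4)
print(round(float(snr_meas), 2), round(snr_theory, 2))
```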

Example 3.1. A sinusoidal signal with an amplitude of 1 V is to be digitized so that a signal-to-quantization-noise ratio of at least 30 dB is obtained. How many identical quantization steps are required, and how many bits are needed to encode each sample?

Solution. Using formula (3.4), we find the maximum quantization step size: q = 10^(−(30 − 7.78)/20) = 0.078 V.

Thus, 1/0.078 ≈ 12.9, so 13 quantization steps are required for each signal polarity (26 steps in total). The number of bits required to encode each sample is n = log₂ 26 ≈ 4.7, i.e. 5 bits per sample after rounding up.
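The arithmetic of Example 3.1 can be laid out as a short script (a sketch following formula (3.4); the variable names are ours):

```python
import math

target_snr = 30.0   # required SNR, dB
A = 1.0             # sinusoid amplitude, V

# Maximum step size that still meets the requirement, from (3.4):
# 30 = 7.78 + 20*lg(A/q)  =>  q = A * 10**(-(30 - 7.78) / 20)
q = A * 10 ** (-(target_snr - 7.78) / 20)

steps_per_polarity = math.ceil(A / q)        # 13
total_steps = 2 * steps_per_polarity         # 26
bits = math.ceil(math.log2(total_steps))     # 5 bits per sample
print(q, total_steps, bits)
```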

In quantization noise power measurements, the spectral components are often weighted in the same way as noise in analog channels. Unfortunately, weighted-noise measurements do not always reflect the true perceived quality of speech passed through an encoder (decoder). If the spectral distribution of the quantization noise roughly follows that of the speech signal, the noise is much less noticeable than noise uncorrelated with speech. On the other hand, if the quantization process creates energy at tonal frequencies other than those present in particular sounds, the distortion becomes more noticeable.

High-quality PCM encoders generate quantization noise that is evenly distributed over the voice-frequency band and does not depend on the signal being encoded. In this case, the signal-to-quantization-noise ratio (3.4) is a good measure of PCM conversion quality. For some types of encoders discussed below (especially vocoders), knowledge of the quantization noise power is not very useful; other characteristics of the quality of speech passed through an encoder, which better reflect the listener's perception, are described in the literature.

Almost all digital transmission systems sample signals with a constant period T_d, and the deviations ΔT_i from this period are random. These deviations change the shape of the received signal, which is subjectively perceived as a characteristic interference called sampling noise.

The values of ΔT_i are determined mainly by low-frequency phase fluctuations of the pulses caused by inaccuracies in the operation of the line regenerators of the transmission path.

The immunity of the signal from sampling noise is determined by the allowable relative offsets of the sampling instants, which include a component caused by the instability of the master oscillators and a component caused by phase fluctuations of the pulses.

For the given sampling period, the required immunity from sampling noise is 63 dB.

Conclusion: to ensure acceptable immunity from sampling noise, the sampling period must not deviate by more than 20 ns.

Quantization noise

Uniform Quantization Noise

Quantization of a signal by level is the basic operation of analog-to-digital conversion and consists of rounding the signal's instantaneous values to the nearest allowed level. With uniform quantization, the distance between quantization levels is the same. Quantization introduces errors whose magnitude is random, uniformly distributed, and never exceeds half the quantization step. The quantized signal is the sum of the original signal and the error signal, which is perceived as fluctuation noise.

The quantization-noise immunity for the weakest signals with uniform quantization depends on:

the psophometric coefficient, equal to 0.75 for the voice-frequency channel;

the dynamic range of the signal, in dB;

m, the number of bits in the binary code.

Table 5.2. Initial data

Signal levels:

Signal dynamic range:

Required number of digits:

m is the bit depth of the code for uniform quantization.

The number of steps for uniform quantization will be:


Conclusion: to encode with a uniform code at the given noise immunity, a code of the computed bit depth is required.

Non-uniform quantization noise

Real PCM systems use non-uniform quantization: the size of the quantization steps is decreased for small instantaneous values of the signal and increased for large values, which reduces the slope of the quantization characteristic at large amplitudes.

Non-uniform encoding uses 8-bit codes, i.e. the number of quantization levels is 256.

The dynamic range is compressed using the A-law or μ-law compression characteristics. In our case, the A-law characteristic is used (A = 87.6), described by the following expression:

y = A|x| / (1 + ln A), for 0 ≤ |x| ≤ 1/A;
y = (1 + ln(A|x|)) / (1 + ln A), for 1/A ≤ |x| ≤ 1.

Fig. 5.2.2. Compression characteristic
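A direct implementation of the A-law compression characteristic can be sketched as follows (a minimal illustration; A = 87.6 is the standard value, and the function assumes samples normalized to [-1, 1]):

```python
import math

A = 87.6  # standard A-law compression parameter

def a_law_compress(x: float) -> float:
    """A-law compression of a sample x normalized to [-1, 1]."""
    sign = 1.0 if x >= 0 else -1.0
    ax = abs(x)
    if ax < 1.0 / A:
        y = A * ax / (1.0 + math.log(A))                    # linear section near zero
    else:
        y = (1.0 + math.log(A * ax)) / (1.0 + math.log(A))  # logarithmic section
    return sign * y

# Small samples are boosted relative to large ones:
print(a_law_compress(0.01), a_law_compress(1.0))  # ~0.16 and 1.0
```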

Digital transmission systems use segmented non-uniform quantization characteristics because they are fairly easy to implement digitally. The characteristic is symmetric about 0; its positive and negative branches each consist of 8 segments, and each segment is divided into 16 identical steps (within each segment, quantization is uniform).

The segments approximate the smooth curve of the A-law compression characteristic. In the zeroth and first segments the step is minimal, and in each subsequent segment the step size doubles relative to the previous one.
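The step-doubling rule can be illustrated numerically. The sketch assumes the standard segmented A-law layout consistent with the description above: 16 steps per segment, with segments 0 and 1 sharing the minimal step of 1/2048 of full scale:

```python
# Step size in each of the 8 segments, normalized to full scale = 1.
# Segments 0 and 1 share the minimal step; from segment 2 onward each
# segment's step doubles relative to the previous one.
min_step = 1 / 2048
steps = [min_step if i <= 1 else min_step * 2 ** (i - 1) for i in range(8)]

for i, s in enumerate(steps):
    print(f"segment {i}: step = 1/{round(1 / s)}")

# 16 steps per segment over 8 segments exactly cover full scale:
print(sum(16 * s for s in steps))  # 1.0
```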

The expression for the quantization-noise immunity in the first two segments is:

For segments 2-7:

where i is the segment number.


The beginning of the graph, an inclined straight line, corresponds to the zeroth and first segments. This is the zone of uniform quantization, so the noise immunity grows in proportion to the signal level. On passing to the second segment, the immunity drops abruptly by 6 dB. When the upper limit of the 7th segment is reached, the overload zone begins.

Instrumental noise

In the process of converting an analog signal to digital form, noise appears in the terminal equipment that is determined by deviations of the converter's characteristics from the ideal. These deviations are caused by the limited speed and finite accuracy of individual units, changes in converter parameters with temperature, aging of components, etc. The level of instrumental noise increases with increasing transmission rate and code bit depth.

The relationship between quantization noise and instrumental noise involves:

the RMS value of the reduced instrumental conversion error;

the bit depth of the code.

For non-uniform quantization:

For uniform quantization:

Conclusion: with non-uniform quantization, the instrumental noise power is much lower than with uniform quantization, so non-uniform quantization is preferable.

Idle channel noise

In the absence of input signals, weak interference acts at the encoder input: intrinsic noise, crosstalk, residues of unbalanced pulses, etc. If the encoder characteristic is shifted so that the zero input level coincides with a decision level of the encoder, then interference of arbitrarily small amplitude changes the code combination. In this case, the decoder output is a rectangular signal with a peak-to-peak swing equal to the minimum quantization step and with random zero-crossing times. The resulting noise is called idle channel noise. Despite its small magnitude, this noise is not "masked" by a signal, which makes it noticeable to subscribers.
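The mechanism can be demonstrated with a toy model of only the lowest quantization levels (a sketch; the step value and the interference level are illustrative choices):

```python
import random

delta = 1.0 / 128  # minimal quantization step (illustrative value)

def decode_lowest_levels(x: float) -> float:
    """Toy model of the lowest levels of an encoder whose decision
    level sits exactly at zero: any input maps to +/- delta/2."""
    return delta / 2 if x >= 0 else -delta / 2

# Interference far smaller than one step still toggles the decoded
# output between +delta/2 and -delta/2 -> idle channel noise.
noise = (random.gauss(0.0, delta / 100) for _ in range(1000))
outputs = {decode_lowest_levels(v) for v in noise}
print(outputs)  # both +delta/2 and -delta/2 appear
```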

The noise immunity of an idle channel must be at least:

where a is the crest factor of the signal, and the other quantity is the minimum quantization step for uniform or non-uniform quantization.

Uniform quantization:

Non-uniform quantization:

Conclusion: with non-uniform quantization, the idle-channel noise immunity is 12.1 dB higher than with uniform quantization.
