

We now turn our attention away from the time and frequency domains and toward the probability domain, where statistical methods of analysis are employed. As indicated in Section 3.1, such methods are required because of the uncertainty resulting from the introduction of noise and other factors during transmission.

3.3.1 The Cumulative Distribution Function and the Probability Density Function

A random variable X[1],[2] is a function that associates a unique numerical value X(λi) with every outcome λi of an event that produces random results. The value of a random variable will vary from event to event, and depending on the nature of the event will be either discrete or continuous. An example of a discrete random variable Xd is the number of heads that occur when a coin is tossed four times. As Xd can only have the values 0, 1, 2, 3, and 4, it is discrete. An example of a continuous random variable Xc is the distance of a shooter's bullet hole from the bull's eye. As this distance can take any value, Xc is continuous.
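The discrete case can be simulated directly. The sketch below (ours, not from the text; the function name is a hypothetical helper) runs the four-toss coin experiment many times and confirms that Xd only ever takes the five values 0 through 4:

```python
import random

def heads_in_four_tosses(rng: random.Random) -> int:
    """One trial: toss a fair coin four times and return the number of
    heads, i.e., one value of the discrete random variable Xd."""
    return sum(rng.random() < 0.5 for _ in range(4))

rng = random.Random(1)
observed = {heads_in_four_tosses(rng) for _ in range(10_000)}

# Xd is discrete: over many trials it only ever takes the values 0..4.
print(sorted(observed))
```

A continuous random variable such as Xc, by contrast, would be modeled by a real-valued draw (e.g., `rng.gauss(...)`), whose possible values are not restricted to a finite set.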

Two important functions of a random variable are the cumulative distribution function (CDF) and the probability density function (PDF).

The cumulative distribution function, F(x), of a random variable X is given by

Equation 3.22

F(x) = P[X(λ) ≤ x]

where P[X(λ) ≤ x] is the probability that the value X(λ) taken by the random variable X is less than or equal to the quantity x.

The cumulative distribution function F(x) has the following properties:

  1. 0 ≤ F(x) ≤ 1

  2. F(x1) ≤ F(x2) if x1 ≤ x2

  3. F(–∞) = 0

  4. F(+∞) = 1

The probability density function f(x) of a random variable X is the derivative of F(x) and thus is given by

Equation 3.23

f(x) = dF(x)/dx

The probability density function f(x) has the following properties:

  1. f(x) ≥ 0 for all values of x

  2. ∫_{–∞}^{∞} f(x) dx = 1

Further, from Eqs. (3.22) and (3.23), we have

Equation 3.24

F(x) = ∫_{–∞}^{x} f(z) dz

The function within the integral is not shown as a function of x because, per Eq. (3.22), x is defined here as a fixed quantity (the upper limit of integration). It has instead been arbitrarily shown as a function of z, where z has the same dimension as x and f(z) is the same PDF as f(x). Some texts, however, show it equivalently as a function of x, with the understanding that x is then used in a generalized sense.

The following example will help in clarifying the concepts behind the PDF, f(x), and the CDF, F(x). In Fig. 3.4(a) a four-level pulse amplitude modulated signal is shown. The amplitude of each pulse is random and equally likely to occupy any of the four levels. Thus, if a random variable X is defined as the signal level v, and P(v = x) is the probability that v = x, then

Equation 3.25

P(v = –3) = P(v = –1) = P(v = +1) = P(v = +3) = 0.25

Figure 3.4 A four-level PAM signal and its associated CDF and PDF.

With this probability information we can determine the associated CDF, F4L(v). For example, for v = –1

Equation 3.26

F4L(–1) = P(v ≤ –1) = P(v = –3) + P(v = –1) = 0.25 + 0.25 = 0.5

In a similar fashion, F4L(v) for other values of v may be determined. A plot of F4L(v) versus v is shown in Fig. 3.4(b).
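The construction of F4L(v) from the level probabilities can be sketched in a few lines (a Python sketch, ours; the helper name cdf_4l is hypothetical):

```python
# Level probabilities of the four-level PAM signal (Eq. 3.25):
# each level is equally likely, with probability 0.25.
levels = [-3, -1, 1, 3]
prob = {v: 0.25 for v in levels}

def cdf_4l(x: float) -> float:
    """F4L(x) = P(v <= x): sum the probabilities of all levels at or below x."""
    return sum(prob[v] for v in levels if v <= x)

print(cdf_4l(-1))   # 0.5, as in Eq. (3.26)
```

Evaluating cdf_4l over a grid of v reproduces the staircase of Fig. 3.4(b): 0 below –3, then steps of 0.25 at each level, reaching 1 at v = +3.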

The PDF f4L(v) corresponding to F4L(v) can be found by differentiating F4L(v) with respect to v. The derivative of a step of amplitude V is an impulse of weight V. Thus, since the steps of F4L(v) are each of height 0.25,

Equation 3.27

f4L(v) = 0.25[δ(v + 3) + δ(v + 1) + δ(v – 1) + δ(v – 3)]

A plot of f4L(v) versus v is shown in Fig. 3.4(c).

3.3.2 The Average Value, the Mean Squared Value, and the Variance of a Random Variable

The average value or mean, m, of a random variable X, also called the expectation of X, is denoted by X̄ or E(X). For a discrete random variable Xd, where n is the total number of possible outcomes of values x1, x2, . . . , xn, and where the probabilities of the outcomes are P(x1), P(x2), . . . , P(xn), it can be shown that

Equation 3.28

m = E(X) = x1P(x1) + x2P(x2) + . . . + xnP(xn)

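As a quick check of Eq. (3.28), the mean of the four-level PAM signal of Fig. 3.4, whose levels ±1 and ±3 each occur with probability 0.25, can be computed directly (a Python sketch, ours):

```python
# Eq. (3.28) applied to the four-level PAM signal of Fig. 3.4:
# m = x1*P(x1) + x2*P(x2) + ... + xn*P(xn)
xs = [-3.0, -1.0, 1.0, 3.0]
probs = [0.25, 0.25, 0.25, 0.25]

m = sum(x * px for x, px in zip(xs, probs))
print(m)   # 0.0: the levels are symmetric about zero
```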
For a continuous random variable Xc, with PDF fc(x), it can be shown that

Equation 3.29

m = E(X) = ∫_{–∞}^{∞} x fc(x) dx

and that the mean square value, E(X²), is given by

Equation 3.30

E(X²) = ∫_{–∞}^{∞} x² fc(x) dx

Figure 3.5 shows an arbitrary PDF of a continuous random variable. A useful number to help in evaluating a continuous random variable is one that gives a measure of how widely spread its values are around its mean m. Such a number is the root mean square (rms) value of (X – m) and is called the standard deviation σ of X.

Figure 3.5 A probability density function (PDF) of a continuous random variable.

The square of the standard deviation, σ2, is called the variance of X and is given by

Equation 3.31

σ² = E[(X – m)²] = ∫_{–∞}^{∞} (x – m)² f(x) dx

The relationship between the variance σ² and the mean square value E(X²) is given by

Equation 3.32

σ² = E(X²) – m²

We note that for the average value m = 0, the variance σ² = E(X²).
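Equation (3.32) is easy to verify numerically. The sketch below (ours, not from the text) uses a fair six-sided die as the discrete random variable and computes the variance both ways:

```python
# Verify Eq. (3.32), sigma^2 = E(X^2) - m^2, for a fair six-sided die.
faces = [1, 2, 3, 4, 5, 6]
p = 1 / 6   # each face equally likely

m = sum(x * p for x in faces)                     # mean, Eq. (3.28)
mean_square = sum(x * x * p for x in faces)       # E(X^2), discrete analog of Eq. (3.30)
variance = sum((x - m) ** 2 * p for x in faces)   # sigma^2, Eq. (3.31)

# The two routes to the variance agree.
print(variance, mean_square - m ** 2)
```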

3.3.3 The Gaussian Probability Density Function

The Gaussian or, as it's sometimes called, the normal PDF[1],[2] is very important to the study of wireless transmission and is the function most often used to describe thermal noise. Thermal noise is the result of thermal motions of electrons in the atmosphere, resistors, transistors, and so on and is thus unavoidable in communication systems. The Gaussian probability density function, f(x), is given by

Equation 3.33

f(x) = (1/(σ√(2π))) e^(–(x – m)²/2σ²)

where m is as defined in Eq. (3.28) and σ as defined in Eq. (3.31). When m = 0 and σ = 1 the normalized Gaussian probability density function is obtained. A graph of the Gaussian PDF is shown in Fig. 3.6(a).

Figure 3.6 The Gaussian random variable.
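A short Python sketch (ours, not from the text) evaluates Eq. (3.33) and confirms numerically that the Gaussian PDF satisfies PDF property 2, i.e., that it integrates to one:

```python
import math

def gaussian_pdf(x: float, m: float = 0.0, sigma: float = 1.0) -> float:
    """The Gaussian PDF of Eq. (3.33)."""
    return math.exp(-((x - m) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Riemann-sum check of PDF property 2: the total area under f(x) is 1.
# (+/- 8 sigma captures essentially all of the probability mass.)
dx = 0.001
area = sum(gaussian_pdf(-8.0 + i * dx) * dx for i in range(int(16 / dx)))
print(round(area, 4))   # ~1.0
```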

The CDF corresponding to the Gaussian PDF is given by

Equation 3.34

F(x) = (1/(σ√(2π))) ∫_{–∞}^{x} e^(–(z – m)²/2σ²) dz

When m = 0, the normalized Gaussian cumulative distribution function is obtained and is given by

Equation 3.35

F(x) = (1/(σ√(2π))) ∫_{–∞}^{x} e^(–z²/2σ²) dz

A graph of the Gaussian cumulative distribution function is shown in Fig. 3.6(b). In practice, since the integral in Eq. (3.35) is not easily evaluated in closed form, it is normally computed by relating it to the well-known and widely tabulated error function. The error function of v is defined by

Equation 3.36

erf(v) = (2/√π) ∫_{0}^{v} e^(–u²) du

and it can be shown that erf(0) = 0 and erf(∞) = 1.

The function [1 – erf(v)] is referred to as the complementary error function, erfc(v). Noting that (2/√π) ∫_{0}^{∞} e^(–u²) du = 1, we have

Equation 3.37

erfc(v) = 1 – erf(v) = (2/√π) ∫_{v}^{∞} e^(–u²) du

Tabulated values of erfc(v) are only available for positive values of v.

Using the substitution u = z/(√2 σ), it can be shown[1] that the Gaussian CDF F(x) of Eq. (3.35) may be expressed in terms of the complementary error function of Eq. (3.37) as follows:

Equation 3.38a

F(x) = 1 – (1/2) erfc(x/(√2 σ)),  x ≥ 0

Equation 3.38b

F(x) = (1/2) erfc(|x|/(√2 σ)),  x < 0

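Equations (3.38a) and (3.38b) are straightforward to implement. The sketch below (ours) uses the complementary error function from Python's standard math module:

```python
import math

def gaussian_cdf(x: float, sigma: float = 1.0) -> float:
    """Zero-mean Gaussian CDF, Eq. (3.35), computed via Eqs. (3.38a)/(3.38b)."""
    v = abs(x) / (math.sqrt(2) * sigma)
    if x >= 0:
        return 1.0 - 0.5 * math.erfc(v)   # Eq. (3.38a)
    return 0.5 * math.erfc(v)             # Eq. (3.38b)

# F(0) = 0.5 by symmetry; F(sigma) is the familiar one-sigma point, ~0.8413.
print(gaussian_cdf(0.0), round(gaussian_cdf(1.0), 4))
```

Note the symmetry F(–x) = 1 – F(x), which is why tabulated values of erfc(v) for positive v suffice.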
3.3.4 The Rayleigh Probability Density Function

The propagation of wireless signals through the atmosphere is often subject to multipath fading. Such fading will be described in detail in Chapter 5. Multipath fading is best characterized by the Rayleigh PDF.[1] Other phenomena in wireless transmission are also characterized by the Rayleigh PDF, making it an important tool in wireless analysis. The Rayleigh probability density function f(r) is defined by

Equation 3.39a

f(r) = (r/α²) e^(–r²/2α²),  r ≥ 0

Equation 3.39b

f(r) = 0,  r < 0

and hence the corresponding CDF is given by

Equation 3.40a

F(r) = 1 – e^(–r²/2α²),  r ≥ 0

Equation 3.40b

F(r) = 0,  r < 0

A graph of f(r) as a function of r is shown in Fig. 3.7. It has a maximum value of (1/α)e^(–1/2) ≈ 0.607/α, which occurs at r = α. It has a mean value √(π/2) α ≈ 1.253α, a mean-square value 2α², and hence, by Eq. (3.32), a variance σ² given by

Equation 3.41

σ² = 2α² – (π/2)α² = (2 – π/2)α² ≈ 0.429α²

Figure 3.7 The Rayleigh probability density function. (From Taub, H., and Schilling, D., Principles of Communication Systems, McGraw-Hill, 1971, and reproduced with the permission of the McGraw-Hill Companies.)

A graph of F(r) versus 10 log10 (r² / 2α²), which is from Feher,[3] is shown in Fig. 3.8. If the amplitude envelope variation of a radio signal is represented by the Rayleigh random variable R, then the envelope has a mean-square value of 2α², and hence the signal has an average power of α². Thus, 10 log10 (r² / 2α²), which equals 10 log10 (r² / 2) – 10 log10 (α²), represents the decibel difference between the signal power level when its amplitude is r and its average power. From Fig. 3.8 it will be noted that for signal power less than the average power by 10 dB or more, the distribution function F(r) decreases by a factor of 10 for every 10-dB decrease in signal power. As a result, when fading radio signals exhibit this behavior, the fading is described as Rayleigh fading.

Figure 3.8 The Rayleigh cumulative distribution function. (By permission from Ref. 3.)
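The Rayleigh moments quoted above can be confirmed by numerically integrating Eq. (3.39a). The Python sketch below (ours; the parameter value α = 2 is arbitrary) checks the mean and mean-square values:

```python
import math

ALPHA = 2.0   # arbitrary value of the Rayleigh parameter for this check

def rayleigh_pdf(r: float, a: float = ALPHA) -> float:
    """The Rayleigh PDF of Eq. (3.39a), valid for r >= 0."""
    return (r / a ** 2) * math.exp(-r ** 2 / (2 * a ** 2))

# Riemann sums for the mean and mean-square values quoted in the text.
dr = 0.001
rs = [i * dr for i in range(int(12 * ALPHA / dr))]
mean = sum(r * rayleigh_pdf(r) * dr for r in rs)
mean_square = sum(r * r * rayleigh_pdf(r) * dr for r in rs)

print(round(mean / ALPHA, 3))             # ~1.253, i.e., sqrt(pi/2)
print(round(mean_square / ALPHA ** 2, 3)) # ~2.0
```

By Eq. (3.32), the variance then comes out to (2 – π/2)α², as in Eq. (3.41).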

3.3.5 Thermal Noise

White noise[1] is defined as a random signal whose power spectral density is constant (i.e., independent of frequency). True white noise is not physically realizable since constant power spectral density over an infinite frequency range implies infinite power. However, thermal noise, which as indicated earlier has a Gaussian PDF, has a power spectral density that is relatively uniform up to frequencies of about 1000 GHz at room temperature (290K), and up to about 100 GHz at 29K.[4] Thus, for the purpose of practical communications analysis, it is regarded as white. A simple model for thermal noise is one where the two-sided power spectral density Gn(f) is given by

Equation 3.42

Gn(f) = N0/2,  –∞ < f < ∞

where N0 is a constant.

In a typical wireless communications receiver, the incoming signal and accompanying thermal noise are normally passed through a symmetrical bandpass filter centered on the carrier frequency fc to minimize interference and noise. The width of the bandpass filter, W, is normally small compared to the carrier frequency. When this is the case, the filtered noise can be characterized via its so-called narrowband representation.[1] In this representation, the filtered noise voltage, nnb(t), is given by

Equation 3.43

nnb(t) = nc(t) cos 2πfct – ns(t) sin 2πfct

where nc(t) and ns(t) are Gaussian random processes of zero mean value, of equal variance and, further, independent of each other. Their power spectral densities, Gnc(f) and Gns(f), extend only over the range –W/2 to W/2 and are related to Gn(f) as follows:

Equation 3.44

Gnc(f) = Gns(f) = Gn(f – fc) + Gn(f + fc) = N0,  |f| ≤ W/2

The relationship between these power spectral densities is shown in Fig. 3.9. This narrowband noise representation will be found to be very useful when we study carrier modulation methods.

Figure 3.9 Spectral density relationships associated with narrowband representation of noise.
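A small simulation (ours, not from the text; the carrier frequency and sample rate are arbitrary, and the bandlimiting of nc(t) and ns(t) to ±W/2 is omitted for brevity) illustrates the power relationship implied by Eq. (3.43): forming nnb(t) from independent zero-mean Gaussian processes of equal variance σ² yields noise whose average power is also σ²:

```python
import math
import random

# Narrowband noise per Eq. (3.43): nc(t) and ns(t) are independent,
# zero-mean Gaussian, with equal variance SIGMA**2.
SIGMA = 1.5
FC = 1.0e6          # assumed carrier frequency, for illustration only
rng = random.Random(7)

N = 100_000
nnb = []
for i in range(N):
    t = i / (20 * FC)              # 20 samples per carrier cycle (arbitrary)
    nc = rng.gauss(0.0, SIGMA)     # one sample of nc(t)
    ns = rng.gauss(0.0, SIGMA)     # one sample of ns(t)
    nnb.append(nc * math.cos(2 * math.pi * FC * t)
               - ns * math.sin(2 * math.pi * FC * t))

power = sum(x * x for x in nnb) / N
print(round(power, 2))   # ~SIGMA**2 = 2.25
```

This holds at every instant t because cos² + sin² = 1 and the two quadrature components are independent.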

3.3.6 Noise Filtering and Noise Bandwidth

In a receiver, a received signal contaminated with thermal noise is normally filtered to minimize the noise power relative to the signal power prior to demodulation. If, as shown in Fig. 3.10, the input two-sided noise spectral density is N0 / 2, the transfer function of the real filter is Hr(f), and the output noise spectral density is Gno(f), then, by Eq. (3.21), we have

Equation 3.45

Gno(f) = (N0/2)|Hr(f)|²

Figure 3.10 Filtering of white noise.

and thus the normalized noise power at the filter output, Po, is given by

Equation 3.46

Po = ∫_{–∞}^{∞} Gno(f) df = (N0/2) ∫_{–∞}^{∞} |Hr(f)|² df

A useful quantity for comparing the amount of noise passed by one receiver filter versus another is the filter noise bandwidth.[1] The noise bandwidth of a filter is defined as the width of an ideal brick-wall (rectangular) filter that passes the same average power from a white noise source as does the real filter. In the case of a real low pass filter, it is assumed that the absolute values of the transfer functions of both the real and brick-wall filters are normalized to one at zero frequency. In the case of a real bandpass filter, it is assumed that the brick-wall filter has the same center frequency as the real filter, fc say, and that the absolute values of the transfer functions of both the real and brick-wall filters are normalized to one at fc.

For an ideal brick-wall low pass filter of two-sided bandwidth Bn and |Hbw (f)| = 1 from –Bn/2 to +Bn/2

Equation 3.47

Po = (N0/2) ∫_{–Bn/2}^{+Bn/2} df = (N0/2) Bn

Thus, from Eqs. (3.46) and (3.47) we determine that

Equation 3.48

Bn = ∫_{–∞}^{∞} |Hr(f)|² df

Figure 3.11 shows the transfer function Hbw(f) of a low pass brick-wall filter of two-sided noise bandwidth Bn superimposed on the two-sided transfer function Hr(f) of a real filter.

Figure 3.11 Low pass filter two-sided noise bandwidth, Bn.
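As a worked example (ours, not from the text), take a single-pole RC low pass filter with |Hr(f)|² = 1/(1 + (f/f3)²), where f3 is its 3-dB frequency, normalized to one at zero frequency as required. Evaluating Eq. (3.48) numerically recovers the classic result that its two-sided noise bandwidth is Bn = π f3:

```python
import math

F3 = 1000.0   # assumed 3-dB frequency of the single-pole filter, in Hz

def h_squared(f: float) -> float:
    """|Hr(f)|^2 for a single-pole RC low pass filter, with |Hr(0)| = 1."""
    return 1.0 / (1.0 + (f / F3) ** 2)

# Eq. (3.48): Bn is the integral of |Hr(f)|^2 over all f. A Riemann sum
# over +/- 1000*f3 suffices (the truncated tails contribute ~0.06%).
df = F3 / 100
steps = int(2000 * F3 / df)
bn = sum(h_squared(-1000 * F3 + i * df) * df for i in range(steps))

print(round(bn / F3, 2))   # ~3.14: the two-sided noise bandwidth is pi * f3
```

In other words, relative to a brick-wall filter whose width equals the single-pole filter's two-sided 3-dB bandwidth 2f3, the real filter passes π/2 ≈ 1.57 times the white-noise power.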
