1.3 Basic Receiver Signal Processing for Wireless

This book is concerned with the design of advanced signal processing methods for wireless receivers, based largely on the models discussed in preceding sections. Before moving to these methods, however, it is of interest to review briefly some basic elements of signal processing for these models. This is not intended to be a comprehensive treatment, and the reader is referred to [145, 146, 270, 376, 381, 385, 391, 396, 510, 520, 523] for further details.

1.3.1 Matched Filter/RAKE Receiver

We consider first the particular case of the model of (1.9), in which there is only a single user (i.e., K = 1), the channel impulse response g1(·, ·) is known to the receiver, there is no CCI [i.e., i(·) ≡ 0], and the ambient noise is AWGN with spectral height σ². That is, we have the following model for the received signal:

Equation 1.18

r(t) = Σ_{i=0}^{M−1} b_1[i] f_{i,1}(t) + n(t)

where f_{i,1}(·) denotes the composite waveform of (1.11), given by

Equation 1.19

f_{i,1}(t) = ∫ g_1(t, τ) s_{i,1}(t − τ) dτ

Let us further restrict attention, for the moment, to the case in which there is only a single symbol to be transmitted (i.e., M = 1), in which case we have the received waveform

Equation 1.20

r(t) = b_1[0] f_{0,1}(t) + n(t)

Optimal inferences about the symbol b1[0] in (1.20) can be made on the basis of the likelihood function of the observations, conditioned on the symbol b1[0], which is given in this case by [377]

Equation 1.21

L(r(·) | b_1[0]) = exp( (1/σ²) [ 2ℜ{ b_1[0]* ∫ r(t) f_{0,1}*(t) dt } − |b_1[0]|² ∫ |f_{0,1}(t)|² dt ] )

where the superscript asterisk denotes complex conjugation and ℜ{·} denotes the real part of its argument.

Optimal inferences about the symbol b1[0] can be made, for example, by choosing maximum-likelihood (ML) or maximum a posteriori probability (MAP) values for the symbol. The ML symbol decision is given simply by the argument that maximizes L (r(·) | b1[0]) over the symbol alphabet, A:

Equation 1.22

b̂_1[0] = argmax_{b ∈ A} L(r(·) | b_1[0] = b)

It is easy to see that the corresponding symbol estimate is the solution to the problem

Equation 1.23

b̂_1[0] = argmin_{b ∈ A} | z − b |


Equation 1.24

z = ( ∫ r(t) f_{0,1}*(t) dt ) / ( ∫ |f_{0,1}(t)|² dt )

Thus, the ML symbol estimate is the closest point in the symbol alphabet to the observable z.

Note that the two simplest and most common choices of symbol alphabet are M-ary phase-shift keying (MPSK) and quadrature amplitude modulation (QAM). In MPSK, the symbol alphabet is

Equation 1.25

A = { e^{j2πm/M} | m = 0, 1, ..., M − 1 }

or some rotation of this set around the unit circle. (M as used in this paragraph should not be confused with the frame length M.) For QAM, a symbol alphabet containing M × N values is

Equation 1.26

A = { a_R + j a_I | a_R ∈ A_R, a_I ∈ A_I }

where AR and AI are discrete sets of amplitudes containing M and N points, respectively; for example, for M = N even, a common choice is

Equation 1.27

A_R = A_I = { ±1, ±3, ..., ±(M − 1) }

or a scaled version of this choice. A special case of both of these is that of binary phase-shift keying (BPSK), in which A = {−1, +1}. The latter case is the one we consider most often in this treatment, primarily for the sake of simplicity. However, most of the results discussed herein extend straightforwardly to these more general signaling alphabets.

ML symbol estimation [i.e., the solution to (1.23)] is very simple for MPSK and QAM. In particular, since the MPSK symbols correspond to phasors at evenly spaced angles around the unit circle, the ML symbol choice is that whose angle is closest to the angle of the complex number z of (1.24). For QAM, the choices of the real and imaginary parts of the ML symbol estimate are decoupled, with ℜ{b} being chosen to be the closest element of AR to ℜ{z}, and similarly for ℑ{b}. For BPSK, the ML symbol estimate is

Equation 1.28

b̂_1[0] = sign{ ℜ{z} }

where sign{·} denotes the signum function:

Equation 1.29

sign{x} = { +1, x > 0;  0, x = 0;  −1, x < 0 }

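These nearest-point decision rules are simple to state as code. The following is a minimal sketch (function names and test values are ours, not the book's), operating on the normalized correlator output z of (1.24):

```python
import numpy as np

def ml_bpsk(z):
    # BPSK decision (1.28): sign of the real part of z
    return 1.0 if z.real >= 0 else -1.0

def ml_mpsk(z, M):
    # MPSK decision: the alphabet phasor whose angle is closest to angle(z)
    m = int(np.round(np.angle(z) * M / (2 * np.pi))) % M
    return np.exp(2j * np.pi * m / M)

def ml_qam(z, A_R, A_I):
    # QAM decision: real and imaginary parts decouple
    b_R = min(A_R, key=lambda a: abs(z.real - a))
    b_I = min(A_I, key=lambda a: abs(z.imag - a))
    return b_R + 1j * b_I
```

For QAM the two one-dimensional searches replace a full two-dimensional nearest-point search, which is exactly the decoupling noted in the text.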
MAP symbol detection in (1.20) is also based on the likelihood function of (1.21), after suitable transformation. In particular, if the symbol b1[0] is a random variable, taking values in A with known probabilities, the a posteriori probability distribution of the symbol conditioned on r(·) is given via Bayes’ formula as

Equation 1.30

P(b_1[0] = b | r(·)) = L(r(·) | b_1[0] = b) P(b_1[0] = b) / Σ_{a ∈ A} L(r(·) | b_1[0] = a) P(b_1[0] = a)

The MAP criterion specifies a symbol decision given by

Equation 1.31

b̂_1[0] = argmax_{b ∈ A} P(b_1[0] = b | r(·))

Note that in this single-symbol case, if the symbol values are equiprobable, the ML and MAP decisions are the same.
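This equivalence is straightforward to check numerically. The sketch below uses our own notation (z for the unnormalized correlator output, E for the composite-waveform energy) to evaluate a BPSK posterior of the form (1.30) from a likelihood of the form (1.21), and compares the MAP decision with the sign rule of (1.28):

```python
import numpy as np

# Our notation: z is the unnormalized correlator output and E the energy
# of the composite waveform; the BPSK likelihood follows the form of (1.21).
def posterior(z, sigma2, E, priors):
    L = {b: np.exp((2 * (np.conj(b) * z).real - abs(b) ** 2 * E) / sigma2)
         for b in priors}
    total = sum(L[b] * priors[b] for b in priors)
    return {b: L[b] * priors[b] / total for b in priors}

# equiprobable BPSK symbols: the MAP and ML decisions coincide
post = posterior(z=0.4 + 0.0j, sigma2=1.0, E=1.0, priors={-1: 0.5, +1: 0.5})
map_sym = max(post, key=post.get)
ml_sym = 1 if (0.4 + 0.0j).real >= 0 else -1   # sign rule (1.28)
```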

The structure of the ML and MAP decision rules above shows that the main receiver signal processing task in this single-user, single-symbol, known-channel case is the computation of the term

Equation 1.32

∫_{−∞}^{∞} r(t) f_{0,1}*(t) dt

This structure is called a correlator because it correlates the received signal r(·) with the known composite signaling waveform f_{0,1}(·). This structure can also be implemented by sampling the output of a time-invariant linear filter:

Equation 1.33

∫_{−∞}^{∞} r(t) f_{0,1}*(t) dt = (h ∗ r)(t) |_{t=0}

where * denotes convolution and h is the impulse response of the time-invariant linear filter given by

Equation 1.34

h(t) = f_{0,1}*(−t)

This structure is called a matched filter, since its impulse response is matched to the composite waveform on which the symbol is received. When the composite signaling waveform has finite duration, so that h(t) = 0 for t outside [−D, 0], the matched-filter receiver can be implemented by sampling at time D the output of the causal filter with the following impulse response:

Equation 1.35

h(t) = f_{0,1}*(D − t)

For example, if the signaling waveform s_{0,1}(t) is supported on [0, T] and the channel has delay spread τd, the composite signaling waveform will have this property with D = T + τd.

A special case of the correlator (1.32) arises for a pure multipath channel in which the channel impulse response is given by (1.10). The composite waveform (1.11) in this case is

Equation 1.36

f_{0,1}(t) = Σ_{l=1}^{L} g_l s_{0,1}(t − τ_l)

and the correlator output (1.32) becomes

Equation 1.37

∫ r(t) f_{0,1}*(t) dt = Σ_{l=1}^{L} g_l* ∫ r(t) s_{0,1}*(t − τ_l) dt

a configuration known as a RAKE receiver. Further details on this basic receiver structure can be found, for example, in [396].
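On a sampled time grid, the correlator (1.32) and the RAKE combiner (1.37) can be sketched as follows. The pulse shape, path gains, and delays below are invented for illustration; for a noiseless received signal the two statistics coincide exactly, since the RAKE simply expands the correlation against the composite waveform path by path:

```python
import numpy as np

# Invented pulse shape, path gains, and delays; noiseless channel for clarity.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
s = ((t >= 0.0) & (t < 1.0)).astype(float)       # rectangular pulse s_{0,1}(t)
gains = np.array([0.8 + 0.2j, 0.3 - 0.1j])       # path gains g_l (assumed)
delays = [0, 30]                                  # path delays in samples (assumed)

# composite waveform (1.36): f(t) = sum_l g_l s(t - tau_l)
f = sum(g * np.roll(s, d) for g, d in zip(gains, delays))

b = -1.0                                          # transmitted BPSK symbol
r = b * f                                         # noiseless received signal (1.20)

# correlator (1.32): integrate r(t) f*(t)
z_corr = np.sum(r * np.conj(f)) * dt

# RAKE (1.37): correlate against each delayed pulse, combine with conj(g_l)
z_rake = sum(np.conj(g) * np.sum(r * np.conj(np.roll(s, d))) * dt
             for g, d in zip(gains, delays))
```

Since b = −1 here, the sign of ℜ{z_corr} recovers the transmitted symbol.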

1.3.2 Equalization

We now turn to the situation in which there is more than one symbol in the frame of interest (i.e., when M > 1). In this case we would like to consider the likelihood function of the observations r(·) conditioned on the entire frame of symbols, b1[0], b1[1], ..., b1[M − 1], which is given by

Equation 1.38

L(r(·) | b_1) = exp( (1/σ²) [ 2ℜ{ b_1^H y_1 } − b_1^H H_1 b_1 ] )

where the superscript H denotes the conjugate transpose (i.e., the Hermitian transpose), b1 denotes a column vector whose ith component is b1[i], i = 0, 1, ..., M − 1, y1 denotes a column vector whose ith component is given by

Equation 1.39

[y_1]_i = ∫ r(t) f_{i,1}*(t) dt,  i = 0, 1, ..., M − 1

and H1 is an M × M Hermitian matrix whose (i, j)th element is the cross-correlation between f_{i,1}(t) and f_{j,1}(t):

Equation 1.40

[H_1]_{i,j} = ∫ f_{j,1}(t) f_{i,1}*(t) dt

Since the likelihood function depends on r(·) only through the vector y1 of correlator outputs, this vector is a sufficient statistic for making inferences about the vector b1 of symbols [377].

Maximum-likelihood detection in this situation is given by

Equation 1.41

b̂_1 = argmax_{b ∈ A^M} [ 2ℜ{ b^H y_1 } − b^H H_1 b ]

Note that if H1 is a diagonal matrix (i.e., all of its off-diagonal elements are zero), (1.41) decouples into a set of M independent problems of the single-symbol type (1.22). The solution in this case is correspondingly given by

Equation 1.42

b̂_1[i] = argmin_{b ∈ A} | z_i − b |,  i = 0, 1, ..., M − 1


Equation 1.43

z_i = [y_1]_i / [H_1]_{i,i}

However, in the more general case in which there is intersymbol interference, (1.41) will not decouple and the optimization must take place over the entire frame, a problem known as sequence detection.

The problem of (1.41) is an integer quadratic program, which is known to be an NP-complete combinatorial optimization problem [380]. This implies that the complexity of (1.41) is potentially quite high: exponential in the frame length M, which is essentially the complexity of exhaustive search over the sequence alphabet A^M. This is a prohibitive degree of complexity for most applications, since a typical frame length might be hundreds or even thousands of symbols. Fortunately, this complexity can be mitigated substantially for practical ISI channels. In particular, if the composite signaling waveforms have finite duration D, the matrix H1 is a banded matrix with nonzero elements only on those diagonals that are no more than Δ = ⌈ D/T ⌉ diagonals away from the main diagonal (here ⌈ · ⌉ denotes the smallest integer not less than its argument); that is,

Equation 1.44

[H_1]_{i,j} = 0 for |i − j| > Δ

This structure of the matrix permits solution of (1.41) with a dynamic program of complexity order O(M|A|^Δ), as opposed to the O(|A|^M) complexity of direct search. In most situations Δ ≪ M, which implies an enormous savings in complexity (see, e.g., [380]). This dynamic programming solution, which can be structured in various ways, is known as a maximum-likelihood sequence detector (MLSD).
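For intuition, the sequence-detection objective of (1.41) can be maximized by brute force on a tiny example. The sketch below (H, y, and the frame are invented values) exhausts over A^M exactly as described above; a practical MLSD would instead exploit the banded structure of H1 with dynamic programming:

```python
import numpy as np
from itertools import product

# Brute-force maximization of the metric in (1.41) over BPSK sequences.
# A practical MLSD replaces this O(|A|^M) scan with dynamic programming.
def mlsd_bruteforce(y, H, alphabet=(-1.0, 1.0)):
    best, best_metric = None, -np.inf
    for cand in product(alphabet, repeat=len(y)):
        b = np.array(cand)
        metric = 2.0 * np.real(np.vdot(b, y)) - np.real(b @ H @ b)
        if metric > best_metric:
            best, best_metric = b, metric
    return best

H = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.3],
              [0.0, 0.3, 1.0]])       # banded correlation matrix (Delta = 1)
b_true = np.array([1.0, -1.0, 1.0])
y = H @ b_true                         # noiseless sufficient statistic (1.46)
b_hat = mlsd_bruteforce(y, H)
```

With noiseless observations and a positive-definite H, the metric is maximized uniquely by the transmitted frame.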

MAP detection in this model is also potentially of very high complexity. The a posteriori probability distribution of a particular symbol, say b1[i], is given by

Equation 1.45

P(b_1[i] = b | r(·)) = Σ_{β ∈ A^M : β_i = b} L(r(·) | b_1 = β) P(b_1 = β) / Σ_{β ∈ A^M} L(r(·) | b_1 = β) P(b_1 = β)

Note that these summations have |A|^M terms and thus are of complexity similar to that of the maximization in (1.41) in general. Fortunately, as with (1.41), when H1 is banded these summations can be computed much more efficiently using a generalized dynamic programming technique that results in O(M|A|^Δ) complexity (see, e.g., [380]).

The dynamic programs that facilitate (1.41) and (1.45) are of much lower complexity than brute-force computations. However, even this lower complexity is too high for many applications. A number of lower-complexity algorithms have been devised to deal with such situations. These techniques can be discussed easily by examining the sufficient statistic vector y1 of (1.39), which can be written as

Equation 1.46

y_1 = H_1 b_1 + n_1

where n1 is a complex Gaussian random vector with independent real and imaginary parts having identical N(0, σ²H1/2) distributions. Equation (1.46) describes a linear model, and the goal of equalization is thus to fit this model with the data vector b1. The ML and MAP detectors are two ways of doing this fitting, each of which has exponential complexity with exponent equal to the bandwidth of H1. The essential difficulty of this problem arises from the fact that the vector b1 takes on values from a discrete set. One way of easing this difficulty is first to fit the linear model without constraining b1 to be discrete, and then to quantize the resulting (continuous) estimate of b1 into symbol estimates. In particular, we can use a linear fit, My1, as a continuous estimate of b1, where M is an M × M matrix. In this way, the ith symbol decision is

Equation 1.47

b̂_1[i] = q( [M y_1]_i )

where [My1]i denotes the ith component of My1 and where q(·) denotes a quantizer mapping the complex numbers to the symbol alphabet A. Various choices of the matrix M lead to different linear equalizers. For example, if we choose M = IM, the M × M identity matrix, the resulting linear detector is the common matched filter, which is optimal in the absence of ISI. A difficulty with the matched filter is that it ignores the ISI. Alternatively, if H1 is invertible, the choice M = H_1^{−1} forces the ISI to zero,

Equation 1.48

H_1^{−1} y_1 = b_1 + H_1^{−1} n_1

and is thus known as the zero-forcing equalizer (ZFE). Note that this would be optimal (i.e., it would give perfect decisions) in the absence of AWGN. A difficulty with the ZFE is that it can significantly enhance the effects of AWGN by placing high gains on some directions in the set of M-dimensional complex vectors. A trade-off between these extremes is effected by the minimum-mean-square-error (MMSE) linear equalizer, which chooses M to give an MMSE fit of the model (1.46). Assuming that the symbols are independent of the noise, this results in the choice

Equation 1.49

M = ( H_1 + σ² Σ_b^{−1} )^{−1}

where Σb denotes the covariance matrix of the symbol vector b1. (Typically, this covariance matrix will be in the form of a constant times IM.) A number of other techniques for fitting the model (1.46) have been developed, including iterative methods with and without quantization of intermediate results [decision-feedback equalizers (DFEs)], and so on. For a more detailed treatment of equalization methods, see [396].
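The three linear equalizers can be compared in a few lines. In the sketch below the matrix, noise level, and frame are invented, and the complex noise of the model is replaced by a real-valued stand-in; it applies the matched-filter, zero-forcing, and MMSE choices of M to the linear model (1.46):

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1.0, 0.4],
              [0.4, 1.0]])                  # ISI cross-correlation matrix (assumed)
sigma2 = 0.01
b = np.array([1.0, -1.0])                   # BPSK frame
n = rng.multivariate_normal(np.zeros(2), sigma2 * H)   # real-valued noise stand-in
y = H @ b + n                               # sufficient statistic (1.46)

def quantize(v):
    # q(.): nearest BPSK symbol, componentwise
    return np.where(v >= 0, 1.0, -1.0)

b_mf = quantize(y)                                           # M = I (matched filter)
b_zf = quantize(np.linalg.solve(H, y))                       # M = H^{-1} (ZFE, 1.48)
Sigma_b = np.eye(2)                                          # unit-variance symbols
M_mmse = np.linalg.inv(H + sigma2 * np.linalg.inv(Sigma_b))  # MMSE choice (1.49)
b_mmse = quantize(M_mmse @ y)
```

At this low noise level all three detectors decide correctly; the trade-off between noise enhancement (ZFE) and residual ISI (matched filter) shows up at higher σ².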

1.3.3 Multiuser Detection

To finish this section we turn finally to the full multiple-access model of (1.9), within which data detection is referred to as multiuser detection. This situation is very similar to the ISI channel described above. In particular, we now consider the likelihood function of the observations r(·) conditioned on all symbols of all users. Sorting these symbols first by symbol number and then by user number, we can collect them in a column vector b given as

Equation 1.50

b = ( b_1[0], b_2[0], ..., b_K[0], b_1[1], ..., b_K[1], ..., b_1[M − 1], ..., b_K[M − 1] )^T

so that the nth element of b is given by

Equation 1.51

[b]_n = b_{[n]_K + 1}[ ⌊n/K⌋ ],  n = 0, 1, ..., KM − 1

where [·]K denotes reduction of the argument modulo K and ⌊ . ⌋ denotes the integer part of the argument. Analogously with (1.38) we can write the corresponding likelihood function as

Equation 1.52

L(r(·) | b) = exp( (1/σ²) [ 2ℜ{ b^H y } − b^H H b ] )

where y is a column vector that collects the set of observables

Equation 1.53

y_k[i] = ∫ r(t) f_{i,k}*(t) dt,  k = 1, ..., K;  i = 0, 1, ..., M − 1

indexed conformally with b, and where H denotes the KM × KM Hermitian cross-correlation matrix of the composite waveforms associated with the symbols in b, again with conformal indexing:

Equation 1.54

[y]_n = y_{[n]_K + 1}[ ⌊n/K⌋ ]


Equation 1.55

[H]_{n,m} = ∫ f_{⌊m/K⌋, [m]_K + 1}(t) f_{⌊n/K⌋, [n]_K + 1}*(t) dt

Comparing (1.52), (1.53), and (1.54) with their single-user counterparts (1.38), (1.39), and (1.40), we see that y is a sufficient statistic for making inferences about b, and moreover that such inferences can be made in a manner very similar to that for the single-user ISI channel. The principal difference is one of dimensionality: Decisions in the single-user ISI channel involve simultaneous sequence detection with M symbols, whereas decisions in the multiple-access channel involve simultaneous sequence detection with KM symbols. This, of course, can increase the complexity considerably. For example, the complexity of exhaustive search in ML detection, or exhaustive summation in MAP detection, is now on the order of |A|^{KM}. However, as in the single-user case, this complexity can be mitigated considerably if the delay spread of the channel is small. In particular, if the duration of the composite signaling waveforms is D, the matrix H will be a banded matrix with

Equation 1.56

[H]_{n,m} = 0 for | ⌊n/K⌋ − ⌊m/K⌋ | > Δ

where, as before, Δ = ⌈D/T⌉. This bandedness allows the complexity of both ML and MAP detection to be reduced to the order of O(KM|A|^{KΔ}) via dynamic programming.
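The conformal indexing of (1.50)-(1.51), which also underlies the bandedness condition above, can be sketched directly; the values of K and M below are ours:

```python
K, M = 3, 2   # K users, M symbols per frame (illustrative values)

def user_and_symbol(n, K):
    # entry n of b holds symbol floor(n/K) of user ([n]_K + 1), per (1.51)
    return (n % K) + 1, n // K

# symbols are sorted first by symbol index, then by user index
stack = [user_and_symbol(n, K) for n in range(K * M)]
```

For K = 3, M = 2 this yields (user, symbol) pairs (1,0), (2,0), (3,0), (1,1), (2,1), (3,1), matching the ordering of (1.50).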

Although further complexity reduction can be obtained in this problem with additional structural constraints on H (see, e.g., [380]), the exponential complexity of ML and MAP multiuser detection is not generally reducible. Consequently, as with the equalization of single-user channels, a number of lower-complexity suboptimal multiuser detectors have been developed. For example, analogously with (1.47), linear multiuser detectors can be written in the form

Equation 1.57

[b̂]_n = q( [M y]_n )

where M is a KM × KM matrix, [My]n denotes the nth component of My, and where, as before, q(·) denotes a quantizer mapping the complex numbers to the symbol alphabet A. The choice M = H^{−1} forces both MAI and ISI to zero and is known as the decorrelating detector, or decorrelator. Similarly, the choice

Equation 1.58

M = ( H + σ² Σ_b^{−1} )^{−1}

where Σb denotes the covariance matrix of the symbol vector b, is known as the linear MMSE multiuser detector. Linear and nonlinear iterative versions of these detectors have also been developed, both to avoid the complexity of inverting KM × KM matrices and to exploit the finite-alphabet property of the symbols (see, e.g., [520]).
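As one concrete instance of the iterative schemes just mentioned, the following sketch implements hard-decision parallel interference cancellation on the model y = Hb + n. The matrix and symbols are invented, and this is an illustration of the idea of avoiding a matrix inverse while exploiting the finite alphabet, not a specific detector from the text:

```python
import numpy as np

# Hard-decision parallel interference cancellation on y = H b + n:
# estimate the symbols, subtract the estimated MAI/ISI, and re-quantize.
def pic(y, H, iters=5):
    b_hat = np.sign(y)                 # matched-filter initialization
    D = np.diag(np.diag(H))            # keep each symbol's own contribution
    for _ in range(iters):
        b_hat = np.sign(y - (H - D) @ b_hat)
    return b_hat

H = np.array([[1.0, 0.2, 0.1],
              [0.2, 1.0, 0.2],
              [0.1, 0.2, 1.0]])        # cross-correlations (assumed)
b = np.array([1.0, -1.0, 1.0])
y = H @ b                               # noiseless sufficient statistic
b_hat = pic(y, H)
```

No KM × KM inverse is required; with weak cross-correlations the iteration converges to the transmitted symbols in a step or two.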

As a final issue here we note that all of the discussion above has involved direct processing of continuous-time observations to obtain a sufficient statistic (in practice, this corresponds to hardware front-end processing), followed by algorithmic processing to obtain symbol decisions. Increasingly, an intermediate step is of interest. In particular, it is often of interest to project the continuous-time observations onto a large but finite set of orthonormal functions to obtain a set of observables. These observables can then be processed further using digital signal processing (DSP) to determine symbol decisions, perhaps with intermediate calculation of the sufficient statistic; the flexibility afforded by DSP implementation is the principal advantage of this approach. A tacit assumption in this process is that the orthonormal set spans all of the composite signaling waveforms of interest, although in practice this will often hold only approximately. A prime example of this kind of processing arises in direct-sequence spread-spectrum systems [see (1.6)], in which the received signal can be passed through a filter matched to the chip waveform and then sampled at the chip rate to produce N samples per symbol interval. These N samples can then be combined in various ways (usually, linearly) for data detection. In this way, for example, the linear equalizers and multiuser detectors discussed above are particularly simple to implement. A significant advantage of this approach is that the combining can often be done adaptively when some aspects of the signaling waveforms are unknown. For example, the channel impulse response may be unknown to the receiver, as may the waveforms of some interfering signals. This kind of processing is a basic element of many of the results discussed in this book and will be revisited in more detail in Chapter 2.
