- Motivation
- Wireless Signaling Environment
- Basic Receiver Signal Processing for Wireless
- Outline of the Book

## 1.3 Basic Receiver Signal Processing for Wireless

This book is concerned with the design of advanced signal processing methods for wireless receivers, based largely on the models discussed in preceding sections. Before moving to these methods, however, it is of interest to review briefly some basic elements of signal processing for these models. This is not intended to be a comprehensive treatment, and the reader is referred to [145, 146, 270, 376, 381, 385, 391, 396, 510, 520, 523] for further details.

#### 1.3.1 Matched Filter/RAKE Receiver

We consider first the particular case of the model of (1.9), in which there is only a single user (i.e., *K* = 1), the channel impulse response *g*_{1}(·, ·) is known to the receiver, there is no CCI [i.e., *i*(·) ≡ 0], and the ambient noise is AWGN with spectral height *σ*^{2}. That is, we have the following model for the received signal:

where *f*_{i, 1}(·) denotes the composite waveform of (1.11), given by

Let us further restrict attention, for the moment, to the case in which there is only a single symbol to be transmitted (i.e.,
*M* = 1), in which case we have the received waveform

Optimal inferences about the symbol *b*_{1}[0] in (1.20) can be made on the basis of the likelihood function of the observations, conditioned on the symbol *b*_{1}[0], which is given in this case by [377]

where the superscript asterisk denotes complex conjugation and ℜ{·} denotes the real part of its argument.

Optimal inferences about the symbol *b*_{1}[0] can be made, for example, by choosing *maximum-likelihood* (ML) or *maximum a posteriori probability* (MAP) values for the symbol. The ML symbol decision is given simply by the argument that maximizes *L* (*r*(·) | *b*_{1}[0]) over the symbol alphabet, *A*:

It is easy to see that the corresponding symbol estimate is the solution to the problem

where

Thus, the ML symbol estimate is the closest point in the symbol alphabet to the observable *z*.
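As a brief illustrative sketch (the QPSK alphabet and the value of the observable *z* below are hypothetical examples), the nearest-point rule can be coded directly:

```python
import numpy as np

def ml_decision(z, alphabet):
    """Return the alphabet point closest (in Euclidean distance) to z."""
    alphabet = np.asarray(alphabet)
    return alphabet[np.argmin(np.abs(alphabet - z))]

# Hypothetical QPSK alphabet and a noisy correlator output z
qpsk = np.array([1, 1j, -1, -1j])
z = 0.9 + 0.2j                # noisy observation of the symbol +1
print(ml_decision(z, qpsk))   # -> (1+0j)
```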

Note that the two simplest and most common choices of symbol alphabet are *M*-ary phase-shift keying (MPSK) and quadrature amplitude modulation (QAM). In MPSK, the symbol alphabet is

*A* = {1, *e*^{j2π/*M*}, *e*^{j4π/*M*}, ..., *e*^{j2π(*M*−1)/*M*}},

or some rotation of this set around the unit circle. (*M* as used in this paragraph should not be confused with the frame length *M*.) For QAM, a symbol alphabet containing *M* × *N* values is

*A* = {*a*_{R} + *ja*_{I} | *a*_{R} ∈ *A*_{R}, *a*_{I} ∈ *A*_{I}},

where *A*_{R} and *A*_{I} are discrete sets of amplitudes containing *M* and *N* points, respectively; for example, for *M* = *N* even, a common choice is

*A*_{R} = *A*_{I} = {±1, ±3, ..., ±(*M* − 1)},

or a scaled version of this choice. A special case of both of these is that of binary phase-shift keying (BPSK), in which
*A* = {−1, +1}. The latter case is the one we consider most often in this treatment, primarily for the sake of simplicity. However,
most of the results discussed herein extend straightforwardly to these more general signaling alphabets.

ML symbol estimation [i.e., the solution to (1.23)] is very simple for MPSK and QAM. In particular, since the MPSK symbols correspond to phasors at evenly spaced angles around
the unit circle, the ML symbol choice is the one whose angle is closest to the angle of the complex number *z* of (1.24). For QAM, the choices of the real and imaginary parts of the ML symbol estimate are decoupled, with ℜ{*b̂*} chosen to be the closest element of *A*_{R} to ℜ{*z*}, and similarly for ℑ{*b̂*} and *A*_{I}. For BPSK, the ML symbol estimate is

*b̂*_{1}[0] = sign{ℜ{*z*}},

where sign{·} denotes the signum function:

sign{*x*} = +1 if *x* ≥ 0, and −1 if *x* < 0.
MAP symbol detection in (1.20) is also based on the likelihood function of (1.21), after suitable transformation. In particular, if the symbol *b*_{1}[0] is a random variable, taking values in *A* with known probabilities, the *a posteriori* probability distribution of the symbol conditioned on *r*(·) is given via Bayes’ formula as

The MAP criterion specifies a symbol decision given by

Note that in this single-symbol case, if the symbol values are equiprobable, the ML and MAP decisions are the same.
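The decoupling of the real and imaginary slicing for QAM described above can be checked numerically; the sketch below assumes a hypothetical 16-QAM alphabet and compares independent real/imaginary slicing against an exhaustive nearest-point search:

```python
import numpy as np

def slice_to(levels, x):
    """Quantize the real number x to the nearest element of `levels`."""
    levels = np.asarray(levels)
    return levels[np.argmin(np.abs(levels - x))]

# Hypothetical 16-QAM: A_R = A_I = {-3, -1, +1, +3}
levels = np.array([-3.0, -1.0, 1.0, 3.0])
alphabet = np.array([a + 1j * b for a in levels for b in levels])

rng = np.random.default_rng(0)
for _ in range(1000):
    z = 6 * (rng.standard_normal() + 1j * rng.standard_normal())
    decoupled = slice_to(levels, z.real) + 1j * slice_to(levels, z.imag)
    exhaustive = alphabet[np.argmin(np.abs(alphabet - z))]
    assert decoupled == exhaustive
print("decoupled slicing matches exhaustive ML search")
```

The agreement follows because |*z* − (*a* + *jb*)|² separates into a real-part term and an imaginary-part term, which can be minimized independently.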

The structure of the ML and MAP decision rules above shows that the main receiver signal processing task in this single-user, single-symbol, known-channel case is the computation of the term

*z* = ∫ *r*(*t*) *f*_{0,1}^{*}(*t*) d*t*.

This structure is called a *correlator* because it correlates the received signal *r*(·) with the known composite signaling waveform *f*_{0,1}(·). This structure can also be implemented by sampling the output of a time-invariant linear filter:

where * denotes convolution and *h* is the impulse response of the time-invariant linear filter given by

*h*(*t*) = *f*_{0,1}^{*}(−*t*).
This structure is called a *matched filter*, since its impulse response is matched to the composite waveform on which the symbol is received. When the composite signaling
waveform has a finite duration, so that *h*(*t*) = 0 for *t* < −*D* ≤ 0, the matched-filter receiver can be implemented by sampling at time *D* the output of the causal filter with the following impulse response:

*h*_{D}(*t*) = *f*_{0,1}^{*}(*D* − *t*).
For example, if the signaling waveform *s*_{0,1}(*t*) has duration [0, *T*] and the channel has delay spread *τ*_{d}, the composite signaling waveform will have this property with *D* = *T* + *τ*_{d}.

A special case of the correlator (1.32) arises for a pure multipath channel in which the channel impulse response is given by (1.10). The composite waveform (1.11) in this case is

and the correlator output (1.32) becomes

a configuration known as a RAKE receiver. Further details on this basic receiver structure can be found, for example, in [396].
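A discrete-time sketch of this combining follows; the spreading sequence, path gains, and delays below are hypothetical, and chip-spaced resolvable paths are assumed. Each finger correlates against a delayed replica of the waveform, and the finger outputs are weighted by the conjugate path gains and summed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical unit-energy signaling waveform and multipath channel
s = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
s = s / np.linalg.norm(s)
gains  = np.array([1.0, 0.6 - 0.3j, 0.3j])   # complex path gains a_l
delays = np.array([0, 2, 5])                 # path delays in samples

b = -1.0                                     # transmitted BPSK symbol
r = np.zeros(len(s) + delays.max(), dtype=complex)
for a, d in zip(gains, delays):
    r[d:d + len(s)] += a * b * s             # superposition of delayed paths
r += 0.05 * (rng.standard_normal(r.shape) + 1j * rng.standard_normal(r.shape))

# RAKE: correlate with each delayed replica, weight by conj(a_l), and sum
z = sum(np.conj(a) * np.vdot(s, r[d:d + len(s)])
        for a, d in zip(gains, delays))
b_hat = np.sign(z.real)
print(b_hat)   # -> -1.0
```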

#### 1.3.2 Equalization

We now turn to the situation in which there is more than one symbol in the frame of interest (i.e., when *M* > 1). In this case we would like to consider the likelihood function of the observations *r*(·) conditioned on the entire frame of symbols, *b*_{1}[0], *b*_{1} [1], ..., *b*_{1}[*M* − 1], which is given by

where the superscript *H* denotes the conjugate transpose (i.e., the Hermitian transpose), *b*_{1} denotes a column vector whose *i*th component is *b*_{1}[*i*], *i* = 0, 1, ..., *M* − 1, *y*_{1} denotes a column vector whose *i*th component is given by

and *H*_{1} is an *M × M* Hermitian matrix, whose (*i,j*)th element is the cross-correlation between *f*_{i, 1}(*t*) and *f*_{j, 1}(*t*):

Since the likelihood function depends on *r*(·) only through the vector *y*_{1} of correlator outputs, this vector is a sufficient statistic for making inferences about the vector *b*_{1} of symbols [377].

Maximum-likelihood detection in this situation is given by

Note that if *H*_{1} is a diagonal matrix (i.e., all of its off-diagonal elements are zero), (1.41) decouples into a set of *M* independent problems of the single-symbol type (1.22). The solution in this case is correspondingly given by

where

However, in the more general case in which there is intersymbol interference, (1.41) will not decouple and the optimization must take place over the entire frame, a problem known as *sequence detection*.

The problem of (1.41) is an integer quadratic program, which is known to be an NP-complete combinatorial optimization problem [380]. This implies that the complexity of (1.41) is potentially quite high: exponential in the frame length *M*, which is essentially the complexity order of exhausting over the sequence alphabet *A*^{*M*}. This is a prohibitive degree of complexity for most applications, since a typical frame length might be hundreds or even
thousands of symbols. Fortunately, this complexity can be mitigated substantially for practical ISI channels. In particular,
if the composite signaling waveforms have finite duration *D*, the matrix *H*_{1} is a banded matrix, with nonzero elements only on those diagonals that are no more than Δ = ⌈*D*/*T*⌉ diagonals away from the main diagonal (here ⌈·⌉ denotes the smallest integer not less than its argument); that is,

(*H*_{1})_{i,j} = 0 for |*i* − *j*| > Δ.

This structure of the matrix permits solution of (1.41) with a dynamic program of complexity order *O*(*M* |*A*|^{Δ+1}), as opposed to the *O*(|*A*|^{*M*}) complexity of direct search. In most situations Δ ≪ *M*, which implies an enormous savings in complexity (see, e.g., [380]). This dynamic programming solution, which can be structured in various ways, is known as a *maximum-likelihood sequence detector* (MLSD).
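Such a dynamic program can be sketched as follows for BPSK; the banded matrix and the statistic *y*_{1} below are hypothetical examples, and the implementation is a minimal Viterbi-style recursion (state = last Δ symbols) rather than an optimized MLSD:

```python
import numpy as np
from itertools import product

def mlsd(y, H, delta, alphabet=(-1.0, 1.0)):
    """Maximize 2*Re{b^H y} - b^H H b over b in alphabet^M by dynamic
    programming, assuming H is Hermitian and banded: H[i, j] = 0 for
    |i - j| > delta.  The state is the tuple of the last `delta` symbols."""
    M = len(y)
    paths = {(): (0.0, [])}                      # state -> (metric, sequence)
    for i in range(M):
        new_paths = {}
        for state, (metric, seq) in paths.items():
            for b in alphabet:
                m = metric + 2 * np.real(np.conj(b) * y[i]) \
                    - np.real(H[i, i]) * abs(b) ** 2
                for k, bj in enumerate(state):   # interaction with past symbols
                    j = i - len(state) + k
                    m -= 2 * np.real(np.conj(b) * H[i, j] * bj)
                ns = (state + (b,))[-delta:] if delta > 0 else ()
                if ns not in new_paths or m > new_paths[ns][0]:
                    new_paths[ns] = (m, seq + [b])
        paths = new_paths
    return max(paths.values(), key=lambda t: t[0])[1]

# Hypothetical banded ISI cross-correlation matrix (delta = 1), noisy statistic
rng = np.random.default_rng(2)
M, delta = 8, 1
H = np.eye(M) + 0.4 * (np.eye(M, k=1) + np.eye(M, k=-1))
b_true = rng.choice([-1.0, 1.0], size=M)
y = H @ b_true + 0.1 * rng.standard_normal(M)

# Verify against exhaustive search over all |A|^M sequences
best = max(product((-1.0, 1.0), repeat=M),
           key=lambda b: 2 * np.real(np.asarray(b) @ y)
                         - np.asarray(b) @ H @ np.asarray(b))
assert mlsd(y, H, delta) == list(best)
print("DP sequence matches exhaustive ML search")
```

Only |*A*|^{Δ} states are retained at each step, so the total work grows as *M* |*A*|^{Δ+1} rather than |*A*|^{*M*}.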

MAP detection in this model is also potentially of very high complexity. The *a posteriori* probability distribution of a particular symbol, say *b*_{1}[*i*], is given by

Note that these summations have |*A*|^{*M*−1} terms and thus are of complexity similar to that of the maximization in (1.41) in general. Fortunately, like (1.41), when *H*_{1} is banded, these summations can be computed much more efficiently using a generalized dynamic programming technique that results in *O*(*M* |*A*|^{Δ+1}) complexity (see, e.g., [380]).

The dynamic programs that facilitate (1.41) and (1.45) are of much lower complexity than brute-force computations. However, even this lower complexity is too high for many applications.
A number of lower-complexity algorithms have been devised to deal with such situations. These techniques can be discussed
easily by examining the sufficient statistic vector *y*_{1} of (1.39), which can be written as

where *n*_{1} is a complex Gaussian random vector with independent real and imaginary parts having identical distributions. Equation (1.46) describes a linear model, and the goal of equalization is thus to fit this model with the data vector *b*_{1}. The ML and MAP detectors are two ways of doing this fitting, each of which has exponential complexity with exponent equal
to the bandwidth of *H*_{1}. The essential difficulty of this problem arises from the fact that the vector *b*_{1} takes on values from a discrete set. One way of easing this difficulty is first to fit the linear model without constraining
*b*_{1} to be discrete and then to quantize the resulting (continuous) estimate of *b*_{1} into symbol estimates. In particular, we can use a linear fit, **M** *y*_{1}, as a continuous estimate of *b*_{1}, where **M** is an *M* × *M* matrix. In this way, the *i*th symbol decision is

*b̂*_{1}[*i*] = *q*([**M** *y*_{1}]_{i}),

where [**M** *y*_{1}]_{i} denotes the *i*th component of **M** *y*_{1} and where *q*(·) denotes a quantizer mapping the complex numbers to the symbol alphabet *A*. Various choices of the matrix **M** lead to different linear equalizers. For example, if we choose **M** = *I*_{M}, the *M* × *M* identity matrix, the resulting linear detector is the common matched filter, which is optimal in the absence of ISI. A difficulty with the matched filter is that it ignores the ISI. Alternatively, if *H*_{1} is invertible, the choice **M** = *H*_{1}^{−1} forces the ISI to zero and is thus known as the *zero-forcing equalizer* (ZFE). Note that this would be optimal (i.e., it would give perfect decisions) in the absence of AWGN. A difficulty with the ZFE is that it can significantly enhance the effects of AWGN by placing high gains on some directions in the set of *M*-dimensional complex vectors. A trade-off between these extremes is effected by the *minimum-mean-square-error* (MMSE) *linear equalizer*, which chooses **M** to give an MMSE fit of the model (1.46). Assuming that the symbols are independent of the noise, this results in the choice

**M** = (*H*_{1} + *σ*^{2} *Σ*_{b}^{−1})^{−1},

where *Σ*_{b} denotes the covariance matrix of the symbol vector *b*_{1}. (Typically, this covariance matrix will be in the form of a constant times *I*_{M}.) A number of other techniques for fitting the model (1.46) have been developed, including iterative methods with and without quantization of intermediate results [decision-feedback
equalizers (DFEs)], and so on. For a more detailed treatment of equalization methods, see [396].
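The three linear choices of **M** above can be compared in simulation. The sketch below assumes unit-variance BPSK symbols (so that the MMSE matrix used is (*H*_{1} + *σ*^{2}*I*)^{−1}) and a hypothetical banded *H*_{1}; the correlator-output noise is drawn with covariance *σ*^{2}*H*_{1}, consistent with the model (1.46):

```python
import numpy as np

rng = np.random.default_rng(3)
M, sigma2 = 64, 0.2

# Hypothetical banded ISI cross-correlation matrix H1 (delta = 1)
H1 = np.eye(M) + 0.45 * (np.eye(M, k=1) + np.eye(M, k=-1))

b = rng.choice([-1.0, 1.0], size=M)
# Correlator-output noise n1 has covariance sigma2 * H1
L = np.linalg.cholesky(sigma2 * H1)
y = H1 @ b + L @ rng.standard_normal(M)

detectors = {
    "matched filter": np.eye(M),                               # M = I
    "zero-forcing":   np.linalg.inv(H1),                       # M = H1^{-1}
    "linear MMSE":    np.linalg.inv(H1 + sigma2 * np.eye(M)),  # Sigma_b = I
}
for name, Mmat in detectors.items():
    b_hat = np.sign((Mmat @ y).real)
    print(f"{name}: {np.mean(b_hat != b):.2%} symbol errors")
```

The matched filter suffers from residual ISI, the ZFE from noise enhancement, and the MMSE equalizer balances the two.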

#### 1.3.3 Multiuser Detection

To finish this section we turn finally to the full multiple-access model of (1.9), within which data detection is referred to as *multiuser detection*. This situation is very similar to the ISI channel described above. In particular, we now consider the likelihood function
of the observations *r*(·) conditioned on all symbols of all users. Sorting these symbols first by symbol number and then by user number, we can
collect them in a column vector **b** given as

so that the *n*th element of **b** is given by

where [·]_{K} denotes reduction of the argument modulo *K* and ⌊·⌋ denotes the integer part of the argument. Analogously with (1.38), we can write the corresponding likelihood function as

where **y** is a column vector that collects the set of observables

indexed conformally with **b**, and where **H** denotes the *KM* × *KM* Hermitian cross-correlation matrix of the composite waveforms associated with the symbols in **b**, again with conformal indexing:

Comparing (1.52), (1.53), and (1.54) with their single-user counterparts (1.38), (1.39), and (1.40), we see that **y** is a sufficient statistic for making inferences about **b**, and moreover that such inferences can be made in a manner very similar to that for the single-user ISI channel. The principal difference is one of dimensionality: decisions in the single-user ISI channel involve simultaneous sequence detection with *M* symbols, whereas decisions in the multiple-access channel involve simultaneous sequence detection with *KM* symbols. This, of course, can increase the complexity considerably. For example, the complexity of exhaustive search in ML detection, or of exhaustive summation in MAP detection, is now on the order of |*A*|^{*KM*}. However, as in the single-user case, this complexity can be mitigated considerably if the delay spread of the channel is small. In particular, if the duration of the composite signaling waveforms is *D*, the matrix **H** will be a banded matrix, with nonzero entries confined to a band about the main diagonal whose width is proportional to *K*Δ, where, as before, Δ = ⌈*D*/*T*⌉. This bandedness allows the complexity of both ML and MAP detection to be reduced via dynamic programming to an order that is exponential in *K*Δ rather than in *KM*.

Although further complexity reduction can be obtained in this problem with additional structural constraints on **H** (see, e.g., [380]), the complexity of ML and MAP multiuser detection is not generally reducible. Consequently, as with the equalization of single-user

where **M** is a *KM* × *KM* matrix, [**My**]_{n} denotes the *n*th component of **My**, and where, as before, *q*(·) denotes a quantizer mapping the complex numbers to the symbol alphabet *A*. The choice **M** = **H**^{−1} forces both MAI and ISI to zero and is known as the decorrelating detector, or *decorrelator*. Similarly, the choice

**M** = (**H** + *σ*^{2} **Σ**_{b}^{−1})^{−1},

where **Σ**_{b} denotes the covariance matrix of the symbol vector **b**, is known as the *linear MMSE multiuser detector*. Linear and nonlinear iterative versions of these detectors have also been developed, both to avoid the complexity of inverting *KM* × *KM* matrices and to exploit the finite-alphabet property of the symbols (see, e.g., [520]).
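As a sketch of these linear multiuser detectors in the simplest (synchronous, *M* = 1) case, with a hypothetical *K* = 3 cross-correlation matrix and unit-variance BPSK symbols:

```python
import numpy as np

rng = np.random.default_rng(4)
K, sigma2 = 3, 0.1

# Hypothetical K x K cross-correlation matrix of the users' waveforms
H = np.array([[1.0, 0.4, 0.2],
              [0.4, 1.0, 0.3],
              [0.2, 0.3, 1.0]])

b = rng.choice([-1.0, 1.0], size=K)
L = np.linalg.cholesky(sigma2 * H)     # correlator noise covariance sigma2 * H
y = H @ b + L @ rng.standard_normal(K)

b_dec  = np.sign(np.linalg.inv(H) @ y)                       # decorrelator
b_mmse = np.sign(np.linalg.inv(H + sigma2 * np.eye(K)) @ y)  # linear MMSE
print(b_dec, b_mmse)
```

In the noiseless case the decorrelator recovers the symbols exactly; the MMSE detector trades some residual MAI for reduced noise enhancement.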

As a final issue here we note that all of the discussion above has involved direct processing of continuous-time observations
to obtain a sufficient statistic (in practice, this corresponds to hardware front-end processing), followed by algorithmic
processing to obtain symbol decisions. Increasingly, an intermediate step is of interest: projecting the continuous-time observations onto a large but finite set of orthonormal functions to obtain a set of observables. These observables can then be processed further using digital signal processing (DSP) to determine symbol decisions (perhaps with intermediate calculation of the sufficient statistic); the flexibility of this digital processing is the principal advantage of the approach. A tacit assumption
in this process is that the orthonormal set spans all of the composite signaling waveforms of interest, although this will
often be only an approximation. A prime example of this kind of processing arises in direct-sequence spread-spectrum systems
[see (1.6)], in which the received signal can be passed through a filter matched to the chip waveform and then sampled at the chip
rate to produce *N* samples per symbol interval. These *N* samples can then be combined in various ways (usually, linearly) for data detection. In this way, for example, the linear
equalizer and multiuser detectors discussed above are particularly simple to implement. A significant advantage of this approach
is that this combining can often be done adaptively when some aspects of the signaling waveforms are unknown. For example,
the channel impulse response may be unknown to the receiver, as may the waveforms of some interfering signals. This kind of
processing is a basic element of many of the results discussed in this book and will be revisited in more detail in Chapter 2.
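As a closing sketch of this discrete front end (the signature sequence, noise level, and processing gain below are hypothetical), chip-rate sampling reduces the waveform to *N* samples per symbol, after which detection is a simple linear combining operation:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 16                                    # chips per symbol (processing gain)
code = rng.choice([-1.0, 1.0], size=N)    # hypothetical signature sequence

b = 1.0                                   # one transmitted BPSK symbol
chips = b * code                          # transmitted chip sequence
r = chips + 0.5 * rng.standard_normal(N)  # chip-matched-filter output samples

# Linear combining: here, simple despreading (correlation with the code)
z = code @ r / N
print(np.sign(z))
```

In practice the combining weights need not equal the code itself; they can be adapted when the channel or the interference is unknown, which is the idea developed in Chapter 2.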