1.3 Basic Receiver Signal Processing for Wireless

This book is concerned with the design of advanced signal processing methods for wireless receivers, based largely on the models discussed in preceding sections. Before moving to these methods, however, it is of interest to review briefly some basic elements of signal processing for these models. This is not intended to be a comprehensive treatment, and the reader is referred to [145, 146, 270, 376, 381, 385, 391, 396, 510, 520, 523] for further details.

1.3.1 Matched Filter/RAKE Receiver

We consider first the particular case of the model of (1.9), in which there is only a single user (i.e., K = 1), the channel impulse response g1(·, ·) is known to the receiver, there is no CCI [i.e., i(·) ≡ 0], and the ambient noise is AWGN with spectral height σ². That is, we have the following model for the received signal:

Equation 1.18

\[ r(t) = \sum_{i=0}^{M-1} b_1[i]\, f_{i,1}(t) + n(t), \qquad t \in \mathbb{R} \]


where f_{i,1}(·) denotes the composite waveform of (1.11), given by

Equation 1.19

\[ f_{i,1}(t) = \int_{-\infty}^{\infty} g_1(t,\tau)\, s_{i,1}(t-\tau)\, d\tau \]


Let us further restrict attention, for the moment, to the case in which there is only a single symbol to be transmitted (i.e., M = 1), in which case we have the received waveform

Equation 1.20

\[ r(t) = b_1[0]\, f_{0,1}(t) + n(t) \]


Optimal inferences about the symbol b1[0] in (1.20) can be made on the basis of the likelihood function of the observations, conditioned on the symbol b1[0], which is given in this case by [377]

Equation 1.21

\[ L\big(r(\cdot) \mid b_1[0]\big) = \exp\left\{ \frac{1}{\sigma^2} \left[ 2\Re\left\{ b_1[0]^* \int_{-\infty}^{\infty} r(t)\, f_{0,1}^*(t)\, dt \right\} - \big|b_1[0]\big|^2 \int_{-\infty}^{\infty} \big|f_{0,1}(t)\big|^2\, dt \right] \right\} \]


where the superscript asterisk denotes complex conjugation and ℜ{·} denotes the real part of its argument.

Optimal inferences about the symbol b1[0] can be made, for example, by choosing maximum-likelihood (ML) or maximum a posteriori probability (MAP) values for the symbol. The ML symbol decision is given simply by the argument that maximizes L (r(·) | b1[0]) over the symbol alphabet, A:

Equation 1.22

\[ \hat{b}_1[0] = \arg\max_{b \in \mathcal{A}} L\big(r(\cdot) \mid b_1[0] = b\big) \]


It is easy to see that the corresponding symbol estimate is the solution to the problem

Equation 1.23

\[ \hat{b}_1[0] = \arg\min_{b \in \mathcal{A}} |z - b| \]


where

Equation 1.24

\[ z = \frac{\displaystyle\int_{-\infty}^{\infty} r(t)\, f_{0,1}^*(t)\, dt}{\displaystyle\int_{-\infty}^{\infty} \big|f_{0,1}(t)\big|^2\, dt} \]


Thus, the ML symbol estimate is the closest point in the symbol alphabet to the observable z.

Note that the two simplest and most common choices of symbol alphabet are M-ary phase-shift keying (MPSK) and quadrature amplitude modulation (QAM). In MPSK, the symbol alphabet is

Equation 1.25

\[ \mathcal{A} = \left\{ 1,\; e^{j2\pi/M},\; e^{j4\pi/M},\; \ldots,\; e^{j2\pi(M-1)/M} \right\} \]


or some rotation of this set around the unit circle. (M as used in this paragraph should not be confused with the framelength M.) For QAM, a symbol alphabet containing M × N values is

Equation 1.26

\[ \mathcal{A} = \left\{ a_R + j a_I \;:\; a_R \in \mathcal{A}_R,\; a_I \in \mathcal{A}_I \right\} \]


where A_R and A_I are discrete sets of amplitudes containing M and N points, respectively; for example, for M = N even, a common choice is

Equation 1.27

\[ \mathcal{A}_R = \mathcal{A}_I = \left\{ \pm 1,\, \pm 3,\, \ldots,\, \pm(M-1) \right\} \]


or a scaled version of this choice. A special case of both of these is that of binary phase-shift keying (BPSK), in which A = {−1, +1}. The latter case is the one we consider most often in this treatment, primarily for the sake of simplicity. However, most of the results discussed herein extend straightforwardly to these more general signaling alphabets.

ML symbol estimation [i.e., the solution to (1.23)] is very simple for MPSK and QAM. In particular, since the MPSK symbols correspond to phasors at evenly spaced angles around the unit circle, the ML symbol choice is that whose angle is closest to the angle of the complex number z of (1.24). For QAM, the choices of the real and imaginary parts of the ML symbol estimate are decoupled, with ℜ{b} being chosen to be the closest element of AR to ℜ{z}, and similarly for ℑ{b}. For BPSK, the ML symbol estimate is

Equation 1.28

\[ \hat{b}_1[0] = \mathrm{sign}\big\{ \Re\{z\} \big\} \]


where sign{·} denotes the signum function:

Equation 1.29

\[ \mathrm{sign}\{x\} = \begin{cases} +1, & x > 0 \\ -1, & x < 0 \end{cases} \]

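As a concrete sketch of these slicing rules, the following Python fragment (the function names, test values, and 16-QAM alphabet are our own illustration, not from the text) picks the MPSK point nearest in angle to z, and for QAM quantizes the real and imaginary parts of z separately:

```python
import numpy as np

def mpsk_alphabet(M):
    """MPSK constellation of (1.25): M phasors evenly spaced on the unit circle."""
    return np.exp(2j * np.pi * np.arange(M) / M)

def ml_mpsk(z, M):
    """ML MPSK decision: pick the constellation point whose angle is closest to angle(z)."""
    m = int(np.round(np.angle(z) / (2 * np.pi / M))) % M
    return np.exp(2j * np.pi * m / M)

def ml_qam(z, A_R, A_I):
    """ML QAM decision decouples: quantize real and imaginary parts separately."""
    re = A_R[np.argmin(np.abs(A_R - z.real))]
    im = A_I[np.argmin(np.abs(A_I - z.imag))]
    return re + 1j * im

# 4-PSK decision for an example observable z
z = 0.2 + 0.9j
assert np.isclose(ml_mpsk(z, 4), 1j)

# 16-QAM with A_R = A_I = {-3, -1, +1, +3}, the choice of (1.27) for M = N = 4
amps = np.array([-3.0, -1.0, 1.0, 3.0])
assert ml_qam(2.4 - 0.7j, amps, amps) == (3 - 1j)
```

For unit-modulus MPSK points, choosing the nearest angle is equivalent to choosing the nearest point in Euclidean distance, which is why the angle test implements (1.23).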

MAP symbol detection in (1.20) is also based on the likelihood function of (1.21), after suitable transformation. In particular, if the symbol b1[0] is a random variable, taking values in A with known probabilities, the a posteriori probability distribution of the symbol conditioned on r(·) is given via Bayes’ formula as

Equation 1.30

\[ P\big(b_1[0] = b \mid r(\cdot)\big) = \frac{\pi_b\, L\big(r(\cdot) \mid b_1[0] = b\big)}{\displaystyle\sum_{b' \in \mathcal{A}} \pi_{b'}\, L\big(r(\cdot) \mid b_1[0] = b'\big)}, \qquad b \in \mathcal{A} \]

where π_b = P(b_1[0] = b) denotes the prior probability of the symbol value b.


The MAP criterion specifies a symbol decision given by

Equation 1.31

\[ \hat{b}_1[0] = \arg\max_{b \in \mathcal{A}} P\big(b_1[0] = b \mid r(\cdot)\big) \]


Note that in this single-symbol case, if the symbol values are equiprobable, the ML and MAP decisions are the same.

The structure of the ML and MAP decision rules above shows that the main receiver signal processing task in this single-user, single-symbol, known-channel case is the computation of the term

Equation 1.32

\[ \int_{-\infty}^{\infty} r(t)\, f_{0,1}^*(t)\, dt \]


This structure is called a correlator because it correlates the received signal r(·) with the known composite signaling waveform f_{0,1}(·). This structure can also be implemented by sampling the output of a time-invariant linear filter:

Equation 1.33

\[ \int_{-\infty}^{\infty} r(t)\, f_{0,1}^*(t)\, dt = (r * h)(t)\Big|_{t=0} \]


where * denotes convolution and h is the impulse response of the time-invariant linear filter given by

Equation 1.34

\[ h(t) = f_{0,1}^*(-t) \]


This structure is called a matched filter, since its impulse response is matched to the composite waveform on which the symbol is received. When the composite signaling waveform has finite duration, so that h(t) = 0 for t < −D ≤ 0, the matched-filter receiver can be implemented by sampling at time D the output of the causal filter with the following impulse response:

Equation 1.35

\[ h(t) = f_{0,1}^*(D - t) \]


For example, if the signaling waveform s_{0,1}(t) has duration [0, T] and the channel has delay spread τ_d, the composite signaling waveform will have this property with D = T + τ_d.
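In discrete time, the equivalence between the correlator (1.32) and the sampled matched filter is easy to check numerically. The following sketch, with an arbitrary eight-sample waveform of our own choosing, correlates r against f and also convolves r with the causal matched filter h[n] = f*[D − n], sampling at the end of the waveform:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (our own): f holds samples of the composite waveform, r a noisy copy.
f = rng.standard_normal(8) + 1j * rng.standard_normal(8)
r = 0.7 * f + 0.1 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))

# Correlator (1.32): inner product of r with f
z_corr = np.sum(r * np.conj(f))

# Causal matched filter (1.35): h[n] = f*[D - n] with D = len(f) - 1,
# implemented as convolution and sampled at time D
h = np.conj(f[::-1])
z_mf = np.convolve(r, h)[len(f) - 1]

assert np.allclose(z_corr, z_mf)
```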

A special case of the correlator (1.32) arises for a pure multipath channel in which the channel impulse response is given by (1.10). The composite waveform (1.11) in this case is

Equation 1.36

\[ f_{0,1}(t) = \sum_{l=1}^{L} g_l\, s_{0,1}(t - \tau_l) \]


and the correlator output (1.32) becomes

Equation 1.37

\[ \int_{-\infty}^{\infty} r(t)\, f_{0,1}^*(t)\, dt = \sum_{l=1}^{L} g_l^* \int_{-\infty}^{\infty} r(t)\, s_{0,1}^*(t - \tau_l)\, dt \]


a configuration known as a RAKE receiver. Further details on this basic receiver structure can be found, for example, in [396].
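A toy discrete-time RAKE sketch (the path gains, delays, and sizes are our own assumptions; delays are whole samples for simplicity) illustrates (1.36) and (1.37): each finger correlates against a delayed replica of the signaling waveform, and the fingers are combined with conjugate path weights:

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(16)            # signaling waveform s_{0,1}, 16 samples
gains = np.array([1.0, 0.6, 0.3])      # path gains g_l (assumed known)
delays = np.array([0, 3, 7])           # path delays tau_l, in samples
b = -1.0                               # transmitted BPSK symbol

# Received signal through the pure multipath channel, as in (1.36), plus noise
r = np.zeros(16 + delays.max())
for g, d in zip(gains, delays):
    r[d:d + 16] += b * g * s
r += 0.05 * rng.standard_normal(r.size)

# RAKE receiver (1.37): one correlator "finger" per path, combined with
# the conjugate path gains
z = sum(np.conj(g) * np.dot(r[d:d + 16], s) for g, d in zip(gains, delays))
b_hat = np.sign(np.real(z))
assert b_hat == b
```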

1.3.2 Equalization

We now turn to the situation in which there is more than one symbol in the frame of interest (i.e., when M > 1). In this case we would like to consider the likelihood function of the observations r(·) conditioned on the entire frame of symbols, b1[0], b1 [1], ..., b1[M − 1], which is given by

Equation 1.38

\[ L\big(r(\cdot) \mid \mathbf{b}_1\big) = \exp\left\{ \frac{1}{\sigma^2} \left[ 2\Re\big\{ \mathbf{b}_1^H \mathbf{y}_1 \big\} - \mathbf{b}_1^H \mathbf{H}_1 \mathbf{b}_1 \right] \right\} \]


where the superscript H denotes the conjugate transpose (i.e., the Hermitian transpose), b1 denotes a column vector whose ith component is b1[i], i = 0, 1, ..., M − 1, and y1 denotes a column vector whose ith component is given by

Equation 1.39

\[ y_1[i] = \int_{-\infty}^{\infty} r(t)\, f_{i,1}^*(t)\, dt, \qquad i = 0, 1, \ldots, M-1 \]


and H1 is an M × M Hermitian matrix whose (i, j)th element is the cross-correlation between f_{i,1}(t) and f_{j,1}(t):

Equation 1.40

\[ \big(\mathbf{H}_1\big)_{i,j} = \int_{-\infty}^{\infty} f_{j,1}(t)\, f_{i,1}^*(t)\, dt \]


Since the likelihood function depends on r(·) only through the vector y1 of correlator outputs, this vector is a sufficient statistic for making inferences about the vector b1 of symbols [377].

Maximum-likelihood detection in this situation is given by

Equation 1.41

\[ \hat{\mathbf{b}}_1 = \arg\max_{\mathbf{b} \in \mathcal{A}^M} \left[ 2\Re\big\{ \mathbf{b}^H \mathbf{y}_1 \big\} - \mathbf{b}^H \mathbf{H}_1 \mathbf{b} \right] \]


Note that if H1 is a diagonal matrix (i.e., all of its off-diagonal elements are zero), (1.41) decouples into a set of M independent problems of the single-symbol type (1.22). The solution in this case is correspondingly given by

Equation 1.42

\[ \hat{b}_1[i] = \arg\min_{b \in \mathcal{A}} \big| z_1[i] - b \big|, \qquad i = 0, 1, \ldots, M-1 \]


where

Equation 1.43

\[ z_1[i] = \frac{y_1[i]}{\big(\mathbf{H}_1\big)_{i,i}} \]


However, in the more general case in which there is intersymbol interference, (1.41) will not decouple and the optimization must take place over the entire frame, a problem known as sequence detection.

The problem of (1.41) is an integer quadratic program, which is known to be an NP-complete combinatorial optimization problem [380]. This implies that the complexity of (1.41) is potentially quite high: exponential in the frame length M, which is essentially the complexity order of exhaustive search over the sequence alphabet A^M. This is a prohibitive degree of complexity for most applications, since a typical frame length might be hundreds or even thousands of symbols. Fortunately, this complexity can be mitigated substantially for practical ISI channels. In particular, if the composite signaling waveforms have finite duration D, the matrix H1 is a banded matrix, with nonzero elements only on those diagonals that are no more than Δ = ⌈D/T⌉ diagonals away from the main diagonal (here ⌈·⌉ denotes the smallest integer not less than its argument); that is,

Equation 1.44

\[ \big(\mathbf{H}_1\big)_{i,j} = 0 \qquad \text{for } |i - j| > \Delta \]


This structure of the matrix permits solution of (1.41) with a dynamic program of complexity order O(|A|^{Δ+1} M), as opposed to the O(|A|^M) complexity of direct search. In most situations Δ ≪ M, which implies an enormous savings in complexity (see, e.g., [380]). This dynamic programming solution, which can be structured in various ways, is known as a maximum-likelihood sequence detector (MLSD).
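The banded dynamic program can be sketched as a Viterbi-style recursion. The fragment below (our own illustration: BPSK with a real symmetric banded H1) maximizes 2ℜ{b^H y} − b^H H1 b by keeping one survivor per pattern of the last Δ symbols, and checks the result against exhaustive search on a tiny frame:

```python
import itertools
import numpy as np

def mlsd_viterbi(y, H, delta, alphabet=(-1.0, 1.0)):
    """Maximize 2 b'y - b'H b over b in alphabet^M for real symmetric banded H."""
    M = len(y)
    states = list(itertools.product(alphabet, repeat=delta))
    score = {s: 0.0 for s in states}   # pre-frame symbols do not enter the metric
    path = {s: [] for s in states}
    for i in range(M):
        new_score, new_path = {}, {}
        for s in states:
            for b in alphabet:
                # branch metric: the terms of 2 b'y - b'H b that involve b_i
                inc = 2.0 * b * y[i] - H[i, i] * b * b
                for k, prev in enumerate(reversed(s)):   # prev = b_{i-1-k}
                    j = i - 1 - k
                    if j >= 0:
                        inc -= 2.0 * b * H[i, j] * prev
                ns = (s + (b,))[1:] if delta > 0 else ()
                val = score[s] + inc
                if ns not in new_score or val > new_score[ns]:
                    new_score[ns], new_path[ns] = val, path[s] + [b]
        score, path = new_score, new_path
    return np.array(path[max(score, key=score.get)])

# sanity check against exhaustive search on a tiny frame
rng = np.random.default_rng(2)
M, delta = 6, 2
A = rng.standard_normal((M, M))
H = A @ A.T
H[np.abs(np.subtract.outer(np.arange(M), np.arange(M))) > delta] = 0.0  # enforce (1.44)
y = rng.standard_normal(M)
metric = lambda b: 2.0 * np.dot(b, y) - np.asarray(b) @ H @ np.asarray(b)
brute = max(itertools.product((-1.0, 1.0), repeat=M), key=metric)
assert np.isclose(metric(mlsd_viterbi(y, H, delta)), metric(np.asarray(brute)))
```

The survivor table has |A|^Δ states with |A| branches each, which is where the O(|A|^{Δ+1} M) complexity comes from.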

MAP detection in this model is also potentially of very high complexity. The a posteriori probability distribution of a particular symbol, say b1[i], is given by

Equation 1.45

\[ P\big(b_1[i] = b \mid r(\cdot)\big) = \frac{\displaystyle\sum_{\mathbf{b} \in \mathcal{A}^M :\, b_i = b} L\big(r(\cdot) \mid \mathbf{b}\big)\, P(\mathbf{b})}{\displaystyle\sum_{\mathbf{b} \in \mathcal{A}^M} L\big(r(\cdot) \mid \mathbf{b}\big)\, P(\mathbf{b})} \]


Note that these summations have |A|^M terms and thus are of complexity similar to that of the maximization in (1.41) in general. Fortunately, as with (1.41), when H1 is banded these summations can be computed much more efficiently using a generalized dynamic programming technique that results in O(|A|^{Δ+1} M) complexity (see, e.g., [380]).

The dynamic programs that facilitate (1.41) and (1.45) are of much lower complexity than brute-force computations. However, even this lower complexity is too high for many applications. A number of lower-complexity algorithms have been devised to deal with such situations. These techniques can be discussed easily by examining the sufficient statistic vector y1 of (1.39), which can be written as

Equation 1.46

\[ \mathbf{y}_1 = \mathbf{H}_1 \mathbf{b}_1 + \mathbf{n}_1 \]


where n1 is a complex Gaussian random vector with independent real and imaginary parts having identical N(0, (σ²/2)H1) distributions. Equation (1.46) describes a linear model, and the goal of equalization is thus to fit this model with the data vector b1. The ML and MAP detectors are two ways of doing this fitting, each of which has exponential complexity with exponent equal to the bandwidth of H1. The essential difficulty of this problem arises from the fact that the vector b1 takes on values from a discrete set. One way of easing this difficulty is first to fit the linear model without constraining b1 to be discrete, and then to quantize the resulting (continuous) estimate of b1 into symbol estimates. In particular, we can use a linear fit, My1, as a continuous estimate of b1, where M is an M × M matrix. In this way, the ith symbol decision is

Equation 1.47

\[ \hat{b}_1[i] = q\big( [\mathbf{M}\mathbf{y}_1]_i \big) \]


where [My1]_i denotes the ith component of My1 and q(·) denotes a quantizer mapping the complex numbers to the symbol alphabet A. Various choices of the matrix M lead to different linear equalizers. For example, if we choose M = I_M, the M × M identity matrix, the resulting linear detector is the common matched filter, which is optimal in the absence of ISI. A difficulty with the matched filter is that it ignores the ISI. Alternatively, if H1 is invertible, the choice M = H1^{-1} forces the ISI to zero,

Equation 1.48

\[ \mathbf{H}_1^{-1} \mathbf{y}_1 = \mathbf{b}_1 + \mathbf{H}_1^{-1} \mathbf{n}_1 \]


and is thus known as the zero-forcing equalizer (ZFE). Note that this would be optimal (i.e., it would give perfect decisions) in the absence of AWGN. A difficulty with the ZFE is that it can significantly enhance the effects of AWGN by placing high gains on some directions in the set of M-dimensional complex vectors. A trade-off between these extremes is effected by the minimum-mean-square-error (MMSE) linear equalizer, which chooses M to give an MMSE fit of the model (1.46). Assuming that the symbols are independent of the noise, this results in the choice

Equation 1.49

\[ \mathbf{M} = \left( \mathbf{H}_1 + \sigma^2 \boldsymbol{\Sigma}_b^{-1} \right)^{-1} \]


where Σ_b denotes the covariance matrix of the symbol vector b1. (Typically, this covariance matrix will be a constant times I_M.) A number of other techniques for fitting the model (1.46) have been developed, including iterative methods with and without quantization of intermediate results [decision-feedback equalizers (DFEs)], and so on. For a more detailed treatment of equalization methods, see [396].
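The three linear choices of M can be compared directly on the model (1.46). In the sketch below (a toy channel of our own construction, BPSK symbols, Σ_b = I), the matched filter, zero-forcing, and MMSE equalizers differ only in the matrix applied to y1 before the quantizer:

```python
import numpy as np

rng = np.random.default_rng(4)
M, sigma2 = 8, 0.1
A = rng.standard_normal((M, M))
H1 = A @ A.T + M * np.eye(M)          # a well-conditioned correlation matrix
b1 = rng.choice([-1.0, 1.0], size=M)
y1 = H1 @ b1 + rng.multivariate_normal(np.zeros(M), sigma2 * H1)  # model (1.46)

equalizers = {
    "matched filter": np.eye(M),                         # M = I_M: ignores ISI
    "zero-forcing": np.linalg.inv(H1),                   # M = H1^{-1}, as in (1.48)
    "MMSE": np.linalg.inv(H1 + sigma2 * np.eye(M)),      # (1.49) with Sigma_b = I_M
}
decisions = {name: np.sign(Mmat @ y1) for name, Mmat in equalizers.items()}

# with this well-conditioned channel and mild noise, ZFE and MMSE recover b1
assert np.all(decisions["zero-forcing"] == b1)
assert np.all(decisions["MMSE"] == b1)
```

On an ill-conditioned H1 the zero-forcing inverse amplifies the noise along weak directions, which is the trade-off the MMSE choice is designed to balance.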

1.3.3 Multiuser Detection

To finish this section we turn finally to the full multiple-access model of (1.9), within which data detection is referred to as multiuser detection. This situation is very similar to the ISI channel described above. In particular, we now consider the likelihood function of the observations r(·) conditioned on all symbols of all users. Sorting these symbols first by symbol number and then by user number, we can collect them in a column vector b given as

Equation 1.50

\[ \mathbf{b} = \big[\, b_1[0], \ldots, b_K[0],\; b_1[1], \ldots, b_K[1],\; \ldots,\; b_1[M-1], \ldots, b_K[M-1] \,\big]^T \]


so that the nth element of b is given by

Equation 1.51

\[ b_n = b_{[n-1]_K + 1}\big[ \lfloor (n-1)/K \rfloor \big], \qquad n = 1, \ldots, KM \]


where [·]_K denotes reduction of the argument modulo K and ⌊·⌋ denotes the integer part of the argument. Analogously with (1.38), we can write the corresponding likelihood function as

Equation 1.52

\[ L\big(r(\cdot) \mid \mathbf{b}\big) = \exp\left\{ \frac{1}{\sigma^2} \left[ 2\Re\big\{ \mathbf{b}^H \mathbf{y} \big\} - \mathbf{b}^H \mathbf{H} \mathbf{b} \right] \right\} \]


where y is a column vector that collects the set of observables

Equation 1.53

\[ \int_{-\infty}^{\infty} r(t)\, f_{i,k}^*(t)\, dt, \qquad k = 1, \ldots, K, \quad i = 0, \ldots, M-1 \]


indexed conformally with b, and where H denotes the KM × KM Hermitian cross-correlation matrix of the composite waveforms associated with the symbols in b, again with conformal indexing:

Equation 1.54

\[ \mathbf{H}_{n,m} = \int_{-\infty}^{\infty} f_{j,l}(t)\, f_{i,k}^*(t)\, dt \]


with

Equation 1.55

\[ k = [n-1]_K + 1, \quad i = \lfloor (n-1)/K \rfloor, \qquad l = [m-1]_K + 1, \quad j = \lfloor (m-1)/K \rfloor \]


Comparing (1.52), (1.53), and (1.54) with their single-user counterparts (1.38), (1.39), and (1.40), we see that y is a sufficient statistic for making inferences about b, and moreover that such inferences can be made in a manner very similar to that for the single-user ISI channel. The principal difference is one of dimensionality: Decisions in the single-user ISI channel involve simultaneous sequence detection with M symbols, whereas decisions in the multiple-access channel involve simultaneous sequence detection with KM symbols. This, of course, can increase the complexity considerably. For example, the complexity of exhaustive search in ML detection, or exhaustive summation in MAP detection, is now on the order of |A|^{KM}. However, as in the single-user case, this complexity can be mitigated considerably if the delay spread of the channel is small. In particular, if the duration of the composite signaling waveforms is D, the matrix H will be a banded matrix with

Equation 1.56

\[ \mathbf{H}_{n,m} = 0 \qquad \text{whenever } \big| \lfloor (n-1)/K \rfloor - \lfloor (m-1)/K \rfloor \big| > \Delta \]


where, as before, Δ = ⌈D/T⌉. This bandedness allows the complexity of both ML and MAP detection to be reduced to O(|A|^{K(Δ+1)} M) via dynamic programming.

Although further complexity reduction can be obtained in this problem with additional structural constraints on H (see, e.g., [380]), the exponential-in-K complexity of ML and MAP multiuser detection is not generally reducible. Consequently, as with the equalization of single-user channels, a number of lower-complexity suboptimal multiuser detectors have been developed. For example, analogously with (1.47), linear multiuser detectors can be written in the form

Equation 1.57

\[ \hat{b}_n = q\big( [\mathbf{M}\mathbf{y}]_n \big) \]


where M is a KM × KM matrix, [My]_n denotes the nth component of My, and, as before, q(·) denotes a quantizer mapping the complex numbers to the symbol alphabet A. The choice M = H^{-1} forces both MAI and ISI to zero and is known as the decorrelating detector, or decorrelator. Similarly, the choice

Equation 1.58

\[ \mathbf{M} = \left( \mathbf{H} + \sigma^2 \boldsymbol{\Sigma}_b^{-1} \right)^{-1} \]


where Σb denotes the covariance matrix of the symbol vector b, is known as the linear MMSE multiuser detector. Linear and nonlinear iterative versions of these detectors have also been developed, both to avoid the complexity of inverting KM × KM matrices and to exploit the finite-alphabet property of the symbols (see, e.g., [520]).
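As a minimal illustration of these linear multiuser detectors, consider a toy synchronous case with one symbol per user (M = 1), so that the KM × KM matrix H reduces to the K × K cross-correlation matrix of the users' signature waveforms. The signatures and sizes below are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
K, N, sigma2 = 3, 32, 0.01
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # unit-energy signatures
H = S.T @ S                                             # signature cross-correlations
b = rng.choice([-1.0, 1.0], size=K)
r = S @ b + np.sqrt(sigma2) * rng.standard_normal(N)    # chip-rate received samples
y = S.T @ r                                             # bank of K correlator outputs

b_decorr = np.sign(np.linalg.inv(H) @ y)                        # M = H^{-1}
b_mmse = np.sign(np.linalg.inv(H + sigma2 * np.eye(K)) @ y)     # (1.58), Sigma_b = I
assert np.all(b_decorr == b) and np.all(b_mmse == b)
```

The decorrelator removes the multiple-access interference exactly but colors and enhances the noise through H^{-1}; the MMSE detector tempers that inverse by the noise level, exactly as in the single-user equalization case.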

As a final issue here we note that all of the discussion above has involved direct processing of continuous-time observations to obtain a sufficient statistic (in practice, this corresponds to hardware front-end processing), followed by algorithmic processing to obtain symbol decisions. Increasingly, an intermediate step is of interest. In particular, it is often of interest to project continuous-time observations onto a large but finite set of orthonormal functions to obtain a set of observables. These observables can then be processed further using digital signal processing (DSP) to determine symbol decisions, perhaps with intermediate calculation of the sufficient statistic; the ability to carry out this processing digitally is the principal advantage of this approach. A tacit assumption in this process is that the orthonormal set spans all of the composite signaling waveforms of interest, although this will often be only an approximation. A prime example of this kind of processing arises in direct-sequence spread-spectrum systems [see (1.6)], in which the received signal can be passed through a filter matched to the chip waveform and then sampled at the chip rate to produce N samples per symbol interval. These N samples can then be combined in various ways (usually, linearly) for data detection. In this way, for example, the linear equalizers and multiuser detectors discussed above are particularly simple to implement. A significant advantage of this approach is that this combining can often be done adaptively when some aspects of the signaling waveforms are unknown. For example, the channel impulse response may be unknown to the receiver, as may the waveforms of some interfering signals. This kind of processing is a basic element of many of the results discussed in this book and will be revisited in more detail in Chapter 2.
