## 4.2 Solution Algorithms

The theoretical basis and approach to developing the solutions of various types of computational problems are briefly described in this section. This discussion is not meant to be exhaustive or comprehensive, but rather introductory in nature. Several alternative techniques are available for solving the various types of problems; the following discussion is in most cases confined to presenting an outline of one of the techniques.

### 4.2.1 Linear Algebraic Equations

It should be clear that systems of linear algebraic equations can range in size from very small (fewer than five equations) to very large (several hundreds), depending on the number of components and complexity of operations. For example, a system consisting of four components being separated in a distillation column containing five stages yields a system of 20 material balance equations. Typically, the system of equations is rearranged into the following matrix form:

[*A*][*X*] = [*B*]    (4.15)

In this equation, [*X*] is the column matrix of *n* variables; [*A*], the *n* × *n* matrix of coefficients; and [*B*], a column matrix of *n* function values.

The *Gauss elimination* technique for solving this system of equations involves progressive elimination of variables from the equations such that at the end only a single linear equation is obtained in one variable. The value of that variable is then obtained and back-substituted progressively into the equations in reverse order of elimination to obtain the values of the rest of the variables that satisfy equation 4.15. For example, if the system consists of *n* equations in variables *x*_{1}, *x*_{2},…, *x _{n}*, then the first step is elimination of variable *x*_{1} from equations 2 to *n*, using equation 1 to express *x*_{1} in terms of the rest of the variables. The result is a system of *n* − 1 equations in *n* − 1 variables *x*_{2}, *x*_{3},…, *x _{n}*. Repeating this procedure then allows us to eliminate variables *x*_{2}, *x*_{3}, and so on, until only an equation in *x _{n}* is left. The value of *x _{n}* is calculated, and reversing the calculations, values of *x*_{n−1}, *x*_{n−2},…, *x*_{1} are obtained [4].
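
The elimination and back-substitution steps can be sketched in a short program. This is a minimal illustration only; the function name and the small test system are invented for the sketch, and a row swap (partial pivoting) is included to avoid division by a zero pivot:

```python
def gauss_solve(A, b):
    """Solve [A][X] = [B] by Gauss elimination with back-substitution."""
    n = len(b)
    A = [row[:] for row in A]  # work on copies; leave caller's data intact
    b = b[:]
    # Forward elimination: zero out coefficients below the diagonal
    for k in range(n - 1):
        # Partial pivoting: bring the row with the largest pivot to row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back-substitution in reverse order of elimination
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Example: three equations in three unknowns
A = [[4.0, -2.0, 1.0],
     [-2.0, 4.0, -2.0],
     [1.0, -2.0, 4.0]]
b = [11.0, -16.0, 17.0]
print(gauss_solve(A, b))  # → [1.0, -2.0, 3.0]
```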

Iterative procedures offer an alternative to elimination techniques. The *Gauss-Seidel* method involves assuming an initial solution by guessing the values for the variables. It is often convenient to assume that all the variables are 0. Based on this initial guess, the values of the variables are recalculated using the system of equations: *x*_{1} is calculated from the first equation, and its value is updated in the solution matrix; *x*_{2} is calculated from the second equation; and so on. The steps are repeated until the values converge for each variable [8]. The Gauss-Seidel method is likely to be more efficient than the elimination method for systems containing a very large number of equations or systems of equations with a sparse coefficient matrix, that is, where the majority of coefficients are zero [9].
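
The iteration described above, starting from a guess of all zeros and updating each variable in turn, can be sketched as follows (the function name, tolerance, and test system are assumptions of this sketch; the method converges reliably for diagonally dominant systems):

```python
def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Iteratively recompute each x_i from its own equation until convergence."""
    n = len(b)
    x = [0.0] * n  # initial guess: all variables zero
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            max_change = max(max_change, abs(new - x[i]))
            x[i] = new  # updated value is used immediately in later equations
        if max_change < tol:
            return x
    raise RuntimeError("did not converge within max_iter iterations")

# Diagonally dominant system; exact solution is x = (1, 1, 1)
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
print([round(v, 6) for v in gauss_seidel(A, b)])  # → [1.0, 1.0, 1.0]
```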

Many sophisticated variations of the elimination and iteration techniques are available for the solution. One other solution technique involves matrix inversion and multiplication. The effectiveness of these and other solution techniques depends on the nature of the system of equations: certain techniques may work better in some situations, whereas it might be appropriate to use alternative techniques in other instances.

### 4.2.2 Polynomial and Transcendental Equations

The complexity of solving polynomial and transcendental equations increases with increasing nonlinearity. Quadratic equations can be readily solved using the quadratic formula, provided such equations can be rearranged into the appropriate form. Formulas exist for obtaining the roots of a cubic equation, but these are rarely used. No comparably simple formulas are available for the solution of higher-order polynomials and transcendental equations.

These equations are typically solved by guessing a solution (root) and refining the value of the root on the basis of the behavior of the function. The principle of the *Newton-Raphson* technique, one of the most common techniques used for determining the roots of an equation, is represented by equation 4.16 [4]:

*x*_{n+1} = *x _{n}* − *f*(*x _{n}*)/*f*′(*x _{n}*)    (4.16)

Here, *x _{n}* and *x*_{n+1} are the old and new values of the root; *f*(*x _{n}*) and *f*′(*x _{n}*) are the values of the function and its derivative, respectively, evaluated at the old root.

The calculations are repeated iteratively; that is, so long as the values of the roots do not converge, the new root is reset as the old root and a newer value of the root is evaluated. Clearly, the new root will equal the old root when the function value is zero. In practice, the two values do not coincide exactly, but a tolerance value is defined for convergence. For example, the calculations may be stopped when the two values differ by less than 0.1% (or some other acceptable criterion).

The computations for this technique depend not only on the function value but also on its behavior (derivative) at the root estimate. The initial guess is extremely important, as the search for the root proceeds on the basis of the function and derivative values at this point. A proper choice of the initial guess will yield a quick solution, whereas an improper choice may lead to the failure of the technique.
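
The iteration of equation 4.16 can be sketched briefly. The function name, tolerance, and example equation below are assumptions of this sketch, not part of the text:

```python
def newton_raphson(f, fprime, x0, tol=1e-8, max_iter=50):
    """Refine a root estimate via x_new = x_old - f(x_old)/f'(x_old)."""
    x_old = x0
    for _ in range(max_iter):
        x_new = x_old - f(x_old) / fprime(x_old)
        # Converged when successive root estimates agree within tolerance
        if abs(x_new - x_old) < tol:
            return x_new
        x_old = x_new
    raise RuntimeError("did not converge from this initial guess")

# Example: the real root of f(x) = x**3 - x - 2 (≈ 1.5214)
root = newton_raphson(lambda x: x**3 - x - 2,
                      lambda x: 3 * x**2 - 1,
                      x0=1.5)
print(round(root, 4))  # → 1.5214
```

Note that a poor initial guess (for example, one near a point where *f*′ is zero) can send the iteration far from the desired root, which is the failure mode described above.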

The iterative successive substitution method can also be used to solve such equations [9]. The method involves rearranging the equation *f*(*x*) = 0 in the form *x* = *g*(*x*). The iterative solution algorithm can then be represented by the following equation:

*x*_{i+1} = *g*(*x _{i}*)

Here, *x*_{i+1} is the new value of the root, which is calculated from the old value of the root *x _{i}*. Each successive value of *x* should be closer to the actual solution of the equation. The key to the success of the method is the proper rearrangement of the equation, as it is possible for the values to diverge away from the solution rather than converge toward it.
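
A brief sketch of this iteration follows; the function name and the particular rearrangement used in the example are assumptions of the sketch. The same equation solved earlier by Newton-Raphson is rearranged here as *x* = *g*(*x*):

```python
def successive_substitution(g, x0, tol=1e-8, max_iter=200):
    """Iterate x = g(x) until successive values converge."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration diverged or converged too slowly")

# f(x) = x**3 - x - 2 = 0 rearranged as x = (x + 2)**(1/3).
# This rearrangement converges; the alternative x = x**3 - 2
# diverges from the same initial guess.
root = successive_substitution(lambda x: (x + 2) ** (1.0 / 3.0), x0=1.5)
print(round(root, 4))  # → 1.5214
```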

Finding the roots of polynomial equations presents a particular challenge. An *n*th-order polynomial will have *n* roots, which may or may not be distinct and may be real or complex. The solution techniques described previously may be able to find only a single root for a given initial guess. The polynomial needs to be *deflated*, that is, its order progressively reduced by factoring out each root discovered, to find all *n* roots. It should be noted that in engineering applications, only one root may be of interest, the others being needed only for a mathematically complete solution. For example, the cubic equation of state may have only one real positive root for volume, and that is the only root of interest to the engineer. A complex or negative root, while mathematically correct as an answer, is not needed by the engineer.

### 4.2.3 Derivatives and Differential Equations

Some computational problems may involve calculating or obtaining derivatives of functions. Depending on the complexity of the function, it may not be possible to obtain an explicit analytical expression for the derivative. Similarly, some of the problems may involve obtaining the derivative from observed data. For example, an experiment conducted for determination of the kinetics of a reaction will yield concentration-time data. An alternative method of determining the rate constant for the reaction involves regressing the rate of the reaction as a function of concentration. The rate of the reaction is defined as −d*C _{A}*/d*t*; thus, the problem involves estimating the derivative from the concentration-time data. One of the numerical techniques for obtaining the derivative is represented by equation 4.17.

(d*C _{A}*/d*t*)_{i} ≈ (*C*_{A,i+1} − *C _{Ai}*)/(*t*_{i+1} − *t _{i}*)    (4.17)

The subscripts refer to the time period. Thus, *C _{Ai}* is the concentration at time *t _{i}*, and so on. The derivative is approximated by the ratio of differences in the quantities. This formula is termed the *forward difference* formula, as the derivative at *t _{i}* is calculated using values at *t _{i}* and *t*_{i+1}. Similarly, there are *backward* and *central difference* formulas that are also applied for the calculation of the derivative [4, 10]. The comparative advantages and disadvantages of the different formulas are beyond the scope of this book and are not discussed further.
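
Equation 4.17 applied pointwise to tabulated data can be sketched as follows. The data here are hypothetical, generated from a first-order decay so that the estimates can be checked against the known derivative −e^{−t}:

```python
import math

def forward_difference(t, c):
    """Estimate dC/dt at each t_i as (C_{i+1} - C_i) / (t_{i+1} - t_i)."""
    return [(c[i + 1] - c[i]) / (t[i + 1] - t[i]) for i in range(len(t) - 1)]

# Hypothetical concentration-time data for C_A = exp(-t)
t = [0.0, 0.1, 0.2, 0.3]
c = [math.exp(-ti) for ti in t]
print([round(r, 3) for r in forward_difference(t, c)])  # → [-0.952, -0.861, -0.779]
```

The estimates lie close to, but not exactly on, the true derivatives (−1.0, −0.905, −0.819 at the first three points); shrinking the time step reduces this discrepancy.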

Similarly, the numerical techniques for integration of ordinary and partial differential equations are beyond the scope of this book. Interested readers may find a convenient starting point in reference [4] for further knowledge of such techniques.

### 4.2.4 Regression Analysis

The common basis for linear as well as multiple regression is the minimization of the sum of squared errors (SSE) between the experimentally observed values and the values predicted by the model, as shown in equation 4.19:

SSE = ∑ (*y _{i}* − *f*(*x _{i}*))^{2}    (4.19)

In this equation, *y _{i}* is the observed value, and *f*(*x _{i}*) is the predicted value based on the presumed function *f*. The function can be linear in a single variable (generally, what is implied by the term linear regression), linear in multiple variables (multiple regression), or polynomial (polynomial regression). Minimization of SSE yields values of model parameters (slope and intercept for a linear function, for example) in terms of the observed data points (*x _{i}*, *y _{i}*). The *least squares* regression formulas are built into many software programs.
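
For a straight line *y* = *ax* + *b*, setting the derivatives of SSE with respect to the slope and intercept to zero gives closed-form expressions in the observed data. A minimal sketch of these formulas (the function name and test data are this sketch's own):

```python
def linear_least_squares(x, y):
    """Slope and intercept that minimize SSE = sum (y_i - (a*x_i + b))**2."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Data lying exactly on y = 2x + 1, so the fit recovers the parameters
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]
print(linear_least_squares(x, y))  # → (2.0, 1.0)
```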

### 4.2.5 Integration

As mentioned previously, numerical computation of an integral is needed when it is not possible to integrate the expression analytically. In other cases, discrete values of the function may be available at various points. Numerical integration of such functions involves summing up the weighted values of the function evaluated or observed at specified points. The fundamental approach is to construct a trapezoid between any two points, with the two parallel sides being the function values and the interval between the independent variable values constituting the height [4, 10]. If the function is evaluated at two points, *a* and *b*, then the following applies:

∫_{*a*}^{*b*} *f*(*x*) d*x* ≈ (*b* − *a*)[*f*(*a*) + *f*(*b*)]/2

Decreasing the interval increases the accuracy of the estimate. Several other refinements are also possible but are not discussed here.
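
The effect of decreasing the interval can be demonstrated with a composite form of the trapezoid rule, in which the range is split into *n* subintervals and the trapezoid areas are summed. The function name and example integral below are assumptions of this sketch:

```python
def trapezoid(f, a, b, n=1):
    """Composite trapezoidal rule over n equal subintervals of [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))      # end points carry half weight
    for i in range(1, n):
        total += f(a + i * h)        # interior points carry full weight
    return total * h

# Integral of x**2 from 0 to 1; the exact value is 1/3
print(trapezoid(lambda x: x * x, 0.0, 1.0, n=1))    # single trapezoid: 0.5
print(trapezoid(lambda x: x * x, 0.0, 1.0, n=100))  # finer intervals: ≈ 0.33335
```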

Section 4.3 describes various software programs that are available for the computations and solutions of the different types of problems just discussed. These software programs feature built-in tools developed on the basis of these algorithms, obviating any need for an engineer to write a detailed program customized for the problem at hand. The engineer merely has to know how to issue the commands in the language understood by the program. The preceding discussion should, however, provide the theoretical basis for the solution as well as illustrate the limitations of the solution technique and possible causes of failure. A course in numerical techniques is often a required core course in graduate chemical engineering programs and sometimes an advanced undergraduate elective course.