**Direct and Iterative Methods**

- By: Admin
- Category: Free Essays

### INTRODUCTION TO DIRECT AND ITERATIVE METHODS

Many important practical problems give rise to systems of linear equations written as the matrix equation

Ax = c,


where A is a given n × n nonsingular matrix and c is an n-dimensional vector; the problem is to find an n-dimensional vector x satisfying the equation.

Such systems of linear equations arise chiefly from discrete approximations of partial differential equations. To solve them, two types of methods are commonly used: direct methods and iterative methods.

Direct methods approximate the solution after a finite number of floating-point operations. Since computer floating-point operations can only be carried out to a given precision, the computed solution usually differs from the exact solution. When a square matrix A is large and sparse, solving Ax = c by direct methods can be impractical, and iterative methods become a feasible alternative.

Iterative methods, based on splitting A into A = M - N, compute successive approximations x(t), obtaining a more accurate solution to the linear system at each iteration step t. This process can be written in the form of the matrix equation

x(t) = Gx(t-1) + g,

where the n × n matrix G = M⁻¹N is the iteration matrix. The iteration process is stopped when some predefined criterion is satisfied; the vector x(t) obtained is an approximation to the solution. Iterative methods of this form are called linear stationary iterative methods of the first degree. The method is of the first degree because x(t) depends explicitly only on x(t-1) and not on x(t-2), ..., x(0). The method is linear because neither G nor g depends on x(t-1), and it is stationary because neither G nor g depends on t. There are also linear stationary iterative methods of the second degree, represented by the matrix equation

x(t) = Mx(t-1) - Nx(t-2) + h.
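A first-degree stationary iteration can be sketched in a few lines. The following pure-Python example (the function names and the 2×2 system are illustrative choices, not from the source) builds G = M⁻¹N and g = M⁻¹c for the simplest splitting choice M = diag(A), then iterates x(t) = Gx(t-1) + g:

```python
def iteration_matrix(A, c):
    """Build G = M^(-1)N and g = M^(-1)c for the splitting M = diag(A)."""
    n = len(A)
    # N = M - A, so N is zero on the diagonal and -A[i][j] off it
    G = [[0.0 if j == i else -A[i][j] / A[i][i] for j in range(n)]
         for i in range(n)]
    g = [c[i] / A[i][i] for i in range(n)]
    return G, g

def stationary_solve(A, c, iters=60):
    G, g = iteration_matrix(A, c)
    n = len(c)
    x = [0.0] * n                       # x(0): initial guess
    for _ in range(iters):              # x(t) = G x(t-1) + g
        x = [sum(G[i][j] * x[j] for j in range(n)) + g[i] for i in range(n)]
    return x

A = [[4.0, 1.0], [2.0, 5.0]]            # diagonally dominant, so the iteration converges
c = [11.0, 19.0]
x = stationary_solve(A, c)              # converges to the exact solution (2, 3)
```

Because A is strictly diagonally dominant, the spectral radius of G is below 1 and the iterates converge; other choices of M (lower triangle of A, scaled combinations) give the other stationary methods discussed below.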

### HISTORY OF DIRECT AND ITERATIVE METHODS

### O Direct methods to solve linear systems

Direct methods for solving linear systems go back to the Gaussian elimination method of Carl Friedrich Gauss (1777-1855). Later, André-Louis Cholesky gave a factorization method for symmetric positive definite matrices.

### O Iterative methods for non-linear equations

The Newton-Raphson method is an iterative method for solving nonlinear equations, named after Isaac Newton (1643-1727) and Joseph Raphson (1648-1715).

### O Iterative methods for linear equations

The standard iterative methods in use are the Gauss-Jacobi and Gauss-Seidel methods. Carl Friedrich Gauss (1777-1855) was a very famous mathematician who worked in both pure and applied mathematics. Carl Gustav Jacob Jacobi (1804-1851) is well known, for instance, for the Jacobian, the determinant of the matrix of partial derivatives. He also worked on iterative methods, leading to the Gauss-Jacobi method.

Another iterative method is the Chebyshev method, based on the orthogonal polynomials bearing the name of Pafnuty Lvovich Chebyshev (1821-1894). The Gauss-Jacobi and Gauss-Seidel methods use a very simple polynomial to approximate the solution; the Chebyshev method uses an optimal polynomial.

### DIRECT AND ITERATIVE METHODS

Direct methods compute the solution to a problem in a finite number of steps. These methods would give the exact answer if they were performed in infinite-precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming.

In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence criterion is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite-precision arithmetic these methods would not, in general, reach the solution within a finite number of steps. Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.

Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.

In the case of a system of linear equations, the two main classes of iterative methods are the stationary iterative methods and the more general Krylov subspace methods.

### O Stationary iterative methods

Stationary iterative methods solve a linear system with an operator approximating the original one; based on a measurement of the error (the residual), they form a correction equation, and this process is repeated. While these methods are simple to derive, implement, and analyse, convergence is only guaranteed for a limited class of matrices. Examples of stationary iterative methods are the Jacobi method, the Gauss-Seidel method, and the successive over-relaxation (SOR) method.

### O Krylov subspace methods

Krylov subspace methods form an orthogonal basis of the sequence of successive matrix powers times the initial residual (the Krylov sequence). The approximations to the solution are then formed by minimizing the residual over the subspace formed. The archetypal method is the conjugate gradient method (CG). Other methods are the generalized minimal residual method (GMRES) and the biconjugate gradient method (BiCG).
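As a sketch of the Krylov-subspace idea, here is a minimal conjugate gradient loop for a symmetric positive definite system in pure Python (the 2×2 matrix, tolerances, and function name are illustrative, not from the source):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                            # residual r = b - Ax for x = 0
    p = r[:]                            # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))   # step length
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:          # residual small enough: stop early
            break
        # next direction is A-conjugate to the previous ones
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]            # symmetric positive definite
b = [1.0, 2.0]
x = conjugate_gradient(A, b)            # exact solution is (1/11, 7/11)
```

In exact arithmetic CG terminates in at most n steps (here two), which is why it is direct in principle but used as an iterative method in practice.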

### EXAMPLE OF DIRECT METHOD

### GAUSSIAN ELIMINATION METHOD

In linear algebra, the Gaussian elimination method is an algorithm for solving systems of linear equations, finding the rank of a matrix, and calculating the inverse of an invertible square matrix. Gaussian elimination is named after the German mathematician and scientist Carl Friedrich Gauss.

Elementary row operations are used to reduce a matrix to row echelon form. Gauss-Jordan elimination, an extension of this algorithm, reduces the matrix further to reduced row echelon form. Gaussian elimination alone is sufficient for many applications.

Example

Suppose that our goal is to find and describe the solution(s), if any, of the following system of linear equations:

2x + y - z = 8 (L1)

-3x - y + 2z = -11 (L2)

-2x + y + 2z = -3 (L3)

The algorithm is as follows: eliminate x from all equations below L1, and then eliminate y from all equations below L2. This puts the system into triangular form; using back substitution, each unknown can then be solved for.

In the example, x is eliminated from L2 by adding (3/2)L1 to L2, and x is then eliminated from L3 by adding L1 to L3. The result is:

2x + y - z = 8

(1/2)y + (1/2)z = 1

2y + z = 5

Now y is eliminated from L3 by adding -4L2 to L3. The result is:

2x + y - z = 8

(1/2)y + (1/2)z = 1

-z = 1

This result is a system of linear equations in triangular form, and so the first part of the algorithm is complete. The second part, back substitution, consists of solving for the unknowns in reverse order. It can be seen that z = -1.

Then z can be substituted into L2, which can then be solved to obtain y = 3.

Next, z and y can be substituted into L1, which can be solved to obtain x = 2.

The system is solved.

Some systems cannot be reduced to triangular form, yet still have at least one valid solution: for example, if y had not occurred in L2 and L3 after the first step above, the algorithm would have been unable to reduce the system to triangular form. However, it would still have reduced the system to echelon form. In this case, the system does not have a unique solution, as it contains at least one free variable, and the solution set can then be expressed parametrically.

In practice, one does not usually deal with such systems in terms of equations, but instead makes use of the augmented matrix (which is also suitable for computer manipulations). The Gaussian elimination algorithm applied to the augmented matrix of the system above starts with [ 2 1 -1 | 8 ; -3 -1 2 | -11 ; -2 1 2 | -3 ], which at the end of the first part of the algorithm becomes [ 2 1 -1 | 8 ; 0 1/2 1/2 | 1 ; 0 0 -1 | 1 ].

That is, it is in row echelon form.

At the end of the algorithm, if Gauss-Jordan elimination is applied, the matrix becomes [ 1 0 0 | 2 ; 0 1 0 | 3 ; 0 0 1 | -1 ].

That is, it is in reduced row echelon form, or row canonical form.
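The procedure can be condensed into a short routine. This pure-Python sketch (the function name and the partial-pivoting strategy are choices of this illustration, not of the source) performs forward elimination followed by back substitution on the system 2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3, whose solution is x = 2, y = 3, z = -1:

```python
def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix [A | b]
    for col in range(n):
        # partial pivoting: bring the row with the largest pivot to the top
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for row in range(col + 1, n):                  # eliminate entries below the pivot
            f = M[row][col] / M[col][col]
            M[row] = [a - f * p for a, p in zip(M[row], M[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                     # back substitution, last row first
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

x = gauss_solve([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]],
                [8.0, -11.0, -3.0])                    # returns [2.0, 3.0, -1.0]
```

Partial pivoting is not needed for this particular system, but it keeps the elimination numerically stable in general.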

### EXAMPLE OF ITERATIVE METHOD OF SOLUTION

### A. JACOBI METHOD

The Jacobi method is a method of solving a matrix equation on a matrix that has no zeros along its main diagonal (Bronshtein and Semendyayev 1997, p. 892). Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization.

The Jacobi method is easily derived by examining each of the equations in the linear system Ax = b in isolation. Solving the i-th equation for the value of x_i, while assuming the other entries of x remain fixed, gives

x_i(k+1) = (b_i - Σ_{j≠i} a_ij x_j(k)) / a_ii,

which is the Jacobi method.

In this method, the order in which the equations are examined is irrelevant, since the Jacobi method treats them independently. The definition of the Jacobi method can be expressed with matrices as

x(k+1) = D⁻¹(b - (L + U)x(k)),

where D is the diagonal of A and L and U are its strictly lower and strictly upper triangular parts.
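A componentwise Jacobi sweep is a direct transcription of the update rule: every component is recomputed from the previous iterate only. In this pure-Python sketch the 3×3 system is an illustrative choice (strictly diagonally dominant, so the iteration converges):

```python
def jacobi(A, b, iters=60):
    n = len(b)
    x = [0.0] * n                       # initial guess x(0) = 0
    for _ in range(iters):
        # every x_i is computed from the PREVIOUS iterate x, so the
        # equations can be processed in any order (even in parallel)
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[5.0, 1.0, 1.0],
     [1.0, 6.0, 2.0],
     [1.0, 1.0, 4.0]]                   # strictly diagonally dominant
b = [9.0, 17.0, 11.0]
x = jacobi(A, b)                        # converges to (1, 2, 2)
```

Because the whole update uses only the previous iterate, Jacobi parallelizes naturally; Gauss-Seidel, discussed below, trades that property for faster convergence.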

### B. Stationary Iterative Methods

Iterative methods that can be expressed in the simple form

x(k+1) = Bx(k) + c,

where neither B nor c depends upon the iteration count k, are called stationary iterative methods. The four main stationary iterative methods are: the Jacobi method, the Gauss-Seidel method, the successive over-relaxation (SOR) method, and the symmetric successive over-relaxation (SSOR) method.
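Of these four, SOR is not illustrated elsewhere in this essay, so here is a minimal sketch; the relaxation factor omega = 1.1 and the 2×2 system are illustrative choices, and omega = 1 reduces the sweep to Gauss-Seidel:

```python
def sor(A, b, omega=1.1, iters=50):
    """Successive over-relaxation: a Gauss-Seidel sweep blended with the
    previous iterate through the relaxation factor omega."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Gauss-Seidel value for x_i, using already-updated components
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - s) / A[i][i]
            x[i] = (1.0 - omega) * x[i] + omega * gs   # relax toward it
    return x

x = sor([[4.0, 1.0], [2.0, 5.0]], [11.0, 19.0])        # converges to (2, 3)
```

Choosing omega slightly above 1 over-corrects each component, which for suitable matrices shrinks the spectral radius of the iteration matrix below that of plain Gauss-Seidel.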

### C. The Gauss-Seidel Method

We are considering an iterative solution to the linear system

Ax = b,

where A is an n × n sparse matrix, x and b are vectors of length n, and we are solving for x. Iterative solvers are an alternative to direct methods, which attempt to calculate an exact solution to the system of equations. Iterative methods attempt to find a solution to the system of linear equations by repeatedly solving the linear system using approximations to the x vector. Iterations continue until the solution is within a predetermined acceptable bound on the error.

Iterative methods for general matrices include the Gauss-Jacobi and Gauss-Seidel methods, while conjugate gradient methods exist for positive definite matrices. A key issue in the use of iterative methods is the convergence of the technique. Gauss-Jacobi uses all values from the previous iteration, while Gauss-Seidel requires that the most recent values be used in computations. The Gauss-Seidel method has better convergence than the Gauss-Jacobi method, although for dense matrices the Gauss-Seidel method is inherently sequential. The convergence of the iterative method must be examined for the application, along with algorithm performance, to ensure that a useful solution can be found.

The Gauss-Seidel method can be written componentwise as:

x_i(k+1) = (b_i - Σ_{j&lt;i} a_ij x_j(k+1) - Σ_{j&gt;i} a_ij x_j(k)) / a_ii,

where:

- x_i(k) is the i-th unknown in x during the k-th iteration,
- x_i(0) is the initial guess for the i-th unknown in x,
- a_ij is the coefficient of A in the i-th row and j-th column, and
- b_i is the i-th value in b.

Or, in matrix form:

x(k+1) = (D + L)⁻¹(b - Ux(k)),

where:

- x(k) is the k-th iterative solution to x,
- x(0) is the initial guess at x,
- D is the diagonal of A,
- L is the strictly lower triangular part of A,
- U is the strictly upper triangular part of A, and
- b is the right-hand-side vector.

EXAMPLE.

10x1 - x2 + 2x3 = 6,

-x1 + 11x2 - x3 + 3x4 = 25,

2x1 - x2 + 10x3 - x4 = -11,

3x2 - x3 + 8x4 = 15.

Solving for x1, x2, x3 and x4 gives:

x1 = x2/10 - x3/5 + 3/5,

x2 = x1/11 + x3/11 - 3x4/11 + 25/11,

x3 = -x1/5 + x2/10 + x4/10 - 11/10,

x4 = -3x2/8 + x3/8 + 15/8.

Suppose we choose (0, 0, 0, 0) as the initial approximation; then the first approximate solution is given by

x1 = 3/5 = 0.6,

x2 = (3/5)/11 + 25/11 = 3/55 + 25/11 = 2.3272,

x3 = -(3/5)/5 + (2.3272)/10 - 11/10 = -3/25 + 0.23272 - 1.1 = -0.9873,

x4 = -3(2.3272)/8 + (-0.9873)/8 + 15/8 = 0.8789.

The iteration then proceeds as follows:

| x1 | x2 | x3 | x4 |
| --- | --- | --- | --- |
| 0.6 | 2.32727 | -0.987273 | 0.878864 |
| 1.03018 | 2.03694 | -1.01446 | 0.984341 |
| 1.00659 | 2.00356 | -1.00253 | 0.998351 |
| 1.00086 | 2.0003 | -1.00031 | 0.99985 |

The exact solution of the system is (1, 2, -1, 1).
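The iterates above can be reproduced with a short Gauss-Seidel routine in which newly computed components are used immediately within each sweep (the function name is an illustrative choice):

```python
def gauss_seidel(A, b, iters=20):
    n = len(b)
    x = [0.0] * n                        # initial approximation (0, 0, 0, 0)
    for _ in range(iters):
        for i in range(n):
            # uses x_j(k+1) for j < i (already updated this sweep)
            # and x_j(k) for j > i (still from the previous sweep)
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[10.0, -1.0, 2.0, 0.0],
     [-1.0, 11.0, -1.0, 3.0],
     [2.0, -1.0, 10.0, -1.0],
     [0.0, 3.0, -1.0, 8.0]]
b = [6.0, 25.0, -11.0, 15.0]
x = gauss_seidel(A, b)                   # converges to (1, 2, -1, 1)
```

The first sweep produces exactly the values 0.6, 2.3272, -0.9873, 0.8789 computed above, and twenty sweeps reach the exact solution to machine precision.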

### APPLICATIONS OF DIRECT AND ITERATIVE METHODS OF SOLUTION

### FRACTIONAL SPLITTING METHOD OF FIRST ORDER FOR LINEAR EQUATIONS

First we describe the simplest operator splitting, called sequential operator splitting, for the following linear system of ordinary differential equations:

du(t)/dt = Au(t) + Bu(t), t in (0, T),   (3.1)

where the initial condition is u(0) = u_0. The operators A and B are linear and bounded operators in a Banach space.

The sequential operator-splitting method solves two subproblems sequentially, where the different subproblems are connected via the initial conditions. This means that we replace the original problem with the subproblems du*(t)/dt = Au*(t), with u*(t_n) = u_n, followed by du**(t)/dt = Bu**(t), with u**(t_n) = u*(t_{n+1}), both on the interval (t_n, t_{n+1}), where the splitting time step is defined as tau = t_{n+1} - t_n. The approximate solution at the next time level is u_{n+1} = u**(t_{n+1}).

Replacing the original problem with the subproblems usually results in an error, called the splitting error. The splitting error of the sequential operator-splitting method has leading term proportional to the commutator [A, B] = AB - BA of A and B. The splitting error is O(tau) when the operators A and B do not commute; otherwise the method is exact. Hence the sequential operator splitting is called a first-order splitting method.

### THE ITERATIVE SPLITTING

The following algorithm is based on iteration with a fixed splitting discretization step size. On the time interval [t_n, t_{n+1}] we solve the following subproblems consecutively for i = 1, 3, 5, ...:

dc_i(t)/dt = Ac_i(t) + Bc_{i-1}(t), with c_i(t_n) = u_n,   (4.1)

dc_{i+1}(t)/dt = Ac_i(t) + Bc_{i+1}(t), with c_{i+1}(t_n) = u_n,

where c_0(t) is the known split approximation at the time level t_n.

We can generalize the iterative splitting method to a multi-iterative splitting method by introducing new splitting operators, for example spatial operators. We then obtain multi-indices to control the splitting process; each iterative splitting method can be solved independently, while connecting with further steps to the multi-splitting method.
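The first-order splitting error can be observed numerically. The sketch below (pure Python, with illustrative 2×2 matrices and a hand-rolled Taylor-series matrix exponential) compares one sequential-splitting step exp(tau·B)exp(tau·A)u_0 against the unsplit step exp(tau·(A+B))u_0; the gap is of order tau² per step and vanishes when A and B commute:

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_vec(X, v):
    return [sum(X[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def expm(X, terms=25):
    """Matrix exponential via truncated Taylor series (fine for small norms)."""
    n = len(X)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[t / k for t in row] for row in mat_mul(term, X)]   # X^k / k!
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

def split_step(A, B, tau, u):
    """One sequential-splitting step: solve with A first, then with B."""
    sA = expm([[tau * a for a in row] for row in A])
    sB = expm([[tau * b for b in row] for row in B])
    return mat_vec(sB, mat_vec(sA, u))

def exact_step(A, B, tau, u):
    """Unsplit reference step with exp(tau * (A + B))."""
    AB = [[A[i][j] + B[i][j] for j in range(len(A))] for i in range(len(A))]
    return mat_vec(expm([[tau * a for a in row] for row in AB]), u)

A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0, 0.0], [1.0, 0.0]]           # AB != BA, so a splitting error appears
u0 = [1.0, 1.0]
tau = 0.1
err = max(abs(a - b) for a, b in zip(split_step(A, B, tau, u0),
                                     exact_step(A, B, tau, u0)))
```

For these matrices the commutator [A, B] is nonzero and the one-step gap comes out near (tau²/2)·[B, A]u_0, i.e. about 5e-3 for tau = 0.1; replacing B with any matrix that commutes with A drives the gap to roundoff level.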
