Finding special points using matrix-free predictor–corrector methods




Applied Mathematics and Computation 185 (2007) 554–563 www.elsevier.com/locate/amc

Hani I. Siyyam a,*, Mohamad A. Samarah b

a Department of Mathematics and Physics, University of Qatar, Doha, Qatar
b Department of Mathematics and Statistics, Jordan University of Science and Technology, Irbid, Jordan

Abstract

One of the main purposes of numerical continuation methods is the accurate determination of certain special points on the solution curve c(s) implicitly defined by a smooth map H : R^{n+1} → R^n. In this paper, matrix-free predictor–corrector methods in conjunction with adaptive steplength control are used to approximate the critical and fixed points of a smooth map. Some numerical results are presented.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Continuation methods; Homotopy methods; Euler–Newton predictor–corrector method; Matrix-free predictor–corrector methods; Adaptive steplength control; Critical points of a smooth map; Fixed points of a smooth map

1. Introduction

In the context of numerical continuation methods, one considers curves which are implicitly defined by an underdetermined system of equations:

H(u) = 0,  (1.1)

where H : R^{n+1} → R^n is a smooth map. We say that a map is smooth if it has as many continuous derivatives as the context of the subsequent discussion requires.

Definition 1.1. A point u is called a regular point of H if the Jacobian H′(u) has maximal rank.

Definition 1.2. Let A be an n × (n + 1) matrix with rank(A) = n. The unique vector t(A) ∈ R^{n+1} satisfying the three conditions

(1) At = 0;
(2) ‖t‖ = 1;

Corresponding author. E-mail address: [email protected] (H.I. Siyyam).

0096-3003/$ - see front matter © 2006 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2006.07.061

(3) det([A; t*]) > 0, where [A; t*] denotes the (n + 1) × (n + 1) matrix obtained by appending the row t* to A,

is called the tangent vector induced by A. Here and in the rest of this paper A* denotes the Hermitian transpose of A and ‖u‖ is the Euclidean norm of u.

Definition 1.3. Let A be an n × (n + 1) matrix with maximal rank. Then the Moore–Penrose inverse of A is defined by

A⁺ = A*(AA*)⁻¹.

Now, assume that H : R^{n+1} → R^n is a smooth map and u ∈ R^{n+1} is a regular point of H such that H(u) = 0. Then it follows from the Implicit Function theorem that the solution set H⁻¹(0) can be locally parametrized about u with respect to some parameter, say s. We thus obtain the solution curve c(s) of the equation H(u) = 0. By a re-parametrization (according to arclength), we obtain a smooth curve c : J → R^{n+1} for some open interval J containing zero such that for all α ∈ J:

(1) c(0) = u;
(2) H(c(α)) = 0;
(3) rank(H′(c(α))) = n;
(4) c′(α) ≠ 0.

By differentiating (2) it follows that the tangent c′(α) satisfies the equation

H′(c(α)) c′(α) = 0,

and hence c′(α) is orthogonal to all rows of H′(c(α)). One way of tracing the solution curve is to perform the predictor–corrector continuation method. The predictor step used is called the Euler predictor, given by

v = u + h t(H′(u)),  (1.2)

where u is a point lying along the solution curve c and h > 0 represents a stepsize. The corrector iteration used is called the Gauss–Newton corrector, given by solving the equation

w = v − H′(v)⁺ H(v)  (1.3)

for w. It is usual to call such a procedure an Euler–Newton predictor–corrector path following method. For more details, see [1,2].

Now assume that A is an n × (n + 1) matrix with rank(A) = n, and that a decomposition

A* = Q [R; 0]  (1.4)

is given, where Q is an (n + 1) × (n + 1) orthogonal matrix, R is a nonsingular n × n upper triangular matrix, and [R; 0] denotes R with a zero row appended. If z denotes the last column of Q, then Az = 0 and ‖z‖ = 1. The remaining task is to choose the sign of z so that det([A; z*]) > 0. Now, appending z to A* as a last column,

(A*, z) = Q [R, 0; 0, 1],  (1.5)

implies

det([A; z*]) = det((A*, z)) = det(Q) det(R).  (1.6)

Hence, t(A) = ±z according as this determinant is positive or negative.


Since A* = Q [R; 0] and A = (R*, 0) Q*, it can be shown that

A⁺ = Q [(R*)⁻¹; 0].  (1.7)
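As a concrete illustration of the QR-based formulas above, here is a small numpy sketch (the function name is ours) that computes t(A) from Definition 1.2 and the action of A⁺ via formula (1.7), for real A:

```python
import numpy as np

def tangent_and_pinv(A):
    # A is n x (n+1) with rank n.  QR-decompose A* = Q [R; 0] as in (1.4)
    # (real case, so A* is just the transpose).
    n = A.shape[0]
    Q, Rfull = np.linalg.qr(A.T, mode='complete')
    R = Rfull[:n, :]                  # n x n nonsingular upper triangular
    z = Q[:, -1]                      # last column of Q: A z = 0, ||z|| = 1
    # orient z so that det([A; t*]) > 0, as required by Definition 1.2
    t = z if np.linalg.det(np.vstack([A, z])) > 0 else -z

    def pinv_apply(b):
        # action of A+ = Q [(R*)^{-1}; 0], formula (1.7)
        y = np.linalg.solve(R.T, b)
        return Q @ np.concatenate([y, [0.0]])

    return t, pinv_apply

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])      # a 2 x 3 example of rank 2
t, pinv_apply = tangent_and_pinv(A)
print(t)                             # a unit vector with A t = 0
```

One Gauss–Newton corrector step (1.3) is then `v - pinv_apply(H(v))`, so the factorization of H′(v) is computed once and reused for each residual.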

The rest of this paper is organized as follows. In Section 2 we present the matrix-free predictor–corrector method. When tracing the solution curve c, one can either use a fixed stepsize at all points, or an adaptive steplength control such as the one based upon asymptotic estimates due to Georg (1983) [2]. In Section 3 we present that adaptive steplength control. One of the main purposes of numerical continuation methods concerns the accurate determination of certain points on the solution curve c(s) which are of special interest. In Section 4, we present two applications of numerical continuation methods, namely, the calculation of critical and fixed points of a smooth map. Shawakfeh [5] used the Euler–Newton predictor–corrector method in conjunction with the adaptive steplength control to approximate the critical and fixed points of a smooth map. In Section 5, we present some samples of our numerical results together with an analysis of the results; comparisons of our results with Shawakfeh's results are also discussed there.

2. Matrix-free predictor–corrector methods

A numerical continuation method traces solution branches of a nonlinear system

H(λ, x) = 0,  H : R × R^n → R^n,  (2.1)

which is typically obtained by discretizing a parameter-dependent operator equation. The method is called matrix-free if the Jacobian of H is not calculated explicitly, but its action on a vector is given via a difference approximation of a directional derivative. Given a nonlinear system of equations F(x) = 0, where F : R^n → R^n, an iterative method for finding its solution is called matrix-free if the Jacobian F′(x) at a point x is only implemented through its action on a vector v; that is, only an implementation of the function (x, v) → F′(x)v is exploited.

Now assume that H(u_n) = 0, where H : R^{n+1} → R^n is a smooth map, and that c is the smooth curve implicitly defined by H. Then the matrix-free predictor–corrector method repeats two steps:

(1) A predictor step generates an approximate point further along the solution curve, typically by linear extrapolation. The predictor used is given by the formula

v_n = u_n + h_n t(H′(u_n)),

where u_n is a point lying along the solution curve c, h_n > 0 represents a stepsize and t_n is the current tangent vector.

(2) A corrector step finds a point approximately on the solution curve and close to the predicted point, given by the formula

u_{n+1} = v_n + Δv_n,

where Δv_n is chosen so that

‖ [H(v_n); 0] + [H′(v_n); t_n*] Δv_n ‖ ≤ η ‖ [H(v_n); 0] ‖

via a transpose-free iterative linear solver, with 1 > η > 0. Here [H′(v_n); t_n*] denotes the (n + 1) × (n + 1) matrix obtained by appending the row t_n* to H′(v_n), and [H(v_n); 0] the vector H(v_n) augmented by a zero component.

Once u_{n+1} has been generated, the new tangent vector t_{n+1} = t(H′(u_{n+1})) which will be used in the next predictor step is approximated via the formula

t_{n+1} := (u_{n+1} − u_n) / ‖u_{n+1} − u_n‖.


In the matrix-free predictor–corrector method, it can be seen that [3]:

(1) The corrector step approximately solves the system

H(u) = 0,  t_n*(u − p_{n+1}) = 0

for u, where p_{n+1} denotes the predictor point.

(2) A new approximate tangent can also be found by solving the linear system

H′(u_{n+1}) t = 0,  t_n* t = 1

for t and then setting t_{n+1} := t / ‖t‖.
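To make the Jacobian-action idea concrete, here is a small numpy sketch (function names are ours). For clarity the bordered matrix is assembled column by column from directional derivatives; a genuinely matrix-free code would instead hand `jac_action` to a transpose-free iterative linear solver such as GMRES without ever forming the matrix:

```python
import numpy as np

def jac_action(H, u, w, eps=1e-7):
    # matrix-free action H'(u) w via a forward-difference directional derivative
    return (H(u + eps * w) - H(u)) / eps

def corrector_step(H, v, t, eps=1e-7):
    # One corrector step: solve the bordered system
    #     H'(v) dv = -H(v),   t* dv = 0,
    # assembling the (n+1) x (n+1) matrix from Jacobian actions on basis vectors.
    m = v.size
    A = np.empty((m, m))
    for j in range(m):
        e = np.zeros(m)
        e[j] = 1.0
        A[:-1, j] = jac_action(H, v, e, eps)
        A[-1, j] = t[j]
    rhs = np.concatenate([-H(v), [0.0]])
    return v + np.linalg.solve(A, rhs)

# demo on the circle H(x, y) = x^2 + y^2 - 1, an implicit curve in R^2
H = lambda u: np.array([u[0] ** 2 + u[1] ** 2 - 1.0])
v_pred = np.array([1.0, 0.1])   # predictor point slightly off the curve
t = np.array([0.0, 1.0])        # tangent of the curve at the nearby point (1, 0)
w = corrector_step(H, v_pred, t)
print(abs(H(w)[0]))             # much smaller than |H(v_pred)| = 0.01
```

The row t* in the bordered system pins the correction to the hyperplane orthogonal to the current tangent, which is what makes the square system nonsingular along regular parts of the curve.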

3. Adaptive steplength control

When tracing the solution curve c defined implicitly by a smooth map H : R^{n+1} → R^n, two different kinds of stepsizes can be used: the first approach uses a fixed stepsize at all points, and the second uses an adaptive steplength control. For the latter approach, one can use the steplength adaptation based upon asymptotic expansions due to Georg (1983) [1,2]. The basic idea in this approach is to observe the performance of the corrector procedure in dependence on a given steplength h. This dependence is typically expressed by some asymptotic expansions in h. The aim is then to adapt the steplength in such a way as to strive toward a desired performance. We briefly describe that adaptive steplength control here. Suppose that a point u on the solution curve has been approximated. Further, suppose that a steplength h > 0 and a predictor point

v_0(h) = u + h t(H′(u))  (3.1)

are given. Then a Gauss–Newton iterative corrector process is performed:

v_{i+1}(h) = v_i(h) − H′(v_0(h))⁺ H(v_i(h)).  (3.2)

The sequence {v_i(h)} converges to the next point on the curve. Define the contraction rate of the corrector process, denoted by κ(u, h), to be

κ(u, h) = ‖H′(v_0(h))⁺ H(v_1(h))‖ / ‖H′(v_0(h))⁺ H(v_0(h))‖.  (3.3)

Since Newton's method is locally quadratically convergent, κ(u, h) will decrease if h decreases. The following lemma characterizes the asymptotic behavior of κ(u, h) with respect to h.

Lemma 3.1. Suppose that H″(u)[t(H′(u)), t(H′(u))] ≠ 0. Then

κ(u, h) = κ₂(u)h² + O(h³)  (3.4)

for some constant κ₂(u) ≥ 0 which is independent of h and depends smoothly on u. For the proof, see [1,2].

In order to have an efficient method we want to continually adapt the steplength h so that a nominal prescribed contraction rate κ̃ is maintained. The choice of κ̃ will generally depend upon the nature of the problem at hand, and on the desired security with which we want to traverse the curve: the smaller κ̃ is chosen, the greater the security with which the method will follow the curve. Once κ̃ has been chosen, we consider a steplength h to be suitable if κ(u, h) ≈ κ̃. By using formula (3.4) and neglecting higher order terms we obtain the formula

558

H.I. Siyyam, M.A. Samarah / Applied Mathematics and Computation 185 (2007) 554–563

h̃ = h √( κ̃ / κ(u, h) )  (3.5)

as the steplength for the next predictor step.

For the matrix-free predictor–corrector method, we have redefined the contraction rate of the corrector process to be

κ(u, h) = ‖Δv(h)H(w(h))‖ / ‖Δv(h)H(v(h))‖,  (3.6)

where

v(h) = u + h t(H′(u))  and  w(h) = v(h) + Δv(h),

with u a point lying on the solution curve and h > 0 the stepsize used to generate the new point w on the solution curve. The new steplength for the next predictor step is then chosen, as above, to be h̃ = h √(κ̃ / κ(u, h)), with κ(u, h) given by formula (3.6). We have implemented this modified strategy and tested many different numerical examples; the results we have obtained indicate that it is efficient and works properly.

4. Special points on the curve

One frequent application of path following methods involves homotopy methods. These methods are often resorted to in zero point or fixed point problems when no suitable starting point is available for an iterative method such as Newton's method, or when contraction methods do not apply. In these situations one constructs a homotopy map which deforms a trivial map (or at least one whose solution points are known) into the map of interest. In many applications of numerical homotopy methods, it is possible to avoid degeneracies in the solution curve by introducing suitable parameters. The theoretical basis of this approach lies in Sard's theorem for maps with additional parameters. We present here the following general form.

Theorem 4.1 (Sard). Let A, B, C be smooth manifolds of finite dimensions with dim(A) ≥ dim(C), and let F : A × B → C be a smooth map. Assume that c ∈ C is a regular value of F, i.e. for F(a, b) = c the total derivative F′(a, b) : T_aA × T_bB → T_cC has maximal rank. Here T_aA denotes the tangent space of A at a. Then for almost all b ∈ B (in the sense of some Lebesgue measure on B) the restricted map F(·, b) : A → C has c as a regular value.

4.1. Finding a fixed point of a bounded smooth map

Let f : R^n → R^n be a smooth map which is bounded; then the map f has at least one fixed point. Let us make the assumption that the map x → x − f(x) has zero as a regular value.
This implies that the fixed points of f are isolated. We therefore consider the homotopy

H(x, λ, p) = x − p − λ(f(x) − p).  (4.1)

For the trivial level λ = 0, we obtain the trivial map H(x, 0, p) = x − p, which has the unique zero point p, our starting point. On the target level λ = 1, we obtain the target map H(x, 1, p) = x − f(x), whose zero points are our points of interest, i.e., the fixed points of f. To illustrate how Sard's theorem is typically employed: the Jacobian of H is given by

H′(x, λ, p) = (Id − λf′(x), p − f(x), (λ − 1)Id).  (4.2)

Due to our assumptions, the first n columns of the Jacobian are linearly independent for H(x, λ, p) = 0 and λ = 1, and the last n columns are linearly independent for λ ≠ 1. Consequently, by Sard's theorem we can conclude that for almost all p ∈ R^n, zero is a regular value of the restricted map H(·, ·, p). Considering the solution curve c(s) = (x(s), λ(s)) with c(0) = (p, 0), the solution point (p, 0) is unique for λ = 0. It can be shown that the curve c reaches the level λ = 1 after a finite arclength s₀, i.e., c(s₀) = (x₀, 1), and hence x₀ is a fixed point of f which can be approximated by tracing the curve c.

4.2. Finding critical points of a smooth map

Among the applications of numerical continuation methods is the calculation of critical points of a smooth mapping f : R^n → R. In general, one chooses a smooth mapping g : R^n → R with a known regular critical point a ∈ R^n, i.e., ∇g(a) = 0 and the Hessian ∇²g(a) has full rank. One then formulates a smooth homotopy map H : R^{n+1} → R^n such that

H(0, x) = ∇g(x) and H(1, x) = ∇f(x).  (4.3)

In fact, one can use the convex homotopy

H(λ, x) = (1 − λ)∇g(x) + λ∇f(x).  (4.4)

It can be pointed out here that the above convex homotopy can be replaced by any other homotopy linking f and g. The numerical task then consists of tracing a smooth curve c(s) = (λ(s), x(s)) ∈ H⁻¹(0), with starting point c(0) = (0, a) for some given critical point a of g, and starting tangent ċ(0) = (λ̇(0), ẋ(0)) with λ̇(0) > 0. The aim of course is to trace the curve c until the homotopy level λ = 1 is reached, at which point a critical point of f is obtained. If all critical points of f are regular, then by Sard's theorem it is generally possible to make a choice of g such that zero is a regular value of H. The following result of Allgower and Georg (1980) [1,2] indicates that the continuation method has an appealing property which can permit targeting critical points having a specific Morse index.

Theorem 4.2. Let f, g : R^n → R be smooth functions and let H be the convex homotopy of the form (4.4) which has zero as a regular value. Let c(s) = (λ(s), x(s)) ∈ H⁻¹(0) be the smooth solution curve such that c(0) = (0, a), where a is a regular critical point of g. Suppose that λ(s) is increasing for s ∈ [0, s̄], λ(s̄) = 1, and that the critical point b = x(s̄) of f is regular. Then the critical points a, b of g and f, respectively, have the same Morse index, i.e., the Hessians ∇²g(a) and ∇²f(b) have the same number of negative eigenvalues. For the proof, see [2].

5. Numerical results

In this section, we present two numerical examples [4] to show that the matrix-free predictor–corrector method works efficiently, together with an analysis of the results. Moreover, Shawakfeh [5] used the Euler–Newton predictor–corrector method to approximate fixed and critical points of smooth maps; comparisons of our results with his results are also discussed.

Example 5.1. Let f : R² → R be the smooth map defined by

f(x, y) = 1 + 2x − x² + 4y − 2y²;

then it is easy to verify that the critical point of f is (1, 1).
Define the homotopy

H(λ, v) = (1 − λ)∇g(v) + λ∇f(v),

where v = (x, y) and g(x, y) = 1 − x² − y², whose critical point is (0, 0). Then H = (H₁, H₂) : R³ → R² is the smooth map defined by


H₁(λ, x, y) = 2λ − 2x,
H₂(λ, x, y) = 4λ − 2y − 2λy.

By computing the Hessians ∇²f(v) and ∇²g(v) we see that f and g have the same Morse index 2, where

∇²f = [−2, 0; 0, −4],  ∇²g = [−2, 0; 0, −2].

Now observe that

H(0, x, y) = [−2x, −2y]* = ∇g(x, y)*,  H(1, x, y) = [2 − 2x, 4 − 4y]* = ∇f(x, y)*.

Let u₀ = (0, 0, 0)* be the starting point. We will approximate the critical point of f(x, y) by generating points along the solution curve H⁻¹(0), starting from the point u₀ until the target level λ = 1 is reached, i.e., until u_n = (1, x_n, y_n)* is reached.

First we used the matrix-free predictor–corrector method with fixed stepsize throughout all points generated along the solution curve H⁻¹(0), taking four different stepsizes; the results we have obtained are given in Table 5.1. From Table 5.1, it can be seen that the results are satisfactory, especially when the stepsize is chosen to be small. Moreover, the number of points generated along the solution curve and the flops approximately double when the stepsize h is replaced by h/2. The following notation is used throughout this section:

Approximate: the approximation of the exact critical point of f in Example 5.1, and of the exact fixed point of f in Example 5.2.
Error: the Euclidean norm of the difference between the exact critical (or fixed) point of f and the approximation.
Relative error: the error divided by the Euclidean norm of the exact point.
n: the number of points generated along the solution curve until the stopping criterion is satisfied.
Flops: the number of operations needed to follow the solution curve until the stopping criterion is satisfied.

Shawakfeh [5] considered the same example to approximate the critical points of f, but using the Euler–Newton predictor–corrector method with fixed stepsize throughout all points. The results he obtained are given in Table 5.2.
By comparing our results to Shawakfeh's results, we see that the two sets of results are very close to each other, but the number of flops needed by our method is much less than the number needed for the corresponding Shawakfeh results. For example, when the stepsize is chosen to be 0.002, we required only 157,145 flops to get

Table 5.1
Approximating critical points using the matrix-free predictor–corrector with fixed stepsize

h     | Approximate                          | n   | Error          | Relative error | Flops
0.016 | (1.00956883153909, 1.00476163381156) | 111 | 1.06881 × 10⁻² | 7.55764 × 10⁻³ | 21,625
0.008 | (1.00421598232972, 1.00210355686548) | 221 | 4.71163 × 10⁻³ | 3.33163 × 10⁻³ | 40,985
0.004 | (1.00154459940878, 1.00077170371658) | 441 | 1.72665 × 10⁻³ | 1.22092 × 10⁻³ | 79,705
0.002 | (1.00021016982927, 1.00010507387288) | 881 | 2.34972 × 10⁻⁴ | 1.66150 × 10⁻⁴ | 157,145


Table 5.2
Shawakfeh [5] results for approximating critical points using the Euler–Newton predictor–corrector with fixed stepsize

h     | Approximate                          | n   | Error          | Relative error | Flops
0.016 | (1.00953544540592, 1.00474509938490) | 111 | 1.06509 × 10⁻² | 7.53129 × 10⁻³ | 177,495
0.008 | (1.00420764610455, 1.00209940627296) | 221 | 4.70232 × 10⁻³ | 3.32504 × 10⁻³ | 344,838
0.004 | (1.00154251663645, 1.00077066393725) | 441 | 1.72432 × 10⁻³ | 1.21905 × 10⁻³ | 679,524
0.002 | (1.00020964929640, 1.00010481366114) | 881 | 2.3439 × 10⁻⁴  | 1.65739 × 10⁻⁴ | 1,414,819

an error of 2.34972 × 10⁻⁴ in the approximation of the critical point of f(x, y), while he needed 1,414,819 flops to get an approximation with the same error; so we need only about 11% of his total flops.

Unlike the fixed-stepsize method, the adaptive steplength control presented in Section 3 is known not to be affected much by the choice of initial stepsize. Thus, instead of taking different initial stepsizes, we considered different minimal stepsizes h_min. The minimal stepsizes chosen are 0.001 and 10⁻⁶, and the initial stepsize is taken to be 0.004. We approximated the critical points of f(x, y) using the matrix-free predictor–corrector method with the adaptive steplength control; the results are given in Table 5.3. From Table 5.3, we can see that these results are more satisfactory than those obtained using the matrix-free predictor–corrector with fixed stepsize. For example, the least error in Table 5.1 was 2.34972 × 10⁻⁴, which needs 157,145 flops, while from Table 5.3 we get an error of 4.40372 × 10⁻⁵ with only 9366 flops, which is approximately 6% of that total. Shawakfeh [5] used the Euler–Newton predictor–corrector method with the adaptive steplength control to approximate the critical points of the same smooth map f(x, y). He considered the same minimal stepsizes and the same initial stepsize; the results he obtained are given in Table 5.4.

Table 5.3
Approximating critical points using the matrix-free predictor–corrector with adaptive steplength control

h_min    | Approximate                          | n  | Error          | Relative error | Flops
0.001    | (1.00011816661995, 1.00005907981934) | 25 | 1.32113 × 10⁻⁴ | 9.34178 × 10⁻⁵ | 9094
0.000001 | (1.00003938818385, 1.00001969370408) | 26 | 4.40372 × 10⁻⁵ | 3.11390 × 10⁻⁵ | 9366

Table 5.4
Shawakfeh [5] results for approximating critical points using the Euler–Newton predictor–corrector with adaptive steplength control

h_min    | Approximate                          | n  | Error          | Relative error | Flops
0.001    | (1.00009916592235, 1.00004958050456) | 25 | 1.10870 × 10⁻⁴ | 7.83968 × 10⁻⁵ | 9799
0.000001 | (1.00009916592235, 1.00004958050456) | 26 | 1.10870 × 10⁻⁴ | 7.83968 × 10⁻⁵ | 10,062


By comparing the results obtained in Table 5.3 with the corresponding results in Table 5.4, we see that the two sets of results are very close, with a slight improvement of our results over Shawakfeh's, and still our method needs fewer flops than Shawakfeh needed for the same purpose.

Example 5.2. Let f : R³ → R³ be the smooth map defined by

f(x, y, z) = [ (1/3) cos(yz) + 1/6 ;
               (1/9) √(x² + sin z + 1.06) − 0.1 ;
               −(1/20) e^{−xy} − (10π − 3)/60 ].

Then it is easy to verify that the fixed point of f is (0.5, 0, −π/6)* ≈ (0.5, 0, −0.52359878)*. Define the homotopy

H(λ, v) = v − λf(v),

where v = (x, y, z). Let u₀ = (0, 0, 0, 0)* be the starting point. We will approximate the fixed point of f by generating points along the solution curve c ∈ H⁻¹(0) using the matrix-free predictor–corrector method with fixed stepsize throughout all points, starting from u₀ until the target level λ = 1 is reached, i.e., until u_n = (1, x_n, y_n, z_n)* is reached. The results we have obtained are given in Table 5.5. From Table 5.5, we see that the results are satisfactory. Moreover, the number of points and the flops approximately double from each line to the next. Also, we can see that the approximation gets better by taking a smaller stepsize. Shawakfeh [5] considered the same example for the purpose of approximating the fixed point of the smooth map f(x, y, z). He used the Euler–Newton predictor–corrector method with fixed stepsize throughout all points; the results he obtained are given in Table 5.6. By comparing our results in Table 5.5 to Shawakfeh's results in Table 5.6, we see that the two sets of results are very close, even though the number of flops needed by our method is less than the number of flops he

Table 5.5
Approximating fixed points using the matrix-free predictor–corrector with fixed stepsize

h     | Approximate                                             | n   | Error          | Relative error | Flops
0.016 | (0.50556940388534, 0.00003486354579, −0.52943054476358) | 78  | 8.06406 × 10⁻³ | 1.1138 × 10⁻²  | 35,448
0.008 | (0.50226252291249, 0.00001348074801, −0.52596784120395) | 155 | 3.27592 × 10⁻³ | 4.5248 × 10⁻³  | 67,692
0.004 | (0.50060907726227, 0.00000353786083, −0.52423653529584) | 309 | 8.81887 × 10⁻⁴ | 1.2181 × 10⁻³  | 132,144

Table 5.6
Shawakfeh [5] results for approximating fixed points using the Euler–Newton predictor–corrector with fixed stepsize

h     | Approximate                                             | n   | Error          | Relative error | Flops
0.016 | (0.50543576771066, 0.00003397052875, −0.52929023046908) | 78  | 7.87028 × 10⁻³ | 1.0871 × 10⁻² | 90,874
0.008 | (0.50219569866525, 0.00001307173338, −0.52589777620329) | 155 | 3.17910 × 10⁻³ | 4.3911 × 10⁻³ | 170,721
0.004 | (0.50057566365084, 0.00000334263103, −0.52420152540607) | 309 | 8.33491 × 10⁻⁴ | 1.1513 × 10⁻³ | 271,746


Table 5.7
Approximating fixed points using the matrix-free predictor–corrector with adaptive steplength control

h_min    | Approximate                                              | n  | Error          | Relative error | Flops
0.001    | (0.50004504487316, 0.00000025901824, −0.52364594000324)  | 17 | 6.52195 × 10⁻⁵ | 9.0084 × 10⁻⁵  | 16,216
0.000001 | (0.499999995634899, 0.00000000025120, −0.52359872989335) | 20 | 4.61101 × 10⁻⁸ | 6.3691 × 10⁻⁸  | 18,270

Table 5.8
Shawakfeh [5] results for approximating fixed points using the Euler–Newton predictor–corrector with adaptive steplength control

h_min    | Approximate                                             | n  | Error          | Relative error | Flops
0.001    | (0.53459626171869, 0.00031372266654, −0.55981893667906) | 17 | 5.00889 × 10⁻² | 6.9185 × 10⁻² | 18,065
0.000001 | (0.50000136386162, 0.00000000784868, −0.52360020363463) | 20 | 1.97471 × 10⁻⁶ | 2.7276 × 10⁻⁶ | 20,205
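As a quick sanity check on Example 5.2 (whose map is reconstructed above from the garbled original; it is a standard three-equation fixed-point test system), the claimed fixed point can be verified directly with a minimal numpy sketch:

```python
import numpy as np

def f(v):
    # the map of Example 5.2
    x, y, z = v
    return np.array([
        np.cos(y * z) / 3.0 + 1.0 / 6.0,
        np.sqrt(x * x + np.sin(z) + 1.06) / 9.0 - 0.1,
        -np.exp(-x * y) / 20.0 - (10.0 * np.pi - 3.0) / 60.0,
    ])

u = np.array([0.5, 0.0, -np.pi / 6.0])   # the fixed point stated in the text
print(np.linalg.norm(f(u) - u))          # essentially zero: f(u) = u
```

Indeed, cos(0)/3 + 1/6 = 1/2, √(0.25 − 0.5 + 1.06)/9 − 0.1 = 0.9/9 − 0.1 = 0, and −1/20 − (10π − 3)/60 = −π/6, so u is an exact fixed point.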

needed. For example, we obtained approximately the same error when the stepsize h is taken to be 0.004, with only about half the number of flops he needed in his approach.

Similarly, as in the last example, we used the matrix-free predictor–corrector method with the adaptive steplength control presented in Section 3 to approximate the fixed point of the smooth map f(x, y, z). We took the same minimal stepsizes as above, and the initial stepsize taken here is h = 0.008; the results we have obtained are given in Table 5.7. From Table 5.7, we can see that these results are more satisfactory than the results in Table 5.5. Moreover, the best approximation in Table 5.5, with error 8.81887 × 10⁻⁴, needs 132,144 flops, while in Table 5.7 we get an error of 4.61101 × 10⁻⁸ in the approximation of the fixed point of f(x, y, z) with only 18,270 flops, which is approximately 14% of that total. Shawakfeh used the Euler–Newton predictor–corrector method with the adaptive steplength control presented in Section 3 to approximate the fixed point of f, taking the same minimal stepsizes and initial stepsize as we took; the results he obtained are given in Table 5.8. By comparing our results given in Table 5.7 with the corresponding results of Shawakfeh given in Table 5.8, we see that the results we have obtained are better than Shawakfeh's results, again with fewer flops needed.

References

[1] E.L. Allgower, K. Georg, Numerical Continuation Methods: An Introduction, Springer Series in Computational Mathematics, vol. 13, Springer-Verlag, Heidelberg, 1990.
[2] E.L. Allgower, K. Georg, Numerical path following, in: P.G. Ciarlet, J.L. Lions (Eds.), Handbook of Numerical Analysis, vol. 5, North-Holland, 1997, pp. 3–207.
[3] K. Georg, Matrix-free numerical continuation and bifurcation, Numerical Functional Analysis and Optimization 22 (2001) 303–320.
[4] M.A. Samarah, Finding Special Points by Using the Matrix-Free Numerical Continuation, M.Sc. Thesis, Jordan University of Science and Technology, 2004.
[5] S. Shawakfeh, An Iterative Method for Approximating Critical Points of a Smooth Map Via Continuation Methods, M.Sc. Thesis, Al-Bayt University, 2003.