# Lanczos maximal algorithm for unsymmetric eigenvalue problems

Mohamed Khelifi

Laboratoire d'Analyse Numérique et d'Optimisation, Bâtiment M3, Université des Sciences et Techniques de Lille Flandres Artois, 59655 Villeneuve d'Ascq Cedex, France

Applied Numerical Mathematics 7 (1991) 179-193 (North-Holland)

Abstract

Khelifi, M., Lanczos maximal algorithm for unsymmetric eigenvalue problems, Applied Numerical Mathematics 7 (1991) 179-193.

In this paper we recall the importance of the angles between $p_i$ and $q_i$ ($1 \le i \le n$), the Lanczos vectors, in the computation of eigenvalue approximations of an $n \times n$ matrix $B$. With this result we present an improvement of the Look-ahead Lanczos (LAL) algorithm of Parlett and Taylor that computes the best value in some precise sense for these angles. Experiments with large matrices are described and the Lanczos maximal algorithm is compared with the LAL and standard Lanczos algorithms.

1. Introduction

A large number of applications in applied sciences and engineering require the numerical solution of large unsymmetric matrix eigenvalue problems. When the matrix $B$ is not large, the most popular way to obtain its eigenvalues is the QR algorithm, but as the order $n$ increases the QR algorithm becomes less and less attractive, especially if only a few eigenvalues are wanted. For the large sparse eigenvalue problem, there are two well-known algorithms that generalize the symmetric Lanczos method to unsymmetric matrices: (1) Arnoldi's method, (2) the Lanczos biorthogonalization algorithm.

For an analysis of these algorithms, see Wilkinson [8, pp. 382-394]. More recent analysis can be found in [6,7]. The Lanczos algorithm can be condensed in matrix form as

$$BQ_j = Q_j T_j + r_{j+1} e_j^*, \qquad P_j^* B = T_j P_j^* + e_j s_{j+1}^*, \tag{1}$$

where $Q_j = [q_1, \dots, q_j]$, $P_j = [p_1, \dots, p_j]$, $e_j^* = (0, 0, \dots, 1)$ and

$$T_j = \begin{pmatrix} \alpha_1 & \beta_2 & & 0 \\ \gamma_2 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_j \\ 0 & & \gamma_j & \alpha_j \end{pmatrix}.$$

The algorithm must terminate at the $n$th step with

$$T_n = P_n^* B Q_n, \qquad P_n^* Q_n = I_n,$$

but it may stop sooner at step $j < n$ in two ways: (1) either $r_{j+1} = 0$ or $s_{j+1} = 0$ or both; (2) $r_{j+1} \ne 0$, $s_{j+1} \ne 0$ but $\omega_{j+1} = s_{j+1}^* r_{j+1} = 0$. The first case is very interesting because then the eigenvalues of $T_j$ are also eigenvalues of the matrix $B$. The second case is more serious and is called a breakdown. Parlett and Taylor [5] suggest a way of dealing with the most common types of breakdowns. Their idea is based on the fact that the vectors $r_{j+1}$ and $s_{j+1}$ can still be defined in a certain way even when $r_{j+1}$ and $s_{j+1}$ are orthogonal. Their algorithm is called the LAL (Look-ahead Lanczos) algorithm and may be summarized as follows. At step $j-1$ the $n$-vectors $r_j$ and $s_j$ are computed with (1). Instead of normalizing $r_j$ and $s_j$ to get $q_j$ and $p_j$ we set

$$r_{j+1} = B r_j - \omega_j q_{j-1} \quad\text{and}\quad s_{j+1} = B^* s_j - \omega_j p_{j-1}.$$
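The recurrence just described can be sketched in a short program. The following NumPy sketch is illustrative only (the function name, the particular scaling and the breakdown tolerance are ours, not the paper's); it runs the biorthogonalization recurrence and stops in exactly the breakdown situation that the LAL and LM algorithms are designed to handle.

```python
import numpy as np

def two_sided_lanczos(B, r1, s1, m, tol=1e-12):
    """m steps of the Lanczos biorthogonalization recurrence.

    Produces P, Q with P^T Q = I and P^T B Q tridiagonal; raises on
    breakdown (s^T r ~ 0), the case the look-ahead algorithms address.
    """
    n = B.shape[0]
    Q = np.zeros((n, m)); P = np.zeros((n, m))
    alpha = np.zeros(m); beta = np.zeros(m); gamma = np.zeros(m)
    r, s = r1.astype(float), s1.astype(float)
    q_prev = p_prev = np.zeros(n)
    for j in range(m):
        omega = s @ r
        if abs(omega) < tol * np.linalg.norm(r) * np.linalg.norm(s):
            raise RuntimeError(f"breakdown at step {j}")
        beta[j] = np.linalg.norm(r)      # one admissible scaling:
        gamma[j] = omega / beta[j]       # ||q_j|| = 1 and p_j^T q_j = 1
        q = r / beta[j]; p = s / gamma[j]
        Q[:, j] = q; P[:, j] = p
        alpha[j] = p @ (B @ q)
        # three-term recurrences for the next residual pair
        r = B @ q - alpha[j] * q - gamma[j] * q_prev
        s = B.T @ p - alpha[j] * p - beta[j] * p_prev
        q_prev, p_prev = q, p
    return P, Q

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
P, Q = two_sided_lanczos(B, rng.standard_normal(8), rng.standard_normal(8), 5)
assert np.allclose(P.T @ Q, np.eye(5), atol=1e-8)   # biorthonormality
```

In exact arithmetic $P^T B Q$ is the tridiagonal matrix $T_j$ of (1); the look-ahead algorithms below differ from this sketch only in what they do when the breakdown branch is reached.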

With the notations

$$S = (s_j, s_{j+1}), \quad R = (r_j, r_{j+1}), \quad P = (p_j, p_{j+1}), \quad Q = (q_j, q_{j+1}),$$
$$\omega = s_j^* r_j, \qquad \theta = s_j^* r_{j+1} = s_{j+1}^* r_j, \qquad \zeta = s_{j+1}^* r_{j+1},$$

we have

$$S^* R = W = \begin{pmatrix} \omega & \theta \\ \theta & \zeta \end{pmatrix} = VU,$$

with $U$ upper triangular and $V$ lower triangular. The choice of Parlett and Taylor for the vectors $p_j$, $q_j$, $p_{j+1}$ and $q_{j+1}$ is

$$P = S V^{-*} \quad\text{and}\quad Q = R U^{-1}.$$

The factorization of $W$ is the LU factorization with interchange and may be used only if $\theta \ne 0$.
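A quick numerical illustration of this construction (with made-up data; the paper applies it inside the Lanczos iteration, and uses the interchange only when it is needed): factoring a generic $2 \times 2$ matrix $W = S^*R$ as $W = VU$ and setting $Q = RU^{-1}$, $P = SV^{-*}$ does produce biorthonormal pairs.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
R = rng.standard_normal((n, 2))   # plays (r_j, r_{j+1})
S = rng.standard_normal((n, 2))   # plays (s_j, s_{j+1})
W = S.T @ R

# plain LU of the 2x2 matrix W (no interchange needed for generic data)
U = np.array([[1.0, W[0, 1] / W[0, 0]], [0.0, 1.0]])
V = W @ np.linalg.inv(U)          # lower triangular, W = V U

Q = R @ np.linalg.inv(U)
P = S @ np.linalg.inv(V).T        # real case: V^{-*} = V^{-T}
assert np.allclose(P.T @ Q, np.eye(2))   # P^* Q = V^{-1} W U^{-1} = I
```

The identity $P^*Q = V^{-1} S^* R\, U^{-1} = V^{-1} W U^{-1} = I$ holds for any factorization $W = VU$ with nonsingular factors, which is what the maximal factorization of the next section exploits.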

2. The condition number

Let us now give a result that shows how the eigenvalues of the matrix $B$ are affected by a perturbation of $B$.

Proposition 2.1 (Kahan, Parlett and Jiang [2]). Let $C = B - E$ and let $J = FBF^{-1}$ be the Jordan form of $B$. To any eigenvalue $\mu$ of $C$ there corresponds an eigenvalue $\lambda$ of $B$ such that

$$|\lambda - \mu|^m / (1 + |\lambda - \mu|)^{m-1} \le \| F E F^{-1} \|,$$

where $m$ is the order of the largest Jordan block to which $\lambda$ belongs.

Let us now apply this result to the matrix $T_n$ obtained by the Lanczos algorithm.


Proposition 2.2. In the Lanczos algorithm, suppose $T_n = P_n^*(B - E)Q_n$, where $P_n^* Q_n = I_n$ and $E$ is a perturbation. Then to any eigenvalue $\mu$ of $T_n$ there corresponds an eigenvalue $\lambda$ of $B$ such that

$$|\lambda - \mu|^m / (1 + |\lambda - \mu|)^{m-1} \le \operatorname{cond}(F) \cdot \|P_n\| \cdot \|Q_n\| \cdot \|E\|,$$

with $J = F(P_n^* B Q_n)F^{-1}$ a Jordan form of $P_n^* B Q_n$.

Proof. We apply Proposition 2.1 where $C$, $B$, $E$ are replaced respectively by $T_n$, $P_n^* B Q_n$, $P_n^* E Q_n$, and we use the fact that $P_n^* B Q_n$ and $B$ have the same eigenvalues. □

Remark. We see the influence of $\|P_n\|$ and $\|Q_n\|$ over the eigenvalues of $T_n$ when the matrix $B$ is perturbed.

Proposition 2.3. If the Lanczos vectors $p_i$ and $q_i$ are such that $\|p_i\| = \|q_i\| = 1/\sqrt{\cos\theta_i}$, where $\theta_i = \angle(p_i, q_i)$, then we have

$$\|P_j\| \le \sqrt{\frac{j}{\min_{1\le i\le j} \cos\theta_i}} \quad\text{and}\quad \|Q_j\| \le \sqrt{\frac{j}{\min_{1\le i\le j} \cos\theta_i}}.$$

Proof. $\|P_j\|^2 \le \sum_{i=1}^{j} \|p_i\|^2 = \sum_{i=1}^{j} \frac{1}{\cos\theta_i} \le \frac{j}{\min_{1\le i\le j} \cos\theta_i}$, and similarly for $Q_j$. □

Remark. Because of Proposition 2.2 we want to obtain the smallest possible values of $\|P_j\|$ and $\|Q_j\|$. By Proposition 2.3 a solution is to compute the Lanczos vectors so that $\min_{1 \le i \le j} \cos\theta_i$ has the largest possible value. This is the objective of the next section.

3. The maximal factorization

The beginning of the Lanczos maximal (LM) algorithm is identical to that of the LAL algorithm of Parlett and Taylor, but the factorization of the matrix $W = (s_j, s_{j+1})^*(r_j, r_{j+1})$ differs from the LU factorization and depends on the iteration $j$ of the algorithm. The factorization depends on a scalar $x$:

$$U_x = \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix} \quad\text{and}\quad V_x = W U_x^{-1} = \begin{pmatrix} \omega & \theta - x\omega \\ \theta & \zeta - x\theta \end{pmatrix}, \qquad x \in \mathbb{R}.$$

The matrix $U_x$ is always nonsingular. Then we have:

Proposition 3.1. Let $x \in \mathbb{R}$. If the Lanczos vectors $(p_j, p_{j+1}, q_j, q_{j+1})$ are computed with the factorization $W = V_x U_x$, then

$$\cos\angle(p_j, q_j) = \frac{|\omega\zeta - \theta^2|}{\|r_j\| \cdot \|(\zeta - x\theta)s_j - (\theta - x\omega)s_{j+1}\|},$$

$$\cos\angle(p_{j+1}, q_{j+1}) = \frac{|\omega\zeta - \theta^2|}{\|r_{j+1} - x r_j\| \cdot \|\omega s_{j+1} - \theta s_j\|},$$

and $V_x$ is nonsingular iff the matrix $W$ is nonsingular.


Proof. Since $(q_j, q_{j+1}) = (r_j, r_{j+1})U_x^{-1}$, $(p_j, p_{j+1}) = (s_j, s_{j+1})V_x^{-*}$, and $p_j^* q_j = p_{j+1}^* q_{j+1} = 1$, we obtain the result. □

Remarks.

(a) If, instead of the matrices $U_x$ and $V_x$, we had taken $V_x D^{-1}$ and $D U_x$ as the matrices in the factorization, where $D$ is diagonal, Proposition 3.1 would remain true, because $D$ only makes the norms of the vectors vary but not the cosines of $\angle(p_j, q_j)$ and $\angle(p_{j+1}, q_{j+1})$.

(b) The matrix $U_x$ has been chosen as an upper triangular matrix in order to keep the vector $q_j$ collinear to the vector $r_j$, so as to get an upper Hessenberg matrix $T_j$.

The best choice of $x$ is the one that maximizes $\min\{\cos\angle(p_j, q_j), \cos\angle(p_{j+1}, q_{j+1})\}$. Let us show that such a choice is possible.

Proposition 3.2. Let $f(x) = \cos\angle(p_j, q_j)$ and $g(x) = \cos\angle(p_{j+1}, q_{j+1})$ be the cosines obtained with the factorization $W = V_x U_x$ ($x \in \mathbb{R}$), where $W$ is nonsingular. Then there exists an $\hat{x}$ such that

$$\min\{f(\hat{x}), g(\hat{x})\} = \max_{x \in \mathbb{R}} \min\{f(x), g(x)\}. \tag{3}$$

Proof. Since $W$ is nonsingular, $r_{j+1}$ (respectively $s_{j+1}$) is not collinear to $r_j$ (respectively $s_j$) and therefore

$$\lim_{x \to \infty} \cos\angle(p_{j+1}, q_{j+1}) = \lim_{x \to \infty} \frac{|\omega\zeta - \theta^2|}{\|r_{j+1} - x r_j\| \cdot \|\omega s_{j+1} - \theta s_j\|} = 0.$$

Hence there exists an $\alpha > 0$ such that

$$\max_{x \in [-\alpha, \alpha]} \min\{f(x), g(x)\} = \max_{x \in \mathbb{R}} \min\{f(x), g(x)\}.$$

Since $f$ and $g$ are continuous by Proposition 3.1, the function $h : x \mapsto \min\{f(x), g(x)\}$ is continuous over the compact interval $[-\alpha, \alpha]$, and therefore the existence of $\hat{x}$ is proved because $h$ attains its bounds. □

When the factorization is worked out with $x = \hat{x}$, we will say that the factorization is maximal, whence the name of the LM algorithm.

Proposition 3.3. If $\hat{x}$ is defined by (3), then $\max\{y_1(\hat{x}), y_2(\hat{x})\} = \min_{x \in \mathbb{R}} \max\{y_1(x), y_2(x)\}$, where $y_1$ and $y_2$ are second-degree polynomials.

Proof. Taking the inverses of $\cos^2\angle(p_j, q_j)$ and $\cos^2\angle(p_{j+1}, q_{j+1})$ in the relations of Proposition 3.1, and writing $u = \zeta s_j - \theta s_{j+1}$ and $v = \theta s_j - \omega s_{j+1}$ (so that $(\zeta - x\theta)s_j - (\theta - x\omega)s_{j+1} = u - xv$), we obtain

$$y_1(x) = \frac{1}{\cos^2\angle(p_j, q_j)} = \frac{\|r_j\|^2}{(\omega\zeta - \theta^2)^2}\left( \|v\|^2 x^2 - 2(u^* v)x + \|u\|^2 \right),$$

$$y_2(x) = \frac{1}{\cos^2\angle(p_{j+1}, q_{j+1})} = \frac{\|v\|^2}{(\omega\zeta - \theta^2)^2}\left( \|r_j\|^2 x^2 - 2(r_j^* r_{j+1})x + \|r_{j+1}\|^2 \right),$$

and therefore the value $\hat{x}$ that maximizes $\min\{\cos\angle(p_j, q_j), \cos\angle(p_{j+1}, q_{j+1})\}$ is also the value that minimizes $\max\{y_1(x), y_2(x)\}$. □

3.4. Practical choice of $\hat{x}$. We have

$$y_1'(x) = 0 \iff x = x_1 = \frac{u^* v}{\|v\|^2}, \qquad y_2'(x) = 0 \iff x = x_2 = \frac{r_j^* r_{j+1}}{\|r_j\|^2},$$

and, since $y_1$ and $y_2$ have the same positive leading coefficient $\|r_j\|^2 \|v\|^2 / (\omega\zeta - \theta^2)^2$, the difference $y_1 - y_2$ is affine, so there exists a value $\bar{X} \in \mathbb{R} \cup \{\infty\}$ at which $y_1(\bar{X}) = y_2(\bar{X})$:

$$\bar{X} = \frac{\|u\|^2 \|r_j\|^2 - \|v\|^2 \|r_{j+1}\|^2}{2\left( (u^* v)\|r_j\|^2 - (r_j^* r_{j+1})\|v\|^2 \right)}.$$

As a consequence, $\hat{x}$ is equal to the number that is between the two others among $x_1$, $x_2$ and $\bar{X}$ (see Fig. 1).

[Fig. 1: graphs of the two parabolas $y_1$ and $y_2$; the maximizer $\hat{x}$ is the middle value among $x_1$, $x_2$ and $\bar{X}$.]
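A small numerical sketch of Choice 3.4 (the data are illustrative, and the formulas for $x_1$, $x_2$, $\bar{X}$ are written out as our reading of the text): it builds the pairs of Proposition 3.1 directly from the vectors and checks, against a grid search, that the middle value among $x_1$, $x_2$ and $\bar{X}$ maximizes $\min\{f, g\}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
sj, sj1, rj = rng.standard_normal((3, n))
rj1 = rng.standard_normal(n)
# enforce the Lanczos identity s_j^T r_{j+1} = s_{j+1}^T r_j (= theta)
rj1 += (sj1 @ rj - sj @ rj1) / (sj @ sj) * sj

w, z, th = sj @ rj, sj1 @ rj1, sj @ rj1
u = z * sj - th * sj1            # u = zeta s_j - theta s_{j+1}
v = th * sj - w * sj1            # v = theta s_j - omega s_{j+1}

def cosines(x):
    # Lanczos pairs of the x-factorization (Proposition 3.1)
    pj = ((z - x * th) * sj - (th - x * w) * sj1) / (w * z - th**2)
    pj1 = (w * sj1 - th * sj) / (w * z - th**2)
    qj, qj1 = rj, rj1 - x * rj
    f = (pj @ qj) / (np.linalg.norm(pj) * np.linalg.norm(qj))
    g = (pj1 @ qj1) / (np.linalg.norm(pj1) * np.linalg.norm(qj1))
    return f, g

x1 = (u @ v) / (v @ v)                      # maximizer of f
x2 = (rj @ rj1) / (rj @ rj)                 # maximizer of g
xbar = ((u @ u) * (rj @ rj) - (v @ v) * (rj1 @ rj1)) / \
       (2 * ((u @ v) * (rj @ rj) - (rj @ rj1) * (v @ v)))  # f = g here

xhat = sorted([x1, x2, xbar])[1]            # the middle value
grid = np.linspace(xhat - 5, xhat + 5, 20001)
best = max(min(cosines(x)) for x in grid)
assert min(cosines(xhat)) >= best - 1e-6
```

Because $1/f^2$ and $1/g^2$ are parabolas with the same leading coefficient, $f = g$ at exactly one point, which is what makes the middle-of-three rule work.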


4. The maximal factorization is the best of all

For the study of the maximal factorization, we shall now present a particular factorization made with $x = x_1$ (see Choice 3.4). For the factorization of the matrix $W$ with $x = x_2$ the presentation is the same. First we introduce a proposition that will help us in the geometric interpretation of the $x_1$-factorization.

Proposition 4.1. Let $(p_j, p_{j+1}, q_j, q_{j+1})$ be the Lanczos vectors computed with the $x_1$-factorization. If $P$ is the orthogonal projection over $\operatorname{span}(s_j, s_{j+1})$, then (a) $p_j$ is collinear to $P(q_j)$; (b) $p_j$ is orthogonal to $p_{j+1}$; (c) $p_{j+1}$ is collinear to $P(q_{j+1})$.

Proof. See [3, III.4.3] and [3, III.4.4]. □

Remark 4.2. (1) If we choose $x = x_2$ in the factorization, we can show that the vector $q_{j+1}$ is collinear to the orthogonal projection of $p_{j+1}$ over $\operatorname{span}(r_j, r_{j+1})$, and in this case we have

$$\cos\angle(p_{j+1}, q_{j+1}) = \max\{\cos\angle(p_{j+1}, u) : u \in \operatorname{span}(r_j, r_{j+1})\}.$$

(2) Usually we have $x_1 \ne x_2$, but when $x_1 = x_2$ we can show (see [3, III.4.5]) that $\dim \operatorname{span}(s_j, s_{j+1}, r_j, r_{j+1}) \le 3$, and in this case we have $\cos\angle(p_j, q_j) = 1$ or $\cos\angle(p_{j+1}, q_{j+1}) = 1$.

In order to determine the relation between $\cos\angle(p_j, q_j)$ and $\cos\angle(p_{j+1}, q_{j+1})$ with an $\hat{x}$-factorization we introduce the following definition:

Definition 4.3. Let $\Pi_1$ and $\Pi_2$ be respectively the linear spans of $(u_1, u_2)$ and $(v_1, v_2)$. We assume that $(u_1, u_2)$ (resp. $(v_1, v_2)$) is orthonormal. The matrix

$$\Phi = (v_1, v_2)^* (u_1, u_2)$$

is called the matrix angle between $\Pi_1$ and $\Pi_2$. Moreover, the singular values $\sigma_1$ and $\sigma_2$ of the matrix $\Phi$ are called the (cosines of the) acute principal angles between the subspaces $\Pi_1$ and $\Pi_2$. Then we have $\Phi = X \Sigma Y^*$ with $\Sigma = \operatorname{diag}(\sigma_1, \sigma_2)$, $\sigma_1 \ge \sigma_2$, and it is easy to show (see [3, III.5.5]) that the singular values of $\Phi$ are invariant when the matrix is computed with other orthonormal bases $(v_1', v_2')$ and $(u_1', u_2')$. Then we have:

Proposition 4.4. If the Lanczos vectors $(p_j, p_{j+1}, q_j, q_{j+1})$ are computed with a maximal factorization, then $\sigma_1$ and $\sigma_2$, the singular values of the matrix $\Phi$ between the two planes $\operatorname{span}(p_j, p_{j+1})$ and $\operatorname{span}(q_j, q_{j+1})$, verify

$$\frac{1}{\cos^2\theta_j} + \frac{1}{\cos^2\theta_{j+1}} \le \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2},$$

where $\theta_j = \angle(p_j, q_j)$ and $\theta_{j+1} = \angle(p_{j+1}, q_{j+1})$.


Proof. See [3, III.5.7] and [3, III.5.9]. □
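Definition 4.3 is easy to exercise numerically (illustrative data): with orthonormal bases obtained by QR, the singular values of $\Phi$ are the cosines of the principal angles, and they do not depend on which orthonormal bases are chosen.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 7
U0, _ = np.linalg.qr(rng.standard_normal((n, 2)))  # orthonormal basis of a 2-plane
V0, _ = np.linalg.qr(rng.standard_normal((n, 2)))  # orthonormal basis of another

phi = V0.T @ U0                   # the matrix angle of Definition 4.3
sigma = np.linalg.svd(phi, compute_uv=False)

# invariance: rotating one orthonormal basis leaves the sigmas unchanged
c, s = np.cos(0.7), np.sin(0.7)
G = np.array([[c, -s], [s, c]])
sigma2 = np.linalg.svd((V0 @ G).T @ U0, compute_uv=False)
assert np.allclose(sigma, sigma2)
assert np.all(sigma <= 1 + 1e-12)   # cosines of angles, so at most 1
```

`np.linalg.svd` returns the singular values in decreasing order, so `sigma[0]`, `sigma[1]` play the roles of $\sigma_1 \ge \sigma_2$ in Propositions 4.4 and 4.5.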

Proposition 4.5. With the same notations as in Proposition 4.4, if the Lanczos vectors are computed by a maximal factorization, then

$$\min\{\cos\theta_j, \cos\theta_{j+1}\} \ge \frac{\sigma_1 \sigma_2}{\sqrt{\sigma_1^2 + \sigma_2^2}}.$$

Proof. Let us give the proof for $\cos\theta_j \ge \cos\theta_{j+1}$ (it is identical if $\cos\theta_j \le \cos\theta_{j+1}$). There exists a number $a$ with $0 < a \le 1$ such that $a \cos\theta_j = \cos\theta_{j+1}$. Then, by Proposition 4.4, we obtain

$$\frac{a^2}{\cos^2\theta_{j+1}} + \frac{1}{\cos^2\theta_{j+1}} \le \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2} = \frac{\sigma_1^2 + \sigma_2^2}{\sigma_1^2 \sigma_2^2},$$

hence

$$\cos^2\theta_{j+1} \ge (1 + a^2)\,\frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2} \ge \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2},$$

and we have the result. □

4.1. Conclusions

Taylor [5] showed that when the vector $q_j$ is no longer forced to be collinear to $r_j$, the maximum of $\min\{\cos\angle(p_j, q_j), \cos\angle(p_{j+1}, q_{j+1})\}$ is equal to the value $2\sigma_1\sigma_2/(\sigma_1^2 + \sigma_2^2)$. We see that if the Lanczos vectors are computed by a maximal factorization, then we have

$$\frac{\sigma_1 \sigma_2}{\sqrt{\sigma_1^2 + \sigma_2^2}} \le \min\{\cos\angle(p_j, q_j), \cos\angle(p_{j+1}, q_{j+1})\} \le \frac{2\sigma_1\sigma_2}{\sigma_1^2 + \sigma_2^2},$$

and as a consequence we are very near to the best possible value, although we kept the vector $q_j$ collinear to the vector $r_j$, so that the matrix $T_j$ is an upper Hessenberg matrix.

5. Computation of the matrix $T_j$

As the LAL algorithm, the LM algorithm produces a block tridiagonal matrix $T_j$, written as

$$T_j = \begin{pmatrix} A_1 & \Gamma_2 & & \\ B_2 & A_2 & \ddots & \\ & \ddots & \ddots & \Gamma_i \\ & & B_i & A_i \end{pmatrix}.$$

The matrices $A_i$ are either $1 \times 1$ or $2 \times 2$ and the matrices $B_i$ and $\Gamma_i$ are shaped conformably. As in the LAL algorithm, the iteration $i$ is called a double step if the Lanczos vectors $p_k$, $p_{k+1}$, $q_k$, $q_{k+1}$ are computed in the same iteration $i$, and a single step if only $p_k$, $q_k$ are computed. Since the single step is a step of the standard Lanczos algorithm, we now consider only the case of the double step. Then we have at step $i$:

$$(q_k, q_{k+1}) = (u_1 r_k,\; u_2 r_k + u_3 r_{k+1}), \qquad (p_k, p_{k+1}) = (v_1 s_k + v_2 s_{k+1},\; v_3 s_k + v_4 s_{k+1}), \tag{4}$$

with

$$\begin{pmatrix} u_1 & u_2 \\ 0 & u_3 \end{pmatrix} = U_{\hat{x}}^{-1} D^{-1} \quad\text{and}\quad D = \operatorname{diag}(d_1, d_2).$$


The choice of the scalars $d_1$ and $d_2$ is such that $\|p_t\| = \|q_t\| = 1/\sqrt{\cos\angle(p_t, q_t)}$, $t = k, k+1$. Since we choose $U_{\hat{x}}^{-1} D^{-1}$ as an upper triangular matrix, the matrix $B_i$ has the form

$$B_i = \begin{pmatrix} 0 & + \\ 0 & 0 \end{pmatrix}$$

if the step $(i-1)$ is double, where $+$ stands for a positive quantity, and therefore the matrix $T_j$ is an upper Hessenberg matrix, which is more convenient for finding the eigenvalues of $T_j$.

In the LAL algorithm of Parlett and Taylor we have

$$s_{k+2} = B^* p_k - (q_{k+1}^* B^* p_k) p_{k+1} - (q_k^* B^* p_k) p_k - (q_{k-1}^* B^* p_k) p_{k-1},$$

because the vector $p_k$ is collinear to the vector $s_{k+1}$; but in the LM algorithm neither $p_k$ nor $p_{k+1}$ is usually collinear to the vector $s_{k+1}$, and therefore we can use $p_k$ or $p_{k+1}$ for the computation of $s_{k+2}$.

Choice 1. The vector $p_k$ is used as in the LAL algorithm.

Choice 2. The vector $p_{k+1}$ is used and we have

$$s_{k+2} = B^* p_{k+1} - (q_{k+1}^* B^* p_{k+1}) p_{k+1} - (q_k^* B^* p_{k+1}) p_k - (q_{k-1}^* B^* p_{k+1}) p_{k-1},$$

but in this case we obtain $q_{k-1}^* B^* p_{k+1} = 0$, and therefore with the second choice the vector $s_{k+2}$ is computed with $(n-1)$ additions and $n$ multiplications less.

Since it is better to choose the vector which has the greatest component in the direction of $s_{k+1}$, the choice between $p_k$ and $p_{k+1}$ is directed by the two numbers $v_2$ and $v_4$ of the matrix $V_{\hat{x}}^{-*} D$: for a fixed $\mu > 0$,

$$|v_2| > \mu |v_4| \implies \text{Choice 1}, \qquad |v_2| \le \mu |v_4| \implies \text{Choice 2}.$$

In practice we took $\mu = 2$ for computing $s_{k+2}$, since this favours Choice 2, which needs fewer computations. With this strategy we hope to make an economy in the computation of $s_{k+2}$.

Proposition 5.1. When $s_{k+2}$ is computed with Choice 2, we have

$$r_{k+2}^* B^* p_{k+1} = \frac{v_4}{v_2}\, r_{k+2}^* B^* p_k.$$

Proof. Since $r_{k+2}^* B^* s_k = 0$ and, by (4), $v_4 s_{k+1} = \dfrac{v_4}{v_2}(p_k - v_1 s_k)$, we have

$$r_{k+2}^* B^* p_{k+1} = r_{k+2}^* B^* (v_3 s_k + v_4 s_{k+1}) = r_{k+2}^* B^* (v_4 s_{k+1}) = \frac{v_4}{v_2}\, r_{k+2}^* B^* (p_k - v_1 s_k) = \frac{v_4}{v_2}\, r_{k+2}^* B^* p_k. \;\square$$

5.2. Computation of $A_i$. We have

$$p_k^* B q_k = u_1\, p_k^* B r_k = u_1\, p_k^* r_{k+1} = u_1(v_1 \theta + v_2 \zeta),$$

$$p_{k+1}^* B q_k = u_1\, p_{k+1}^* B r_k = u_1\, p_{k+1}^* r_{k+1} = u_1(v_3 \theta + v_4 \zeta),$$

and in the same way we obtain $p_k^* B q_{k+1}$ and $p_{k+1}^* B q_{k+1}$, the latter involving the inner product $q_{k+1}^* B^* p_k$. Therefore the matrices $B_i$, $\Gamma_i$, $A_i$ may be computed with only one supplementary inner product, $q_{k+1}^* B^* p_k$.

6. The Lanczos maximal algorithm

Let us define the matrices $P_i$ and $Q_i$ by

$$Q_i = q_k,\; P_i = p_k \quad \text{if step } i \text{ is single}, \qquad Q_i = (q_k, q_{k+1}),\; P_i = (p_k, p_{k+1}) \quad \text{if step } i \text{ is double}.$$

Moreover, we set $R_i = (r_k, r_{k+1})$ and $S_i = (s_k, s_{k+1})$ at step $i$. Since step $i$ may be single or double, we note that $i$ is not the dimension of the matrix $T$ produced at step $i$.

6.1. Step $i$ of LM

We know: $P_{i-1}$, $Q_{i-1}$ (both are null if $i = 1$); $Z_i$, a $2 \times 1$ matrix if step $(i-1)$ is double (a $1 \times 1$ matrix if not), $Z_1 = 1$; $r_k$, $s_k$, $\omega = \omega_k = s_k^* r_k$, $\|r_k\|$, $\|s_k\|$.

(1) $r_{k+1} = B r_k - \omega Q_{i-1} Z_i$ and $s_{k+1} = B^* s_k - \omega P_{i-1} Z_i$.

(2) $\theta = s_k^* r_{k+1} = s_{k+1}^* r_k$ and $\zeta = s_{k+1}^* r_{k+1}$; compute the inner products $r_k^* r_{k+1}$, $s_k^* s_{k+1}$ and the norms $\|s_{k+1}\|$, $\|r_{k+1}\|$.

(3) $\varphi_1 = \omega / (\|r_k\| \cdot \|s_k\|)$.

(4) $u = \zeta s_k - \theta s_{k+1}$, $v = \theta s_k - \omega s_{k+1}$,

$$x_1 = \frac{u^* v}{\|v\|^2}, \qquad x_2 = \frac{r_k^* r_{k+1}}{\|r_k\|^2}, \qquad \bar{X} = \frac{\|u\|^2 \|r_k\|^2 - \|v\|^2 \|r_{k+1}\|^2}{2\left( (u^* v)\|r_k\|^2 - (r_k^* r_{k+1})\|v\|^2 \right)};$$

use Choice 3.4 to obtain $\hat{x}$.

(5) $$\psi_1 = \frac{|\omega\zeta - \theta^2|}{\|r_k\| \cdot \|u - \hat{x} v\|}, \qquad \psi_2 = \frac{|\omega\zeta - \theta^2|}{\|v\| \cdot \|r_{k+1} - \hat{x} r_k\|}, \qquad \varphi_2 = \min\{|\psi_1|, |\psi_2|\}.$$

(6) If $|\varphi_1| < \mathrm{Tol}$ and $\varphi_2 < \mathrm{Tol}$, then STOP with an error message.

(7) If $|\varphi_1| \ge 2^{-2} \varphi_2$, then take a single step; otherwise take a double step.

(8) Perform the step itself; each part has a single-step and a double-step variant.

(a) Factorization of $W$. Single step: $\beta_k = \|r_k\| \sqrt{\varphi_1}$, $\gamma_k = \omega/\beta_k$. Double step: $\Delta = \operatorname{diag}(d_1 \|r_k\|,\; d_2 \|r_{k+1} - \hat{x} r_k\|)$, and the maximal factorization $W = V_{\hat{x}} U_{\hat{x}}$ is used through the scaled factors $U_{\hat{x}}^{-1} \Delta^{-1}$ and $V_{\hat{x}}^{-*} \Delta$.

(b) Computation of $P_i$ and $Q_i$. Single step: $q_k = r_k/\beta_k$, $p_k = s_k/\gamma_k$. Double step: $Q_i = R_i U_{\hat{x}}^{-1} \Delta^{-1}$, $P_i = S_i V_{\hat{x}}^{-*} \Delta$.

(c) Computation of $\Gamma_i$ and $B_i$. Single step: $\Gamma_i = Z_i \gamma_k$. Double step: $\Gamma_i = Z_i (\omega,\; \theta - \hat{x}\omega) \Delta^{-1}$. In both cases the new subdiagonal block $B_i$ has the upper Hessenberg form of Section 5.

(d) Computation of the new vectors. Single step: $r_{k+1}$ and $s_{k+1}$, already formed in (1), are kept. Double step:

$$r_{k+2} = (B Q_i - Q_{i-1} \Gamma_i)\binom{0}{1}, \qquad s_{k+2} = (B^* P_i - P_{i-1} B_i^*)\, z,$$

where $z = \binom{1}{0}$ under Choice 1 ($|v_2| > 2|v_4|$) and $z = \binom{0}{1}$ under Choice 2 (otherwise).

(e) Computation of $A_i$. Single step: $\alpha_k = p_k^* B q_k$. Double step: form $A_i$ by Computation 5.2.

(f) Biorthogonalization. Single step: $r_{k+1} \leftarrow r_{k+1} - \alpha_k q_k$ and $s_{k+1} \leftarrow s_{k+1} - \alpha_k p_k$. Double step: $r_{k+2} \leftarrow r_{k+2} - Q_i A_i \binom{0}{1}$ and $s_{k+2} \leftarrow s_{k+2} - P_i A_i^* z$.

(g) Inner products for step $i+1$: compute $\|r_{k+2}\|$, $\|s_{k+2}\|$ and $\omega_{k+2} = s_{k+2}^* r_{k+2}$; in the single-step case the norm of the updated vector is obtained cheaply from $\|s_{k+1}\|^2 - 2 \alpha_k s_k^* s_{k+1} + \alpha_k^2 \|s_k\|^2$.

(h) Computation of $Z_{i+1}$. Single step: $Z_{i+1} = 1/\gamma_k$. Double step:

$$Z_{i+1} = \binom{v_4/v_2}{1} \text{ if } |v_2| > 2|v_4|, \qquad Z_{i+1} = \binom{1}{v_2/v_4} \text{ otherwise}.$$

6.2. Total cost of step $i$

Let us call a VOP (vectorial operation) a vectorial addition or a scalar-vector product. With this notation an inner product between two $n$-vectors is equivalent to two VOPs, because $u^* v = u_1 v_1 + \cdots + u_n v_n$ ($n$ multiplications and $n - 1$ additions). Suppose that step $i - 1$ is single; then the total cost for step $i$ is:

- LAL algorithm: 4 matrix-vector products and 44 VOPs;
- LM algorithm: 4 matrix-vector products and 46 VOPs if $|v_2| > 2|v_4|$, and 4 matrix-vector products and 44 VOPs if $|v_2| \le 2|v_4|$;

because we have

$$V_{\hat{x}}^{-*} D = \begin{pmatrix} v_1 & v_3 \\ v_2 & v_4 \end{pmatrix}$$

and usually none of the $v_i$ is null.

Remarks. (1) The matrix-vector product has been counted separately because its cost depends on the number of nonzero elements of the matrix $B$ and not on the LAL or LM algorithm. (The same subroutine for a matrix-vector product is used in the two algorithms.)

(2) When two single steps are made successively, the two algorithms LAL and LM require 4 matrix-vector products and 48 VOPs.

(3) The cost of one double step of the LM algorithm is slightly higher than that of LAL. But since LM makes a maximal factorization (the factorization of LAL is a particular case of it), it makes the value of $\min\{\cos\angle(p_j, q_j), \cos\angle(p_{j+1}, q_{j+1})\}$ the largest possible. Therefore LM will take double steps more often than LAL, and generally the number of operations will be equal to or less than with the LAL algorithm.
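The next section generalizes relation (1) into residual estimates that cost only the last component of an eigenvector of $T_j$. The device is easiest to see in the one-sided (Arnoldi) setting; the sketch below, with made-up data, checks the identity $\|B(Q_m y) - \mu(Q_m y)\| = h_{m+1,m}\,|e_m^* y|$ that the LM analogue mirrors.

```python
import numpy as np

def arnoldi(B, q1, m):
    """Arnoldi relation B Q_m = Q_m H_m + h_{m+1,m} q_{m+1} e_m^T
    (one-sided analogue of relation (1); illustrative sketch)."""
    n = B.shape[0]
    Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    Q[:, 0] = q1 / np.linalg.norm(q1)
    for j in range(m):
        w = B @ Q[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(4)
n, m = 50, 12
B = rng.standard_normal((n, n))
Q, H = arnoldi(B, rng.standard_normal(n), m)

mu, Y = np.linalg.eig(H[:m, :m])         # Ritz values and vectors
y = Y[:, 0]
ritz = Q[:, :m] @ y
estimate = H[m, m - 1] * abs(y[-1])      # no extra product with B needed
exact = np.linalg.norm(B @ ritz - mu[0] * ritz)
assert np.isclose(estimate, exact, rtol=1e-6, atol=1e-10)
```

In the LM estimates of Proposition 7.2 below, $\|r_{k+2}\|$ and $\|s_{k+2}\|$ play the role of $h_{m+1,m}$ for the right and left Ritz vectors respectively.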

7. Eigenelement approximations

At each step $i$, the LM algorithm produces left and right Lanczos vectors $P_i$, $Q_i$ which are $n \times 1$ or $n \times 2$. We can write $\bar{Q}_j = (Q_1, \dots, Q_j)$ and $\bar{P}_j = (P_1, \dots, P_j)$, with dimension of $\bar{P}_i$ = dimension of $\bar{Q}_i$, $i = 1, \dots, j$. Then we can generalize the relations (1) for the LM algorithm.

Proposition 7.1. In the LM algorithm, if we note $k + 1 = \operatorname{order}(\bar{P}_j) = \operatorname{order}(\bar{Q}_j)$, then

$$B\bar{Q}_j = \bar{Q}_j T_j + r_{k+2} e_{k+1}^*, \qquad \bar{P}_j^* B = T_j \bar{P}_j^* + e_{k+1} s_{k+2}^* \qquad \text{if step } j \text{ is single},$$

$$B\bar{Q}_j = \bar{Q}_j T_j + r_{k+2} \bigl((e_k, e_{k+1}) Z_{j+1}\bigr)^*, \qquad \bar{P}_j^* B = T_j \bar{P}_j^* + (e_k, e_{k+1}) Z_{j+1} s_{k+2}^* \qquad \text{if step } j \text{ is double}.$$

Proof. See [3, Proposition III.7.2]. □

Therefore the computable error estimates of Ritz eigenelements in the LM algorithm are as follows.

Proposition 7.2. Let $(\mu, y, u)$ be an eigenelement of $T_j$, with $T_j y = \mu y$ and $T_j^* u = \bar{\mu} u$. Then, if step $j$ is single,

$$\|B(\bar{Q}_j y) - \mu(\bar{Q}_j y)\| = \|r_{k+2}\| \, |e_{k+1}^* y|, \qquad \|B^*(\bar{P}_j u) - \bar{\mu}(\bar{P}_j u)\| = \|s_{k+2}\| \, |e_{k+1}^* u|,$$

and, if step $j$ is double, the same relations hold with $e_{k+1}$ replaced by $(e_k, e_{k+1}) Z_{j+1}$.

Proof. The first equality is proved as in the standard Lanczos algorithm; let us consider the second one when step $j$ is double. We have

$$B^* \bar{P}_j u = \bar{P}_j T_j^* u + s_{k+2} Z_{j+1}^* (e_k, e_{k+1})^* u.$$

Since $T_j^* u = \bar{\mu} u$, we have the result. □

Remark. As in the standard Lanczos algorithm, we can obtain a residue approximation without calculating it explicitly.

8. Connection with Arnoldi's method

If the two Krylov matrices

$$R = [r_1, Br_1, \dots, B^{n-1} r_1] \quad\text{and}\quad S = [s_1, B^* s_1, \dots, B^{*\,n-1} s_1] \tag{6}$$

are nonsingular, we can use a natural generalization of the Lanczos maximal algorithm: a maximal factorization of the $n \times n$ matrix $W = S^* R$, which maximizes $\min\{\cos\angle(p_i, q_i) : 1 \le i \le n\}$. Then we have:

Proposition 8.1. With the notation of (6), if $W = S^* R$ is nonsingular, then the Lanczos maximal algorithm and Arnoldi's method give the same result.

Proof. Since $\operatorname{span}(q_1, q_2, \dots, q_j) = \operatorname{span}(r_1, Br_1, \dots, B^{j-1} r_1)$ for all $j \le n$, it is sufficient to show that there exists a factorization such that $p_j = q_j$ for all $j \le n$. Since $R$ is nonsingular, by the Gram-Schmidt process there exists an upper triangular matrix $U$ such that $Q = RU^{-1}$ is an orthogonal matrix, thus $Q^* Q = I_n$. But we have $P^* Q = I_n$ with $P = S W^{-*} U^*$, and therefore we have $P = Q$. □

9. Numerical examples

We give three examples which compare, for the first one, the performance of the LAL and LM algorithms and, for the others, the LM and the standard Lanczos algorithms, for large unsymmetric matrices.

Table 1. Significant eigenvalues of matrix $B$.

| LAL algorithm | LM algorithm |
| --- | --- |
| -0.1243449787580175D-13 | 0.1776356839400250D-14 |
| 0.5000000000004762D+01 | 0.4999999999999988D+01 |
| 0.6667000074960642D+01 | 0.6667000000000784D+01 |
| 0.7500013762325628D+01 | 0.7499999987407891D+01 |
| 0.7999476587787392D+01 | 0.8000054937628480D+01 |
ric matrices. All results were obtained (mantissa of 52 digits). Example 9.1 (LAL eigenvalues:

and

Lit4 algorithms).

h, = {approximation

on the IBM PC computer,

using double precision

B is a 100 x 100 upper triangular matrix with distinct

of lO(1 - l/i)

with 4 digits},

i = 1,. . . ,100,

for example Xi = 0, X, = 5.0, X, = 6.667. The superdiagonal elements were randomly chosen numbers in [ - 10, + lo]. ri = S, = [I, 1,. . ., l]* and the number of steps taken is 20. A few of twenty eigenvalues computed are complex but in Table 1 we give the five significant eigenvalues which are real. We see that the LM algorithm gives one to five digits more than the LAL algorithm. Moreover we have: min

cos( p,, qi) = O.llD-02

with LAL,

rnin

cos( p,,

with LM.

1
q,) =

0.67D-02

Example 9.2. The matrix $B$ represents the transition matrix of the Markov chain describing a random walk on a $(k+1) \times (k+1)$ triangular grid. The probability of jumping from the node $(i, j)$ to either of the nodes $(i-1, j)$ or $(i, j-1)$ is given by

$$pd(i, j) = \tfrac{1}{2}(i + j)/k.$$

This probability is doubled if either $i$ or $j$ is 0. The probability of jumping from the node $(i, j)$ to either of the nodes $(i+1, j)$ or $(i, j+1)$ is given by

$$pu(i, j) = \tfrac{1}{2} - \tfrac{1}{2}(i + j)/k.$$

(This transition can occur only for $i + j < k$.) The nodes are numbered in the order $(0,0), (1,0), (2,0), \dots, (k,0), (0,1), (1,1), \dots$, and the dimension of $B$ is $\tfrac{1}{2}(k+1)(k+2)$. We are interested in the computation of the left eigenvector corresponding to the eigenvalue unity, because it represents the steady-state probabilities of the chain. The LM and standard Lanczos (LANC) algorithms are used with a technique of partial reorthogonalization of the Lanczos vectors. The dimension of the matrix $B$ is 496 ($k = 30$), the starting vectors are $r_1 = s_1 = (|\sin(2i+1)|)_i$, the maximum step before restart is 50, and the stopping criterion is that the residual norms of the left and right eigenvectors (computed by Proposition 7.2) are less than $10^{-5}$ (see Table 2).
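The transition matrix of this random walk is easy to assemble. The sketch below (function name and the reading of the doubling rule are ours: the doubled probability compensates for the down-move that would leave the grid) builds it for $k = 30$ and checks that it is row-stochastic with dominant eigenvalue 1.

```python
import numpy as np

def walk_matrix(k):
    """Transition matrix of the random walk of Example 9.2 on the
    triangular grid {(i, j): i + j <= k}, nodes ordered as in the text."""
    nodes = [(i, j) for j in range(k + 1) for i in range(k + 1 - j)]
    idx = {node: t for t, node in enumerate(nodes)}
    n = len(nodes)
    B = np.zeros((n, n))
    for (i, j), t in idx.items():
        pd = 0.5 * (i + j) / k
        pu = 0.5 - 0.5 * (i + j) / k
        if i > 0:       # down-move doubled when the other one is forbidden
            B[t, idx[(i - 1, j)]] += pd * (2 if j == 0 else 1)
        if j > 0:
            B[t, idx[(i, j - 1)]] += pd * (2 if i == 0 else 1)
        if i + j < k:   # up-moves only below the diagonal edge
            B[t, idx[(i + 1, j)]] += pu
            B[t, idx[(i, j + 1)]] += pu
    return B

B = walk_matrix(30)                       # dimension (k+1)(k+2)/2 = 496
assert B.shape == (496, 496)
assert np.allclose(B.sum(axis=1), 1.0)    # row-stochastic
w = np.linalg.eigvals(B.T)
assert np.isclose(np.max(w.real), 1.0)    # eigenvalue unity
```

The left eigenvector of $B$ for the eigenvalue 1 (an eigenvector of $B^T$) is the steady-state distribution the example computes with LM and LANC.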
192

M. Khelifi / L.anczos maximal algorithm

Table 2 Number LANC LM

of steps

82 85

Total cost (VOPS)

max. left residual norm

Max. right residual norm

5346 5254

0.98~10~’ 0.14X10~5

0.23 x lo- 5 0.25 x 1O-6

Total cost (VOPS)

Max. left residual norm

Max. right residual norm

3452 3302

0.36x 10m6 0.62 x 1O-6

0.31 x 10-6 0.41 x 10-6

Table 3 Number LANC LM

of steps

59 55

Although LM requires more steps, its total cost is less. The explanation Lanczos algorithm LANC requires more orthogonalizations than LM.

is that the standard

Example 9.3 . Consider the matrix arising from the discretization by centered differences of the elliptic partial differential operator on a uniform n x n grid, with h = l/85

on the unit square with T(x, y) = (x - 3,~)~ eXwy and u(x, r) = 0 on the boundary. Taking n = 84 yields a matrix B of dimension 7056. We computed the five greatest eigenvalues of B by the LANC and LM algorithms and obtained the following results (see Table 3): X, = -0.110107D

+ 04,

X2 = -0.954296D

+ 03,

X, = - 0.842304D

+ 03,

h, = - 0.833378D

+ 03.

r, = s1 = (sin(2i + l))i,

h, = -0.944085D

+ 03,

1 < i < 7056.

10. Conclusions

In comparison with the LAL algorithm, the LM algorithm is not more complicated, and it presents the advantage of having the best possible value for $\min\{\cos\angle(p_i, q_i) : 1 \le i \le j\}$.

References

[1] G.H. Golub and C.F. Van Loan, Matrix Computations (North Oxford Academic, Oxford, 1983).
[2] W. Kahan, B.N. Parlett and E. Jiang, Residual bounds on approximate eigensystems of non-normal matrices, SIAM J. Numer. Anal. 19 (1982) 470-484.
[3] M. Khelifi, Algorithme de Lanczos pour le calcul des valeurs et vecteurs propres de matrices non symétriques de grande taille, Thesis, University of Lille, France (1989).
[4] R.S. Martin, G. Peters and J.H. Wilkinson, The QR algorithm for real Hessenberg matrices, in: Handbook for Automatic Computation 2 (Springer, New York, 1971).
[5] B.N. Parlett, D.R. Taylor and Z.A. Liu, A look-ahead Lanczos algorithm for unsymmetric matrices, Math. Comp. 44 (1985) 105-124.
[6] Y. Saad, Variations on Arnoldi's method for computing eigenelements of large unsymmetric matrices, Linear Algebra Appl. 34 (1980) 269-295.
[7] Y. Saad, Chebyshev acceleration techniques for solving nonsymmetric eigenvalue problems, Math. Comp. 42 (1984) 567-588.
[8] J.H. Wilkinson, The Algebraic Eigenvalue Problem (Clarendon Press, Oxford, 1965).