Krylov subspace
From Wikipedia, the free encyclopedia
In linear algebra, the order-r Krylov subspace generated by an n-by-n matrix A and a vector b of dimension n is the linear subspace spanned by the images of b under the first r powers of A (starting from A^0 = I), that is,

\mathcal{K}_r(A, b) = \operatorname{span}\{\, b,\; Ab,\; A^2 b,\; \ldots,\; A^{r-1} b \,\}.
It is named after Russian applied mathematician and naval engineer Alexei Krylov, who published a paper on this issue in 1931.[1]
Modern iterative methods for finding one (or a few) eigenvalues of large sparse matrices, or for solving large systems of linear equations, avoid matrix-matrix operations; instead, they multiply vectors by the matrix and work with the resulting vectors. Starting with a vector b, one computes Ab, then multiplies that vector by A to find A^2 b, and so on. All algorithms that work this way are referred to as Krylov subspace methods; they are among the most successful methods currently available in numerical linear algebra.
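To make this concrete, here is a minimal NumPy sketch of the procedure just described; the function name krylov_basis and the random test matrix are illustrative choices, not part of the article. Each new Krylov vector costs exactly one matrix-vector product, which is why these methods suit large sparse matrices that may only be available as a routine applying A to a vector.

```python
import numpy as np

def krylov_basis(A, b, r):
    """Return the n-by-r matrix [b, Ab, A^2 b, ..., A^(r-1) b].

    The matrix A is accessed only through matrix-vector products.
    """
    n = b.shape[0]
    K = np.empty((n, r))
    K[:, 0] = b
    for j in range(1, r):
        K[:, j] = A @ K[:, j - 1]   # one matrix-vector product per step
    return K

# Small demonstration on a random matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
K = krylov_basis(A, b, 10)
print(np.linalg.matrix_rank(K))   # dimension of the Krylov subspace
print(np.linalg.cond(K))          # large: the columns are nearly linearly dependent
```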
Because the vectors tend very quickly to become almost linearly dependent, methods relying on Krylov subspaces frequently involve some orthogonalization scheme, such as Lanczos iteration for Hermitian matrices or Arnoldi iteration for more general matrices.
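The following is a minimal sketch of the Arnoldi iteration with modified Gram-Schmidt orthogonalization; the helper name arnoldi and the breakdown tolerance tol are assumptions made for illustration. For Hermitian A the same process collapses to the short three-term recurrence used in Lanczos iteration.

```python
import numpy as np

def arnoldi(A, b, k, tol=1e-12):
    """Build an orthonormal basis Q of the Krylov subspace K_{k+1}(A, b)
    and the (k+1)-by-k upper Hessenberg matrix H with A Q[:, :k] = Q H,
    using modified Gram-Schmidt orthogonalization."""
    n = b.shape[0]
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]                 # next Krylov direction
        for i in range(j + 1):          # orthogonalize against previous columns
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < tol:           # breakdown: the subspace is A-invariant
            return Q[:, :j + 1], H[:j + 2, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H
```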
The best known Krylov subspace methods are the Arnoldi, Lanczos, conjugate gradient, GMRES (generalized minimum residual), BiCGSTAB (biconjugate gradient stabilized), QMR (quasi-minimal residual), TFQMR (transpose-free QMR), and MINRES (minimal residual) methods.
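As a usage sketch, assuming SciPy is available, two of these solvers are exposed through scipy.sparse.linalg; the diagonally dominant tridiagonal test matrix below is an arbitrary example chosen so that both solvers converge quickly.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, gmres

# A sparse, symmetric, diagonally dominant tridiagonal test matrix.
n = 1000
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x_cg, info_cg = cg(A, b)        # conjugate gradient (symmetric positive definite A)
x_gm, info_gm = gmres(A, b)     # GMRES (works for general nonsymmetric A)
print(info_cg, info_gm)         # 0 means the solver reported convergence
print(np.linalg.norm(A @ x_cg - b))   # residual norm of the computed solution
```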
References
- Nevanlinna, Olavi (1993). Convergence of iterations for linear equations. Lectures in Mathematics ETH Zürich. Basel: Birkhäuser Verlag. viii+177 pp. MR1217705. ISBN 3-7643-2865-7.
- Saad, Yousef (2003). Iterative methods for sparse linear systems (2nd ed.). SIAM. ISBN 0898715342. OCLC 51266114.
- [1] Mike Botchev (2002). "A. N. Krylov, a short biography".