2011-02-17

flops in Matlab


Somebody asked how one may count the number of floating point operations in a MATLAB program.
Prior to version 6, one used to be able to do this with the command flops, but this command is no longer available with the newer versions of MATLAB.
flops is a relic from the LINPACK days of MATLAB (LINPACK has since been replaced by LAPACK). With LAPACK now underlying MATLAB, it is more appropriate to use tic and toc to measure elapsed time instead (cf. tic, toc).
If you're interested in why flops became obsolete, you may wish to read the exchanges in the NA Digest regarding flops.
Nevertheless, if you feel that you really do need to count floating point operations in MATLAB, you can install Tom Minka's Lightspeed MATLAB toolbox and use the flop-counting routines therein.
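For instance, a rough timing of a dense solve with tic and toc (a made-up example; the matrix size and operation are arbitrary):

n = 1000;
A = randn(n);  b = randn(n, 1);
tic;
x = A \ b;
t = toc;          % elapsed wall-clock time in seconds for the solve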

2011-02-13

x = A \ b; in Matlab

x = A \ b;
  1. Is A square?
    no => use QR to solve least squares problem.
  2. Is A triangular or permuted triangular?
    yes => sparse triangular solve
  3. Is A symmetric with positive diagonal elements?
    yes => attempt Cholesky after symmetric minimum degree.
  4. Otherwise
    => use LU on A (:, colamd(A))
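A small illustration of the symmetric branch above (an assumed example; any sparse symmetric matrix with positive diagonal would do):

A = sprandsym(500, 0.01) + 10*speye(500);   % sparse, symmetric, positive diagonal (very likely positive definite)
b = randn(500, 1);
x = A \ b;                                  % backslash should attempt Cholesky here
relres = norm(b - A*x) / norm(b)            % residual check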

2011-02-09

Nodal Discontinuous Galerkin methods

http://www.caam.rice.edu/~timwar/Book/NodalDG.html
http://www.caam.rice.edu/~timwar/TimWarburton/HomePage.html

2011-02-04

dot product


Vector formulation

The law of cosines is equivalent to the formula
\vec b\cdot \vec c = \Vert \vec b\Vert\Vert\vec c\Vert\cos \theta
in the theory of vectors, which expresses the dot product of two vectors in terms of their respective lengths and the angle they enclose.

Fig. 10 — Vector triangle
Proof of equivalence. Referring to Figure 10, note that
\vec a=\vec b-\vec c\,,
and so we may calculate:

\begin{align}
\Vert\vec a\Vert^2 & = \Vert\vec b - \vec c\Vert^2 \\
& = (\vec b - \vec c)\cdot(\vec b - \vec c) \\
& = \Vert\vec b \Vert^2 + \Vert\vec c \Vert^2 - 2 \vec b\cdot\vec c.
\end{align}
The law of cosines formulated in this notation states:
\Vert\vec a\Vert^2 = \Vert\vec b \Vert^2 + \Vert\vec c \Vert^2 - 2 \Vert \vec b\Vert\Vert\vec c\Vert\cos(\theta), \,
which is equivalent to the above formula from the theory of vectors.


  1. A \cdot B = |A|\,|B|\cos(\theta)   (by definition of dot product)

    If you think of the lengths of the 3 vectors |A|, |B| and |B-A| as the lengths of the sides of a triangle, you can apply the law of cosines here too. (To visualize this, draw the 2 vectors A and B on a graph; the vector from A to B is then given by B-A, and the triangle formed by these 3 vectors is the one to which the law of cosines is applied.)

    In this case, we substitute |B-A| for c, |A| for a, |B| for b,
    and we obtain:

  2. |B-A|^2 = |A|^2 + |B|^2 - 2\,|A|\,|B|\cos(\theta)   (by law of cosines)

Remember now that \theta is the angle between the 2 vectors A and B.
Notice the common term |A||B|\cos(\theta) in both equations. We now equate equations (1) and (2), and obtain

|B-A|^2 = |A|^2 + |B|^2 - 2\, A \cdot B

and hence

2\, A \cdot B = |A|^2 + |B|^2 - |B-A|^2.

Expanding |A|^2 = a_1^2 + a_2^2 + a_3^2, |B|^2 = b_1^2 + b_2^2 + b_3^2 and |B-A|^2 = (b_1-a_1)^2 + (b_2-a_2)^2 + (b_3-a_3)^2 (by the Pythagorean length of a vector), we thus get

A \cdot B = a_1 b_1 + a_2 b_2 + a_3 b_3.
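A quick numerical sanity check of the vector form of the law of cosines in MATLAB (the particular vectors are just an example):

a = [1; 2; 2];
b = [3; 0; 4];
lhs = norm(a - b)^2;
rhs = norm(a)^2 + norm(b)^2 - 2*dot(a, b);   % uses a.b = |a||b|cos(theta)
disp([lhs rhs])                              % both equal 12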

cross product



http://www.physics.orst.edu/bridge/mathml/dot+cross.xhtml

2011-02-01

Rotation matrix


Rotation matrix

From Wikipedia, the free encyclopedia
In linear algebra, a rotation matrix is a matrix that is used to perform a rotation in Euclidean space. For example, the matrix
R = 
\begin{bmatrix}
\cos \theta & -\sin \theta \\
\sin \theta & \cos \theta \\
\end{bmatrix}
rotates points in the xy-Cartesian plane counterclockwise through an angle θ about the origin of the Cartesian coordinate system. To perform the rotation, the position of each point must be represented by a column vector v, containing the coordinates of the point. A rotated vector is obtained by using the matrix multiplication Rv (see below for details).
In two and three dimensions, rotation matrices are among the simplest algebraic descriptions of rotations, and are used extensively for computations in geometry, physics, and computer graphics. Though most applications involve rotations in two or three dimensions, rotation matrices can be defined for n-dimensional space.
Rotation matrices are always square, with real entries. Algebraically, a rotation matrix in n dimensions is an n × n special orthogonal matrix, that is, an orthogonal matrix whose determinant is 1:
R^{T} = R^{-1}, \det R = 1\,.
The set of all rotation matrices forms a group, known as the rotation group or the special orthogonal group. It is a subset of the orthogonal group, which includes reflections and consists of all orthogonal matrices with determinant 1 or -1, and of the special linear group, which includes all volume-preserving transformations and consists of matrices with determinant 1.

http://en.wikipedia.org/wiki/Rotation_matrix
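As a small sketch (the angle and point are arbitrary), a 90-degree counterclockwise rotation in MATLAB:

theta = pi/2;
R = [cos(theta) -sin(theta);
     sin(theta)  cos(theta)];
v  = [1; 0];                  % column vector holding the point's coordinates
vr = R * v;                   % rotated point, approximately [0; 1]
disp(norm(R'*R - eye(2)))     % ~0: R is orthogonal
disp(det(R))                  % 1:  R is a rotation (special orthogonal)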


As in two dimensions, a matrix can be used to rotate a point (x, y, z) to a point (x′, y′, z′). The matrix used is a 3 × 3 matrix,
\mathbf{A} = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i  \end{pmatrix}
This is multiplied by a vector representing the point to give the result

 \mathbf{A}
 \begin{pmatrix} x \\ y \\ z \end{pmatrix} =
 \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i  \end{pmatrix}
 \begin{pmatrix} x \\ y \\ z \end{pmatrix} =
 \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}
The matrix A is a member of the three-dimensional special orthogonal group, SO(3), that is, it is an orthogonal matrix with determinant 1. That it is an orthogonal matrix means that its rows are a set of orthogonal unit vectors (so they are an orthonormal basis), as are its columns, making it easy to spot and check whether a matrix is a valid rotation matrix. The determinant must be 1; if it is -1 (the only other possibility for an orthogonal matrix), then the transformation given by it is a reflection, improper rotation, or inversion in a point, i.e. not a rotation.
Matrices are often used for doing transformations, especially when a large number of points are being transformed, as they are a direct representation of the linear operator. Rotations represented in other ways are often converted to matrices before being used. They can be extended to represent rotations and translations at the same time using homogeneous coordinates. Transformations in this space are represented by 4 × 4 matrices, which are not rotation matrices but which have a 3 × 3 rotation matrix in the upper left corner.
The main disadvantage of matrices is that they are more expensive to calculate and do calculations with. Also, in calculations where numerical instability is a concern, matrices can be more prone to it, so calculations to restore orthonormality, which are expensive for matrices, need to be done more often.
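A minimal check along those lines, assuming a 3 × 3 candidate matrix A and a small tolerance (both chosen here just for illustration):

A = [0 -1 0; 1 0 0; 0 0 1];               % 90-degree rotation about the z-axis
tol = 1e-12;
isOrthogonal = norm(A'*A - eye(3)) < tol; % rows/columns orthonormal
isRotation   = isOrthogonal && abs(det(A) - 1) < tol;
disp(isRotation)                          % 1 (true)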

Unitary matrix


Unitary matrix

From Wikipedia, the free encyclopedia
In mathematics, a unitary matrix is an n\times n complex matrix U satisfying the condition
U^{\dagger} U = UU^{\dagger} = I_n\,
where I_n is the identity matrix in n dimensions and U^{\dagger} is the conjugate transpose (also called the Hermitian adjoint) of U. Note that this condition says that a matrix U is unitary if and only if it has an inverse which is equal to its conjugate transpose U^{\dagger}:
U^{-1} = U^{\dagger} \,\;
A unitary matrix in which all entries are real is an orthogonal matrix. Just as an orthogonal matrix G preserves the (real) inner product of two real vectors,
\langle Gx, Gy \rangle = \langle x, y \rangle
so also a unitary matrix U satisfies
\langle Ux, Uy \rangle = \langle x, y \rangle
for all complex vectors x and y, where \langle\cdot,\cdot\rangle stands now for the standard inner product on \mathbb{C}^n.
If U \, is an n by n matrix then the following are all equivalent conditions:
  1. U \, is unitary
  2. U^{\dagger} \, is unitary
  3. the columns of U \, form an orthonormal basis of \mathbb{C}^n with respect to this inner product
  4. the rows of U \, form an orthonormal basis of \mathbb{C}^n with respect to this inner product
  5. U \, is an isometry with respect to the norm from this inner product
  6. U \, is a normal matrix with eigenvalues lying on the unit circle.
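A hedged MATLAB sketch: take the Q factor of a random complex matrix (which is unitary) and verify the defining condition and the eigenvalue property above:

n = 4;
[U, ~] = qr(randn(n) + 1i*randn(n));   % Q factor of a complex QR is unitary
disp(norm(U'*U - eye(n)))              % ~1e-15; note ' is the conjugate transpose in MATLAB
disp(abs(eig(U)))                      % all eigenvalue magnitudes are 1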

Normal matrix


Normal matrix

From Wikipedia, the free encyclopedia
A complex square matrix A is a normal matrix if
A^*A = AA^*\,
where A^* is the conjugate transpose of A. That is, a matrix is normal if it commutes with its conjugate transpose.
If A is a real matrix, then A^* = A^T; it is normal if A^T A = A A^T.
Normality is a convenient test for diagonalizability: every normal matrix can be converted to a diagonal matrix by a unitary transform, and every matrix which can be made diagonal by a unitary transform is also normal, but finding the desired transform requires much more work than simply testing to see whether the matrix is normal.
The concept of normal matrices can be extended to normal operators on infinite dimensional Hilbert spaces and to normal elements in C*-algebras. As in the matrix case, normality means commutativity is preserved, to the extent possible, in the noncommutative setting. This makes normal operators, and normal elements of C*-algebras, more amenable to analysis.
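A small illustration (the matrix is an arbitrary normal, non-Hermitian example):

A = [1 1i; 1i 1];                      % complex symmetric, and A'*A equals A*A'
disp(norm(A'*A - A*A'))                % 0: A is normal
[U, T] = schur(A, 'complex');          % U unitary, T upper triangular
disp(norm(T - diag(diag(T))))          % ~0: T is diagonal, as promised for a normal matrix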

Hermitian matrix


Hermitian matrix

From Wikipedia, the free encyclopedia
In mathematics, a Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries that is equal to its own conjugate transpose; that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:
a_{i,j} = \overline{a_{j,i}}\,.
If the conjugate transpose of a matrix A is denoted by A^\dagger, then the Hermitian property can be written concisely as
 A = A^\dagger\,.
Hermitian matrices can be understood as the complex extension of real symmetric matrices.
Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share with real symmetric matrices the property of always having real eigenvalues.
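For example (an arbitrary 2 × 2 Hermitian matrix):

A = [2 1-1i; 1+1i 3];      % equals its conjugate transpose: A == A'
disp(norm(A - A'))         % 0
disp(eig(A))               % eigenvalues 1 and 4, both real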

Spectral radius


Spectral radius

From Wikipedia, the free encyclopedia
In mathematics, the spectral radius of a matrix or a bounded linear operator is the supremum of the absolute values of the elements of its spectrum; it is sometimes denoted by ρ(·).
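In MATLAB, for a matrix this is just the largest eigenvalue magnitude (an arbitrary example matrix):

A = [0 1; -2 -3];
rho = max(abs(eig(A)))     % spectral radius; the eigenvalues are -1 and -2, so rho = 2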

Krylov subspace


Krylov subspace

From Wikipedia, the free encyclopedia
In linear algebra, the order-r Krylov subspace generated by an n-by-n matrix A and a vector b of dimension n is the linear subspace spanned by the images of b under the first r powers of A (starting from A^0 = I), that is,
\mathcal{K}_r(A,b) = \operatorname{span} \, \{ b, Ab, A^2b, \ldots, A^{r-1}b \}. \,
It is named after Russian applied mathematician and naval engineer Alexei Krylov, who published a paper on this issue in 1931.[1]
Modern iterative methods for finding one (or a few) eigenvalues of large sparse matrices or solving large systems of linear equations avoid matrix-matrix operations, but rather multiply vectors by the matrix and work with the resulting vectors. Starting with a vector, b, one computes Ab, then one multiplies that vector by A to find A2b and so on. All algorithms that work this way are referred to as Krylov subspace methods; they are among the most successful methods currently available in numerical linear algebra.
Because the vectors tend very quickly to become almost linearly dependent, methods relying on Krylov subspace frequently involve some orthogonalization scheme, such as Lanczos iteration for Hermitian matrices or Arnoldi iteration for more general matrices.
The best known Krylov subspace methods are the Arnoldi, Lanczos, conjugate gradient, GMRES (generalized minimum residual), BiCGSTAB (biconjugate gradient stabilized), QMR (quasi-minimal residual), TFQMR (transpose-free QMR), and MINRES (minimal residual) methods.
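A rough sketch of both ideas: the Krylov matrix [b, Ab, ..., A^{r-1}b] built by repeated matrix-vector products, and a Krylov-subspace solve with MATLAB's gmres (the test matrix and tolerance are assumptions, not from the note):

A = gallery('poisson', 20);            % 400-by-400 sparse test matrix
b = ones(size(A, 1), 1);
r = 5;
K = zeros(size(A, 1), r);
K(:, 1) = b;
for j = 2:r
    K(:, j) = A * K(:, j-1);           % only matrix-vector products are needed
end
x = gmres(A, b, [], 1e-10, size(A, 1));   % full (unrestarted) GMRES solve of A x = b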

References
  1. Mike Botchev (2002). "A. N. Krylov, a short biography".

そろばん

The soroban (算盤, そろばん, counting tray) is an abacus developed in Japan. It is derived from the suanpan, imported from China to Japan around 1600.[1] Like the suanpan, the soroban is still used today, despite the proliferation of practical and affordable pocket electronic calculators.


http://en.wikipedia.org/wiki/Soroban

2011-01-30

Metzler matrix


A Metzler matrix is a matrix in which all the off-diagonal components are nonnegative (equal to or greater than zero):
\qquad \forall_{i\neq j}\, x_{ij} \geq 0.
Metzler matrices appear in the stability analysis of time-delayed differential equations and positive linear dynamical systems. Their properties can be derived by applying the properties of nonnegative matrices to matrices of the form M + aI, where M is a Metzler matrix.
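A one-line check of the Metzler property (the matrix is just an example):

M = [-3 1 0; 2 -5 1; 0 4 -2];
offDiag = M - diag(diag(M));
isMetzler = all(offDiag(:) >= 0)       % 1: all off-diagonal entries are nonnegative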

P-matrix


A P-matrix is a complex square matrix with every principal minor > 0. A closely related class is that of P0-matrices, which are the closure of the class of P-matrices, with every principal minor \geq 0.


Spectra of P-matrices

By a theorem of Kellogg, the eigenvalues of P- and P0- matrices are bounded away from a wedge about the negative real axis as follows:
If {u_1, ..., u_n} are the eigenvalues of an n-dimensional P-matrix, then
|arg(u_i)| < \pi - \frac{\pi}{n}, i = 1,...,n
If {u_1, ..., u_n}, with u_i \neq 0 for i = 1, ..., n, are the eigenvalues of an n-dimensional P0-matrix, then
|arg(u_i)| \leq \pi - \frac{\pi}{n}, i = 1,...,n

Notes

The class of nonsingular M-matrices is a subset of the class of P-matrices. More precisely, all matrices that are both P-matrices and Z-matrices are nonsingular M-matrices.
If the Jacobian of a function is a P-matrix, then the function is injective on any rectangular region of \mathbb{R}^n.
A related class of interest, particularly with reference to stability, is that of P(−)-matrices, sometimes also referred to as N−P-matrices. A matrix A is a P(−)-matrix if and only if (−A) is a P-matrix (similarly for P0-matrices). Since σ(A) = −σ(−A), the eigenvalues of these matrices are bounded away from the positive real axis.
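A brute-force check of the P-matrix property for a small matrix, looping over all principal minors (practical only for small n; the matrix is an arbitrary example):

A = [2 -1; 1 3];
n = size(A, 1);
isP = true;
for k = 1:n
    subsets = nchoosek(1:n, k);        % all index sets of size k
    for s = 1:size(subsets, 1)
        idx = subsets(s, :);
        if det(A(idx, idx)) <= 0       % every principal minor must be positive
            isP = false;
        end
    end
end
disp(isP)                              % 1: [2 -1; 1 3] is a P-matrix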


References

  • R. B. Kellogg, On complex eigenvalues of M and P matrices, Numer. Math. 19:170-175 (1972)
  • Li Fang, On the Spectra of P- and P0-Matrices, Linear Algebra and its Applications 119:1-25 (1989)
  • D. Gale and H. Nikaido, The Jacobian matrix and global univalence of mappings, Math. Ann. 159:81-93 (1965)