Somebody asked how one may count the number of floating-point operations in a MATLAB program. Prior to version 6 this could be done with the command flops, but that command is no longer available in newer versions of MATLAB. flops is a relic from the LINPACK days of MATLAB (LINPACK has since been replaced by LAPACK). With the use of LAPACK in MATLAB, it is more appropriate to use tic and toc to measure elapsed time instead (cf. tic, toc). If you are interested in why flops is obsolete, you may wish to read the exchanges in the NA Digest regarding flops. Nevertheless, if you really do need to count floating-point operations in MATLAB, you can install Tom Minka's Lightspeed MATLAB toolbox and use the flops-counting operations therein.
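As a rough illustration of the tic/toc approach (the problem size and random data below are arbitrary example choices, not part of the original question), one can time a solve like this:

% Time a linear solve with tic/toc (wall-clock time).
n = 2000;              % example problem size (arbitrary)
A = randn(n);          % random dense matrix
b = randn(n, 1);
tic;
x = A \ b;
t = toc;
fprintf('Backslash solve took %.3f seconds\n', t);
% For CPU time rather than wall-clock time, compare readings of cputime instead.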
2011-02-17
flops in Matlab
2011-02-13
x = A \ b; in Matlab
x = A \ b;
- Is A square?
  no => use QR to solve the least-squares problem.
- Is A triangular or permuted triangular?
  yes => sparse triangular solve.
- Is A symmetric with positive diagonal elements?
  yes => attempt Cholesky after symmetric minimum degree ordering.
- Otherwise
  => use LU on A(:, colamd(A)).
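As a rough check of the triangular branch (the size and random data below are arbitrary test choices, not part of the original note): backslash solves a triangular system much faster than a general square system of the same size, because it skips the factorization.

% Compare backslash on a general square matrix vs. a triangular one.
n = 3000;                               % example size (arbitrary)
A = randn(n);  b = randn(n, 1);
T = triu(A);                            % upper-triangular matrix
tic; x1 = A \ b; t_general = toc;       % LU factorization + two triangular solves
tic; x2 = T \ b; t_triangular = toc;    % triangular solve only
fprintf('general: %.3f s, triangular: %.3f s\n', t_general, t_triangular);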
2011-02-09
2011-02-04
dot product
Vector formulation
The law of cosines is equivalent to the formula

\vec{b} \cdot \vec{c} = |\vec{b}|\,|\vec{c}| \cos\theta

in the theory of vectors, which expresses the dot product of two vectors in terms of their respective lengths and the angle they enclose.
Proof of equivalence. Referring to Figure 10, note that

\vec{a} = \vec{b} - \vec{c},

and so we may calculate:

|\vec{a}|^2 = |\vec{b} - \vec{c}|^2 = (\vec{b} - \vec{c}) \cdot (\vec{b} - \vec{c}) = |\vec{b}|^2 + |\vec{c}|^2 - 2\, \vec{b} \cdot \vec{c}.

The law of cosines formulated in this notation states:

|\vec{a}|^2 = |\vec{b}|^2 + |\vec{c}|^2 - 2\, |\vec{b}|\,|\vec{c}| \cos\theta,

which is equivalent to the above formula from the theory of vectors.
The same equivalence can be stated directly in terms of two vectors A and B. By the geometric definition of the dot product,

A \cdot B = |A|\,|B| \cos\theta.  (1)

If you think of the lengths of the three vectors |A|, |B| and |B - A| as the lengths of the sides of a triangle, you can apply the law of cosines here too. (To visualize this, draw the two vectors A and B from a common origin; the vector from the tip of A to the tip of B is then B - A, and the triangle formed by these three vectors is the one to which the law of cosines is applied.)
In this case, we substitute |B - A| for c, |A| for a and |B| for b,
and we obtain:

|B - A|^2 = |A|^2 + |B|^2 - 2\, |A|\,|B| \cos\theta.  (2)

Remember that theta is the angle between the two vectors A and B.
Notice the common term |A||B|cos(theta) in both equations. We now equate equations (1) and (2), and obtain

|B - A|^2 = |A|^2 + |B|^2 - 2\, (A \cdot B),

and hence

A \cdot B = ( |A|^2 + |B|^2 - |B - A|^2 ) / 2.

Expanding each squared length in coordinates (by the Pythagorean length of a vector), |A|^2 = \sum_i A_i^2, |B|^2 = \sum_i B_i^2 and |B - A|^2 = \sum_i (B_i - A_i)^2, and thus

A \cdot B = \sum_i A_i B_i.
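A quick numerical sanity check of this identity in MATLAB (the two vectors below are arbitrary examples):

% Check A.B = (|A|^2 + |B|^2 - |B-A|^2)/2 for two example vectors.
A = [1; 2; 3];
B = [4; -1; 2];
lhs = dot(A, B);                                      % componentwise dot product
rhs = (norm(A)^2 + norm(B)^2 - norm(B - A)^2) / 2;    % law-of-cosines form
fprintf('difference = %g\n', abs(lhs - rhs));         % ~ 0 up to rounding error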
2011-02-01
Rotation matrix
From Wikipedia, the free encyclopedia
In linear algebra, a rotation matrix is a matrix that is used to perform a rotation in Euclidean space. For example, the matrix

R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}

rotates points in the xy-Cartesian plane counterclockwise through an angle θ about the origin of the Cartesian coordinate system. To perform the rotation, the position of each point must be represented by a column vector v, containing the coordinates of the point. A rotated vector is obtained by using the matrix multiplication Rv (see below for details).
In two and three dimensions, rotation matrices are among the simplest algebraic descriptions of rotations, and are used extensively for computations in geometry, physics, and computer graphics. Though most applications involve rotations in two or three dimensions, rotation matrices can be defined for n-dimensional space.
Rotation matrices are always square, with real entries. Algebraically, a rotation matrix in n dimensions is an n × n special orthogonal matrix, that is, an orthogonal matrix whose determinant is 1:

R^{T} = R^{-1}, \quad \det R = 1.
The set of all rotation matrices forms a group, known as the rotation group or the special orthogonal group. It is a subset of the orthogonal group, which includes reflections and consists of all orthogonal matrices with determinant 1 or -1, and of the special linear group, which includes all volume-preserving transformations and consists of matrices with determinant 1.
http://en.wikipedia.org/wiki/Rotation_matrix
As in two dimensions, a matrix can be used to rotate a point (x, y, z) to a point (x′, y′, z′). The matrix used is a 3 × 3 matrix A. This is multiplied by a column vector representing the point to give the result:

A \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}.
The matrix A is a member of the three-dimensional special orthogonal group, SO(3), that is, it is an orthogonal matrix with determinant 1. That it is an orthogonal matrix means that its rows are a set of orthogonal unit vectors (so they are an orthonormal basis), as are its columns, making it easy to spot and check whether a matrix is a valid rotation matrix. The determinant must be 1: if it were −1 (the only other possibility for an orthogonal matrix), the transformation would be a reflection, improper rotation or inversion in a point, i.e. not a rotation.
Matrices are often used for doing transformations, especially when a large number of points are being transformed, as they are a direct representation of the linear operator. Rotations represented in other ways are often converted to matrices before being used. They can be extended to represent rotations and translations at the same time using homogeneous coordinates: transformations in this space are represented by 4 × 4 matrices, which are not rotation matrices but which have a 3 × 3 rotation matrix in the upper left corner.
The main disadvantage of matrices is that they are more expensive to compute and to compute with. Also, in calculations where numerical instability is a concern, matrices can be more prone to it, so calculations to restore orthonormality, which are expensive for matrices, need to be done more often.
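As a small illustration in MATLAB (angle and point chosen arbitrarily), the sketch below builds the 2-D rotation matrix, rotates a point, and checks the defining properties R^T R = I and det R = 1:

% 2-D rotation by an angle theta (counterclockwise about the origin).
theta = pi/6;                          % example angle
R = [cos(theta) -sin(theta);
     sin(theta)  cos(theta)];
v = [1; 0];                            % example point as a column vector
v_rot = R * v;                         % rotated point
orth_err = norm(R'*R - eye(2));        % ~ 0: rows/columns are orthonormal
fprintf('det(R) = %.3f, orthogonality error = %g\n', det(R), orth_err);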
Unitary matrix
From Wikipedia, the free encyclopedia
In mathematics, a unitary matrix is an n × n complex matrix U satisfying the condition

U^{*} U = U U^{*} = I_n,

where I_n is the identity matrix in n dimensions and U^{*} is the conjugate transpose (also called the Hermitian adjoint) of U. Note this condition says that a matrix U is unitary if and only if it has an inverse which is equal to its conjugate transpose, U^{-1} = U^{*}.
A unitary matrix in which all entries are real is an orthogonal matrix. Just as an orthogonal matrix G preserves the (real) inner product of two real vectors,

\langle Gx, Gy \rangle = \langle x, y \rangle,

so also a unitary matrix U satisfies

\langle Ux, Uy \rangle = \langle x, y \rangle

for all complex vectors x and y, where \langle \cdot, \cdot \rangle now stands for the standard inner product on \mathbb{C}^n.
If U is an n by n matrix then the following are all equivalent conditions:
- U is unitary
- U^{*} is unitary
- the columns of U form an orthonormal basis of \mathbb{C}^n with respect to this inner product
- the rows of U form an orthonormal basis of \mathbb{C}^n with respect to this inner product
- U is an isometry with respect to the norm from this inner product
- U is a normal matrix with eigenvalues lying on the unit circle.
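A MATLAB sketch of these properties (size and random data are arbitrary choices): one convenient way to obtain a random unitary matrix is to take the Q factor of a QR factorization of a random complex matrix.

% Build a (numerically) unitary matrix and verify its properties.
n = 5;                                   % example size
[U, ~] = qr(randn(n) + 1i*randn(n));     % Q factor of a complex matrix is unitary
err_id = norm(U'*U - eye(n));            % ~ 0: U'U = I
x = randn(n, 1) + 1i*randn(n, 1);
y = randn(n, 1) + 1i*randn(n, 1);
err_ip = abs((U*x)'*(U*y) - x'*y);       % ~ 0: inner products are preserved
ev_err = max(abs(abs(eig(U)) - 1));      % ~ 0: eigenvalues lie on the unit circle
fprintf('||U''U - I|| = %g, inner-product error = %g, eigenvalue modulus error = %g\n', ...
    err_id, err_ip, ev_err);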
Normal matrix
From Wikipedia, the free encyclopedia
In mathematics, a complex square matrix A is a normal matrix if

A^{*} A = A A^{*},

where A^{*} is the conjugate transpose of A. That is, a matrix is normal if it commutes with its conjugate transpose.
If A is a real matrix, then A^{*} = A^{T}; it is normal if A^{T} A = A A^{T}.
Normality is a convenient test for diagonalizability: every normal matrix can be converted to a diagonal matrix by a unitary transform, and every matrix which can be made diagonal by a unitary transform is also normal, but finding the desired transform requires much more work than simply testing to see whether the matrix is normal.
The concept of normal matrices can be extended to normal operators on infinite dimensional Hilbert spaces and to normal elements in C*-algebras. As in the matrix case, normality means commutativity is preserved, to the extent possible, in the noncommutative setting. This makes normal operators, and normal elements of C*-algebras, more amenable to analysis.
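A quick numerical test of normality, and of the unitary diagonalization mentioned above, via the complex Schur form in MATLAB (the circulant example matrix below is an arbitrary choice; circulant matrices are normal but need not be symmetric):

% Check normality and diagonalize a normal matrix unitarily.
A = [1 2 3; 3 1 2; 2 3 1];              % circulant example: normal, not symmetric
comm_err = norm(A'*A - A*A');           % ~ 0 if and only if A is normal
[U, T] = schur(A, 'complex');           % U unitary, T upper triangular
offdiag = norm(T - diag(diag(T)));      % ~ 0 for a normal matrix: T is diagonal
fprintf('commutator norm = %g, off-diagonal part of Schur factor = %g\n', comm_err, offdiag);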
Hermitian matrix
From Wikipedia, the free encyclopedia
In mathematics, a Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries that is equal to its own conjugate transpose – that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:

a_{ij} = \overline{a_{ji}}.
If the conjugate transpose of a matrix A is denoted by A^{H}, then the Hermitian property can be written concisely as

A = A^{H}.
Hermitian matrices can be understood as the complex extension of real symmetric matrices.
Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share with real symmetric matrices the property of always having real eigenvalues.
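A small MATLAB sketch (the random matrix is an arbitrary example): averaging a complex matrix with its conjugate transpose yields a Hermitian matrix, and its eigenvalues come out real up to rounding.

% Construct a Hermitian matrix and confirm its eigenvalues are real.
n = 4;                                   % example size
M = randn(n) + 1i*randn(n);              % arbitrary complex matrix
H = (M + M') / 2;                        % Hermitian part: H equals its conjugate transpose
herm_err = norm(H - H');                 % ~ 0 by construction
imag_err = max(abs(imag(eig(H))));       % eigenvalues of a Hermitian matrix are real
fprintf('||H - H''|| = %g, max imaginary part of eigenvalues = %g\n', herm_err, imag_err);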
Spectral radius
From Wikipedia, the free encyclopedia
In mathematics, the spectral radius of a matrix or of a bounded linear operator is the supremum of the absolute values of the elements of its spectrum; it is sometimes denoted by ρ(·).
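For a matrix, the spectrum is just the set of eigenvalues, so the spectral radius is a one-liner in MATLAB (the example matrix is arbitrary):

% Spectral radius: largest eigenvalue magnitude of a matrix.
A = [0 1; -2 -3];                 % example matrix with eigenvalues -1 and -2
rho = max(abs(eig(A)));           % spectral radius rho(A) = 2
fprintf('spectral radius = %.4f\n', rho);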
Krylov subspace
From Wikipedia, the free encyclopedia
In linear algebra, the order-r Krylov subspace generated by an n-by-n matrix A and a vector b of dimension n is the linear subspace spanned by the images of b under the first r powers of A (starting from A^0 = I), that is,

\mathcal{K}_r(A, b) = \operatorname{span}\{ b, Ab, A^2 b, \ldots, A^{r-1} b \}.
It is named after Russian applied mathematician and naval engineer Alexei Krylov, who published a paper on this issue in 1931.[1]
Modern iterative methods for finding one (or a few) eigenvalues of large sparse matrices or solving large systems of linear equations avoid matrix-matrix operations, but rather multiply vectors by the matrix and work with the resulting vectors. Starting with a vector b, one computes Ab, then one multiplies that vector by A to find A^2 b, and so on. All algorithms that work this way are referred to as Krylov subspace methods; they are among the most successful methods currently available in numerical linear algebra.
Because the vectors tend very quickly to become almost linearly dependent, methods relying on Krylov subspace frequently involve some orthogonalization scheme, such as Lanczos iteration for Hermitian matrices or Arnoldi iteration for more general matrices.
The best known Krylov subspace methods are the Arnoldi, Lanczos, Conjugate gradient, GMRES (generalized minimum residual), BiCGSTAB (biconjugate gradient stabilized), QMR (quasi-minimal residual), TFQMR (transpose-free QMR), and MINRES (minimal residual) methods.
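A MATLAB sketch of the idea (matrix, subspace order, tolerance, and iteration limit below are arbitrary choices): build a small Krylov basis by repeated matrix-vector products, then solve a sparse system with gmres, one of the Krylov subspace methods built into MATLAB.

% Build an order-r Krylov basis by repeated matrix-vector products,
% then solve A x = b with GMRES, a Krylov subspace method.
A = gallery('poisson', 22);             % example sparse matrix (484 x 484)
n = size(A, 1);
b = ones(n, 1);
r = 5;                                  % order of the Krylov subspace
K = zeros(n, r);                        % columns span K_r(A, b)
v = b;
for k = 1:r
    K(:, k) = v;
    v = A * v;                          % next power of A applied to b
end
fprintf('dimension of the Krylov basis: %d\n', rank(K));
[x, flag, relres] = gmres(A, b, [], 1e-8, 100);   % Krylov subspace solver
fprintf('gmres flag = %d, relative residual = %g\n', flag, relres);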
References
- Nevanlinna, Olavi (1993). Convergence of Iterations for Linear Equations. Lectures in Mathematics ETH Zürich. Basel: Birkhäuser Verlag. viii+177 pp. MR1217705. ISBN 3-7643-2865-7.
- Saad, Yousef (2003). Iterative methods for sparse linear systems (2nd ed.). SIAM. ISBN 0898715342. OCLC 51266114.
- [1] Mike Botchev (2002). "A. N. Krylov, a short biography".
Soroban (そろばん)
The soroban (算盤, そろばん, "counting tray") is an abacus developed in Japan. It is derived from the suanpan, imported from China to Japan around 1600.[1] Like the suanpan, the soroban is still used today, despite the proliferation of practical and affordable pocket electronic calculators.
http://en.wikipedia.org/wiki/Soroban