DOUBLE PRECISION routines for general (i.e., unsymmetric, in some cases rectangular) matrices
dgebak
USAGE:
info, v = NumRu::Lapack.dgebak( job, side, ilo, ihi, scale, v, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEBAK( JOB, SIDE, N, ILO, IHI, SCALE, M, V, LDV, INFO )
* Purpose
* =======
*
* DGEBAK forms the right or left eigenvectors of a real general matrix
* by backward transformation on the computed eigenvectors of the
* balanced matrix output by DGEBAL.
*
* Arguments
* =========
*
* JOB (input) CHARACTER*1
* Specifies the type of backward transformation required:
* = 'N', do nothing, return immediately;
* = 'P', do backward transformation for permutation only;
* = 'S', do backward transformation for scaling only;
* = 'B', do backward transformations for both permutation and
* scaling.
* JOB must be the same as the argument JOB supplied to DGEBAL.
*
* SIDE (input) CHARACTER*1
* = 'R': V contains right eigenvectors;
* = 'L': V contains left eigenvectors.
*
* N (input) INTEGER
* The number of rows of the matrix V. N >= 0.
*
* ILO (input) INTEGER
* IHI (input) INTEGER
* The integers ILO and IHI determined by DGEBAL.
* 1 <= ILO <= IHI <= N, if N > 0; ILO=1 and IHI=0, if N=0.
*
* SCALE (input) DOUBLE PRECISION array, dimension (N)
* Details of the permutation and scaling factors, as returned
* by DGEBAL.
*
* M (input) INTEGER
* The number of columns of the matrix V. M >= 0.
*
* V (input/output) DOUBLE PRECISION array, dimension (LDV,M)
* On entry, the matrix of right or left eigenvectors to be
* transformed, as returned by DHSEIN or DTREVC.
* On exit, V is overwritten by the transformed eigenvectors.
*
* LDV (input) INTEGER
* The leading dimension of the array V. LDV >= max(1,N).
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value.
*
* =====================================================================
*
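EXAMPLE:
A minimal Ruby sketch, not part of the LAPACK manual: it assumes the ruby-lapack gem is loaded via require 'numru/lapack', that ilo, ihi and scale came from a previous NumRu::Lapack.dgebal call, and that v holds eigenvectors computed by a routine such as DTREVC.

    require 'numru/lapack'

    # Back-transform the right eigenvectors of the balanced matrix.
    # 'B' must match the JOB argument used in the earlier dgebal call;
    # ilo, ihi, scale and v are assumed to exist already (see above).
    info, v = NumRu::Lapack.dgebak('B', 'R', ilo, ihi, scale, v)
    raise "dgebak failed with info=#{info}" unless info == 0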
dgebal
USAGE:
ilo, ihi, scale, info, a = NumRu::Lapack.dgebal( job, a, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEBAL( JOB, N, A, LDA, ILO, IHI, SCALE, INFO )
* Purpose
* =======
*
* DGEBAL balances a general real matrix A. This involves, first,
* permuting A by a similarity transformation to isolate eigenvalues
* in the first 1 to ILO-1 and last IHI+1 to N elements on the
* diagonal; and second, applying a diagonal similarity transformation
* to rows and columns ILO to IHI to make the rows and columns as
* close in norm as possible. Both steps are optional.
*
* Balancing may reduce the 1-norm of the matrix, and improve the
* accuracy of the computed eigenvalues and/or eigenvectors.
*
* Arguments
* =========
*
* JOB (input) CHARACTER*1
* Specifies the operations to be performed on A:
* = 'N': none: simply set ILO = 1, IHI = N, SCALE(I) = 1.0
* for i = 1,...,N;
* = 'P': permute only;
* = 'S': scale only;
* = 'B': both permute and scale.
*
* N (input) INTEGER
* The order of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the input matrix A.
* On exit, A is overwritten by the balanced matrix.
* If JOB = 'N', A is not referenced.
* See Further Details.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* ILO (output) INTEGER
* IHI (output) INTEGER
* ILO and IHI are set to integers such that on exit
* A(i,j) = 0 if i > j and j = 1,...,ILO-1 or I = IHI+1,...,N.
* If JOB = 'N' or 'S', ILO = 1 and IHI = N.
*
* SCALE (output) DOUBLE PRECISION array, dimension (N)
* Details of the permutations and scaling factors applied to
* A. If P(j) is the index of the row and column interchanged
* with row and column j and D(j) is the scaling factor
* applied to row and column j, then
* SCALE(j) = P(j) for j = 1,...,ILO-1
* = D(j) for j = ILO,...,IHI
* = P(j) for j = IHI+1,...,N.
* The order in which the interchanges are made is N to IHI+1,
* then 1 to ILO-1.
*
* INFO (output) INTEGER
* = 0: successful exit.
* < 0: if INFO = -i, the i-th argument had an illegal value.
*
* Further Details
* ===============
*
* The permutations consist of row and column interchanges which put
* the matrix in the form
*
*              ( T1   X   Y  )
*      P A P = (  0   B   Z  )
*              (  0   0   T2 )
*
* where T1 and T2 are upper triangular matrices whose eigenvalues lie
* along the diagonal. The column indices ILO and IHI mark the starting
* and ending columns of the submatrix B. Balancing consists of applying
* a diagonal similarity transformation inv(D) * B * D to make the
* 1-norms of each row of B and its corresponding column nearly equal.
* The output matrix is
*
*              ( T1     X*D          Y    )
*              (  0  inv(D)*B*D  inv(D)*Z ).
*              (  0      0           T2   )
*
* Information about the permutations P and the diagonal matrix D is
* returned in the vector SCALE.
*
* This subroutine is based on the EISPACK routine BALANC.
*
* Modified by Tzu-Yi Chen, Computer Science Division, University of
* California at Berkeley, USA
*
* =====================================================================
*
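EXAMPLE:
A minimal Ruby sketch (assuming the ruby-lapack gem is loaded via require 'numru/lapack' and that the matrix is held in an NArray in the layout the binding expects):

    require 'narray'
    require 'numru/lapack'

    # A badly scaled 3-by-3 matrix; balancing makes its rows and columns
    # closer in norm before an eigenvalue computation.
    a = NArray[[1.0,    0.0,    0.0  ],
               [2.0,    3.0,    1.0e6],
               [0.0,    1.0e-6, 4.0  ]]
    ilo, ihi, scale, info, a = NumRu::Lapack.dgebal('B', a)
    # ilo..ihi delimit the submatrix still to be processed; scale holds
    # the permutation and scaling details needed later by dgebak.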
dgebd2
USAGE:
d, e, tauq, taup, info, a = NumRu::Lapack.dgebd2( m, a, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEBD2( M, N, A, LDA, D, E, TAUQ, TAUP, WORK, INFO )
* Purpose
* =======
*
* DGEBD2 reduces a real general m by n matrix A to upper or lower
* bidiagonal form B by an orthogonal transformation: Q' * A * P = B.
*
* If m >= n, B is upper bidiagonal; if m < n, B is lower bidiagonal.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows in the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns in the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the m by n general matrix to be reduced.
* On exit,
* if m >= n, the diagonal and the first superdiagonal are
* overwritten with the upper bidiagonal matrix B; the
* elements below the diagonal, with the array TAUQ, represent
* the orthogonal matrix Q as a product of elementary
* reflectors, and the elements above the first superdiagonal,
* with the array TAUP, represent the orthogonal matrix P as
* a product of elementary reflectors;
* if m < n, the diagonal and the first subdiagonal are
* overwritten with the lower bidiagonal matrix B; the
* elements below the first subdiagonal, with the array TAUQ,
* represent the orthogonal matrix Q as a product of
* elementary reflectors, and the elements above the diagonal,
* with the array TAUP, represent the orthogonal matrix P as
* a product of elementary reflectors.
* See Further Details.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* D (output) DOUBLE PRECISION array, dimension (min(M,N))
* The diagonal elements of the bidiagonal matrix B:
* D(i) = A(i,i).
*
* E (output) DOUBLE PRECISION array, dimension (min(M,N)-1)
* The off-diagonal elements of the bidiagonal matrix B:
* if m >= n, E(i) = A(i,i+1) for i = 1,2,...,n-1;
* if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1.
*
* TAUQ (output) DOUBLE PRECISION array dimension (min(M,N))
* The scalar factors of the elementary reflectors which
* represent the orthogonal matrix Q. See Further Details.
*
* TAUP (output) DOUBLE PRECISION array, dimension (min(M,N))
* The scalar factors of the elementary reflectors which
* represent the orthogonal matrix P. See Further Details.
*
* WORK (workspace) DOUBLE PRECISION array, dimension (max(M,N))
*
* INFO (output) INTEGER
* = 0: successful exit.
* < 0: if INFO = -i, the i-th argument had an illegal value.
*
* Further Details
* ===============
*
* The matrices Q and P are represented as products of elementary
* reflectors:
*
* If m >= n,
*
* Q = H(1) H(2) . . . H(n) and P = G(1) G(2) . . . G(n-1)
*
* Each H(i) and G(i) has the form:
*
* H(i) = I - tauq * v * v' and G(i) = I - taup * u * u'
*
* where tauq and taup are real scalars, and v and u are real vectors;
* v(1:i-1) = 0, v(i) = 1, and v(i+1:m) is stored on exit in A(i+1:m,i);
* u(1:i) = 0, u(i+1) = 1, and u(i+2:n) is stored on exit in A(i,i+2:n);
* tauq is stored in TAUQ(i) and taup in TAUP(i).
*
* If m < n,
*
* Q = H(1) H(2) . . . H(m-1) and P = G(1) G(2) . . . G(m)
*
* Each H(i) and G(i) has the form:
*
* H(i) = I - tauq * v * v' and G(i) = I - taup * u * u'
*
* where tauq and taup are real scalars, and v and u are real vectors;
* v(1:i) = 0, v(i+1) = 1, and v(i+2:m) is stored on exit in A(i+2:m,i);
* u(1:i-1) = 0, u(i) = 1, and u(i+1:n) is stored on exit in A(i,i+1:n);
* tauq is stored in TAUQ(i) and taup in TAUP(i).
*
* The contents of A on exit are illustrated by the following examples:
*
*  m = 6 and n = 5 (m > n):            m = 5 and n = 6 (m < n):
*
*    (  d   e   u1  u1  u1 )             (  d   u1  u1  u1  u1  u1 )
*    (  v1  d   e   u2  u2 )             (  e   d   u2  u2  u2  u2 )
*    (  v1  v2  d   e   u3 )             (  v1  e   d   u3  u3  u3 )
*    (  v1  v2  v3  d   e  )             (  v1  v2  e   d   u4  u4 )
*    (  v1  v2  v3  v4  d  )             (  v1  v2  v3  e   d   u5 )
*    (  v1  v2  v3  v4  v5 )
*
* where d and e denote diagonal and off-diagonal elements of B, vi
* denotes an element of the vector defining H(i), and ui an element of
* the vector defining G(i).
*
* =====================================================================
*
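EXAMPLE:
A minimal Ruby sketch of the unblocked reduction; the m argument is the row count of a, as in the USAGE line, and the matrix layout is assumed to follow ruby-lapack's NArray conventions.

    require 'numru/lapack'

    m, n = 6, 5
    a = NArray.float(m, n).indgen!           # some m-by-n test matrix
    d, e, tauq, taup, info, a = NumRu::Lapack.dgebd2(m, a)
    # d and e hold the diagonal and off-diagonal of B; the elementary
    # reflectors defining Q and P stay packed in a with tauq and taup.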
dgebrd
USAGE:
d, e, tauq, taup, work, info, a = NumRu::Lapack.dgebrd( m, a, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEBRD( M, N, A, LDA, D, E, TAUQ, TAUP, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGEBRD reduces a general real M-by-N matrix A to upper or lower
* bidiagonal form B by an orthogonal transformation: Q**T * A * P = B.
*
* If m >= n, B is upper bidiagonal; if m < n, B is lower bidiagonal.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows in the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns in the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N general matrix to be reduced.
* On exit,
* if m >= n, the diagonal and the first superdiagonal are
* overwritten with the upper bidiagonal matrix B; the
* elements below the diagonal, with the array TAUQ, represent
* the orthogonal matrix Q as a product of elementary
* reflectors, and the elements above the first superdiagonal,
* with the array TAUP, represent the orthogonal matrix P as
* a product of elementary reflectors;
* if m < n, the diagonal and the first subdiagonal are
* overwritten with the lower bidiagonal matrix B; the
* elements below the first subdiagonal, with the array TAUQ,
* represent the orthogonal matrix Q as a product of
* elementary reflectors, and the elements above the diagonal,
* with the array TAUP, represent the orthogonal matrix P as
* a product of elementary reflectors.
* See Further Details.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* D (output) DOUBLE PRECISION array, dimension (min(M,N))
* The diagonal elements of the bidiagonal matrix B:
* D(i) = A(i,i).
*
* E (output) DOUBLE PRECISION array, dimension (min(M,N)-1)
* The off-diagonal elements of the bidiagonal matrix B:
* if m >= n, E(i) = A(i,i+1) for i = 1,2,...,n-1;
* if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1.
*
* TAUQ (output) DOUBLE PRECISION array dimension (min(M,N))
* The scalar factors of the elementary reflectors which
* represent the orthogonal matrix Q. See Further Details.
*
* TAUP (output) DOUBLE PRECISION array, dimension (min(M,N))
* The scalar factors of the elementary reflectors which
* represent the orthogonal matrix P. See Further Details.
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The length of the array WORK. LWORK >= max(1,M,N).
* For optimum performance LWORK >= (M+N)*NB, where NB
* is the optimal blocksize.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value.
*
* Further Details
* ===============
*
* The matrices Q and P are represented as products of elementary
* reflectors:
*
* If m >= n,
*
* Q = H(1) H(2) . . . H(n) and P = G(1) G(2) . . . G(n-1)
*
* Each H(i) and G(i) has the form:
*
* H(i) = I - tauq * v * v' and G(i) = I - taup * u * u'
*
* where tauq and taup are real scalars, and v and u are real vectors;
* v(1:i-1) = 0, v(i) = 1, and v(i+1:m) is stored on exit in A(i+1:m,i);
* u(1:i) = 0, u(i+1) = 1, and u(i+2:n) is stored on exit in A(i,i+2:n);
* tauq is stored in TAUQ(i) and taup in TAUP(i).
*
* If m < n,
*
* Q = H(1) H(2) . . . H(m-1) and P = G(1) G(2) . . . G(m)
*
* Each H(i) and G(i) has the form:
*
* H(i) = I - tauq * v * v' and G(i) = I - taup * u * u'
*
* where tauq and taup are real scalars, and v and u are real vectors;
* v(1:i) = 0, v(i+1) = 1, and v(i+2:m) is stored on exit in A(i+2:m,i);
* u(1:i-1) = 0, u(i) = 1, and u(i+1:n) is stored on exit in A(i,i+1:n);
* tauq is stored in TAUQ(i) and taup in TAUP(i).
*
* The contents of A on exit are illustrated by the following examples:
*
*  m = 6 and n = 5 (m > n):            m = 5 and n = 6 (m < n):
*
*    (  d   e   u1  u1  u1 )             (  d   u1  u1  u1  u1  u1 )
*    (  v1  d   e   u2  u2 )             (  e   d   u2  u2  u2  u2 )
*    (  v1  v2  d   e   u3 )             (  v1  e   d   u3  u3  u3 )
*    (  v1  v2  v3  d   e  )             (  v1  v2  e   d   u4  u4 )
*    (  v1  v2  v3  v4  d  )             (  v1  v2  v3  e   d   u5 )
*    (  v1  v2  v3  v4  v5 )
*
* where d and e denote diagonal and off-diagonal elements of B, vi
* denotes an element of the vector defining H(i), and ui an element of
* the vector defining G(i).
*
* =====================================================================
*
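EXAMPLE:
A minimal Ruby sketch of the blocked routine. When :lwork is omitted the binding is assumed to size the workspace itself; per the manual, WORK(1) reports the optimal LWORK afterwards.

    require 'numru/lapack'

    m, n = 6, 5
    a = NArray.float(m, n).indgen!           # some m-by-n test matrix
    d, e, tauq, taup, work, info, a = NumRu::Lapack.dgebrd(m, a)
    optimal_lwork = work[0]                  # WORK(1) = optimal LWORK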
dgecon
USAGE:
rcond, info = NumRu::Lapack.dgecon( norm, a, anorm, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGECON( NORM, N, A, LDA, ANORM, RCOND, WORK, IWORK, INFO )
* Purpose
* =======
*
* DGECON estimates the reciprocal of the condition number of a general
* real matrix A, in either the 1-norm or the infinity-norm, using
* the LU factorization computed by DGETRF.
*
* An estimate is obtained for norm(inv(A)), and the reciprocal of the
* condition number is computed as
* RCOND = 1 / ( norm(A) * norm(inv(A)) ).
*
* Arguments
* =========
*
* NORM (input) CHARACTER*1
* Specifies whether the 1-norm condition number or the
* infinity-norm condition number is required:
* = '1' or 'O': 1-norm;
* = 'I': Infinity-norm.
*
* N (input) INTEGER
* The order of the matrix A. N >= 0.
*
* A (input) DOUBLE PRECISION array, dimension (LDA,N)
* The factors L and U from the factorization A = P*L*U
* as computed by DGETRF.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* ANORM (input) DOUBLE PRECISION
* If NORM = '1' or 'O', the 1-norm of the original matrix A.
* If NORM = 'I', the infinity-norm of the original matrix A.
*
* RCOND (output) DOUBLE PRECISION
* The reciprocal of the condition number of the matrix A,
* computed as RCOND = 1/(norm(A) * norm(inv(A))).
*
* WORK (workspace) DOUBLE PRECISION array, dimension (4*N)
*
* IWORK (workspace) INTEGER array, dimension (N)
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* =====================================================================
*
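EXAMPLE:
A minimal Ruby sketch; lu is assumed to already hold the LU factorization of A as computed by DGETRF, and anorm the 1-norm of the original (unfactored) A.

    require 'numru/lapack'

    # Estimate 1/(norm(A)*norm(inv(A))) from the precomputed LU factors.
    rcond, info = NumRu::Lapack.dgecon('1', lu, anorm)
    puts "estimated condition number ~ #{1.0 / rcond}" if info == 0 && rcond > 0.0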
dgeequ
USAGE:
r, c, rowcnd, colcnd, amax, info = NumRu::Lapack.dgeequ( a, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEEQU( M, N, A, LDA, R, C, ROWCND, COLCND, AMAX, INFO )
* Purpose
* =======
*
* DGEEQU computes row and column scalings intended to equilibrate an
* M-by-N matrix A and reduce its condition number. R returns the row
* scale factors and C the column scale factors, chosen to try to make
* the largest element in each row and column of the matrix B with
* elements B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1.
*
* R(i) and C(j) are restricted to be between SMLNUM = smallest safe
* number and BIGNUM = largest safe number. Use of these scaling
* factors is not guaranteed to reduce the condition number of A but
* works well in practice.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input) DOUBLE PRECISION array, dimension (LDA,N)
* The M-by-N matrix whose equilibration factors are
* to be computed.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* R (output) DOUBLE PRECISION array, dimension (M)
* If INFO = 0 or INFO > M, R contains the row scale factors
* for A.
*
* C (output) DOUBLE PRECISION array, dimension (N)
* If INFO = 0, C contains the column scale factors for A.
*
* ROWCND (output) DOUBLE PRECISION
* If INFO = 0 or INFO > M, ROWCND contains the ratio of the
* smallest R(i) to the largest R(i). If ROWCND >= 0.1 and
* AMAX is neither too large nor too small, it is not worth
* scaling by R.
*
* COLCND (output) DOUBLE PRECISION
* If INFO = 0, COLCND contains the ratio of the smallest
* C(i) to the largest C(i). If COLCND >= 0.1, it is not
* worth scaling by C.
*
* AMAX (output) DOUBLE PRECISION
* Absolute value of largest matrix element. If AMAX is very
* close to overflow or very close to underflow, the matrix
* should be scaled.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
* > 0: if INFO = i, and i is
* <= M: the i-th row of A is exactly zero
* > M: the (i-M)-th column of A is exactly zero
*
* =====================================================================
*
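EXAMPLE:
A minimal Ruby sketch, using the ROWCND/COLCND thresholds quoted above to decide whether scaling is worthwhile.

    require 'numru/lapack'

    a = NArray[[1.0e-8, 2.0   ],
               [3.0,    4.0e+8]]
    r, c, rowcnd, colcnd, amax, info = NumRu::Lapack.dgeequ(a)
    scale_rows = (rowcnd < 0.1)    # per the manual, scaling by R pays off below 0.1
    scale_cols = (colcnd < 0.1)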
dgeequb
USAGE:
r, c, rowcnd, colcnd, amax, info = NumRu::Lapack.dgeequb( a, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEEQUB( M, N, A, LDA, R, C, ROWCND, COLCND, AMAX, INFO )
* Purpose
* =======
*
* DGEEQUB computes row and column scalings intended to equilibrate an
* M-by-N matrix A and reduce its condition number. R returns the row
* scale factors and C the column scale factors, chosen to try to make
* the largest element in each row and column of the matrix B with
* elements B(i,j)=R(i)*A(i,j)*C(j) have an absolute value of at most
* the radix.
*
* R(i) and C(j) are restricted to be a power of the radix between
* SMLNUM = smallest safe number and BIGNUM = largest safe number. Use
* of these scaling factors is not guaranteed to reduce the condition
* number of A but works well in practice.
*
* This routine differs from DGEEQU by restricting the scaling factors
* to a power of the radix. Barring over- and underflow, scaling by
* these factors introduces no additional rounding errors. However, the
* scaled entries' magnitudes are no longer approximately 1 but lie
* between sqrt(radix) and 1/sqrt(radix).
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input) DOUBLE PRECISION array, dimension (LDA,N)
* The M-by-N matrix whose equilibration factors are
* to be computed.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* R (output) DOUBLE PRECISION array, dimension (M)
* If INFO = 0 or INFO > M, R contains the row scale factors
* for A.
*
* C (output) DOUBLE PRECISION array, dimension (N)
* If INFO = 0, C contains the column scale factors for A.
*
* ROWCND (output) DOUBLE PRECISION
* If INFO = 0 or INFO > M, ROWCND contains the ratio of the
* smallest R(i) to the largest R(i). If ROWCND >= 0.1 and
* AMAX is neither too large nor too small, it is not worth
* scaling by R.
*
* COLCND (output) DOUBLE PRECISION
* If INFO = 0, COLCND contains the ratio of the smallest
* C(i) to the largest C(i). If COLCND >= 0.1, it is not
* worth scaling by C.
*
* AMAX (output) DOUBLE PRECISION
* Absolute value of largest matrix element. If AMAX is very
* close to overflow or very close to underflow, the matrix
* should be scaled.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
* > 0: if INFO = i, and i is
* <= M: the i-th row of A is exactly zero
* > M: the (i-M)-th column of A is exactly zero
*
* =====================================================================
*
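EXAMPLE:
The call shape is the same as for dgeequ; a minimal Ruby sketch, reusing the NArray a from the dgeequ sketch above. The only difference is that the returned factors are powers of the floating-point radix.

    require 'numru/lapack'

    r, c, rowcnd, colcnd, amax, info = NumRu::Lapack.dgeequb(a)
    # r and c are powers of the radix, so applying them introduces no
    # extra rounding error (barring over/underflow).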
dgees
USAGE:
sdim, wr, wi, vs, work, info, a = NumRu::Lapack.dgees( jobvs, sort, a, [:lwork => lwork, :usage => usage, :help => help]){|a,b| ... }
FORTRAN MANUAL
SUBROUTINE DGEES( JOBVS, SORT, SELECT, N, A, LDA, SDIM, WR, WI, VS, LDVS, WORK, LWORK, BWORK, INFO )
* Purpose
* =======
*
* DGEES computes for an N-by-N real nonsymmetric matrix A, the
* eigenvalues, the real Schur form T, and, optionally, the matrix of
* Schur vectors Z. This gives the Schur factorization A = Z*T*(Z**T).
*
* Optionally, it also orders the eigenvalues on the diagonal of the
* real Schur form so that selected eigenvalues are at the top left.
* The leading columns of Z then form an orthonormal basis for the
* invariant subspace corresponding to the selected eigenvalues.
*
* A matrix is in real Schur form if it is upper quasi-triangular with
* 1-by-1 and 2-by-2 blocks. 2-by-2 blocks will be standardized in the
* form
* [ a b ]
* [ c a ]
*
* where b*c < 0. The eigenvalues of such a block are a +- sqrt(bc).
*
* Arguments
* =========
*
* JOBVS (input) CHARACTER*1
* = 'N': Schur vectors are not computed;
* = 'V': Schur vectors are computed.
*
* SORT (input) CHARACTER*1
* Specifies whether or not to order the eigenvalues on the
* diagonal of the Schur form.
* = 'N': Eigenvalues are not ordered;
* = 'S': Eigenvalues are ordered (see SELECT).
*
* SELECT (external procedure) LOGICAL FUNCTION of two DOUBLE PRECISION arguments
* SELECT must be declared EXTERNAL in the calling subroutine.
* If SORT = 'S', SELECT is used to select eigenvalues to sort
* to the top left of the Schur form.
* If SORT = 'N', SELECT is not referenced.
* An eigenvalue WR(j)+sqrt(-1)*WI(j) is selected if
* SELECT(WR(j),WI(j)) is true; i.e., if either one of a complex
* conjugate pair of eigenvalues is selected, then both complex
* eigenvalues are selected.
* Note that a selected complex eigenvalue may no longer
* satisfy SELECT(WR(j),WI(j)) = .TRUE. after ordering, since
* ordering may change the value of complex eigenvalues
* (especially if the eigenvalue is ill-conditioned); in this
* case INFO is set to N+2 (see INFO below).
*
* N (input) INTEGER
* The order of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the N-by-N matrix A.
* On exit, A has been overwritten by its real Schur form T.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* SDIM (output) INTEGER
* If SORT = 'N', SDIM = 0.
* If SORT = 'S', SDIM = number of eigenvalues (after sorting)
* for which SELECT is true. (Complex conjugate
* pairs for which SELECT is true for either
* eigenvalue count as 2.)
*
* WR (output) DOUBLE PRECISION array, dimension (N)
* WI (output) DOUBLE PRECISION array, dimension (N)
* WR and WI contain the real and imaginary parts,
* respectively, of the computed eigenvalues in the same order
* that they appear on the diagonal of the output Schur form T.
* Complex conjugate pairs of eigenvalues will appear
* consecutively with the eigenvalue having the positive
* imaginary part first.
*
* VS (output) DOUBLE PRECISION array, dimension (LDVS,N)
* If JOBVS = 'V', VS contains the orthogonal matrix Z of Schur
* vectors.
* If JOBVS = 'N', VS is not referenced.
*
* LDVS (input) INTEGER
* The leading dimension of the array VS. LDVS >= 1; if
* JOBVS = 'V', LDVS >= N.
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) contains the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK >= max(1,3*N).
* For good performance, LWORK must generally be larger.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* BWORK (workspace) LOGICAL array, dimension (N)
* Not referenced if SORT = 'N'.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value.
* > 0: if INFO = i, and i is
* <= N: the QR algorithm failed to compute all the
* eigenvalues; elements 1:ILO-1 and i+1:N of WR and WI
* contain those eigenvalues which have converged; if
* JOBVS = 'V', VS contains the matrix which reduces A
* to its partially converged Schur form.
* = N+1: the eigenvalues could not be reordered because some
* eigenvalues were too close to separate (the problem
* is very ill-conditioned);
* = N+2: after reordering, roundoff changed values of some
* complex eigenvalues so that leading eigenvalues in
* the Schur form no longer satisfy SELECT=.TRUE. This
* could also be caused by underflow due to scaling.
*
* =====================================================================
*
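EXAMPLE:
A minimal Ruby sketch; the Ruby block plays the role of the SELECT function and presumably receives (WR(j), WI(j)) for each eigenvalue. Here eigenvalues with negative real part are ordered to the top left of the Schur form.

    require 'numru/lapack'

    a = NArray[[ 0.0, 2.0,  0.0],
               [ 1.0, 0.0,  0.0],
               [ 0.0, 0.0, -3.0]]
    sdim, wr, wi, vs, work, info, t =
      NumRu::Lapack.dgees('V', 'S', a) { |re, im| re < 0.0 }
    # sdim eigenvalues satisfied the block; the first sdim columns of vs
    # span the corresponding invariant subspace, and t is the Schur form T.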
dgeesx
USAGE:
sdim, wr, wi, vs, rconde, rcondv, work, iwork, info, a = NumRu::Lapack.dgeesx( jobvs, sort, sense, a, liwork, [:lwork => lwork, :usage => usage, :help => help]){|a,b| ... }
FORTRAN MANUAL
SUBROUTINE DGEESX( JOBVS, SORT, SELECT, SENSE, N, A, LDA, SDIM, WR, WI, VS, LDVS, RCONDE, RCONDV, WORK, LWORK, IWORK, LIWORK, BWORK, INFO )
* Purpose
* =======
*
* DGEESX computes for an N-by-N real nonsymmetric matrix A, the
* eigenvalues, the real Schur form T, and, optionally, the matrix of
* Schur vectors Z. This gives the Schur factorization A = Z*T*(Z**T).
*
* Optionally, it also orders the eigenvalues on the diagonal of the
* real Schur form so that selected eigenvalues are at the top left;
* computes a reciprocal condition number for the average of the
* selected eigenvalues (RCONDE); and computes a reciprocal condition
* number for the right invariant subspace corresponding to the
* selected eigenvalues (RCONDV). The leading columns of Z form an
* orthonormal basis for this invariant subspace.
*
* For further explanation of the reciprocal condition numbers RCONDE
* and RCONDV, see Section 4.10 of the LAPACK Users' Guide (where
* these quantities are called s and sep respectively).
*
* A real matrix is in real Schur form if it is upper quasi-triangular
* with 1-by-1 and 2-by-2 blocks. 2-by-2 blocks will be standardized in
* the form
* [ a b ]
* [ c a ]
*
* where b*c < 0. The eigenvalues of such a block are a +- sqrt(bc).
*
* Arguments
* =========
*
* JOBVS (input) CHARACTER*1
* = 'N': Schur vectors are not computed;
* = 'V': Schur vectors are computed.
*
* SORT (input) CHARACTER*1
* Specifies whether or not to order the eigenvalues on the
* diagonal of the Schur form.
* = 'N': Eigenvalues are not ordered;
* = 'S': Eigenvalues are ordered (see SELECT).
*
* SELECT (external procedure) LOGICAL FUNCTION of two DOUBLE PRECISION arguments
* SELECT must be declared EXTERNAL in the calling subroutine.
* If SORT = 'S', SELECT is used to select eigenvalues to sort
* to the top left of the Schur form.
* If SORT = 'N', SELECT is not referenced.
* An eigenvalue WR(j)+sqrt(-1)*WI(j) is selected if
* SELECT(WR(j),WI(j)) is true; i.e., if either one of a
* complex conjugate pair of eigenvalues is selected, then both
* are. Note that a selected complex eigenvalue may no longer
* satisfy SELECT(WR(j),WI(j)) = .TRUE. after ordering, since
* ordering may change the value of complex eigenvalues
* (especially if the eigenvalue is ill-conditioned); in this
* case INFO may be set to N+3 (see INFO below).
*
* SENSE (input) CHARACTER*1
* Determines which reciprocal condition numbers are computed.
* = 'N': None are computed;
* = 'E': Computed for average of selected eigenvalues only;
* = 'V': Computed for selected right invariant subspace only;
* = 'B': Computed for both.
* If SENSE = 'E', 'V' or 'B', SORT must equal 'S'.
*
* N (input) INTEGER
* The order of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA, N)
* On entry, the N-by-N matrix A.
* On exit, A is overwritten by its real Schur form T.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* SDIM (output) INTEGER
* If SORT = 'N', SDIM = 0.
* If SORT = 'S', SDIM = number of eigenvalues (after sorting)
* for which SELECT is true. (Complex conjugate
* pairs for which SELECT is true for either
* eigenvalue count as 2.)
*
* WR (output) DOUBLE PRECISION array, dimension (N)
* WI (output) DOUBLE PRECISION array, dimension (N)
* WR and WI contain the real and imaginary parts, respectively,
* of the computed eigenvalues, in the same order that they
* appear on the diagonal of the output Schur form T. Complex
* conjugate pairs of eigenvalues appear consecutively with the
* eigenvalue having the positive imaginary part first.
*
* VS (output) DOUBLE PRECISION array, dimension (LDVS,N)
* If JOBVS = 'V', VS contains the orthogonal matrix Z of Schur
* vectors.
* If JOBVS = 'N', VS is not referenced.
*
* LDVS (input) INTEGER
* The leading dimension of the array VS. LDVS >= 1, and if
* JOBVS = 'V', LDVS >= N.
*
* RCONDE (output) DOUBLE PRECISION
* If SENSE = 'E' or 'B', RCONDE contains the reciprocal
* condition number for the average of the selected eigenvalues.
* Not referenced if SENSE = 'N' or 'V'.
*
* RCONDV (output) DOUBLE PRECISION
* If SENSE = 'V' or 'B', RCONDV contains the reciprocal
* condition number for the selected right invariant subspace.
* Not referenced if SENSE = 'N' or 'E'.
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK >= max(1,3*N).
* Also, if SENSE = 'E' or 'V' or 'B',
* LWORK >= N+2*SDIM*(N-SDIM), where SDIM is the number of
* selected eigenvalues computed by this routine. Note that
* N+2*SDIM*(N-SDIM) <= N+N*N/2. Note also that an error is only
* returned if LWORK < max(1,3*N), but if SENSE = 'E' or 'V' or
* 'B' this may not be large enough.
* For good performance, LWORK must generally be larger.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates upper bounds on the optimal sizes of the
* arrays WORK and IWORK, returns these values as the first
* entries of the WORK and IWORK arrays, and no error messages
* related to LWORK or LIWORK are issued by XERBLA.
*
* IWORK (workspace/output) INTEGER array, dimension (MAX(1,LIWORK))
* On exit, if INFO = 0, IWORK(1) returns the optimal LIWORK.
*
* LIWORK (input) INTEGER
* The dimension of the array IWORK.
* LIWORK >= 1; if SENSE = 'V' or 'B', LIWORK >= SDIM*(N-SDIM).
* Note that SDIM*(N-SDIM) <= N*N/4. Note also that an error is
* only returned if LIWORK < 1, but if SENSE = 'V' or 'B' this
* may not be large enough.
*
* If LIWORK = -1, then a workspace query is assumed; the
* routine only calculates upper bounds on the optimal sizes of
* the arrays WORK and IWORK, returns these values as the first
* entries of the WORK and IWORK arrays, and no error messages
* related to LWORK or LIWORK are issued by XERBLA.
*
* BWORK (workspace) LOGICAL array, dimension (N)
* Not referenced if SORT = 'N'.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value.
* > 0: if INFO = i, and i is
* <= N: the QR algorithm failed to compute all the
* eigenvalues; elements 1:ILO-1 and i+1:N of WR and WI
* contain those eigenvalues which have converged; if
* JOBVS = 'V', VS contains the transformation which
* reduces A to its partially converged Schur form.
* = N+1: the eigenvalues could not be reordered because some
* eigenvalues were too close to separate (the problem
* is very ill-conditioned);
* = N+2: after reordering, roundoff changed values of some
* complex eigenvalues so that leading eigenvalues in
* the Schur form no longer satisfy SELECT=.TRUE. This
* could also be caused by underflow due to scaling.
*
* =====================================================================
*
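EXAMPLE:
A minimal Ruby sketch; a is assumed to be an n-by-n NArray (as in the dgees sketch above), liwork is passed explicitly and sized at the N*N/4 upper bound quoted above, and the block again acts as SELECT.

    require 'numru/lapack'

    n = a.shape[1]
    liwork = [1, n * n / 4].max
    sdim, wr, wi, vs, rconde, rcondv, work, iwork, info, t =
      NumRu::Lapack.dgeesx('V', 'S', 'B', a, liwork) { |re, im| re < 0.0 }
    # rconde/rcondv are the reciprocal condition numbers for the selected
    # eigenvalue cluster and its right invariant subspace.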
dgeev
USAGE:
wr, wi, vl, vr, work, info, a = NumRu::Lapack.dgeev( jobvl, jobvr, a, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEEV( JOBVL, JOBVR, N, A, LDA, WR, WI, VL, LDVL, VR, LDVR, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGEEV computes for an N-by-N real nonsymmetric matrix A, the
* eigenvalues and, optionally, the left and/or right eigenvectors.
*
* The right eigenvector v(j) of A satisfies
* A * v(j) = lambda(j) * v(j)
* where lambda(j) is its eigenvalue.
* The left eigenvector u(j) of A satisfies
* u(j)**H * A = lambda(j) * u(j)**H
* where u(j)**H denotes the conjugate transpose of u(j).
*
* The computed eigenvectors are normalized to have Euclidean norm
* equal to 1 and largest component real.
*
* Arguments
* =========
*
* JOBVL (input) CHARACTER*1
* = 'N': left eigenvectors of A are not computed;
* = 'V': left eigenvectors of A are computed.
*
* JOBVR (input) CHARACTER*1
* = 'N': right eigenvectors of A are not computed;
* = 'V': right eigenvectors of A are computed.
*
* N (input) INTEGER
* The order of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the N-by-N matrix A.
* On exit, A has been overwritten.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* WR (output) DOUBLE PRECISION array, dimension (N)
* WI (output) DOUBLE PRECISION array, dimension (N)
* WR and WI contain the real and imaginary parts,
* respectively, of the computed eigenvalues. Complex
* conjugate pairs of eigenvalues appear consecutively
* with the eigenvalue having the positive imaginary part
* first.
*
* VL (output) DOUBLE PRECISION array, dimension (LDVL,N)
* If JOBVL = 'V', the left eigenvectors u(j) are stored one
* after another in the columns of VL, in the same order
* as their eigenvalues.
* If JOBVL = 'N', VL is not referenced.
* If the j-th eigenvalue is real, then u(j) = VL(:,j),
* the j-th column of VL.
* If the j-th and (j+1)-st eigenvalues form a complex
* conjugate pair, then u(j) = VL(:,j) + i*VL(:,j+1) and
* u(j+1) = VL(:,j) - i*VL(:,j+1).
*
* LDVL (input) INTEGER
* The leading dimension of the array VL. LDVL >= 1; if
* JOBVL = 'V', LDVL >= N.
*
* VR (output) DOUBLE PRECISION array, dimension (LDVR,N)
* If JOBVR = 'V', the right eigenvectors v(j) are stored one
* after another in the columns of VR, in the same order
* as their eigenvalues.
* If JOBVR = 'N', VR is not referenced.
* If the j-th eigenvalue is real, then v(j) = VR(:,j),
* the j-th column of VR.
* If the j-th and (j+1)-st eigenvalues form a complex
* conjugate pair, then v(j) = VR(:,j) + i*VR(:,j+1) and
* v(j+1) = VR(:,j) - i*VR(:,j+1).
*
* LDVR (input) INTEGER
* The leading dimension of the array VR. LDVR >= 1; if
* JOBVR = 'V', LDVR >= N.
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK >= max(1,3*N), and
* if JOBVL = 'V' or JOBVR = 'V', LWORK >= 4*N. For good
* performance, LWORK must generally be larger.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value.
* > 0: if INFO = i, the QR algorithm failed to compute all the
* eigenvalues, and no eigenvectors have been computed;
* elements i+1:N of WR and WI contain eigenvalues which
* have converged.
*
* =====================================================================
*
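EXAMPLE:
A minimal Ruby sketch computing eigenvalues and right eigenvectors; complex conjugate pairs come back split across wr/wi and across adjacent columns of vr, as described above.

    require 'numru/lapack'

    a = NArray[[0.0, -1.0],
               [1.0,  0.0]]                  # plane rotation; eigenvalues are +i and -i
    wr, wi, vl, vr, work, info, a = NumRu::Lapack.dgeev('N', 'V', a)
    eigvals = (0...wr.size).map { |j| Complex(wr[j], wi[j]) }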
dgeevx
USAGE:
wr, wi, vl, vr, ilo, ihi, scale, abnrm, rconde, rcondv, work, info, a = NumRu::Lapack.dgeevx( balanc, jobvl, jobvr, sense, a, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEEVX( BALANC, JOBVL, JOBVR, SENSE, N, A, LDA, WR, WI, VL, LDVL, VR, LDVR, ILO, IHI, SCALE, ABNRM, RCONDE, RCONDV, WORK, LWORK, IWORK, INFO )
* Purpose
* =======
*
* DGEEVX computes for an N-by-N real nonsymmetric matrix A, the
* eigenvalues and, optionally, the left and/or right eigenvectors.
*
* Optionally also, it computes a balancing transformation to improve
* the conditioning of the eigenvalues and eigenvectors (ILO, IHI,
* SCALE, and ABNRM), reciprocal condition numbers for the eigenvalues
* (RCONDE), and reciprocal condition numbers for the right
* eigenvectors (RCONDV).
*
* The right eigenvector v(j) of A satisfies
* A * v(j) = lambda(j) * v(j)
* where lambda(j) is its eigenvalue.
* The left eigenvector u(j) of A satisfies
* u(j)**H * A = lambda(j) * u(j)**H
* where u(j)**H denotes the conjugate transpose of u(j).
*
* The computed eigenvectors are normalized to have Euclidean norm
* equal to 1 and largest component real.
*
* Balancing a matrix means permuting the rows and columns to make it
* more nearly upper triangular, and applying a diagonal similarity
* transformation D * A * D**(-1), where D is a diagonal matrix, to
* make its rows and columns closer in norm and the condition numbers
* of its eigenvalues and eigenvectors smaller. The computed
* reciprocal condition numbers correspond to the balanced matrix.
* Permuting rows and columns will not change the condition numbers
* (in exact arithmetic) but diagonal scaling will. For further
* explanation of balancing, see section 4.10.2 of the LAPACK
* Users' Guide.
*
* Arguments
* =========
*
* BALANC (input) CHARACTER*1
* Indicates how the input matrix should be diagonally scaled
* and/or permuted to improve the conditioning of its
* eigenvalues.
* = 'N': Do not diagonally scale or permute;
* = 'P': Perform permutations to make the matrix more nearly
* upper triangular. Do not diagonally scale;
* = 'S': Diagonally scale the matrix, i.e. replace A by
* D*A*D**(-1), where D is a diagonal matrix chosen
* to make the rows and columns of A more equal in
* norm. Do not permute;
* = 'B': Both diagonally scale and permute A.
*
* Computed reciprocal condition numbers will be for the matrix
* after balancing and/or permuting. Permuting does not change
* condition numbers (in exact arithmetic), but balancing does.
*
* JOBVL (input) CHARACTER*1
* = 'N': left eigenvectors of A are not computed;
* = 'V': left eigenvectors of A are computed.
* If SENSE = 'E' or 'B', JOBVL must = 'V'.
*
* JOBVR (input) CHARACTER*1
* = 'N': right eigenvectors of A are not computed;
* = 'V': right eigenvectors of A are computed.
* If SENSE = 'E' or 'B', JOBVR must = 'V'.
*
* SENSE (input) CHARACTER*1
* Determines which reciprocal condition numbers are computed.
* = 'N': None are computed;
* = 'E': Computed for eigenvalues only;
* = 'V': Computed for right eigenvectors only;
* = 'B': Computed for eigenvalues and right eigenvectors.
*
* If SENSE = 'E' or 'B', both left and right eigenvectors
* must also be computed (JOBVL = 'V' and JOBVR = 'V').
*
* N (input) INTEGER
* The order of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the N-by-N matrix A.
* On exit, A has been overwritten. If JOBVL = 'V' or
* JOBVR = 'V', A contains the real Schur form of the balanced
* version of the input matrix A.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* WR (output) DOUBLE PRECISION array, dimension (N)
* WI (output) DOUBLE PRECISION array, dimension (N)
* WR and WI contain the real and imaginary parts,
* respectively, of the computed eigenvalues. Complex
* conjugate pairs of eigenvalues will appear consecutively
* with the eigenvalue having the positive imaginary part
* first.
*
* VL (output) DOUBLE PRECISION array, dimension (LDVL,N)
* If JOBVL = 'V', the left eigenvectors u(j) are stored one
* after another in the columns of VL, in the same order
* as their eigenvalues.
* If JOBVL = 'N', VL is not referenced.
* If the j-th eigenvalue is real, then u(j) = VL(:,j),
* the j-th column of VL.
* If the j-th and (j+1)-st eigenvalues form a complex
* conjugate pair, then u(j) = VL(:,j) + i*VL(:,j+1) and
* u(j+1) = VL(:,j) - i*VL(:,j+1).
*
* LDVL (input) INTEGER
* The leading dimension of the array VL. LDVL >= 1; if
* JOBVL = 'V', LDVL >= N.
*
* VR (output) DOUBLE PRECISION array, dimension (LDVR,N)
* If JOBVR = 'V', the right eigenvectors v(j) are stored one
* after another in the columns of VR, in the same order
* as their eigenvalues.
* If JOBVR = 'N', VR is not referenced.
* If the j-th eigenvalue is real, then v(j) = VR(:,j),
* the j-th column of VR.
* If the j-th and (j+1)-st eigenvalues form a complex
* conjugate pair, then v(j) = VR(:,j) + i*VR(:,j+1) and
* v(j+1) = VR(:,j) - i*VR(:,j+1).
*
* LDVR (input) INTEGER
* The leading dimension of the array VR. LDVR >= 1, and if
* JOBVR = 'V', LDVR >= N.
*
* ILO (output) INTEGER
* IHI (output) INTEGER
* ILO and IHI are integer values determined when A was
* balanced. The balanced A(i,j) = 0 if I > J and
* J = 1,...,ILO-1 or I = IHI+1,...,N.
*
* SCALE (output) DOUBLE PRECISION array, dimension (N)
* Details of the permutations and scaling factors applied
* when balancing A. If P(j) is the index of the row and column
* interchanged with row and column j, and D(j) is the scaling
* factor applied to row and column j, then
* SCALE(J) = P(J), for J = 1,...,ILO-1
* = D(J), for J = ILO,...,IHI
* = P(J) for J = IHI+1,...,N.
* The order in which the interchanges are made is N to IHI+1,
* then 1 to ILO-1.
*
* ABNRM (output) DOUBLE PRECISION
* The one-norm of the balanced matrix (the maximum
* of the sum of absolute values of elements of any column).
*
* RCONDE (output) DOUBLE PRECISION array, dimension (N)
* RCONDE(j) is the reciprocal condition number of the j-th
* eigenvalue.
*
* RCONDV (output) DOUBLE PRECISION array, dimension (N)
* RCONDV(j) is the reciprocal condition number of the j-th
* right eigenvector.
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. If SENSE = 'N' or 'E',
* LWORK >= max(1,2*N), and if JOBVL = 'V' or JOBVR = 'V',
* LWORK >= 3*N. If SENSE = 'V' or 'B', LWORK >= N*(N+6).
* For good performance, LWORK must generally be larger.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* IWORK (workspace) INTEGER array, dimension (2*N-2)
* If SENSE = 'N' or 'E', not referenced.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value.
* > 0: if INFO = i, the QR algorithm failed to compute all the
* eigenvalues, and no eigenvectors or condition numbers
* have been computed; elements 1:ILO-1 and i+1:N of WR
* and WI contain eigenvalues which have converged.
*
* =====================================================================
*
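EXAMPLE:
A minimal Ruby sketch of the expert driver with balancing and condition estimates; a is assumed to be an n-by-n NArray.

    require 'numru/lapack'

    wr, wi, vl, vr, ilo, ihi, scale, abnrm, rconde, rcondv, work, info, a =
      NumRu::Lapack.dgeevx('B', 'V', 'V', 'B', a)
    # A rough error bound on the j-th eigenvalue (see the LAPACK Users'
    # Guide) is eps * abnrm / rconde[j], with eps the machine precision.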
dgegs
USAGE:
alphar, alphai, beta, vsl, vsr, work, info, a, b = NumRu::Lapack.dgegs( jobvsl, jobvsr, a, b, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEGS( JOBVSL, JOBVSR, N, A, LDA, B, LDB, ALPHAR, ALPHAI, BETA, VSL, LDVSL, VSR, LDVSR, WORK, LWORK, INFO )
* Purpose
* =======
*
* This routine is deprecated and has been replaced by routine DGGES.
*
* DGEGS computes the eigenvalues, real Schur form, and, optionally,
* left and or/right Schur vectors of a real matrix pair (A,B).
* Given two square matrices A and B, the generalized real Schur
* factorization has the form
*
* A = Q*S*Z**T, B = Q*T*Z**T
*
* where Q and Z are orthogonal matrices, T is upper triangular, and S
* is an upper quasi-triangular matrix with 1-by-1 and 2-by-2 diagonal
* blocks, the 2-by-2 blocks corresponding to complex conjugate pairs
* of eigenvalues of (A,B). The columns of Q are the left Schur vectors
* and the columns of Z are the right Schur vectors.
*
* If only the eigenvalues of (A,B) are needed, the driver routine
* DGEGV should be used instead. See DGEGV for a description of the
* eigenvalues of the generalized nonsymmetric eigenvalue problem
* (GNEP).
*
* Arguments
* =========
*
* JOBVSL (input) CHARACTER*1
* = 'N': do not compute the left Schur vectors;
* = 'V': compute the left Schur vectors (returned in VSL).
*
* JOBVSR (input) CHARACTER*1
* = 'N': do not compute the right Schur vectors;
* = 'V': compute the right Schur vectors (returned in VSR).
*
* N (input) INTEGER
* The order of the matrices A, B, VSL, and VSR. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA, N)
* On entry, the matrix A.
* On exit, the upper quasi-triangular matrix S from the
* generalized real Schur factorization.
*
* LDA (input) INTEGER
* The leading dimension of A. LDA >= max(1,N).
*
* B (input/output) DOUBLE PRECISION array, dimension (LDB, N)
* On entry, the matrix B.
* On exit, the upper triangular matrix T from the generalized
* real Schur factorization.
*
* LDB (input) INTEGER
* The leading dimension of B. LDB >= max(1,N).
*
* ALPHAR (output) DOUBLE PRECISION array, dimension (N)
* The real parts of each scalar alpha defining an eigenvalue
* of GNEP.
*
* ALPHAI (output) DOUBLE PRECISION array, dimension (N)
* The imaginary parts of each scalar alpha defining an
* eigenvalue of GNEP. If ALPHAI(j) is zero, then the j-th
* eigenvalue is real; if positive, then the j-th and (j+1)-st
* eigenvalues are a complex conjugate pair, with
* ALPHAI(j+1) = -ALPHAI(j).
*
* BETA (output) DOUBLE PRECISION array, dimension (N)
* The scalars beta that define the eigenvalues of GNEP.
* Together, the quantities alpha = (ALPHAR(j),ALPHAI(j)) and
* beta = BETA(j) represent the j-th eigenvalue of the matrix
* pair (A,B), in one of the forms lambda = alpha/beta or
* mu = beta/alpha. Since either lambda or mu may overflow,
* they should not, in general, be computed.
*
* VSL (output) DOUBLE PRECISION array, dimension (LDVSL,N)
* If JOBVSL = 'V', the matrix of left Schur vectors Q.
* Not referenced if JOBVSL = 'N'.
*
* LDVSL (input) INTEGER
* The leading dimension of the matrix VSL. LDVSL >=1, and
* if JOBVSL = 'V', LDVSL >= N.
*
* VSR (output) DOUBLE PRECISION array, dimension (LDVSR,N)
* If JOBVSR = 'V', the matrix of right Schur vectors Z.
* Not referenced if JOBVSR = 'N'.
*
* LDVSR (input) INTEGER
* The leading dimension of the matrix VSR. LDVSR >= 1, and
* if JOBVSR = 'V', LDVSR >= N.
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK >= max(1,4*N).
* For good performance, LWORK must generally be larger.
* To compute the optimal value of LWORK, call ILAENV to get
* blocksizes (for DGEQRF, DORMQR, and DORGQR.) Then compute:
* NB -- MAX of the blocksizes for DGEQRF, DORMQR, and DORGQR
* The optimal LWORK is 2*N + N*(NB+1).
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value.
* = 1,...,N:
* The QZ iteration failed. (A,B) are not in Schur
* form, but ALPHAR(j), ALPHAI(j), and BETA(j) should
* be correct for j=INFO+1,...,N.
* > N: errors that usually indicate LAPACK problems:
* =N+1: error return from DGGBAL
* =N+2: error return from DGEQRF
* =N+3: error return from DORMQR
* =N+4: error return from DORGQR
* =N+5: error return from DGGHRD
* =N+6: error return from DHGEQZ (other than failed
* iteration)
* =N+7: error return from DGGBAK (computing VSL)
* =N+8: error return from DGGBAK (computing VSR)
* =N+9: error return from DLASCL (various places)
*
* =====================================================================
*
dgegv
USAGE:
alphar, alphai, beta, vl, vr, work, info, a, b = NumRu::Lapack.dgegv( jobvl, jobvr, a, b, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEGV( JOBVL, JOBVR, N, A, LDA, B, LDB, ALPHAR, ALPHAI, BETA, VL, LDVL, VR, LDVR, WORK, LWORK, INFO )
* Purpose
* =======
*
* This routine is deprecated and has been replaced by routine DGGEV.
*
* DGEGV computes the eigenvalues and, optionally, the left and/or right
* eigenvectors of a real matrix pair (A,B).
* Given two square matrices A and B,
* the generalized nonsymmetric eigenvalue problem (GNEP) is to find the
* eigenvalues lambda and corresponding (non-zero) eigenvectors x such
* that
*
* A*x = lambda*B*x.
*
* An alternate form is to find the eigenvalues mu and corresponding
* eigenvectors y such that
*
* mu*A*y = B*y.
*
* These two forms are equivalent with mu = 1/lambda and x = y if
* neither lambda nor mu is zero. In order to deal with the case that
* lambda or mu is zero or small, two values alpha and beta are returned
* for each eigenvalue, such that lambda = alpha/beta and
* mu = beta/alpha.
*
* The vectors x and y in the above equations are right eigenvectors of
* the matrix pair (A,B). Vectors u and v satisfying
*
* u**H*A = lambda*u**H*B or mu*v**H*A = v**H*B
*
* are left eigenvectors of (A,B).
*
* Note: this routine performs "full balancing" on A and B -- see
* "Further Details", below.
*
* Arguments
* =========
*
* JOBVL (input) CHARACTER*1
* = 'N': do not compute the left generalized eigenvectors;
* = 'V': compute the left generalized eigenvectors (returned
* in VL).
*
* JOBVR (input) CHARACTER*1
* = 'N': do not compute the right generalized eigenvectors;
* = 'V': compute the right generalized eigenvectors (returned
* in VR).
*
* N (input) INTEGER
* The order of the matrices A, B, VL, and VR. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA, N)
* On entry, the matrix A.
* If JOBVL = 'V' or JOBVR = 'V', then on exit A
* contains the real Schur form of A from the generalized Schur
* factorization of the pair (A,B) after balancing.
* If no eigenvectors were computed, then only the diagonal
* blocks from the Schur form will be correct. See DGGHRD and
* DHGEQZ for details.
*
* LDA (input) INTEGER
* The leading dimension of A. LDA >= max(1,N).
*
* B (input/output) DOUBLE PRECISION array, dimension (LDB, N)
* On entry, the matrix B.
* If JOBVL = 'V' or JOBVR = 'V', then on exit B contains the
* upper triangular matrix obtained from B in the generalized
* Schur factorization of the pair (A,B) after balancing.
* If no eigenvectors were computed, then only those elements of
* B corresponding to the diagonal blocks from the Schur form of
* A will be correct. See DGGHRD and DHGEQZ for details.
*
* LDB (input) INTEGER
* The leading dimension of B. LDB >= max(1,N).
*
* ALPHAR (output) DOUBLE PRECISION array, dimension (N)
* The real parts of each scalar alpha defining an eigenvalue of
* GNEP.
*
* ALPHAI (output) DOUBLE PRECISION array, dimension (N)
* The imaginary parts of each scalar alpha defining an
* eigenvalue of GNEP. If ALPHAI(j) is zero, then the j-th
* eigenvalue is real; if positive, then the j-th and
* (j+1)-st eigenvalues are a complex conjugate pair, with
* ALPHAI(j+1) = -ALPHAI(j).
*
* BETA (output) DOUBLE PRECISION array, dimension (N)
* The scalars beta that define the eigenvalues of GNEP.
*
* Together, the quantities alpha = (ALPHAR(j),ALPHAI(j)) and
* beta = BETA(j) represent the j-th eigenvalue of the matrix
* pair (A,B), in one of the forms lambda = alpha/beta or
* mu = beta/alpha. Since either lambda or mu may overflow,
* they should not, in general, be computed.
*
* VL (output) DOUBLE PRECISION array, dimension (LDVL,N)
* If JOBVL = 'V', the left eigenvectors u(j) are stored
* in the columns of VL, in the same order as their eigenvalues.
* If the j-th eigenvalue is real, then u(j) = VL(:,j).
* If the j-th and (j+1)-st eigenvalues form a complex conjugate
* pair, then
* u(j) = VL(:,j) + i*VL(:,j+1)
* and
* u(j+1) = VL(:,j) - i*VL(:,j+1).
*
* Each eigenvector is scaled so that its largest component has
* abs(real part) + abs(imag. part) = 1, except for eigenvectors
* corresponding to an eigenvalue with alpha = beta = 0, which
* are set to zero.
* Not referenced if JOBVL = 'N'.
*
* LDVL (input) INTEGER
* The leading dimension of the matrix VL. LDVL >= 1, and
* if JOBVL = 'V', LDVL >= N.
*
* VR (output) DOUBLE PRECISION array, dimension (LDVR,N)
* If JOBVR = 'V', the right eigenvectors x(j) are stored
* in the columns of VR, in the same order as their eigenvalues.
* If the j-th eigenvalue is real, then x(j) = VR(:,j).
* If the j-th and (j+1)-st eigenvalues form a complex conjugate
* pair, then
* x(j) = VR(:,j) + i*VR(:,j+1)
* and
* x(j+1) = VR(:,j) - i*VR(:,j+1).
*
* Each eigenvector is scaled so that its largest component has
* abs(real part) + abs(imag. part) = 1, except for eigenvectors
* corresponding to an eigenvalue with alpha = beta = 0, which
* are set to zero.
* Not referenced if JOBVR = 'N'.
*
* LDVR (input) INTEGER
* The leading dimension of the matrix VR. LDVR >= 1, and
* if JOBVR = 'V', LDVR >= N.
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK >= max(1,8*N).
* For good performance, LWORK must generally be larger.
* To compute the optimal value of LWORK, call ILAENV to get
* blocksizes (for DGEQRF, DORMQR, and DORGQR.) Then compute:
* NB -- MAX of the blocksizes for DGEQRF, DORMQR, and DORGQR;
* The optimal LWORK is:
* 2*N + MAX( 6*N, N*(NB+1) ).
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value.
* = 1,...,N:
* The QZ iteration failed. No eigenvectors have been
* calculated, but ALPHAR(j), ALPHAI(j), and BETA(j)
* should be correct for j=INFO+1,...,N.
* > N: errors that usually indicate LAPACK problems:
* =N+1: error return from DGGBAL
* =N+2: error return from DGEQRF
* =N+3: error return from DORMQR
* =N+4: error return from DORGQR
* =N+5: error return from DGGHRD
* =N+6: error return from DHGEQZ (other than failed
* iteration)
* =N+7: error return from DTGEVC
* =N+8: error return from DGGBAK (computing VL)
* =N+9: error return from DGGBAK (computing VR)
* =N+10: error return from DLASCL (various calls)
*
* Further Details
* ===============
*
* Balancing
* ---------
*
* This driver calls DGGBAL to both permute and scale rows and columns
* of A and B. The permutations PL and PR are chosen so that PL*A*PR
* and PL*B*PR will be upper triangular except for the diagonal blocks
* A(i:j,i:j) and B(i:j,i:j), with i and j as close together as
* possible. The diagonal scaling matrices DL and DR are chosen so
* that the pair DL*PL*A*PR*DR, DL*PL*B*PR*DR have elements close to
* one (except for the elements that start out zero.)
*
* After the eigenvalues and eigenvectors of the balanced matrices
* have been computed, DGGBAK transforms the eigenvectors back to what
* they would have been (in perfect arithmetic) if they had not been
* balanced.
*
* Contents of A and B on Exit
* -------- -- - --- - -- ----
*
* If any eigenvectors are computed (either JOBVL='V' or JOBVR='V' or
* both), then on exit the arrays A and B will contain the real Schur
* form[*] of the "balanced" versions of A and B. If no eigenvectors
* are computed, then only the diagonal blocks will be correct.
*
* [*] See DHGEQZ, DGEGS, or read the book "Matrix Computations",
* by Golub & van Loan, pub. by Johns Hopkins U. Press.
*
* =====================================================================
*
dgehd2
USAGE:
tau, info, a = NumRu::Lapack.dgehd2( ilo, ihi, a, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEHD2( N, ILO, IHI, A, LDA, TAU, WORK, INFO )
* Purpose
* =======
*
* DGEHD2 reduces a real general matrix A to upper Hessenberg form H by
* an orthogonal similarity transformation: Q' * A * Q = H .
*
* Arguments
* =========
*
* N (input) INTEGER
* The order of the matrix A. N >= 0.
*
* ILO (input) INTEGER
* IHI (input) INTEGER
* It is assumed that A is already upper triangular in rows
* and columns 1:ILO-1 and IHI+1:N. ILO and IHI are normally
* set by a previous call to DGEBAL; otherwise they should be
* set to 1 and N respectively. See Further Details.
* 1 <= ILO <= IHI <= max(1,N).
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the n by n general matrix to be reduced.
* On exit, the upper triangle and the first subdiagonal of A
* are overwritten with the upper Hessenberg matrix H, and the
* elements below the first subdiagonal, with the array TAU,
* represent the orthogonal matrix Q as a product of elementary
* reflectors. See Further Details.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* TAU (output) DOUBLE PRECISION array, dimension (N-1)
* The scalar factors of the elementary reflectors (see Further
* Details).
*
* WORK (workspace) DOUBLE PRECISION array, dimension (N)
*
* INFO (output) INTEGER
* = 0: successful exit.
* < 0: if INFO = -i, the i-th argument had an illegal value.
*
* Further Details
* ===============
*
* The matrix Q is represented as a product of (ihi-ilo) elementary
* reflectors
*
* Q = H(ilo) H(ilo+1) . . . H(ihi-1).
*
* Each H(i) has the form
*
* H(i) = I - tau * v * v'
*
* where tau is a real scalar, and v is a real vector with
* v(1:i) = 0, v(i+1) = 1 and v(ihi+1:n) = 0; v(i+2:ihi) is stored on
* exit in A(i+2:ihi,i), and tau in TAU(i).
*
* The contents of A are illustrated by the following example, with
* n = 7, ilo = 2 and ihi = 6:
*
* on entry,                        on exit,
*
* ( a   a   a   a   a   a   a )    (  a   a   h   h   h   h   a )
* (     a   a   a   a   a   a )    (      a   h   h   h   h   a )
* (     a   a   a   a   a   a )    (      h   h   h   h   h   h )
* (     a   a   a   a   a   a )    (      v2  h   h   h   h   h )
* (     a   a   a   a   a   a )    (      v2  v3  h   h   h   h )
* (     a   a   a   a   a   a )    (      v2  v3  v4  h   h   h )
* (                         a )    (                          a )
*
* where a denotes an element of the original matrix A, h denotes a
* modified element of the upper Hessenberg matrix H, and vi denotes an
* element of the vector defining H(i).
*
* =====================================================================
*
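EXAMPLE (illustrative sketch, not part of the LAPACK manual):
The call below follows the ruby-lapack convention shown in USAGE above: the
matrix is passed as an NArray and the routine returns tau, info and the
overwritten copy of a. The matrix values and the choice ilo=1, ihi=n (no
prior balancing with dgebal) are illustrative assumptions.

  require "numru/lapack"

  n = 4
  # each nested array is one column of A (the first NArray index is the row)
  a = NArray[[4.0, 2.0, 1.0, 0.5],
             [1.0, 3.0, 2.0, 1.0],
             [2.0, 1.0, 5.0, 2.0],
             [0.5, 1.0, 2.0, 6.0]]
  tau, info, h = NumRu::Lapack.dgehd2(1, n, a)
  raise "dgehd2 failed: info=#{info}" unless info == 0
  # h now holds the Hessenberg matrix H in its upper triangle and first
  # subdiagonal; the entries below the first subdiagonal, together with tau,
  # encode the orthogonal matrix Q as elementary reflectors.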
go to the page top
dgehrd
USAGE:
tau, work, info, a = NumRu::Lapack.dgehrd( ilo, ihi, a, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEHRD( N, ILO, IHI, A, LDA, TAU, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGEHRD reduces a real general matrix A to upper Hessenberg form H by
* an orthogonal similarity transformation: Q' * A * Q = H .
*
* Arguments
* =========
*
* N (input) INTEGER
* The order of the matrix A. N >= 0.
*
* ILO (input) INTEGER
* IHI (input) INTEGER
* It is assumed that A is already upper triangular in rows
* and columns 1:ILO-1 and IHI+1:N. ILO and IHI are normally
* set by a previous call to DGEBAL; otherwise they should be
* set to 1 and N respectively. See Further Details.
* 1 <= ILO <= IHI <= N, if N > 0; ILO=1 and IHI=0, if N=0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the N-by-N general matrix to be reduced.
* On exit, the upper triangle and the first subdiagonal of A
* are overwritten with the upper Hessenberg matrix H, and the
* elements below the first subdiagonal, with the array TAU,
* represent the orthogonal matrix Q as a product of elementary
* reflectors. See Further Details.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* TAU (output) DOUBLE PRECISION array, dimension (N-1)
* The scalar factors of the elementary reflectors (see Further
* Details). Elements 1:ILO-1 and IHI:N-1 of TAU are set to
* zero.
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK)
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The length of the array WORK. LWORK >= max(1,N).
* For optimum performance LWORK >= N*NB, where NB is the
* optimal blocksize.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value.
*
* Further Details
* ===============
*
* The matrix Q is represented as a product of (ihi-ilo) elementary
* reflectors
*
* Q = H(ilo) H(ilo+1) . . . H(ihi-1).
*
* Each H(i) has the form
*
* H(i) = I - tau * v * v'
*
* where tau is a real scalar, and v is a real vector with
* v(1:i) = 0, v(i+1) = 1 and v(ihi+1:n) = 0; v(i+2:ihi) is stored on
* exit in A(i+2:ihi,i), and tau in TAU(i).
*
* The contents of A are illustrated by the following example, with
* n = 7, ilo = 2 and ihi = 6:
*
* on entry,                        on exit,
*
* ( a   a   a   a   a   a   a )    (  a   a   h   h   h   h   a )
* (     a   a   a   a   a   a )    (      a   h   h   h   h   a )
* (     a   a   a   a   a   a )    (      h   h   h   h   h   h )
* (     a   a   a   a   a   a )    (      v2  h   h   h   h   h )
* (     a   a   a   a   a   a )    (      v2  v3  h   h   h   h )
* (     a   a   a   a   a   a )    (      v2  v3  v4  h   h   h )
* (                         a )    (                          a )
*
* where a denotes an element of the original matrix A, h denotes a
* modified element of the upper Hessenberg matrix H, and vi denotes an
* element of the vector defining H(i).
*
* This file is a slight modification of LAPACK-3.0's DGEHRD
* subroutine incorporating improvements proposed by Quintana-Orti and
* Van de Geijn (2006). (See DLAHR2.)
*
* =====================================================================
*
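EXAMPLE (illustrative sketch, not part of the LAPACK manual):
Same pattern as for dgehd2 above, using the blocked routine. The matrix
values, the choice ilo=1, ihi=n, and the follow-up call to a dorghr wrapper
(to build Q explicitly from the returned reflectors) are illustrative
assumptions; dorghr is assumed to follow the same argument convention as the
other wrappers on this page.

  require "numru/lapack"

  n = 4
  a = NArray[[4.0, 2.0, 1.0, 0.5],
             [1.0, 3.0, 2.0, 1.0],
             [2.0, 1.0, 5.0, 2.0],
             [0.5, 1.0, 2.0, 6.0]]   # nested arrays are the columns of A
  tau, work, info, h = NumRu::Lapack.dgehrd(1, n, a)
  raise "dgehrd failed: info=#{info}" unless info == 0
  # work[0] reports the optimal LWORK; h and tau encode H and Q as in dgehd2.
  # To form Q explicitly (hypothetical call, assuming the usual wrapper layout):
  #   work, info, q = NumRu::Lapack.dorghr(1, n, h, tau)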
go to the page top
dgejsv
USAGE:
sva, u, v, iwork, info, work = NumRu::Lapack.dgejsv( joba, jobu, jobv, jobr, jobt, jobp, m, a, work, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEJSV( JOBA, JOBU, JOBV, JOBR, JOBT, JOBP, M, N, A, LDA, SVA, U, LDU, V, LDV, WORK, LWORK, IWORK, INFO )
* Purpose
* =======
*
* DGEJSV computes the singular value decomposition (SVD) of a real M-by-N
* matrix [A], where M >= N. The SVD of [A] is written as
*
* [A] = [U] * [SIGMA] * [V]^t,
*
* where [SIGMA] is an N-by-N (M-by-N) matrix which is zero except for its N
* diagonal elements, [U] is an M-by-N (or M-by-M) orthonormal matrix, and
* [V] is an N-by-N orthogonal matrix. The diagonal elements of [SIGMA] are
* the singular values of [A]. The columns of [U] and [V] are the left and
* the right singular vectors of [A], respectively. The matrices [U] and [V]
* are computed and stored in the arrays U and V, respectively. The diagonal
* of [SIGMA] is computed and stored in the array SVA.
*
* Arguments
* =========
*
* JOBA (input) CHARACTER*1
* Specifies the level of accuracy:
* = 'C': This option works well (high relative accuracy) if A = B * D,
* with well-conditioned B and arbitrary diagonal matrix D.
* The accuracy cannot be spoiled by COLUMN scaling. The
* accuracy of the computed output depends on the condition of
* B, and the procedure aims at the best theoretical accuracy.
* The relative error max_{i=1:N}|d sigma_i| / sigma_i is
* bounded by f(M,N)*epsilon* cond(B), independent of D.
* The input matrix is preprocessed with the QRF with column
* pivoting. This initial preprocessing and preconditioning by
* a rank revealing QR factorization is common for all values of
* JOBA. Additional actions are specified as follows:
* = 'E': Computation as with 'C' with an additional estimate of the
* condition number of B. It provides a realistic error bound.
* = 'F': If A = D1 * C * D2 with ill-conditioned diagonal scalings
* D1, D2, and well-conditioned matrix C, this option gives
* higher accuracy than the 'C' option. If the structure of the
* input matrix is not known, and relative accuracy is
* desirable, then this option is advisable. The input matrix A
* is preprocessed with QR factorization with FULL (row and
* column) pivoting.
* = 'G': Computation as with 'F' with an additional estimate of the
* condition number of B, where A=D*B. If A has heavily weighted
* rows, then using this condition number gives an overly pessimistic
* error bound.
* = 'A': Small singular values are the noise and the matrix is treated
* as numerically rank deficient. The error in the computed
* singular values is bounded by f(m,n)*epsilon*||A||.
* The computed SVD A = U * S * V^t restores A up to
* f(m,n)*epsilon*||A||.
* This gives the procedure the licence to discard (set to zero)
* all singular values below N*epsilon*||A||.
* = 'R': Similar to 'A'. The rank revealing property of the initial
* QR factorization is used to reveal (using the triangular factor)
* a gap sigma_{r+1} < epsilon * sigma_r in which case the
* numerical RANK is declared to be r. The SVD is computed with
* absolute error bounds, but more accurately than with 'A'.
*
* JOBU (input) CHARACTER*1
* Specifies whether to compute the columns of U:
* = 'U': N columns of U are returned in the array U.
* = 'F': full set of M left sing. vectors is returned in the array U.
* = 'W': U may be used as workspace of length M*N. See the description
* of U.
* = 'N': U is not computed.
*
* JOBV (input) CHARACTER*1
* Specifies whether to compute the matrix V:
* = 'V': N columns of V are returned in the array V; Jacobi rotations
* are not explicitly accumulated.
* = 'J': N columns of V are returned in the array V, but they are
* computed as the product of Jacobi rotations. This option is
* allowed only if JOBU .NE. 'N', i.e. in computing the full SVD.
* = 'W': V may be used as workspace of length N*N. See the description
* of V.
* = 'N': V is not computed.
*
* JOBR (input) CHARACTER*1
* Specifies the RANGE for the singular values. Issues the licence to
* set to zero small positive singular values if they are outside the
* specified range. If A .NE. 0 is scaled so that the largest singular
* value of c*A is around DSQRT(BIG), BIG=DLAMCH('O'), then JOBR issues
* the licence to kill columns of A whose norm in c*A is less than
* DSQRT(SFMIN) (for JOBR.EQ.'R'), or less than SMALL=SFMIN/EPSLN,
* where SFMIN=DLAMCH('S'), EPSLN=DLAMCH('E').
* = 'N': Do not kill small columns of c*A. This option assumes that
* BLAS and QR factorizations and triangular solvers are
* implemented to work in that range. If the condition of A
* is greater than BIG, use DGESVJ.
* = 'R': RESTRICTED range for sigma(c*A) is [DSQRT(SFMIN), DSQRT(BIG)]
* (roughly, as described above). This option is recommended.
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~
* For computing the singular values in the FULL range [SFMIN,BIG]
* use DGESVJ.
*
* JOBT (input) CHARACTER*1
* If the matrix is square then the procedure may determine to use
* transposed A if A^t seems to be better with respect to convergence.
* If the matrix is not square, JOBT is ignored. This is subject to
* changes in the future.
* The decision is based on two values of entropy over the adjoint
* orbit of A^t * A. See the descriptions of WORK(6) and WORK(7).
* = 'T': transpose if entropy test indicates possibly faster
* convergence of Jacobi process if A^t is taken as input. If A is
* replaced with A^t, then the row pivoting is included automatically.
* = 'N': do not speculate.
* This option can be used to compute only the singular values, or the
* full SVD (U, SIGMA and V). For only one set of singular vectors
* (U or V), the caller should provide both U and V, as one of the
* matrices is used as workspace if the matrix A is transposed.
* The implementer can easily remove this constraint and make the
* code more complicated. See the descriptions of U and V.
*
* JOBP (input) CHARACTER*1
* Issues the licence to introduce structured perturbations to drown
* denormalized numbers. This licence should be active if the
* denormals are poorly implemented, causing slow computation,
* especially in cases of fast convergence (!). For details see [1,2].
* For the sake of simplicity, these perturbations are included only
* when the full SVD or only the singular values are requested. The
* implementer/user can easily add the perturbation for the cases of
* computing one set of singular vectors.
* = 'P': introduce perturbation
* = 'N': do not perturb
*
* M (input) INTEGER
* The number of rows of the input matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the input matrix A. M >= N >= 0.
*
* A (input/workspace) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* SVA (workspace/output) DOUBLE PRECISION array, dimension (N)
* On exit,
* - For WORK(1)/WORK(2) = ONE: The singular values of A. During the
* computation SVA contains Euclidean column norms of the
* iterated matrices in the array A.
* - For WORK(1) .NE. WORK(2): The singular values of A are
* (WORK(1)/WORK(2)) * SVA(1:N). This factored form is used if
* sigma_max(A) overflows or if small singular values have been
* saved from underflow by scaling the input matrix A.
* - If JOBR='R' then some of the singular values may be returned
* as exact zeros obtained by "set to zero" because they are
* below the numerical rank threshold or are denormalized numbers.
*
* U (workspace/output) DOUBLE PRECISION array, dimension ( LDU, N )
* If JOBU = 'U', then U contains on exit the M-by-N matrix of
* the left singular vectors.
* If JOBU = 'F', then U contains on exit the M-by-M matrix of
* the left singular vectors, including an ONB
* of the orthogonal complement of the Range(A).
* If JOBU = 'W' .AND. (JOBV.EQ.'V' .AND. JOBT.EQ.'T' .AND. M.EQ.N),
* then U is used as workspace if the procedure
* replaces A with A^t. In that case, [V] is computed
* in U as left singular vectors of A^t and then
* copied back to the V array. This 'W' option is just
* a reminder to the caller that in this case U is
* reserved as workspace of length N*N.
* If JOBU = 'N' U is not referenced.
*
* LDU (input) INTEGER
* The leading dimension of the array U, LDU >= 1.
* IF JOBU = 'U' or 'F' or 'W', then LDU >= M.
*
* V (workspace/output) DOUBLE PRECISION array, dimension ( LDV, N )
* If JOBV = 'V', 'J' then V contains on exit the N-by-N matrix of
* the right singular vectors;
* If JOBV = 'W', AND (JOBU.EQ.'U' AND JOBT.EQ.'T' AND M.EQ.N),
* then V is used as workspace if the procedure
* replaces A with A^t. In that case, [U] is computed
* in V as right singular vectors of A^t and then
* copied back to the U array. This 'W' option is just
* a reminder to the caller that in this case V is
* reserved as workspace of length N*N.
* If JOBV = 'N' V is not referenced.
*
* LDV (input) INTEGER
* The leading dimension of the array V, LDV >= 1.
* If JOBV = 'V' or 'J' or 'W', then LDV >= N.
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension at least LWORK.
* On exit,
* WORK(1) = SCALE = WORK(1) / WORK(2) is the scaling factor such
* that SCALE*SVA(1:N) are the computed singular values
* of A. (See the description of SVA().)
* WORK(2) = See the description of WORK(1).
* WORK(3) = SCONDA is an estimate for the condition number of
* column equilibrated A. (If JOBA .EQ. 'E' or 'G')
* SCONDA is an estimate of DSQRT(||(R^t * R)^(-1)||_1).
* It is computed using DPOCON. It holds
* N^(-1/4) * SCONDA <= ||R^(-1)||_2 <= N^(1/4) * SCONDA
* where R is the triangular factor from the QRF of A.
* However, if R is truncated and the numerical rank is
* determined to be strictly smaller than N, SCONDA is
* returned as -1, thus indicating that the smallest
* singular values might be lost.
*
* If full SVD is needed, the following two condition numbers are
* useful for the analysis of the algorithm. They are provided for
* a developer/implementer who is familiar with the details of
* the method.
*
* WORK(4) = an estimate of the scaled condition number of the
* triangular factor in the first QR factorization.
* WORK(5) = an estimate of the scaled condition number of the
* triangular factor in the second QR factorization.
* The following two parameters are computed if JOBT .EQ. 'T'.
* They are provided for a developer/implementer who is familiar
* with the details of the method.
*
* WORK(6) = the entropy of A^t*A :: this is the Shannon entropy
* of diag(A^t*A) / Trace(A^t*A) taken as point in the
* probability simplex.
* WORK(7) = the entropy of A*A^t.
*
* LWORK (input) INTEGER
* Length of WORK to confirm proper allocation of work space.
* LWORK depends on the job:
*
* If only SIGMA is needed ( JOBU.EQ.'N', JOBV.EQ.'N' ) and
* -> .. no scaled condition estimate required ( JOBA.NE.'E' .AND. JOBA.NE.'G' ):
* LWORK >= max(2*M+N,4*N+1,7). This is the minimal requirement.
* For optimal performance (blocked code) the optimal value
* is LWORK >= max(2*M+N,3*N+(N+1)*NB,7). Here NB is the optimal
* block size for xGEQP3/xGEQRF.
* -> .. an estimate of the scaled condition number of A is
* required (JOBA='E', 'G'). In this case, LWORK is the maximum
* of the above and N*N+4*N, i.e. LWORK >= max(2*M+N,N*N+4*N,7).
*
* If SIGMA and the right singular vectors are needed (JOBV.EQ.'V'),
* -> the minimal requirement is LWORK >= max(2*N+M,7).
* -> For optimal performance, LWORK >= max(2*N+M,2*N+N*NB,7),
* where NB is the optimal block size.
*
* If SIGMA and the left singular vectors are needed
* -> the minimal requirement is LWORK >= max(2*N+M,7).
* -> For optimal performance, LWORK >= max(2*N+M,2*N+N*NB,7),
* where NB is the optimal block size.
*
* If full SVD is needed ( JOBU.EQ.'U' or 'F', JOBV.EQ.'V' ) and
* -> .. the singular vectors are computed without explicit
* accumulation of the Jacobi rotations, LWORK >= 6*N+2*N*N
* -> .. in the iterative part, the Jacobi rotations are
* explicitly accumulated (option, see the description of JOBV),
* then the minimal requirement is LWORK >= max(M+3*N+N*N,7).
* For better performance, if NB is the optimal block size,
* LWORK >= max(3*N+N*N+M,3*N+N*N+N*NB,7).
*
* IWORK (workspace/output) INTEGER array, dimension M+3*N.
* On exit,
* IWORK(1) = the numerical rank determined after the initial
* QR factorization with pivoting. See the descriptions
* of JOBA and JOBR.
* IWORK(2) = the number of the computed nonzero singular values
* IWORK(3) = if nonzero, a warning message:
* If IWORK(3).EQ.1 then some of the column norms of A
* were denormalized floats. The requested high accuracy
* is not warranted by the data.
*
* INFO (output) INTEGER
* < 0 : if INFO = -i, then the i-th argument had an illegal value.
* = 0 : successful exit;
* > 0 : DGEJSV did not converge in the maximal allowed number
* of sweeps. The computed values may be inaccurate.
*
* Further Details
* ===============
*
* DGEJSV implements a preconditioned Jacobi SVD algorithm. It uses DGEQP3,
* DGEQRF, and DGELQF as preprocessors and preconditioners. Optionally, an
* additional row pivoting can be used as a preprocessor, which in some
* cases results in much higher accuracy. An example is matrix A with the
* structure A = D1 * C * D2, where D1, D2 are arbitrarily ill-conditioned
* diagonal matrices and C is a well-conditioned matrix. In that case, complete
* pivoting in the first QR factorization provides accuracy dependent on the
* condition number of C, and independent of D1, D2. Such higher accuracy is
* not completely understood theoretically, but it works well in practice.
* Further, if A can be written as A = B*D, with well-conditioned B and some
* diagonal D, then the high accuracy is guaranteed, both theoretically and
* in software, independent of D. For more details see [1], [2].
* The computational range for the singular values can be the full range
* ( UNDERFLOW,OVERFLOW ), provided that the machine arithmetic and the BLAS
* & LAPACK routines called by DGEJSV are implemented to work in that range.
* If that is not the case, then the restriction for safe computation with
* the singular values in the range of normalized IEEE numbers is that the
* spectral condition number kappa(A)=sigma_max(A)/sigma_min(A) does not
* overflow. This code (DGEJSV) is best used in this restricted range,
* meaning that singular values of magnitude below ||A||_2 / DLAMCH('O') are
* returned as zeros. See JOBR for details on this.
* Further, this implementation is somewhat slower than the one described
* in [1,2] due to replacement of some non-LAPACK components, and because
* the choice of some tuning parameters in the iterative part (DGESVJ) is
* left to the implementer on a particular machine.
* The rank revealing QR factorization (in this code: DGEQP3) should be
* implemented as in [3]. We have a new version of DGEQP3 under development
* that is more robust than the current one in LAPACK, with a cleaner cut in
* rank deficient cases. It will be available in the SIGMA library [4].
* If M is much larger than N, it is obvious that the initial QRF with
* column pivoting can be preprocessed by the QRF without pivoting. That
* well known trick is not used in DGEJSV because in some cases heavy row
* weighting can be treated with complete pivoting. The overhead in cases
* M much larger than N is then only due to pivoting, but the benefits in
* terms of accuracy have prevailed. The implementer/user can incorporate
* this extra QRF step easily. The implementer can also improve data movement
* (matrix transpose, matrix copy, matrix transposed copy) - this
* implementation of DGEJSV uses only the simplest, naive data movement.
*
* Contributors
*
* Zlatko Drmac (Zagreb, Croatia) and Kresimir Veselic (Hagen, Germany)
*
* References
*
* [1] Z. Drmac and K. Veselic: New fast and accurate Jacobi SVD algorithm I.
* SIAM J. Matrix Anal. Appl. Vol. 35, No. 2 (2008), pp. 1322-1342.
* LAPACK Working note 169.
* [2] Z. Drmac and K. Veselic: New fast and accurate Jacobi SVD algorithm II.
* SIAM J. Matrix Anal. Appl. Vol. 35, No. 2 (2008), pp. 1343-1362.
* LAPACK Working note 170.
* [3] Z. Drmac and Z. Bujanovic: On the failure of rank-revealing QR
* factorization software - a case study.
* ACM Trans. Math. Softw. Vol. 35, No 2 (2008), pp. 1-28.
* LAPACK Working note 176.
* [4] Z. Drmac: SIGMA - mathematical software library for accurate SVD, PSV,
* QSVD, (H,K)-SVD computations.
* Department of Mathematics, University of Zagreb, 2008.
*
* Bugs, examples and comments
*
* Please report all bugs and send interesting examples and/or comments to
* [email protected]. Thank you.
*
* ==========================================================================
*
* .. Local Parameters ..
DOUBLE PRECISION ZERO, ONE
PARAMETER ( ZERO = 0.0D0, ONE = 1.0D0 )
* ..
* .. Local Scalars ..
DOUBLE PRECISION AAPP, AAQQ, AATMAX, AATMIN, BIG, BIG1, COND_OK,
& CONDR1, CONDR2, ENTRA, ENTRAT, EPSLN, MAXPRJ, SCALEM,
& SCONDA, SFMIN, SMALL, TEMP1, USCAL1, USCAL2, XSC
INTEGER IERR, N1, NR, NUMRANK, p, q, WARNING
LOGICAL ALMORT, DEFR, ERREST, GOSCAL, JRACC, KILL, LSVEC,
& L2ABER, L2KILL, L2PERT, L2RANK, L2TRAN,
& NOSCAL, ROWPIV, RSVEC, TRANSP
* ..
* .. Intrinsic Functions ..
INTRINSIC DABS, DLOG, DMAX1, DMIN1, DBLE,
& MAX0, MIN0, IDNINT, DSIGN, DSQRT
* ..
* .. External Functions ..
DOUBLE PRECISION DLAMCH, DNRM2
INTEGER IDAMAX
LOGICAL LSAME
EXTERNAL IDAMAX, LSAME, DLAMCH, DNRM2
* ..
* .. External Subroutines ..
EXTERNAL DCOPY, DGELQF, DGEQP3, DGEQRF, DLACPY, DLASCL,
& DLASET, DLASSQ, DLASWP, DORGQR, DORMLQ,
& DORMQR, DPOCON, DSCAL, DSWAP, DTRSM, XERBLA
*
EXTERNAL DGESVJ
* ..
*
* Test the input arguments
*
LSVEC = LSAME( JOBU, 'U' ) .OR. LSAME( JOBU, 'F' )
JRACC = LSAME( JOBV, 'J' )
RSVEC = LSAME( JOBV, 'V' ) .OR. JRACC
ROWPIV = LSAME( JOBA, 'F' ) .OR. LSAME( JOBA, 'G' )
L2RANK = LSAME( JOBA, 'R' )
L2ABER = LSAME( JOBA, 'A' )
ERREST = LSAME( JOBA, 'E' ) .OR. LSAME( JOBA, 'G' )
L2TRAN = LSAME( JOBT, 'T' )
L2KILL = LSAME( JOBR, 'R' )
DEFR = LSAME( JOBR, 'N' )
L2PERT = LSAME( JOBP, 'P' )
*
IF ( .NOT.(ROWPIV .OR. L2RANK .OR. L2ABER .OR.
& ERREST .OR. LSAME( JOBA, 'C' ) )) THEN
INFO = - 1
ELSE IF ( .NOT.( LSVEC .OR. LSAME( JOBU, 'N' ) .OR.
& LSAME( JOBU, 'W' )) ) THEN
INFO = - 2
ELSE IF ( .NOT.( RSVEC .OR. LSAME( JOBV, 'N' ) .OR.
& LSAME( JOBV, 'W' )) .OR. ( JRACC .AND. (.NOT.LSVEC) ) ) THEN
INFO = - 3
ELSE IF ( .NOT. ( L2KILL .OR. DEFR ) ) THEN
INFO = - 4
ELSE IF ( .NOT. ( L2TRAN .OR. LSAME( JOBT, 'N' ) ) ) THEN
INFO = - 5
ELSE IF ( .NOT. ( L2PERT .OR. LSAME( JOBP, 'N' ) ) ) THEN
INFO = - 6
ELSE IF ( M .LT. 0 ) THEN
INFO = - 7
ELSE IF ( ( N .LT. 0 ) .OR. ( N .GT. M ) ) THEN
INFO = - 8
ELSE IF ( LDA .LT. M ) THEN
INFO = - 10
ELSE IF ( LSVEC .AND. ( LDU .LT. M ) ) THEN
INFO = - 13
ELSE IF ( RSVEC .AND. ( LDV .LT. N ) ) THEN
INFO = - 14
ELSE IF ( (.NOT.(LSVEC .OR. RSVEC .OR. ERREST).AND.
& (LWORK .LT. MAX0(7,4*N+1,2*M+N))) .OR.
& (.NOT.(LSVEC .OR. LSVEC) .AND. ERREST .AND.
& (LWORK .LT. MAX0(7,4*N+N*N,2*M+N))) .OR.
& (LSVEC .AND. (.NOT.RSVEC) .AND. (LWORK .LT. MAX0(7,2*N+M))) .OR.
& (RSVEC .AND. (.NOT.LSVEC) .AND. (LWORK .LT. MAX0(7,2*N+M))) .OR.
& (LSVEC .AND. RSVEC .AND. .NOT.JRACC .AND. (LWORK.LT.6*N+2*N*N))
& .OR. (LSVEC.AND.RSVEC.AND.JRACC.AND.LWORK.LT.MAX0(7,M+3*N+N*N)))
& THEN
INFO = - 17
ELSE
* #:)
INFO = 0
END IF
*
IF ( INFO .NE. 0 ) THEN
* #:(
CALL XERBLA( 'DGEJSV', - INFO )
END IF
*
* Quick return for void matrix (Y3K safe)
* #:)
IF ( ( M .EQ. 0 ) .OR. ( N .EQ. 0 ) ) RETURN
*
* Determine whether the matrix U should be M x N or M x M
*
IF ( LSVEC ) THEN
N1 = N
IF ( LSAME( JOBU, 'F' ) ) N1 = M
END IF
*
* Set numerical parameters
*
*! NOTE: Make sure DLAMCH() does not fail on the target architecture.
*
EPSLN = DLAMCH('Epsilon')
SFMIN = DLAMCH('SafeMinimum')
SMALL = SFMIN / EPSLN
BIG = DLAMCH('O')
* BIG = ONE / SFMIN
*
* Initialize SVA(1:N) = diag( ||A e_i||_2 )_1^N
*
*(!) If necessary, scale SVA() to protect the largest norm from
* overflow. It is possible that this scaling pushes the smallest
* column norm left from the underflow threshold (extreme case).
*
SCALEM = ONE / DSQRT(DBLE(M)*DBLE(N))
NOSCAL = .TRUE.
GOSCAL = .TRUE.
DO 1874 p = 1, N
AAPP = ZERO
AAQQ = ONE
CALL DLASSQ( M, A(1,p), 1, AAPP, AAQQ )
IF ( AAPP .GT. BIG ) THEN
INFO = - 9
CALL XERBLA( 'DGEJSV', -INFO )
RETURN
END IF
AAQQ = DSQRT(AAQQ)
IF ( ( AAPP .LT. (BIG / AAQQ) ) .AND. NOSCAL ) THEN
SVA(p) = AAPP * AAQQ
ELSE
NOSCAL = .FALSE.
SVA(p) = AAPP * ( AAQQ * SCALEM )
IF ( GOSCAL ) THEN
GOSCAL = .FALSE.
CALL DSCAL( p-1, SCALEM, SVA, 1 )
END IF
END IF
1874 CONTINUE
*
IF ( NOSCAL ) SCALEM = ONE
*
AAPP = ZERO
AAQQ = BIG
DO 4781 p = 1, N
AAPP = DMAX1( AAPP, SVA(p) )
IF ( SVA(p) .NE. ZERO ) AAQQ = DMIN1( AAQQ, SVA(p) )
4781 CONTINUE
*
* Quick return for zero M x N matrix
* #:)
IF ( AAPP .EQ. ZERO ) THEN
IF ( LSVEC ) CALL DLASET( 'G', M, N1, ZERO, ONE, U, LDU )
IF ( RSVEC ) CALL DLASET( 'G', N, N, ZERO, ONE, V, LDV )
WORK(1) = ONE
WORK(2) = ONE
IF ( ERREST ) WORK(3) = ONE
IF ( LSVEC .AND. RSVEC ) THEN
WORK(4) = ONE
WORK(5) = ONE
END IF
IF ( L2TRAN ) THEN
WORK(6) = ZERO
WORK(7) = ZERO
END IF
IWORK(1) = 0
IWORK(2) = 0
RETURN
END IF
*
* Issue warning if denormalized column norms detected. Override the
* high relative accuracy request. Issue licence to kill columns
* (set them to zero) whose norm is less than sigma_max / BIG (roughly).
* #:(
WARNING = 0
IF ( AAQQ .LE. SFMIN ) THEN
L2RANK = .TRUE.
L2KILL = .TRUE.
WARNING = 1
END IF
*
* Quick return for one-column matrix
* #:)
IF ( N .EQ. 1 ) THEN
*
IF ( LSVEC ) THEN
CALL DLASCL( 'G',0,0,SVA(1),SCALEM, M,1,A(1,1),LDA,IERR )
CALL DLACPY( 'A', M, 1, A, LDA, U, LDU )
* computing all M left singular vectors of the M x 1 matrix
IF ( N1 .NE. N ) THEN
CALL DGEQRF( M, N, U,LDU, WORK, WORK(N+1),LWORK-N,IERR )
CALL DORGQR( M,N1,1, U,LDU,WORK,WORK(N+1),LWORK-N,IERR )
CALL DCOPY( M, A(1,1), 1, U(1,1), 1 )
END IF
END IF
IF ( RSVEC ) THEN
V(1,1) = ONE
END IF
IF ( SVA(1) .LT. (BIG*SCALEM) ) THEN
SVA(1) = SVA(1) / SCALEM
SCALEM = ONE
END IF
WORK(1) = ONE / SCALEM
WORK(2) = ONE
IF ( SVA(1) .NE. ZERO ) THEN
IWORK(1) = 1
IF ( ( SVA(1) / SCALEM) .GE. SFMIN ) THEN
IWORK(2) = 1
ELSE
IWORK(2) = 0
END IF
ELSE
IWORK(1) = 0
IWORK(2) = 0
END IF
IF ( ERREST ) WORK(3) = ONE
IF ( LSVEC .AND. RSVEC ) THEN
WORK(4) = ONE
WORK(5) = ONE
END IF
IF ( L2TRAN ) THEN
WORK(6) = ZERO
WORK(7) = ZERO
END IF
RETURN
*
END IF
*
TRANSP = .FALSE.
L2TRAN = L2TRAN .AND. ( M .EQ. N )
*
AATMAX = -ONE
AATMIN = BIG
IF ( ROWPIV .OR. L2TRAN ) THEN
*
* Compute the row norms, needed to determine row pivoting sequence
* (in the case of heavily row weighted A, row pivoting is strongly
* advised) and to collect information needed to compare the
* structures of A * A^t and A^t * A (in the case L2TRAN.EQ..TRUE.).
*
IF ( L2TRAN ) THEN
DO 1950 p = 1, M
XSC = ZERO
TEMP1 = ONE
CALL DLASSQ( N, A(p,1), LDA, XSC, TEMP1 )
* DLASSQ gets both the ell_2 and the ell_infinity norm
* in one pass through the vector
WORK(M+N+p) = XSC * SCALEM
WORK(N+p) = XSC * (SCALEM*DSQRT(TEMP1))
AATMAX = DMAX1( AATMAX, WORK(N+p) )
IF (WORK(N+p) .NE. ZERO) AATMIN = DMIN1(AATMIN,WORK(N+p))
1950 CONTINUE
ELSE
DO 1904 p = 1, M
WORK(M+N+p) = SCALEM*DABS( A(p,IDAMAX(N,A(p,1),LDA)) )
AATMAX = DMAX1( AATMAX, WORK(M+N+p) )
AATMIN = DMIN1( AATMIN, WORK(M+N+p) )
1904 CONTINUE
END IF
*
END IF
*
* For square matrix A try to determine whether A^t would be better
* input for the preconditioned Jacobi SVD, with faster convergence.
* The decision is based on an O(N) function of the vector of column
* and row norms of A, based on the Shannon entropy. This should give
* the right choice in most cases when the difference actually matters.
* It may fail and pick the slower converging side.
*
ENTRA = ZERO
ENTRAT = ZERO
IF ( L2TRAN ) THEN
*
XSC = ZERO
TEMP1 = ONE
CALL DLASSQ( N, SVA, 1, XSC, TEMP1 )
TEMP1 = ONE / TEMP1
*
ENTRA = ZERO
DO 1113 p = 1, N
BIG1 = ( ( SVA(p) / XSC )**2 ) * TEMP1
IF ( BIG1 .NE. ZERO ) ENTRA = ENTRA + BIG1 * DLOG(BIG1)
1113 CONTINUE
ENTRA = - ENTRA / DLOG(DBLE(N))
*
* Now, SVA().^2/Trace(A^t * A) is a point in the probability simplex.
* It is derived from the diagonal of A^t * A. Do the same with the
* diagonal of A * A^t, compute the entropy of the corresponding
* probability distribution. Note that A * A^t and A^t * A have the
* same trace.
*
ENTRAT = ZERO
DO 1114 p = N+1, N+M
BIG1 = ( ( WORK(p) / XSC )**2 ) * TEMP1
IF ( BIG1 .NE. ZERO ) ENTRAT = ENTRAT + BIG1 * DLOG(BIG1)
1114 CONTINUE
ENTRAT = - ENTRAT / DLOG(DBLE(M))
*
* Analyze the entropies and decide A or A^t. Smaller entropy
* usually means better input for the algorithm.
*
TRANSP = ( ENTRAT .LT. ENTRA )
*
* If A^t is better than A, transpose A.
*
IF ( TRANSP ) THEN
* In an optimal implementation, this trivial transpose
* should be replaced with faster transpose.
DO 1115 p = 1, N - 1
DO 1116 q = p + 1, N
TEMP1 = A(q,p)
A(q,p) = A(p,q)
A(p,q) = TEMP1
1116 CONTINUE
1115 CONTINUE
DO 1117 p = 1, N
WORK(M+N+p) = SVA(p)
SVA(p) = WORK(N+p)
1117 CONTINUE
TEMP1 = AAPP
AAPP = AATMAX
AATMAX = TEMP1
TEMP1 = AAQQ
AAQQ = AATMIN
AATMIN = TEMP1
KILL = LSVEC
LSVEC = RSVEC
RSVEC = KILL
IF ( LSVEC ) N1 = N
*
ROWPIV = .TRUE.
END IF
*
END IF
* END IF L2TRAN
*
* Scale the matrix so that its maximal singular value remains less
* than DSQRT(BIG) -- the matrix is scaled so that its maximal column
* has Euclidean norm equal to DSQRT(BIG/N). The only reason to keep
* DSQRT(BIG) instead of BIG is the fact that DGEJSV uses LAPACK and
* BLAS routines that, in some implementations, are not capable of
* working in the full interval [SFMIN,BIG] and that they may provoke
* overflows in the intermediate results. If the singular values spread
* from SFMIN to BIG, then DGESVJ will compute them. So, in that case,
* one should use DGESVJ instead of DGEJSV.
*
BIG1 = DSQRT( BIG )
TEMP1 = DSQRT( BIG / DBLE(N) )
*
CALL DLASCL( 'G', 0, 0, AAPP, TEMP1, N, 1, SVA, N, IERR )
IF ( AAQQ .GT. (AAPP * SFMIN) ) THEN
AAQQ = ( AAQQ / AAPP ) * TEMP1
ELSE
AAQQ = ( AAQQ * TEMP1 ) / AAPP
END IF
TEMP1 = TEMP1 * SCALEM
CALL DLASCL( 'G', 0, 0, AAPP, TEMP1, M, N, A, LDA, IERR )
*
* To undo scaling at the end of this procedure, multiply the
* computed singular values with USCAL2 / USCAL1.
*
USCAL1 = TEMP1
USCAL2 = AAPP
*
IF ( L2KILL ) THEN
* L2KILL enforces computation of nonzero singular values in
* the restricted range of condition number of the initial A,
* sigma_max(A) / sigma_min(A) approx. DSQRT(BIG)/DSQRT(SFMIN).
XSC = DSQRT( SFMIN )
ELSE
XSC = SMALL
*
* Now, if the condition number of A is too big,
* sigma_max(A) / sigma_min(A) .GT. DSQRT(BIG/N) * EPSLN / SFMIN,
* as a precautionary measure, the full SVD is computed using DGESVJ
* with accumulated Jacobi rotations. This provides numerically
* more robust computation, at the cost of slightly increased run
* time. Depending on the concrete implementation of BLAS and LAPACK
* (i.e. how they behave in presence of extreme ill-conditioning) the
* implementor may decide to remove this switch.
IF ( ( AAQQ.LT.DSQRT(SFMIN) ) .AND. LSVEC .AND. RSVEC ) THEN
JRACC = .TRUE.
END IF
*
END IF
IF ( AAQQ .LT. XSC ) THEN
DO 700 p = 1, N
IF ( SVA(p) .LT. XSC ) THEN
CALL DLASET( 'A', M, 1, ZERO, ZERO, A(1,p), LDA )
SVA(p) = ZERO
END IF
700 CONTINUE
END IF
*
* Preconditioning using QR factorization with pivoting
*
IF ( ROWPIV ) THEN
* Optional row permutation (Bjoerck row pivoting):
* A result by Cox and Higham shows that the Bjoerck's
* row pivoting combined with standard column pivoting
* has similar effect as Powell-Reid complete pivoting.
* The ell-infinity norms of A are made nonincreasing.
DO 1952 p = 1, M - 1
q = IDAMAX( M-p+1, WORK(M+N+p), 1 ) + p - 1
IWORK(2*N+p) = q
IF ( p .NE. q ) THEN
TEMP1 = WORK(M+N+p)
WORK(M+N+p) = WORK(M+N+q)
WORK(M+N+q) = TEMP1
END IF
1952 CONTINUE
CALL DLASWP( N, A, LDA, 1, M-1, IWORK(2*N+1), 1 )
END IF
*
* End of the preparation phase (scaling, optional sorting and
* transposing, optional flushing of small columns).
*
* Preconditioning
*
* If the full SVD is needed, the right singular vectors are computed
* from a matrix equation, and for that we need theoretical analysis
* of the Businger-Golub pivoting. So we use DGEQP3 as the first RR QRF.
* In all other cases the first RR QRF can be chosen by other criteria
* (eg speed by replacing global with restricted window pivoting, such
* as in SGEQPX from TOMS # 782). Good results will be obtained using
* SGEQPX with properly (!) chosen numerical parameters.
* Any improvement of DGEQP3 improves overall performance of DGEJSV.
*
* A * P1 = Q1 * [ R1^t 0]^t:
DO 1963 p = 1, N
* .. all columns are free columns
IWORK(p) = 0
1963 CONTINUE
CALL DGEQP3( M,N,A,LDA, IWORK,WORK, WORK(N+1),LWORK-N, IERR )
*
* The upper triangular matrix R1 from the first QRF is inspected for
* rank deficiency and possibilities for deflation, or possible
* ill-conditioning. Depending on the user specified flag L2RANK,
* the procedure explores possibilities to reduce the numerical
* rank by inspecting the computed upper triangular factor. If
* L2RANK or L2ABER are up, then DGEJSV will compute the SVD of
* A + dA, where ||dA|| <= f(M,N)*EPSLN.
*
NR = 1
IF ( L2ABER ) THEN
* Standard absolute error bound suffices. All sigma_i with
* sigma_i < N*EPSLN*||A|| are flushed to zero. This is an
* aggressive enforcement of lower numerical rank by introducing a
* backward error of the order of N*EPSLN*||A||.
TEMP1 = DSQRT(DBLE(N))*EPSLN
DO 3001 p = 2, N
IF ( DABS(A(p,p)) .GE. (TEMP1*DABS(A(1,1))) ) THEN
NR = NR + 1
ELSE
GO TO 3002
END IF
3001 CONTINUE
3002 CONTINUE
ELSE IF ( L2RANK ) THEN
* .. similarly as above, only slightly more gentle (less aggressive).
* Sudden drop on the diagonal of R1 is used as the criterion for
* close-to-rank-deficient.
TEMP1 = DSQRT(SFMIN)
DO 3401 p = 2, N
IF ( ( DABS(A(p,p)) .LT. (EPSLN*DABS(A(p-1,p-1))) ) .OR.
& ( DABS(A(p,p)) .LT. SMALL ) .OR.
& ( L2KILL .AND. (DABS(A(p,p)) .LT. TEMP1) ) ) GO TO 3402
NR = NR + 1
3401 CONTINUE
3402 CONTINUE
*
ELSE
* The goal is high relative accuracy. However, if the matrix
* has high scaled condition number the relative accuracy is in
* general not feasible. Later on, a condition number estimator
* will be deployed to estimate the scaled condition number.
* Here we just remove the underflowed part of the triangular
* factor. This prevents the situation in which the code is
* working hard to get the accuracy not warranted by the data.
TEMP1 = DSQRT(SFMIN)
DO 3301 p = 2, N
IF ( ( DABS(A(p,p)) .LT. SMALL ) .OR.
& ( L2KILL .AND. (DABS(A(p,p)) .LT. TEMP1) ) ) GO TO 3302
NR = NR + 1
3301 CONTINUE
3302 CONTINUE
*
END IF
*
ALMORT = .FALSE.
IF ( NR .EQ. N ) THEN
MAXPRJ = ONE
DO 3051 p = 2, N
TEMP1 = DABS(A(p,p)) / SVA(IWORK(p))
MAXPRJ = DMIN1( MAXPRJ, TEMP1 )
3051 CONTINUE
IF ( MAXPRJ**2 .GE. ONE - DBLE(N)*EPSLN ) ALMORT = .TRUE.
END IF
*
*
SCONDA = - ONE
CONDR1 = - ONE
CONDR2 = - ONE
*
IF ( ERREST ) THEN
IF ( N .EQ. NR ) THEN
IF ( RSVEC ) THEN
* .. V is available as workspace
CALL DLACPY( 'U', N, N, A, LDA, V, LDV )
DO 3053 p = 1, N
TEMP1 = SVA(IWORK(p))
CALL DSCAL( p, ONE/TEMP1, V(1,p), 1 )
3053 CONTINUE
CALL DPOCON( 'U', N, V, LDV, ONE, TEMP1,
& WORK(N+1), IWORK(2*N+M+1), IERR )
ELSE IF ( LSVEC ) THEN
* .. U is available as workspace
CALL DLACPY( 'U', N, N, A, LDA, U, LDU )
DO 3054 p = 1, N
TEMP1 = SVA(IWORK(p))
CALL DSCAL( p, ONE/TEMP1, U(1,p), 1 )
3054 CONTINUE
CALL DPOCON( 'U', N, U, LDU, ONE, TEMP1,
& WORK(N+1), IWORK(2*N+M+1), IERR )
ELSE
CALL DLACPY( 'U', N, N, A, LDA, WORK(N+1), N )
DO 3052 p = 1, N
TEMP1 = SVA(IWORK(p))
CALL DSCAL( p, ONE/TEMP1, WORK(N+(p-1)*N+1), 1 )
3052 CONTINUE
* .. the columns of R are scaled to have unit Euclidean lengths.
CALL DPOCON( 'U', N, WORK(N+1), N, ONE, TEMP1,
& WORK(N+N*N+1), IWORK(2*N+M+1), IERR )
END IF
SCONDA = ONE / DSQRT(TEMP1)
* SCONDA is an estimate of DSQRT(||(R^t * R)^(-1)||_1).
* N^(-1/4) * SCONDA <= ||R^(-1)||_2 <= N^(1/4) * SCONDA
ELSE
SCONDA = - ONE
END IF
END IF
*
L2PERT = L2PERT .AND. ( DABS( A(1,1)/A(NR,NR) ) .GT. DSQRT(BIG1) )
* If there is no violent scaling, artificial perturbation is not needed.
*
* Phase 3:
*
IF ( .NOT. ( RSVEC .OR. LSVEC ) ) THEN
*
* Singular Values only
*
* .. transpose A(1:NR,1:N)
DO 1946 p = 1, MIN0( N-1, NR )
CALL DCOPY( N-p, A(p,p+1), LDA, A(p+1,p), 1 )
1946 CONTINUE
*
* The following two DO-loops introduce small relative perturbation
* into the strict upper triangle of the lower triangular matrix.
* Small entries below the main diagonal are also changed.
* This modification is useful if the computing environment does not
* provide/allow FLUSH TO ZERO underflow, for it prevents many
* annoying denormalized numbers in case of strongly scaled matrices.
* The perturbation is structured so that it does not introduce any
* new perturbation of the singular values, and it does not destroy
* the job done by the preconditioner.
* The licence for this perturbation is in the variable L2PERT, which
* should be .FALSE. if FLUSH TO ZERO underflow is active.
*
IF ( .NOT. ALMORT ) THEN
*
IF ( L2PERT ) THEN
* XSC = DSQRT(SMALL)
XSC = EPSLN / DBLE(N)
DO 4947 q = 1, NR
TEMP1 = XSC*DABS(A(q,q))
DO 4949 p = 1, N
IF ( ( (p.GT.q) .AND. (DABS(A(p,q)).LE.TEMP1) )
& .OR. ( p .LT. q ) )
& A(p,q) = DSIGN( TEMP1, A(p,q) )
4949 CONTINUE
4947 CONTINUE
ELSE
CALL DLASET( 'U', NR-1,NR-1, ZERO,ZERO, A(1,2),LDA )
END IF
*
* .. second preconditioning using the QR factorization
*
CALL DGEQRF( N,NR, A,LDA, WORK, WORK(N+1),LWORK-N, IERR )
*
* .. and transpose upper to lower triangular
DO 1948 p = 1, NR - 1
CALL DCOPY( NR-p, A(p,p+1), LDA, A(p+1,p), 1 )
1948 CONTINUE
*
END IF
*
* Row-cyclic Jacobi SVD algorithm with column pivoting
*
* .. again some perturbation (a "background noise") is added
* to drown denormals
IF ( L2PERT ) THEN
* XSC = DSQRT(SMALL)
XSC = EPSLN / DBLE(N)
DO 1947 q = 1, NR
TEMP1 = XSC*DABS(A(q,q))
DO 1949 p = 1, NR
IF ( ( (p.GT.q) .AND. (DABS(A(p,q)).LE.TEMP1) )
& .OR. ( p .LT. q ) )
& A(p,q) = DSIGN( TEMP1, A(p,q) )
1949 CONTINUE
1947 CONTINUE
ELSE
CALL DLASET( 'U', NR-1, NR-1, ZERO, ZERO, A(1,2), LDA )
END IF
*
* .. and one-sided Jacobi rotations are started on a lower
* triangular matrix (plus perturbation which is ignored in
* the part which destroys triangular form (confusing?!))
*
CALL DGESVJ( 'L', 'NoU', 'NoV', NR, NR, A, LDA, SVA,
& N, V, LDV, WORK, LWORK, INFO )
*
SCALEM = WORK(1)
NUMRANK = IDNINT(WORK(2))
*
*
ELSE IF ( RSVEC .AND. ( .NOT. LSVEC ) ) THEN
*
* -> Singular Values and Right Singular Vectors <-
*
IF ( ALMORT ) THEN
*
* .. in this case NR equals N
DO 1998 p = 1, NR
CALL DCOPY( N-p+1, A(p,p), LDA, V(p,p), 1 )
1998 CONTINUE
CALL DLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
*
CALL DGESVJ( 'L','U','N', N, NR, V,LDV, SVA, NR, A,LDA,
& WORK, LWORK, INFO )
SCALEM = WORK(1)
NUMRANK = IDNINT(WORK(2))
ELSE
*
* .. two more QR factorizations ( one QRF is not enough, two require
* accumulated product of Jacobi rotations, three are perfect )
*
CALL DLASET( 'Lower', NR-1, NR-1, ZERO, ZERO, A(2,1), LDA )
CALL DGELQF( NR, N, A, LDA, WORK, WORK(N+1), LWORK-N, IERR)
CALL DLACPY( 'Lower', NR, NR, A, LDA, V, LDV )
CALL DLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
CALL DGEQRF( NR, NR, V, LDV, WORK(N+1), WORK(2*N+1),
& LWORK-2*N, IERR )
DO 8998 p = 1, NR
CALL DCOPY( NR-p+1, V(p,p), LDV, V(p,p), 1 )
8998 CONTINUE
CALL DLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
*
CALL DGESVJ( 'Lower', 'U','N', NR, NR, V,LDV, SVA, NR, U,
& LDU, WORK(N+1), LWORK, INFO )
SCALEM = WORK(N+1)
NUMRANK = IDNINT(WORK(N+2))
IF ( NR .LT. N ) THEN
CALL DLASET( 'A',N-NR, NR, ZERO,ZERO, V(NR+1,1), LDV )
CALL DLASET( 'A',NR, N-NR, ZERO,ZERO, V(1,NR+1), LDV )
CALL DLASET( 'A',N-NR,N-NR,ZERO,ONE, V(NR+1,NR+1), LDV )
END IF
*
CALL DORMLQ( 'Left', 'Transpose', N, N, NR, A, LDA, WORK,
& V, LDV, WORK(N+1), LWORK-N, IERR )
*
END IF
*
DO 8991 p = 1, N
CALL DCOPY( N, V(p,1), LDV, A(IWORK(p),1), LDA )
8991 CONTINUE
CALL DLACPY( 'All', N, N, A, LDA, V, LDV )
*
IF ( TRANSP ) THEN
CALL DLACPY( 'All', N, N, V, LDV, U, LDU )
END IF
*
ELSE IF ( LSVEC .AND. ( .NOT. RSVEC ) ) THEN
*
* .. Singular Values and Left Singular Vectors ..
*
* .. second preconditioning step to avoid need to accumulate
* Jacobi rotations in the Jacobi iterations.
DO 1965 p = 1, NR
CALL DCOPY( N-p+1, A(p,p), LDA, U(p,p), 1 )
1965 CONTINUE
CALL DLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, U(1,2), LDU )
*
CALL DGEQRF( N, NR, U, LDU, WORK(N+1), WORK(2*N+1),
& LWORK-2*N, IERR )
*
DO 1967 p = 1, NR - 1
CALL DCOPY( NR-p, U(p,p+1), LDU, U(p+1,p), 1 )
1967 CONTINUE
CALL DLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, U(1,2), LDU )
*
CALL DGESVJ( 'Lower', 'U', 'N', NR,NR, U, LDU, SVA, NR, A,
& LDA, WORK(N+1), LWORK-N, INFO )
SCALEM = WORK(N+1)
NUMRANK = IDNINT(WORK(N+2))
*
IF ( NR .LT. M ) THEN
CALL DLASET( 'A', M-NR, NR,ZERO, ZERO, U(NR+1,1), LDU )
IF ( NR .LT. N1 ) THEN
CALL DLASET( 'A',NR, N1-NR, ZERO, ZERO, U(1,NR+1), LDU )
CALL DLASET( 'A',M-NR,N1-NR,ZERO,ONE,U(NR+1,NR+1), LDU )
END IF
END IF
*
CALL DORMQR( 'Left', 'No Tr', M, N1, N, A, LDA, WORK, U,
& LDU, WORK(N+1), LWORK-N, IERR )
*
IF ( ROWPIV )
& CALL DLASWP( N1, U, LDU, 1, M-1, IWORK(2*N+1), -1 )
*
DO 1974 p = 1, N1
XSC = ONE / DNRM2( M, U(1,p), 1 )
CALL DSCAL( M, XSC, U(1,p), 1 )
1974 CONTINUE
*
IF ( TRANSP ) THEN
CALL DLACPY( 'All', N, N, U, LDU, V, LDV )
END IF
*
ELSE
*
* .. Full SVD ..
*
IF ( .NOT. JRACC ) THEN
*
IF ( .NOT. ALMORT ) THEN
*
* Second Preconditioning Step (QRF [with pivoting])
* Note that the composition of TRANSPOSE, QRF and TRANSPOSE is
* equivalent to an LQF CALL. Since in many libraries the QRF
* seems to be better optimized than the LQF, we do explicit
* transpose and use the QRF. This is subject to changes in an
* optimized implementation of DGEJSV.
*
DO 1968 p = 1, NR
CALL DCOPY( N-p+1, A(p,p), LDA, V(p,p), 1 )
1968 CONTINUE
*
* .. the following two loops perturb small entries to avoid
* denormals in the second QR factorization, where they are
* as good as zeros. This is done to avoid painfully slow
* computation with denormals. The relative size of the perturbation
* is a parameter that can be changed by the implementer.
* This perturbation device will be obsolete on machines with
* properly implemented arithmetic.
* To switch it off, set L2PERT=.FALSE. To remove it from the
* code, remove the action under L2PERT=.TRUE., leave the ELSE part.
* The following two loops should be blocked and fused with the
* transposed copy above.
*
IF ( L2PERT ) THEN
XSC = DSQRT(SMALL)
DO 2969 q = 1, NR
TEMP1 = XSC*DABS( V(q,q) )
DO 2968 p = 1, N
IF ( ( p .GT. q ) .AND. ( DABS(V(p,q)) .LE. TEMP1 )
& .OR. ( p .LT. q ) )
& V(p,q) = DSIGN( TEMP1, V(p,q) )
IF ( p. LT. q ) V(p,q) = - V(p,q)
2968 CONTINUE
2969 CONTINUE
ELSE
CALL DLASET( 'U', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
END IF
*
* Estimate the row scaled condition number of R1
* (If R1 is rectangular, N > NR, then the condition number
* of the leading NR x NR submatrix is estimated.)
*
CALL DLACPY( 'L', NR, NR, V, LDV, WORK(2*N+1), NR )
DO 3950 p = 1, NR
TEMP1 = DNRM2(NR-p+1,WORK(2*N+(p-1)*NR+p),1)
CALL DSCAL(NR-p+1,ONE/TEMP1,WORK(2*N+(p-1)*NR+p),1)
3950 CONTINUE
CALL DPOCON('Lower',NR,WORK(2*N+1),NR,ONE,TEMP1,
& WORK(2*N+NR*NR+1),IWORK(M+2*N+1),IERR)
CONDR1 = ONE / DSQRT(TEMP1)
* .. here need a second opinion on the condition number
* .. then assume worst case scenario
* R1 is OK for inverse <=> CONDR1 .LT. DBLE(N)
* more conservative <=> CONDR1 .LT. DSQRT(DBLE(N))
*
COND_OK = DSQRT(DBLE(NR))
*[TP] COND_OK is a tuning parameter.
IF ( CONDR1 .LT. COND_OK ) THEN
* .. the second QRF without pivoting. Note: in an optimized
* implementation, this QRF should be implemented as the QRF
* of a lower triangular matrix.
* R1^t = Q2 * R2
CALL DGEQRF( N, NR, V, LDV, WORK(N+1), WORK(2*N+1),
& LWORK-2*N, IERR )
*
IF ( L2PERT ) THEN
XSC = DSQRT(SMALL)/EPSLN
DO 3959 p = 2, NR
DO 3958 q = 1, p - 1
TEMP1 = XSC * DMIN1(DABS(V(p,p)),DABS(V(q,q)))
IF ( DABS(V(q,p)) .LE. TEMP1 )
& V(q,p) = DSIGN( TEMP1, V(q,p) )
3958 CONTINUE
3959 CONTINUE
END IF
*
IF ( NR .NE. N )
* .. save ...
& CALL DLACPY( 'A', N, NR, V, LDV, WORK(2*N+1), N )
*
* .. this transposed copy should be better than naive
DO 1969 p = 1, NR - 1
CALL DCOPY( NR-p, V(p,p+1), LDV, V(p+1,p), 1 )
1969 CONTINUE
*
CONDR2 = CONDR1
*
ELSE
*
* .. ill-conditioned case: second QRF with pivoting
* Note that windowed pivoting would be equally good
* numerically, and more run-time efficient. So, in
* an optimal implementation, the next call to DGEQP3
* should be replaced with eg. CALL SGEQPX (ACM TOMS #782)
* with properly (carefully) chosen parameters.
*
* R1^t * P2 = Q2 * R2
DO 3003 p = 1, NR
IWORK(N+p) = 0
3003 CONTINUE
CALL DGEQP3( N, NR, V, LDV, IWORK(N+1), WORK(N+1),
& WORK(2*N+1), LWORK-2*N, IERR )
** CALL DGEQRF( N, NR, V, LDV, WORK(N+1), WORK(2*N+1),
** & LWORK-2*N, IERR )
IF ( L2PERT ) THEN
XSC = DSQRT(SMALL)
DO 3969 p = 2, NR
DO 3968 q = 1, p - 1
TEMP1 = XSC * DMIN1(DABS(V(p,p)),DABS(V(q,q)))
IF ( DABS(V(q,p)) .LE. TEMP1 )
& V(q,p) = DSIGN( TEMP1, V(q,p) )
3968 CONTINUE
3969 CONTINUE
END IF
*
CALL DLACPY( 'A', N, NR, V, LDV, WORK(2*N+1), N )
*
IF ( L2PERT ) THEN
XSC = DSQRT(SMALL)
DO 8970 p = 2, NR
DO 8971 q = 1, p - 1
TEMP1 = XSC * DMIN1(DABS(V(p,p)),DABS(V(q,q)))
V(p,q) = - DSIGN( TEMP1, V(q,p) )
8971 CONTINUE
8970 CONTINUE
ELSE
CALL DLASET( 'L',NR-1,NR-1,ZERO,ZERO,V(2,1),LDV )
END IF
* Now, compute R2 = L3 * Q3, the LQ factorization.
CALL DGELQF( NR, NR, V, LDV, WORK(2*N+N*NR+1),
& WORK(2*N+N*NR+NR+1), LWORK-2*N-N*NR-NR, IERR )
* .. and estimate the condition number
CALL DLACPY( 'L',NR,NR,V,LDV,WORK(2*N+N*NR+NR+1),NR )
DO 4950 p = 1, NR
TEMP1 = DNRM2( p, WORK(2*N+N*NR+NR+p), NR )
CALL DSCAL( p, ONE/TEMP1, WORK(2*N+N*NR+NR+p), NR )
4950 CONTINUE
CALL DPOCON( 'L',NR,WORK(2*N+N*NR+NR+1),NR,ONE,TEMP1,
& WORK(2*N+N*NR+NR+NR*NR+1),IWORK(M+2*N+1),IERR )
CONDR2 = ONE / DSQRT(TEMP1)
*
IF ( CONDR2 .GE. COND_OK ) THEN
* .. save the Householder vectors used for Q3
* (this overwrites the copy of R2, as it will not be
* needed in this branch, but it does not overwrite the
* Householder vectors of Q2.).
CALL DLACPY( 'U', NR, NR, V, LDV, WORK(2*N+1), N )
* .. and the rest of the information on Q3 is in
* WORK(2*N+N*NR+1:2*N+N*NR+N)
END IF
*
END IF
*
IF ( L2PERT ) THEN
XSC = DSQRT(SMALL)
DO 4968 q = 2, NR
TEMP1 = XSC * V(q,q)
DO 4969 p = 1, q - 1
* V(p,q) = - DSIGN( TEMP1, V(q,p) )
V(p,q) = - DSIGN( TEMP1, V(p,q) )
4969 CONTINUE
4968 CONTINUE
ELSE
CALL DLASET( 'U', NR-1,NR-1, ZERO,ZERO, V(1,2), LDV )
END IF
*
* Second preconditioning finished; continue with Jacobi SVD
* The input matrix is lower triangular.
*
* Recover the right singular vectors as solution of a well
* conditioned triangular matrix equation.
*
IF ( CONDR1 .LT. COND_OK ) THEN
*
CALL DGESVJ( 'L','U','N',NR,NR,V,LDV,SVA,NR,U,
& LDU,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,INFO )
SCALEM = WORK(2*N+N*NR+NR+1)
NUMRANK = IDNINT(WORK(2*N+N*NR+NR+2))
DO 3970 p = 1, NR
CALL DCOPY( NR, V(1,p), 1, U(1,p), 1 )
CALL DSCAL( NR, SVA(p), V(1,p), 1 )
3970 CONTINUE
* .. pick the right matrix equation and solve it
*
IF ( NR. EQ. N ) THEN
* :)) .. best case, R1 is inverted. The solution of this matrix
* equation is Q2*V2 = the product of the Jacobi rotations
* used in DGESVJ, premultiplied with the orthogonal matrix
* from the second QR factorization.
CALL DTRSM( 'L','U','N','N', NR,NR,ONE, A,LDA, V,LDV )
ELSE
* .. R1 is well conditioned, but non-square. Transpose(R2)
* is inverted to get the product of the Jacobi rotations
* used in DGESVJ. The Q-factor from the second QR
* factorization is then built in explicitly.
CALL DTRSM('L','U','T','N',NR,NR,ONE,WORK(2*N+1),
& N,V,LDV)
IF ( NR .LT. N ) THEN
CALL DLASET('A',N-NR,NR,ZERO,ZERO,V(NR+1,1),LDV)
CALL DLASET('A',NR,N-NR,ZERO,ZERO,V(1,NR+1),LDV)
CALL DLASET('A',N-NR,N-NR,ZERO,ONE,V(NR+1,NR+1),LDV)
END IF
CALL DORMQR('L','N',N,N,NR,WORK(2*N+1),N,WORK(N+1),
& V,LDV,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR)
END IF
*
ELSE IF ( CONDR2 .LT. COND_OK ) THEN
*
* :) .. the input matrix A is very likely a relative of
* the Kahan matrix :)
* The matrix R2 is inverted. The solution of the matrix equation
* is Q3^T*V3 = the product of the Jacobi rotations (applied to
* the lower triangular L3 from the LQ factorization of
* R2=L3*Q3), pre-multiplied with the transposed Q3.
CALL DGESVJ( 'L', 'U', 'N', NR, NR, V, LDV, SVA, NR, U,
& LDU, WORK(2*N+N*NR+NR+1), LWORK-2*N-N*NR-NR, INFO )
SCALEM = WORK(2*N+N*NR+NR+1)
NUMRANK = IDNINT(WORK(2*N+N*NR+NR+2))
DO 3870 p = 1, NR
CALL DCOPY( NR, V(1,p), 1, U(1,p), 1 )
CALL DSCAL( NR, SVA(p), U(1,p), 1 )
3870 CONTINUE
CALL DTRSM('L','U','N','N',NR,NR,ONE,WORK(2*N+1),N,U,LDU)
* .. apply the permutation from the second QR factorization
DO 873 q = 1, NR
DO 872 p = 1, NR
WORK(2*N+N*NR+NR+IWORK(N+p)) = U(p,q)
872 CONTINUE
DO 874 p = 1, NR
U(p,q) = WORK(2*N+N*NR+NR+p)
874 CONTINUE
873 CONTINUE
IF ( NR .LT. N ) THEN
CALL DLASET( 'A',N-NR,NR,ZERO,ZERO,V(NR+1,1),LDV )
CALL DLASET( 'A',NR,N-NR,ZERO,ZERO,V(1,NR+1),LDV )
CALL DLASET( 'A',N-NR,N-NR,ZERO,ONE,V(NR+1,NR+1),LDV )
END IF
CALL DORMQR( 'L','N',N,N,NR,WORK(2*N+1),N,WORK(N+1),
& V,LDV,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR )
ELSE
* Last line of defense.
* #:( This is a rather pathological case: no scaled condition
* improvement after two pivoted QR factorizations. Other
* possibility is that the rank revealing QR factorization
* or the condition estimator has failed, or the COND_OK
* is set very close to ONE (which is unnecessary). Normally,
* this branch should never be executed, but in rare cases of
* failure of the RRQR or condition estimator, the last line of
* defense ensures that DGEJSV completes the task.
* Compute the full SVD of L3 using DGESVJ with explicit
* accumulation of Jacobi rotations.
CALL DGESVJ( 'L', 'U', 'V', NR, NR, V, LDV, SVA, NR, U,
& LDU, WORK(2*N+N*NR+NR+1), LWORK-2*N-N*NR-NR, INFO )
SCALEM = WORK(2*N+N*NR+NR+1)
NUMRANK = IDNINT(WORK(2*N+N*NR+NR+2))
IF ( NR .LT. N ) THEN
CALL DLASET( 'A',N-NR,NR,ZERO,ZERO,V(NR+1,1),LDV )
CALL DLASET( 'A',NR,N-NR,ZERO,ZERO,V(1,NR+1),LDV )
CALL DLASET( 'A',N-NR,N-NR,ZERO,ONE,V(NR+1,NR+1),LDV )
END IF
CALL DORMQR( 'L','N',N,N,NR,WORK(2*N+1),N,WORK(N+1),
& V,LDV,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR )
*
CALL DORMLQ( 'L', 'T', NR, NR, NR, WORK(2*N+1), N,
& WORK(2*N+N*NR+1), U, LDU, WORK(2*N+N*NR+NR+1),
& LWORK-2*N-N*NR-NR, IERR )
DO 773 q = 1, NR
DO 772 p = 1, NR
WORK(2*N+N*NR+NR+IWORK(N+p)) = U(p,q)
772 CONTINUE
DO 774 p = 1, NR
U(p,q) = WORK(2*N+N*NR+NR+p)
774 CONTINUE
773 CONTINUE
*
END IF
*
* Permute the rows of V using the (column) permutation from the
* first QRF. Also, scale the columns to make them unit in
* Euclidean norm. This applies to all cases.
*
TEMP1 = DSQRT(DBLE(N)) * EPSLN
DO 1972 q = 1, N
DO 972 p = 1, N
WORK(2*N+N*NR+NR+IWORK(p)) = V(p,q)
972 CONTINUE
DO 973 p = 1, N
V(p,q) = WORK(2*N+N*NR+NR+p)
973 CONTINUE
XSC = ONE / DNRM2( N, V(1,q), 1 )
IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
& CALL DSCAL( N, XSC, V(1,q), 1 )
1972 CONTINUE
* At this moment, V contains the right singular vectors of A.
* Next, assemble the left singular vector matrix U (M x N).
IF ( NR .LT. M ) THEN
CALL DLASET( 'A', M-NR, NR, ZERO, ZERO, U(NR+1,1), LDU )
IF ( NR .LT. N1 ) THEN
CALL DLASET('A',NR,N1-NR,ZERO,ZERO,U(1,NR+1),LDU)
CALL DLASET('A',M-NR,N1-NR,ZERO,ONE,U(NR+1,NR+1),LDU)
END IF
END IF
*
* The Q matrix from the first QRF is built into the left singular
* matrix U. This applies to all cases.
*
CALL DORMQR( 'Left', 'No_Tr', M, N1, N, A, LDA, WORK, U,
& LDU, WORK(N+1), LWORK-N, IERR )
* The columns of U are normalized. The cost is O(M*N) flops.
TEMP1 = DSQRT(DBLE(M)) * EPSLN
DO 1973 p = 1, NR
XSC = ONE / DNRM2( M, U(1,p), 1 )
IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
& CALL DSCAL( M, XSC, U(1,p), 1 )
1973 CONTINUE
*
* If the initial QRF is computed with row pivoting, the left
* singular vectors must be adjusted.
*
IF ( ROWPIV )
& CALL DLASWP( N1, U, LDU, 1, M-1, IWORK(2*N+1), -1 )
*
ELSE
*
* .. the initial matrix A has almost orthogonal columns and
* the second QRF is not needed
*
CALL DLACPY( 'Upper', N, N, A, LDA, WORK(N+1), N )
IF ( L2PERT ) THEN
XSC = DSQRT(SMALL)
DO 5970 p = 2, N
TEMP1 = XSC * WORK( N + (p-1)*N + p )
DO 5971 q = 1, p - 1
WORK(N+(q-1)*N+p)=-DSIGN(TEMP1,WORK(N+(p-1)*N+q))
5971 CONTINUE
5970 CONTINUE
ELSE
CALL DLASET( 'Lower',N-1,N-1,ZERO,ZERO,WORK(N+2),N )
END IF
*
CALL DGESVJ( 'Upper', 'U', 'N', N, N, WORK(N+1), N, SVA,
& N, U, LDU, WORK(N+N*N+1), LWORK-N-N*N, INFO )
*
SCALEM = WORK(N+N*N+1)
NUMRANK = IDNINT(WORK(N+N*N+2))
DO 6970 p = 1, N
CALL DCOPY( N, WORK(N+(p-1)*N+1), 1, U(1,p), 1 )
CALL DSCAL( N, SVA(p), WORK(N+(p-1)*N+1), 1 )
6970 CONTINUE
*
CALL DTRSM( 'Left', 'Upper', 'NoTrans', 'No UD', N, N,
& ONE, A, LDA, WORK(N+1), N )
DO 6972 p = 1, N
CALL DCOPY( N, WORK(N+p), N, V(IWORK(p),1), LDV )
6972 CONTINUE
TEMP1 = DSQRT(DBLE(N))*EPSLN
DO 6971 p = 1, N
XSC = ONE / DNRM2( N, V(1,p), 1 )
IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
& CALL DSCAL( N, XSC, V(1,p), 1 )
6971 CONTINUE
*
* Assemble the left singular vector matrix U (M x N).
*
IF ( N .LT. M ) THEN
CALL DLASET( 'A', M-N, N, ZERO, ZERO, U(N+1,1), LDU )
IF ( N .LT. N1 ) THEN
CALL DLASET( 'A',N, N1-N, ZERO, ZERO, U(1,N+1),LDU )
CALL DLASET( 'A',M-N,N1-N, ZERO, ONE,U(N+1,N+1),LDU )
END IF
END IF
CALL DORMQR( 'Left', 'No Tr', M, N1, N, A, LDA, WORK, U,
& LDU, WORK(N+1), LWORK-N, IERR )
TEMP1 = DSQRT(DBLE(M))*EPSLN
DO 6973 p = 1, N1
XSC = ONE / DNRM2( M, U(1,p), 1 )
IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
& CALL DSCAL( M, XSC, U(1,p), 1 )
6973 CONTINUE
*
IF ( ROWPIV )
& CALL DLASWP( N1, U, LDU, 1, M-1, IWORK(2*N+1), -1 )
*
END IF
*
* end of the >> almost orthogonal case << in the full SVD
*
ELSE
*
* This branch deploys a preconditioned Jacobi SVD with explicitly
* accumulated rotations. It is included as optional, mainly for
* experimental purposes. It does perform well, and can also be used.
* In this implementation, this branch will be automatically activated
* if the condition number sigma_max(A) / sigma_min(A) is predicted
* to be greater than the overflow threshold. This is because the
* a posteriori computation of the singular vectors assumes a robust
* implementation of BLAS and some LAPACK procedures, capable of working
* in the presence of extreme values. Since that is not always the case, ...
*
DO 7968 p = 1, NR
CALL DCOPY( N-p+1, A(p,p), LDA, V(p,p), 1 )
7968 CONTINUE
*
IF ( L2PERT ) THEN
XSC = DSQRT(SMALL/EPSLN)
DO 5969 q = 1, NR
TEMP1 = XSC*DABS( V(q,q) )
DO 5968 p = 1, N
IF ( ( p .GT. q ) .AND. ( DABS(V(p,q)) .LE. TEMP1 )
& .OR. ( p .LT. q ) )
& V(p,q) = DSIGN( TEMP1, V(p,q) )
IF ( p. LT. q ) V(p,q) = - V(p,q)
5968 CONTINUE
5969 CONTINUE
ELSE
CALL DLASET( 'U', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
END IF
CALL DGEQRF( N, NR, V, LDV, WORK(N+1), WORK(2*N+1),
& LWORK-2*N, IERR )
CALL DLACPY( 'L', N, NR, V, LDV, WORK(2*N+1), N )
*
DO 7969 p = 1, NR
CALL DCOPY( NR-p+1, V(p,p), LDV, U(p,p), 1 )
7969 CONTINUE
IF ( L2PERT ) THEN
XSC = DSQRT(SMALL/EPSLN)
DO 9970 q = 2, NR
DO 9971 p = 1, q - 1
TEMP1 = XSC * DMIN1(DABS(U(p,p)),DABS(U(q,q)))
U(p,q) = - DSIGN( TEMP1, U(q,p) )
9971 CONTINUE
9970 CONTINUE
ELSE
CALL DLASET('U', NR-1, NR-1, ZERO, ZERO, U(1,2), LDU )
END IF
CALL DGESVJ( 'G', 'U', 'V', NR, NR, U, LDU, SVA,
& N, V, LDV, WORK(2*N+N*NR+1), LWORK-2*N-N*NR, INFO )
SCALEM = WORK(2*N+N*NR+1)
NUMRANK = IDNINT(WORK(2*N+N*NR+2))
IF ( NR .LT. N ) THEN
CALL DLASET( 'A',N-NR,NR,ZERO,ZERO,V(NR+1,1),LDV )
CALL DLASET( 'A',NR,N-NR,ZERO,ZERO,V(1,NR+1),LDV )
CALL DLASET( 'A',N-NR,N-NR,ZERO,ONE,V(NR+1,NR+1),LDV )
END IF
CALL DORMQR( 'L','N',N,N,NR,WORK(2*N+1),N,WORK(N+1),
& V,LDV,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR )
*
* Permute the rows of V using the (column) permutation from the
* first QRF. Also, scale the columns to make them unit in
* Euclidean norm. This applies to all cases.
*
TEMP1 = DSQRT(DBLE(N)) * EPSLN
DO 7972 q = 1, N
DO 8972 p = 1, N
WORK(2*N+N*NR+NR+IWORK(p)) = V(p,q)
8972 CONTINUE
DO 8973 p = 1, N
V(p,q) = WORK(2*N+N*NR+NR+p)
8973 CONTINUE
XSC = ONE / DNRM2( N, V(1,q), 1 )
IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
& CALL DSCAL( N, XSC, V(1,q), 1 )
7972 CONTINUE
*
* At this moment, V contains the right singular vectors of A.
* Next, assemble the left singular vector matrix U (M x N).
*
IF ( NR .LT. M ) THEN
CALL DLASET( 'A', M-NR, NR, ZERO, ZERO, U(NR+1,1), LDU )
IF ( NR .LT. N1 ) THEN
CALL DLASET( 'A',NR, N1-NR, ZERO, ZERO, U(1,NR+1),LDU )
CALL DLASET( 'A',M-NR,N1-NR, ZERO, ONE,U(NR+1,NR+1),LDU )
END IF
END IF
*
CALL DORMQR( 'Left', 'No Tr', M, N1, N, A, LDA, WORK, U,
& LDU, WORK(N+1), LWORK-N, IERR )
*
IF ( ROWPIV )
& CALL DLASWP( N1, U, LDU, 1, M-1, IWORK(2*N+1), -1 )
*
*
END IF
IF ( TRANSP ) THEN
* .. swap U and V because the procedure worked on A^t
DO 6974 p = 1, N
CALL DSWAP( N, U(1,p), 1, V(1,p), 1 )
6974 CONTINUE
END IF
*
END IF
* end of the full SVD
*
* Undo scaling, if necessary (and possible)
*
IF ( USCAL2 .LE. (BIG/SVA(1))*USCAL1 ) THEN
CALL DLASCL( 'G', 0, 0, USCAL1, USCAL2, NR, 1, SVA, N, IERR )
USCAL1 = ONE
USCAL2 = ONE
END IF
*
IF ( NR .LT. N ) THEN
DO 3004 p = NR+1, N
SVA(p) = ZERO
3004 CONTINUE
END IF
*
WORK(1) = USCAL2 * SCALEM
WORK(2) = USCAL1
IF ( ERREST ) WORK(3) = SCONDA
IF ( LSVEC .AND. RSVEC ) THEN
WORK(4) = CONDR1
WORK(5) = CONDR2
END IF
IF ( L2TRAN ) THEN
WORK(6) = ENTRA
WORK(7) = ENTRAT
END IF
*
IWORK(1) = NR
IWORK(2) = NUMRANK
IWORK(3) = WARNING
*
RETURN
* ..
* .. END OF DGEJSV
* ..
END
*
go to the page top
dgelq2
USAGE:
tau, info, a = NumRu::Lapack.dgelq2( a, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGELQ2( M, N, A, LDA, TAU, WORK, INFO )
* Purpose
* =======
*
* DGELQ2 computes an LQ factorization of a real m by n matrix A:
* A = L * Q.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the m by n matrix A.
* On exit, the elements on and below the diagonal of the array
* contain the m by min(m,n) lower trapezoidal matrix L (L is
* lower triangular if m <= n); the elements above the diagonal,
* with the array TAU, represent the orthogonal matrix Q as a
* product of elementary reflectors (see Further Details).
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* TAU (output) DOUBLE PRECISION array, dimension (min(M,N))
* The scalar factors of the elementary reflectors (see Further
* Details).
*
* WORK (workspace) DOUBLE PRECISION array, dimension (M)
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* Further Details
* ===============
*
* The matrix Q is represented as a product of elementary reflectors
*
* Q = H(k) . . . H(2) H(1), where k = min(m,n).
*
* Each H(i) has the form
*
* H(i) = I - tau * v * v'
*
* where tau is a real scalar, and v is a real vector with
* v(1:i-1) = 0 and v(i) = 1; v(i+1:n) is stored on exit in A(i,i+1:n),
* and tau in TAU(i).
*
* =====================================================================
*
go to the page top
dgelqf
USAGE:
tau, work, info, a = NumRu::Lapack.dgelqf( m, a, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGELQF( M, N, A, LDA, TAU, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGELQF computes an LQ factorization of a real M-by-N matrix A:
* A = L * Q.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit, the elements on and below the diagonal of the array
* contain the m-by-min(m,n) lower trapezoidal matrix L (L is
* lower triangular if m <= n); the elements above the diagonal,
* with the array TAU, represent the orthogonal matrix Q as a
* product of elementary reflectors (see Further Details).
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* TAU (output) DOUBLE PRECISION array, dimension (min(M,N))
* The scalar factors of the elementary reflectors (see Further
* Details).
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK >= max(1,M).
* For optimum performance LWORK >= M*NB, where NB is the
* optimal blocksize.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* Further Details
* ===============
*
* The matrix Q is represented as a product of elementary reflectors
*
* Q = H(k) . . . H(2) H(1), where k = min(m,n).
*
* Each H(i) has the form
*
* H(i) = I - tau * v * v'
*
* where tau is a real scalar, and v is a real vector with
* v(1:i-1) = 0 and v(i) = 1; v(i+1:n) is stored on exit in A(i,i+1:n),
* and tau in TAU(i).
*
* =====================================================================
*
* .. Local Scalars ..
LOGICAL LQUERY
INTEGER I, IB, IINFO, IWS, K, LDWORK, LWKOPT, NB,
$ NBMIN, NX
* ..
* .. External Subroutines ..
EXTERNAL DGELQ2, DLARFB, DLARFT, XERBLA
* ..
* .. Intrinsic Functions ..
INTRINSIC MAX, MIN
* ..
* .. External Functions ..
INTEGER ILAENV
EXTERNAL ILAENV
* ..
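EXAMPLE:
A minimal sketch of an LQ factorization through the Ruby wrapper. The matrix values are arbitrary, and the example assumes the NArray data is stored column-major, so each inner array of the literal is one column of A.

  require "numru/lapack"

  # a 2x4 matrix given column by column (four columns of length 2)
  a = NArray[[1.0, 5.0], [2.0, 6.0], [3.0, 7.0], [4.0, 8.0]]
  m = 2

  tau, work, info, a = NumRu::Lapack.dgelqf(m, a)
  # the part of a on and below the diagonal now holds L; the strictly
  # upper part, together with tau, encodes the orthogonal factor Q as a
  # product of elementary reflectors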
go to the page top
dgels
USAGE:
work, info, a, b = NumRu::Lapack.dgels( trans, a, b, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGELS( TRANS, M, N, NRHS, A, LDA, B, LDB, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGELS solves overdetermined or underdetermined real linear systems
* involving an M-by-N matrix A, or its transpose, using a QR or LQ
* factorization of A. It is assumed that A has full rank.
*
* The following options are provided:
*
* 1. If TRANS = 'N' and m >= n: find the least squares solution of
* an overdetermined system, i.e., solve the least squares problem
* minimize || B - A*X ||.
*
* 2. If TRANS = 'N' and m < n: find the minimum norm solution of
* an underdetermined system A * X = B.
*
* 3. If TRANS = 'T' and m >= n: find the minimum norm solution of
* an underdetermined system A**T * X = B.
*
* 4. If TRANS = 'T' and m < n: find the least squares solution of
* an overdetermined system, i.e., solve the least squares problem
* minimize || B - A**T * X ||.
*
* Several right hand side vectors b and solution vectors x can be
* handled in a single call; they are stored as the columns of the
* M-by-NRHS right hand side matrix B and the N-by-NRHS solution
* matrix X.
*
* Arguments
* =========
*
* TRANS (input) CHARACTER*1
* = 'N': the linear system involves A;
* = 'T': the linear system involves A**T.
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* NRHS (input) INTEGER
* The number of right hand sides, i.e., the number of
* columns of the matrices B and X. NRHS >=0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit,
* if M >= N, A is overwritten by details of its QR
* factorization as returned by DGEQRF;
* if M < N, A is overwritten by details of its LQ
* factorization as returned by DGELQF.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS)
* On entry, the matrix B of right hand side vectors, stored
* columnwise; B is M-by-NRHS if TRANS = 'N', or N-by-NRHS
* if TRANS = 'T'.
* On exit, if INFO = 0, B is overwritten by the solution
* vectors, stored columnwise:
* if TRANS = 'N' and m >= n, rows 1 to n of B contain the least
* squares solution vectors; the residual sum of squares for the
* solution in each column is given by the sum of squares of
* elements N+1 to M in that column;
* if TRANS = 'N' and m < n, rows 1 to N of B contain the
* minimum norm solution vectors;
* if TRANS = 'T' and m >= n, rows 1 to M of B contain the
* minimum norm solution vectors;
* if TRANS = 'T' and m < n, rows 1 to M of B contain the
* least squares solution vectors; the residual sum of squares
* for the solution in each column is given by the sum of
* squares of elements M+1 to N in that column.
*
* LDB (input) INTEGER
* The leading dimension of the array B. LDB >= MAX(1,M,N).
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK.
* LWORK >= max( 1, MN + max( MN, NRHS ) ).
* For optimal performance,
* LWORK >= max( 1, MN + max( MN, NRHS )*NB ).
* where MN = min(M,N) and NB is the optimum block size.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
* > 0: if INFO = i, the i-th diagonal element of the
* triangular factor of A is zero, so that A does not have
* full rank; the least squares solution could not be
* computed.
*
* =====================================================================
*
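EXAMPLE:
A minimal sketch of solving an overdetermined least squares problem with this wrapper. The 3x2 matrix and right hand side are chosen so that the solution is exactly [1, 2]; the NArray literals are written column by column (column-major, as the Fortran routine expects).

  require "numru/lapack"

  a = NArray[[1.0, 0.0, 1.0],    # first column of A
             [0.0, 1.0, 1.0]]    # second column of A
  b = NArray[[1.0, 2.0, 3.0]]    # one right hand side, LDB = max(M,N) = 3

  work, info, a, b = NumRu::Lapack.dgels("N", a, b)
  raise "dgels failed, info=#{info}" unless info == 0

  x = b[0..1, 0]                 # rows 1..N of B hold the solution, here [1.0, 2.0]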
go to the page top
dgelsd
USAGE:
s, rank, work, info, b = NumRu::Lapack.dgelsd( a, b, rcond, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGELSD( M, N, NRHS, A, LDA, B, LDB, S, RCOND, RANK, WORK, LWORK, IWORK, INFO )
* Purpose
* =======
*
* DGELSD computes the minimum-norm solution to a real linear least
* squares problem:
* minimize 2-norm(| b - A*x |)
* using the singular value decomposition (SVD) of A. A is an M-by-N
* matrix which may be rank-deficient.
*
* Several right hand side vectors b and solution vectors x can be
* handled in a single call; they are stored as the columns of the
* M-by-NRHS right hand side matrix B and the N-by-NRHS solution
* matrix X.
*
* The problem is solved in three steps:
* (1) Reduce the coefficient matrix A to bidiagonal form with
* Householder transformations, reducing the original problem
* into a "bidiagonal least squares problem" (BLS)
* (2) Solve the BLS using a divide and conquer approach.
* (3) Apply back all the Householder transformations to solve
* the original least squares problem.
*
* The effective rank of A is determined by treating as zero those
* singular values which are less than RCOND times the largest singular
* value.
*
* The divide and conquer algorithm makes very mild assumptions about
* floating point arithmetic. It will work on machines with a guard
* digit in add/subtract, or on those binary machines without guard
* digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
* Cray-2. It could conceivably fail on hexadecimal or decimal machines
* without guard digits, but we know of none.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of A. M >= 0.
*
* N (input) INTEGER
* The number of columns of A. N >= 0.
*
* NRHS (input) INTEGER
* The number of right hand sides, i.e., the number of columns
* of the matrices B and X. NRHS >= 0.
*
* A (input) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit, A has been destroyed.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS)
* On entry, the M-by-NRHS right hand side matrix B.
* On exit, B is overwritten by the N-by-NRHS solution
* matrix X. If m >= n and RANK = n, the residual
* sum-of-squares for the solution in the i-th column is given
* by the sum of squares of elements n+1:m in that column.
*
* LDB (input) INTEGER
* The leading dimension of the array B. LDB >= max(1,max(M,N)).
*
* S (output) DOUBLE PRECISION array, dimension (min(M,N))
* The singular values of A in decreasing order.
* The condition number of A in the 2-norm = S(1)/S(min(m,n)).
*
* RCOND (input) DOUBLE PRECISION
* RCOND is used to determine the effective rank of A.
* Singular values S(i) <= RCOND*S(1) are treated as zero.
* If RCOND < 0, machine precision is used instead.
*
* RANK (output) INTEGER
* The effective rank of A, i.e., the number of singular values
* which are greater than RCOND*S(1).
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK must be at least 1.
* The exact minimum amount of workspace needed depends on M,
* N and NRHS. As long as LWORK is at least
* 12*N + 2*N*SMLSIZ + 8*N*NLVL + N*NRHS + (SMLSIZ+1)**2,
* if M is greater than or equal to N or
* 12*M + 2*M*SMLSIZ + 8*M*NLVL + M*NRHS + (SMLSIZ+1)**2,
* if M is less than N, the code will execute correctly.
* SMLSIZ is returned by ILAENV and is equal to the maximum
* size of the subproblems at the bottom of the computation
* tree (usually about 25), and
* NLVL = MAX( 0, INT( LOG_2( MIN( M,N )/(SMLSIZ+1) ) ) + 1 )
* For good performance, LWORK should generally be larger.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* IWORK (workspace) INTEGER array, dimension (MAX(1,LIWORK))
* LIWORK >= max(1, 3 * MINMN * NLVL + 11 * MINMN),
* where MINMN = MIN( M,N ).
* On exit, if INFO = 0, IWORK(1) returns the minimum LIWORK.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value.
* > 0: the algorithm for computing the SVD failed to converge;
* if INFO = i, i off-diagonal elements of an intermediate
* bidiagonal form did not converge to zero.
*
* Further Details
* ===============
*
* Based on contributions by
* Ming Gu and Ren-Cang Li, Computer Science Division, University of
* California at Berkeley, USA
* Osni Marques, LBNL/NERSC, USA
*
* =====================================================================
*
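EXAMPLE:
A minimal sketch for a rank-deficient system (the third column of A is the sum of the first two, so the effective rank is 2). The values are illustrative only; a negative RCOND asks the routine to use machine precision for the rank decision.

  require "numru/lapack"

  a = NArray[[1.0, 0.0, 1.0],
             [0.0, 1.0, 1.0],
             [1.0, 1.0, 2.0]]     # columns of a singular 3x3 matrix
  b = NArray[[1.0, 2.0, 3.0]]

  s, rank, work, info, b = NumRu::Lapack.dgelsd(a, b, -1.0)
  # s holds the singular values (descending), rank should come back as 2,
  # and b is overwritten with the minimum norm solution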
go to the page top
dgelss
USAGE:
s, rank, work, info, a, b = NumRu::Lapack.dgelss( a, b, rcond, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGELSS( M, N, NRHS, A, LDA, B, LDB, S, RCOND, RANK, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGELSS computes the minimum norm solution to a real linear least
* squares problem:
*
* Minimize 2-norm(| b - A*x |).
*
* using the singular value decomposition (SVD) of A. A is an M-by-N
* matrix which may be rank-deficient.
*
* Several right hand side vectors b and solution vectors x can be
* handled in a single call; they are stored as the columns of the
* M-by-NRHS right hand side matrix B and the N-by-NRHS solution matrix
* X.
*
* The effective rank of A is determined by treating as zero those
* singular values which are less than RCOND times the largest singular
* value.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* NRHS (input) INTEGER
* The number of right hand sides, i.e., the number of columns
* of the matrices B and X. NRHS >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit, the first min(m,n) rows of A are overwritten with
* its right singular vectors, stored rowwise.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS)
* On entry, the M-by-NRHS right hand side matrix B.
* On exit, B is overwritten by the N-by-NRHS solution
* matrix X. If m >= n and RANK = n, the residual
* sum-of-squares for the solution in the i-th column is given
* by the sum of squares of elements n+1:m in that column.
*
* LDB (input) INTEGER
* The leading dimension of the array B. LDB >= max(1,max(M,N)).
*
* S (output) DOUBLE PRECISION array, dimension (min(M,N))
* The singular values of A in decreasing order.
* The condition number of A in the 2-norm = S(1)/S(min(m,n)).
*
* RCOND (input) DOUBLE PRECISION
* RCOND is used to determine the effective rank of A.
* Singular values S(i) <= RCOND*S(1) are treated as zero.
* If RCOND < 0, machine precision is used instead.
*
* RANK (output) INTEGER
* The effective rank of A, i.e., the number of singular values
* which are greater than RCOND*S(1).
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK >= 1, and also:
* LWORK >= 3*min(M,N) + max( 2*min(M,N), max(M,N), NRHS )
* For good performance, LWORK should generally be larger.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value.
* > 0: the algorithm for computing the SVD failed to converge;
* if INFO = i, i off-diagonal elements of an intermediate
* bidiagonal form did not converge to zero.
*
* =====================================================================
*
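EXAMPLE:
A sketch of the same overdetermined problem used for dgels above, solved via the SVD so that the singular values and a rank estimate are returned as well. The assumptions about the NArray layout are the same as in the earlier examples.

  require "numru/lapack"

  a = NArray[[1.0, 0.0, 1.0],
             [0.0, 1.0, 1.0]]
  b = NArray[[1.0, 2.0, 3.0]]

  s, rank, work, info, a, b = NumRu::Lapack.dgelss(a, b, -1.0)
  # s    : singular values of A in decreasing order
  # rank : effective rank (2 for this full-rank A)
  # a    : its first min(m,n) rows now hold the right singular vectors
  # b    : rows 1..N hold the least squares solution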
go to the page top
dgelsx
USAGE:
rank, info, a, b, jpvt = NumRu::Lapack.dgelsx( m, a, b, jpvt, rcond, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGELSX( M, N, NRHS, A, LDA, B, LDB, JPVT, RCOND, RANK, WORK, INFO )
* Purpose
* =======
*
* This routine is deprecated and has been replaced by routine DGELSY.
*
* DGELSX computes the minimum-norm solution to a real linear least
* squares problem:
* minimize || A * X - B ||
* using a complete orthogonal factorization of A. A is an M-by-N
* matrix which may be rank-deficient.
*
* Several right hand side vectors b and solution vectors x can be
* handled in a single call; they are stored as the columns of the
* M-by-NRHS right hand side matrix B and the N-by-NRHS solution
* matrix X.
*
* The routine first computes a QR factorization with column pivoting:
* A * P = Q * [ R11 R12 ]
* [ 0 R22 ]
* with R11 defined as the largest leading submatrix whose estimated
* condition number is less than 1/RCOND. The order of R11, RANK,
* is the effective rank of A.
*
* Then, R22 is considered to be negligible, and R12 is annihilated
* by orthogonal transformations from the right, arriving at the
* complete orthogonal factorization:
* A * P = Q * [ T11 0 ] * Z
* [ 0 0 ]
* The minimum-norm solution is then
* X = P * Z' [ inv(T11)*Q1'*B ]
* [ 0 ]
* where Q1 consists of the first RANK columns of Q.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* NRHS (input) INTEGER
* The number of right hand sides, i.e., the number of
* columns of matrices B and X. NRHS >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit, A has been overwritten by details of its
* complete orthogonal factorization.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS)
* On entry, the M-by-NRHS right hand side matrix B.
* On exit, the N-by-NRHS solution matrix X.
* If m >= n and RANK = n, the residual sum-of-squares for
* the solution in the i-th column is given by the sum of
* squares of elements N+1:M in that column.
*
* LDB (input) INTEGER
* The leading dimension of the array B. LDB >= max(1,M,N).
*
* JPVT (input/output) INTEGER array, dimension (N)
* On entry, if JPVT(i) .ne. 0, the i-th column of A is an
* initial column, otherwise it is a free column. Before
* the QR factorization of A, all initial columns are
* permuted to the leading positions; only the remaining
* free columns are moved as a result of column pivoting
* during the factorization.
* On exit, if JPVT(i) = k, then the i-th column of A*P
* was the k-th column of A.
*
* RCOND (input) DOUBLE PRECISION
* RCOND is used to determine the effective rank of A, which
* is defined as the order of the largest leading triangular
* submatrix R11 in the QR factorization with pivoting of A,
* whose estimated condition number < 1/RCOND.
*
* RANK (output) INTEGER
* The effective rank of A, i.e., the order of the submatrix
* R11. This is the same as the order of the submatrix T11
* in the complete orthogonal factorization of A.
*
* WORK (workspace) DOUBLE PRECISION array, dimension
* (max( min(M,N)+3*N, 2*min(M,N)+NRHS )),
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* =====================================================================
*
go to the page top
dgelsy
USAGE:
rank, work, info, a, b, jpvt = NumRu::Lapack.dgelsy( a, b, jpvt, rcond, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGELSY( M, N, NRHS, A, LDA, B, LDB, JPVT, RCOND, RANK, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGELSY computes the minimum-norm solution to a real linear least
* squares problem:
* minimize || A * X - B ||
* using a complete orthogonal factorization of A. A is an M-by-N
* matrix which may be rank-deficient.
*
* Several right hand side vectors b and solution vectors x can be
* handled in a single call; they are stored as the columns of the
* M-by-NRHS right hand side matrix B and the N-by-NRHS solution
* matrix X.
*
* The routine first computes a QR factorization with column pivoting:
* A * P = Q * [ R11 R12 ]
* [ 0 R22 ]
* with R11 defined as the largest leading submatrix whose estimated
* condition number is less than 1/RCOND. The order of R11, RANK,
* is the effective rank of A.
*
* Then, R22 is considered to be negligible, and R12 is annihilated
* by orthogonal transformations from the right, arriving at the
* complete orthogonal factorization:
* A * P = Q * [ T11 0 ] * Z
* [ 0 0 ]
* The minimum-norm solution is then
* X = P * Z' [ inv(T11)*Q1'*B ]
* [ 0 ]
* where Q1 consists of the first RANK columns of Q.
*
* This routine is basically identical to the original xGELSX except
* three differences:
* o The call to the subroutine xGEQPF has been substituted by the
* call to the subroutine xGEQP3. This subroutine is a Blas-3
* version of the QR factorization with column pivoting.
* o Matrix B (the right hand side) is updated with Blas-3.
* o The permutation of matrix B (the right hand side) is faster and
* simpler.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* NRHS (input) INTEGER
* The number of right hand sides, i.e., the number of
* columns of matrices B and X. NRHS >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit, A has been overwritten by details of its
* complete orthogonal factorization.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS)
* On entry, the M-by-NRHS right hand side matrix B.
* On exit, the N-by-NRHS solution matrix X.
*
* LDB (input) INTEGER
* The leading dimension of the array B. LDB >= max(1,M,N).
*
* JPVT (input/output) INTEGER array, dimension (N)
* On entry, if JPVT(i) .ne. 0, the i-th column of A is permuted
* to the front of AP, otherwise column i is a free column.
* On exit, if JPVT(i) = k, then the i-th column of AP
* was the k-th column of A.
*
* RCOND (input) DOUBLE PRECISION
* RCOND is used to determine the effective rank of A, which
* is defined as the order of the largest leading triangular
* submatrix R11 in the QR factorization with pivoting of A,
* whose estimated condition number < 1/RCOND.
*
* RANK (output) INTEGER
* The effective rank of A, i.e., the order of the submatrix
* R11. This is the same as the order of the submatrix T11
* in the complete orthogonal factorization of A.
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK.
* The unblocked strategy requires that:
* LWORK >= MAX( MN+3*N+1, 2*MN+NRHS ),
* where MN = min( M, N ).
* The block algorithm requires that:
* LWORK >= MAX( MN+2*N+NB*(N+1), 2*MN+NB*NRHS ),
* where NB is an upper bound on the blocksize returned
* by ILAENV for the routines DGEQP3, DTZRZF, DTZRQF, DORMQR,
* and DORMRZ.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: If INFO = -i, the i-th argument had an illegal value.
*
* Further Details
* ===============
*
* Based on contributions by
* A. Petitet, Computer Science Dept., Univ. of Tenn., Knoxville, USA
* E. Quintana-Orti, Depto. de Informatica, Universidad Jaime I, Spain
* G. Quintana-Orti, Depto. de Informatica, Universidad Jaime I, Spain
*
* =====================================================================
*
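EXAMPLE:
A sketch using the pivoted complete orthogonal factorization on the same rank-2 matrix as in the dgelsd example. JPVT is initialized to zero so that every column is free to be pivoted; the RCOND value 1e-12 is an arbitrary illustrative threshold.

  require "numru/lapack"

  a = NArray[[1.0, 0.0, 1.0],
             [0.0, 1.0, 1.0],
             [1.0, 1.0, 2.0]]
  b = NArray[[1.0, 2.0, 3.0]]
  jpvt = NArray.int(3)            # all zeros: no column is forced to the front

  rank, work, info, a, b, jpvt = NumRu::Lapack.dgelsy(a, b, jpvt, 1e-12)
  # rank should come back as 2, b holds the minimum norm solution, and
  # jpvt records the column permutation actually used (1-based indices)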
go to the page top
dgeql2
USAGE:
tau, info, a = NumRu::Lapack.dgeql2( m, a, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEQL2( M, N, A, LDA, TAU, WORK, INFO )
* Purpose
* =======
*
* DGEQL2 computes a QL factorization of a real m by n matrix A:
* A = Q * L.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the m by n matrix A.
* On exit, if m >= n, the lower triangle of the subarray
* A(m-n+1:m,1:n) contains the n by n lower triangular matrix L;
* if m <= n, the elements on and below the (n-m)-th
* superdiagonal contain the m by n lower trapezoidal matrix L;
* the remaining elements, with the array TAU, represent the
* orthogonal matrix Q as a product of elementary reflectors
* (see Further Details).
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* TAU (output) DOUBLE PRECISION array, dimension (min(M,N))
* The scalar factors of the elementary reflectors (see Further
* Details).
*
* WORK (workspace) DOUBLE PRECISION array, dimension (N)
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* Further Details
* ===============
*
* The matrix Q is represented as a product of elementary reflectors
*
* Q = H(k) . . . H(2) H(1), where k = min(m,n).
*
* Each H(i) has the form
*
* H(i) = I - tau * v * v'
*
* where tau is a real scalar, and v is a real vector with
* v(m-k+i+1:m) = 0 and v(m-k+i) = 1; v(1:m-k+i-1) is stored on exit in
* A(1:m-k+i-1,n-k+i), and tau in TAU(i).
*
* =====================================================================
*
go to the page top
dgeqlf
USAGE:
tau, work, info, a = NumRu::Lapack.dgeqlf( m, a, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEQLF( M, N, A, LDA, TAU, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGEQLF computes a QL factorization of a real M-by-N matrix A:
* A = Q * L.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit,
* if m >= n, the lower triangle of the subarray
* A(m-n+1:m,1:n) contains the N-by-N lower triangular matrix L;
* if m <= n, the elements on and below the (n-m)-th
* superdiagonal contain the M-by-N lower trapezoidal matrix L;
* the remaining elements, with the array TAU, represent the
* orthogonal matrix Q as a product of elementary reflectors
* (see Further Details).
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* TAU (output) DOUBLE PRECISION array, dimension (min(M,N))
* The scalar factors of the elementary reflectors (see Further
* Details).
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK >= max(1,N).
* For optimum performance LWORK >= N*NB, where NB is the
* optimal blocksize.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* Further Details
* ===============
*
* The matrix Q is represented as a product of elementary reflectors
*
* Q = H(k) . . . H(2) H(1), where k = min(m,n).
*
* Each H(i) has the form
*
* H(i) = I - tau * v * v'
*
* where tau is a real scalar, and v is a real vector with
* v(m-k+i+1:m) = 0 and v(m-k+i) = 1; v(1:m-k+i-1) is stored on exit in
* A(1:m-k+i-1,n-k+i), and tau in TAU(i).
*
* =====================================================================
*
* .. Local Scalars ..
LOGICAL LQUERY
INTEGER I, IB, IINFO, IWS, K, KI, KK, LDWORK, LWKOPT,
$ MU, NB, NBMIN, NU, NX
* ..
* .. External Subroutines ..
EXTERNAL DGEQL2, DLARFB, DLARFT, XERBLA
* ..
* .. Intrinsic Functions ..
INTRINSIC MAX, MIN
* ..
* .. External Functions ..
INTEGER ILAENV
EXTERNAL ILAENV
* ..
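EXAMPLE:
A sketch of a QL factorization of a tall matrix. With m >= n the n-by-n lower triangular factor L ends up in the bottom block A(m-n+1:m,1:n); the values below are arbitrary and the NArray is written column by column as in the examples above.

  require "numru/lapack"

  a = NArray[[1.0, 2.0, 3.0, 4.0],
             [5.0, 6.0, 7.0, 8.0]]   # two columns of length 4, i.e. a 4x2 matrix

  tau, work, info, a = NumRu::Lapack.dgeqlf(4, a)
  # a[2..3, 0..1] (Fortran A(3:4,1:2)) now holds the lower triangular L;
  # the remaining elements, with tau, encode the orthogonal factor Q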
go to the page top
dgeqp3
USAGE:
tau, work, info, a, jpvt = NumRu::Lapack.dgeqp3( m, a, jpvt, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEQP3( M, N, A, LDA, JPVT, TAU, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGEQP3 computes a QR factorization with column pivoting of a
* matrix A: A*P = Q*R using Level 3 BLAS.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit, the upper triangle of the array contains the
* min(M,N)-by-N upper trapezoidal matrix R; the elements below
* the diagonal, together with the array TAU, represent the
* orthogonal matrix Q as a product of min(M,N) elementary
* reflectors.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* JPVT (input/output) INTEGER array, dimension (N)
* On entry, if JPVT(J).ne.0, the J-th column of A is permuted
* to the front of A*P (a leading column); if JPVT(J)=0,
* the J-th column of A is a free column.
* On exit, if JPVT(J)=K, then the J-th column of A*P was the
* K-th column of A.
*
* TAU (output) DOUBLE PRECISION array, dimension (min(M,N))
* The scalar factors of the elementary reflectors.
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO=0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK >= 3*N+1.
* For optimal performance LWORK >= 2*N+( N+1 )*NB, where NB
* is the optimal blocksize.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit.
* < 0: if INFO = -i, the i-th argument had an illegal value.
*
* Further Details
* ===============
*
* The matrix Q is represented as a product of elementary reflectors
*
* Q = H(1) H(2) . . . H(k), where k = min(m,n).
*
* Each H(i) has the form
*
* H(i) = I - tau * v * v'
*
* where tau is a real/complex scalar, and v is a real/complex vector
* with v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in
* A(i+1:m,i), and tau in TAU(i).
*
* Based on contributions by
* G. Quintana-Orti, Depto. de Informatica, Universidad Jaime I, Spain
* X. Sun, Computer Science Dept., Duke University, USA
*
* =====================================================================
*
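EXAMPLE:
A sketch of a column-pivoted QR factorization. The third column of A is a tiny multiple of the second, so it should be pivoted toward the back; JPVT is zeroed so all columns are free. The matrix values are illustrative only.

  require "numru/lapack"

  a = NArray[[1.0, 1.0, 1.0, 1.0],
             [1.0, 2.0, 3.0, 4.0],
             [1.0e-10, 2.0e-10, 3.0e-10, 4.0e-10]]   # columns of a 4x3 matrix
  jpvt = NArray.int(3)

  tau, work, info, a, jpvt = NumRu::Lapack.dgeqp3(4, a, jpvt)
  # the upper triangle of a holds R with diagonal entries of non-increasing
  # magnitude; jpvt holds the column permutation (1-based, as described above)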
go to the page top
dgeqpf
USAGE:
tau, info, a, jpvt = NumRu::Lapack.dgeqpf( m, a, jpvt, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEQPF( M, N, A, LDA, JPVT, TAU, WORK, INFO )
* Purpose
* =======
*
* This routine is deprecated and has been replaced by routine DGEQP3.
*
* DGEQPF computes a QR factorization with column pivoting of a
* real M-by-N matrix A: A*P = Q*R.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit, the upper triangle of the array contains the
* min(M,N)-by-N upper triangular matrix R; the elements
* below the diagonal, together with the array TAU,
* represent the orthogonal matrix Q as a product of
* min(m,n) elementary reflectors.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* JPVT (input/output) INTEGER array, dimension (N)
* On entry, if JPVT(i) .ne. 0, the i-th column of A is permuted
* to the front of A*P (a leading column); if JPVT(i) = 0,
* the i-th column of A is a free column.
* On exit, if JPVT(i) = k, then the i-th column of A*P
* was the k-th column of A.
*
* TAU (output) DOUBLE PRECISION array, dimension (min(M,N))
* The scalar factors of the elementary reflectors.
*
* WORK (workspace) DOUBLE PRECISION array, dimension (3*N)
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* Further Details
* ===============
*
* The matrix Q is represented as a product of elementary reflectors
*
* Q = H(1) H(2) . . . H(n)
*
* Each H(i) has the form
*
* H = I - tau * v * v'
*
* where tau is a real scalar, and v is a real vector with
* v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(i+1:m,i).
*
* The matrix P is represented in jpvt as follows: If
* jpvt(j) = i
* then the jth column of P is the ith canonical unit vector.
*
* Partial column norm updating strategy modified by
* Z. Drmac and Z. Bujanovic, Dept. of Mathematics,
* University of Zagreb, Croatia.
* June 2010
* For more details see LAPACK Working Note 176.
*
* =====================================================================
*
go to the page top
dgeqr2
USAGE:
tau, info, a = NumRu::Lapack.dgeqr2( m, a, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEQR2( M, N, A, LDA, TAU, WORK, INFO )
* Purpose
* =======
*
* DGEQR2 computes a QR factorization of a real m by n matrix A:
* A = Q * R.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the m by n matrix A.
* On exit, the elements on and above the diagonal of the array
* contain the min(m,n) by n upper trapezoidal matrix R (R is
* upper triangular if m >= n); the elements below the diagonal,
* with the array TAU, represent the orthogonal matrix Q as a
* product of elementary reflectors (see Further Details).
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* TAU (output) DOUBLE PRECISION array, dimension (min(M,N))
* The scalar factors of the elementary reflectors (see Further
* Details).
*
* WORK (workspace) DOUBLE PRECISION array, dimension (N)
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* Further Details
* ===============
*
* The matrix Q is represented as a product of elementary reflectors
*
* Q = H(1) H(2) . . . H(k), where k = min(m,n).
*
* Each H(i) has the form
*
* H(i) = I - tau * v * v'
*
* where tau is a real scalar, and v is a real vector with
* v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(i+1:m,i),
* and tau in TAU(i).
*
* =====================================================================
*
go to the page top
dgeqr2p
USAGE:
tau, info, a = NumRu::Lapack.dgeqr2p( m, a, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEQR2P( M, N, A, LDA, TAU, WORK, INFO )
* Purpose
* =======
*
* DGEQR2P computes a QR factorization of a real m by n matrix A:
* A = Q * R.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the m by n matrix A.
* On exit, the elements on and above the diagonal of the array
* contain the min(m,n) by n upper trapezoidal matrix R (R is
* upper triangular if m >= n); the elements below the diagonal,
* with the array TAU, represent the orthogonal matrix Q as a
* product of elementary reflectors (see Further Details).
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* TAU (output) DOUBLE PRECISION array, dimension (min(M,N))
* The scalar factors of the elementary reflectors (see Further
* Details).
*
* WORK (workspace) DOUBLE PRECISION array, dimension (N)
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* Further Details
* ===============
*
* The matrix Q is represented as a product of elementary reflectors
*
* Q = H(1) H(2) . . . H(k), where k = min(m,n).
*
* Each H(i) has the form
*
* H(i) = I - tau * v * v'
*
* where tau is a real scalar, and v is a real vector with
* v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(i+1:m,i),
* and tau in TAU(i).
*
* =====================================================================
*
go to the page top
dgeqrf
USAGE:
tau, work, info, a = NumRu::Lapack.dgeqrf( m, a, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEQRF( M, N, A, LDA, TAU, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGEQRF computes a QR factorization of a real M-by-N matrix A:
* A = Q * R.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit, the elements on and above the diagonal of the array
* contain the min(M,N)-by-N upper trapezoidal matrix R (R is
* upper triangular if m >= n); the elements below the diagonal,
* with the array TAU, represent the orthogonal matrix Q as a
* product of min(m,n) elementary reflectors (see Further
* Details).
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* TAU (output) DOUBLE PRECISION array, dimension (min(M,N))
* The scalar factors of the elementary reflectors (see Further
* Details).
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK >= max(1,N).
* For optimum performance LWORK >= N*NB, where NB is
* the optimal blocksize.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* Further Details
* ===============
*
* The matrix Q is represented as a product of elementary reflectors
*
* Q = H(1) H(2) . . . H(k), where k = min(m,n).
*
* Each H(i) has the form
*
* H(i) = I - tau * v * v'
*
* where tau is a real scalar, and v is a real vector with
* v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(i+1:m,i),
* and tau in TAU(i).
*
* =====================================================================
*
* .. Local Scalars ..
LOGICAL LQUERY
INTEGER I, IB, IINFO, IWS, K, LDWORK, LWKOPT, NB,
$ NBMIN, NX
* ..
* .. External Subroutines ..
EXTERNAL DGEQR2, DLARFB, DLARFT, XERBLA
* ..
* .. Intrinsic Functions ..
INTRINSIC MAX, MIN
* ..
* .. External Functions ..
INTEGER ILAENV
EXTERNAL ILAENV
* ..
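EXAMPLE:
A sketch of a plain QR factorization of a 4x3 matrix; the values are arbitrary and the NArray is written column by column as in the examples above.

  require "numru/lapack"

  a = NArray[[1.0, 1.0, 1.0, 1.0],
             [1.0, 2.0, 3.0, 4.0],
             [1.0, 4.0, 9.0, 16.0]]

  tau, work, info, a = NumRu::Lapack.dgeqrf(4, a)
  # the upper triangle of a now holds R; the part below the diagonal,
  # together with tau, encodes Q as a product of elementary reflectors
  # (Q can be formed or applied afterwards with the dorgqr / dormqr routines)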
go to the page top
dgeqrfp
USAGE:
tau, work, info, a = NumRu::Lapack.dgeqrfp( m, a, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGEQRFP( M, N, A, LDA, TAU, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGEQRFP computes a QR factorization of a real M-by-N matrix A:
* A = Q * R.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit, the elements on and above the diagonal of the array
* contain the min(M,N)-by-N upper trapezoidal matrix R (R is
* upper triangular if m >= n); the elements below the diagonal,
* with the array TAU, represent the orthogonal matrix Q as a
* product of min(m,n) elementary reflectors (see Further
* Details).
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* TAU (output) DOUBLE PRECISION array, dimension (min(M,N))
* The scalar factors of the elementary reflectors (see Further
* Details).
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK >= max(1,N).
* For optimum performance LWORK >= N*NB, where NB is
* the optimal blocksize.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* Further Details
* ===============
*
* The matrix Q is represented as a product of elementary reflectors
*
* Q = H(1) H(2) . . . H(k), where k = min(m,n).
*
* Each H(i) has the form
*
* H(i) = I - tau * v * v'
*
* where tau is a real scalar, and v is a real vector with
* v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(i+1:m,i),
* and tau in TAU(i).
*
* =====================================================================
*
* .. Local Scalars ..
LOGICAL LQUERY
INTEGER I, IB, IINFO, IWS, K, LDWORK, LWKOPT, NB,
$ NBMIN, NX
* ..
* .. External Subroutines ..
EXTERNAL DGEQR2P, DLARFB, DLARFT, XERBLA
* ..
* .. Intrinsic Functions ..
INTRINSIC MAX, MIN
* ..
* .. External Functions ..
INTEGER ILAENV
EXTERNAL ILAENV
* ..
go to the page top
dgerfs
USAGE:
ferr, berr, info, x = NumRu::Lapack.dgerfs( trans, a, af, ipiv, b, x, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGERFS( TRANS, N, NRHS, A, LDA, AF, LDAF, IPIV, B, LDB, X, LDX, FERR, BERR, WORK, IWORK, INFO )
* Purpose
* =======
*
* DGERFS improves the computed solution to a system of linear
* equations and provides error bounds and backward error estimates for
* the solution.
*
* Arguments
* =========
*
* TRANS (input) CHARACTER*1
* Specifies the form of the system of equations:
* = 'N': A * X = B (No transpose)
* = 'T': A**T * X = B (Transpose)
* = 'C': A**H * X = B (Conjugate transpose = Transpose)
*
* N (input) INTEGER
* The order of the matrix A. N >= 0.
*
* NRHS (input) INTEGER
* The number of right hand sides, i.e., the number of columns
* of the matrices B and X. NRHS >= 0.
*
* A (input) DOUBLE PRECISION array, dimension (LDA,N)
* The original N-by-N matrix A.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* AF (input) DOUBLE PRECISION array, dimension (LDAF,N)
* The factors L and U from the factorization A = P*L*U
* as computed by DGETRF.
*
* LDAF (input) INTEGER
* The leading dimension of the array AF. LDAF >= max(1,N).
*
* IPIV (input) INTEGER array, dimension (N)
* The pivot indices from DGETRF; for 1<=i<=N, row i of the
* matrix was interchanged with row IPIV(i).
*
* B (input) DOUBLE PRECISION array, dimension (LDB,NRHS)
* The right hand side matrix B.
*
* LDB (input) INTEGER
* The leading dimension of the array B. LDB >= max(1,N).
*
* X (input/output) DOUBLE PRECISION array, dimension (LDX,NRHS)
* On entry, the solution matrix X, as computed by DGETRS.
* On exit, the improved solution matrix X.
*
* LDX (input) INTEGER
* The leading dimension of the array X. LDX >= max(1,N).
*
* FERR (output) DOUBLE PRECISION array, dimension (NRHS)
* The estimated forward error bound for each solution vector
* X(j) (the j-th column of the solution matrix X).
* If XTRUE is the true solution corresponding to X(j), FERR(j)
* is an estimated upper bound for the magnitude of the largest
* element in (X(j) - XTRUE) divided by the magnitude of the
* largest element in X(j). The estimate is as reliable as
* the estimate for RCOND, and is almost always a slight
* overestimate of the true error.
*
* BERR (output) DOUBLE PRECISION array, dimension (NRHS)
* The componentwise relative backward error of each solution
* vector X(j) (i.e., the smallest relative change in
* any element of A or B that makes X(j) an exact solution).
*
* WORK (workspace) DOUBLE PRECISION array, dimension (3*N)
*
* IWORK (workspace) INTEGER array, dimension (N)
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* Internal Parameters
* ===================
*
* ITMAX is the maximum number of steps of iterative refinement.
*
* =====================================================================
*
go to the page top
dgerfsx
USAGE:
rcond, berr, err_bnds_norm, err_bnds_comp, info, x, params = NumRu::Lapack.dgerfsx( trans, equed, a, af, ipiv, r, c, b, x, params, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGERFSX( TRANS, EQUED, N, NRHS, A, LDA, AF, LDAF, IPIV, R, C, B, LDB, X, LDX, RCOND, BERR, N_ERR_BNDS, ERR_BNDS_NORM, ERR_BNDS_COMP, NPARAMS, PARAMS, WORK, IWORK, INFO )
* Purpose
* =======
*
* DGERFSX improves the computed solution to a system of linear
* equations and provides error bounds and backward error estimates
* for the solution. In addition to normwise error bound, the code
* provides maximum componentwise error bound if possible. See
* comments for ERR_BNDS_NORM and ERR_BNDS_COMP for details of the
* error bounds.
*
* The original system of linear equations may have been equilibrated
* before calling this routine, as described by arguments EQUED, R
* and C below. In this case, the solution and error bounds returned
* are for the original unequilibrated system.
*
* Arguments
* =========
*
* Some optional parameters are bundled in the PARAMS array. These
* settings determine how refinement is performed, but often the
* defaults are acceptable. If the defaults are acceptable, users
* can pass NPARAMS = 0 which prevents the source code from accessing
* the PARAMS argument.
*
* TRANS (input) CHARACTER*1
* Specifies the form of the system of equations:
* = 'N': A * X = B (No transpose)
* = 'T': A**T * X = B (Transpose)
* = 'C': A**H * X = B (Conjugate transpose = Transpose)
*
* EQUED (input) CHARACTER*1
* Specifies the form of equilibration that was done to A
* before calling this routine. This is needed to compute
* the solution and error bounds correctly.
* = 'N': No equilibration
* = 'R': Row equilibration, i.e., A has been premultiplied by
* diag(R).
* = 'C': Column equilibration, i.e., A has been postmultiplied
* by diag(C).
* = 'B': Both row and column equilibration, i.e., A has been
* replaced by diag(R) * A * diag(C).
* The right hand side B has been changed accordingly.
*
* N (input) INTEGER
* The order of the matrix A. N >= 0.
*
* NRHS (input) INTEGER
* The number of right hand sides, i.e., the number of columns
* of the matrices B and X. NRHS >= 0.
*
* A (input) DOUBLE PRECISION array, dimension (LDA,N)
* The original N-by-N matrix A.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* AF (input) DOUBLE PRECISION array, dimension (LDAF,N)
* The factors L and U from the factorization A = P*L*U
* as computed by DGETRF.
*
* LDAF (input) INTEGER
* The leading dimension of the array AF. LDAF >= max(1,N).
*
* IPIV (input) INTEGER array, dimension (N)
* The pivot indices from DGETRF; for 1<=i<=N, row i of the
* matrix was interchanged with row IPIV(i).
*
* R (input) DOUBLE PRECISION array, dimension (N)
* The row scale factors for A. If EQUED = 'R' or 'B', A is
* multiplied on the left by diag(R); if EQUED = 'N' or 'C', R
* is not accessed.
* If R is accessed, each element of R should be a power of the radix
* to ensure a reliable solution and error estimates. Scaling by
* powers of the radix does not cause rounding errors unless the
* result underflows or overflows. Rounding errors during scaling
* lead to refining with a matrix that is not equivalent to the
* input matrix, producing error estimates that may not be
* reliable.
*
* C (input) DOUBLE PRECISION array, dimension (N)
* The column scale factors for A. If EQUED = 'C' or 'B', A is
* multiplied on the right by diag(C); if EQUED = 'N' or 'R', C
* is not accessed.
* If C is accessed, each element of C should be a power of the radix
* to ensure a reliable solution and error estimates. Scaling by
* powers of the radix does not cause rounding errors unless the
* result underflows or overflows. Rounding errors during scaling
* lead to refining with a matrix that is not equivalent to the
* input matrix, producing error estimates that may not be
* reliable.
*
* B (input) DOUBLE PRECISION array, dimension (LDB,NRHS)
* The right hand side matrix B.
*
* LDB (input) INTEGER
* The leading dimension of the array B. LDB >= max(1,N).
*
* X (input/output) DOUBLE PRECISION array, dimension (LDX,NRHS)
* On entry, the solution matrix X, as computed by DGETRS.
* On exit, the improved solution matrix X.
*
* LDX (input) INTEGER
* The leading dimension of the array X. LDX >= max(1,N).
*
* RCOND (output) DOUBLE PRECISION
* Reciprocal scaled condition number. This is an estimate of the
* reciprocal Skeel condition number of the matrix A after
* equilibration (if done). If this is less than the machine
* precision (in particular, if it is zero), the matrix is singular
* to working precision. Note that the error may still be small even
* if this number is very small and the matrix appears ill-
* conditioned.
*
* BERR (output) DOUBLE PRECISION array, dimension (NRHS)
* Componentwise relative backward error. This is the
* componentwise relative backward error of each solution vector X(j)
* (i.e., the smallest relative change in any element of A or B that
* makes X(j) an exact solution).
*
* N_ERR_BNDS (input) INTEGER
* Number of error bounds to return for each right hand side
* and each type (normwise or componentwise). See ERR_BNDS_NORM and
* ERR_BNDS_COMP below.
*
* ERR_BNDS_NORM (output) DOUBLE PRECISION array, dimension (NRHS, N_ERR_BNDS)
* For each right-hand side, this array contains information about
* various error bounds and condition numbers corresponding to the
* normwise relative error, which is defined as follows:
*
* Normwise relative error in the ith solution vector:
* max_j (abs(XTRUE(j,i) - X(j,i)))
* ------------------------------
* max_j abs(X(j,i))
*
* The array is indexed by the type of error information as described
* below. There currently are up to three pieces of information
* returned.
*
* The first index in ERR_BNDS_NORM(i,:) corresponds to the ith
* right-hand side.
*
* The second index in ERR_BNDS_NORM(:,err) contains the following
* three fields:
* err = 1 "Trust/don't trust" boolean. Trust the answer if the
* reciprocal condition number is less than the threshold
* sqrt(n) * dlamch('Epsilon').
*
* err = 2 "Guaranteed" error bound: The estimated forward error,
* almost certainly within a factor of 10 of the true error
* so long as the next entry is greater than the threshold
* sqrt(n) * dlamch('Epsilon'). This error bound should only
* be trusted if the previous boolean is true.
*
* err = 3 Reciprocal condition number: Estimated normwise
* reciprocal condition number. Compared with the threshold
* sqrt(n) * dlamch('Epsilon') to determine if the error
* estimate is "guaranteed". These reciprocal condition
* numbers are 1 / (norm(Z^{-1},inf) * norm(Z,inf)) for some
* appropriately scaled matrix Z.
* Let Z = S*A, where S scales each row by a power of the
* radix so all absolute row sums of Z are approximately 1.
*
* See Lapack Working Note 165 for further details and extra
* cautions.
*
* ERR_BNDS_COMP (output) DOUBLE PRECISION array, dimension (NRHS, N_ERR_BNDS)
* For each right-hand side, this array contains information about
* various error bounds and condition numbers corresponding to the
* componentwise relative error, which is defined as follows:
*
* Componentwise relative error in the ith solution vector:
* abs(XTRUE(j,i) - X(j,i))
* max_j ----------------------
* abs(X(j,i))
*
* The array is indexed by the right-hand side i (on which the
* componentwise relative error depends), and the type of error
* information as described below. There currently are up to three
* pieces of information returned for each right-hand side. If
* componentwise accuracy is not requested (PARAMS(3) = 0.0), then
* ERR_BNDS_COMP is not accessed. If N_ERR_BNDS .LT. 3, then at most
* the first (:,N_ERR_BNDS) entries are returned.
*
* The first index in ERR_BNDS_COMP(i,:) corresponds to the ith
* right-hand side.
*
* The second index in ERR_BNDS_COMP(:,err) contains the following
* three fields:
* err = 1 "Trust/don't trust" boolean. Trust the answer if the
* reciprocal condition number is less than the threshold
* sqrt(n) * dlamch('Epsilon').
*
* err = 2 "Guaranteed" error bound: The estimated forward error,
* almost certainly within a factor of 10 of the true error
* so long as the next entry is greater than the threshold
* sqrt(n) * dlamch('Epsilon'). This error bound should only
* be trusted if the previous boolean is true.
*
* err = 3 Reciprocal condition number: Estimated componentwise
* reciprocal condition number. Compared with the threshold
* sqrt(n) * dlamch('Epsilon') to determine if the error
* estimate is "guaranteed". These reciprocal condition
* numbers are 1 / (norm(Z^{-1},inf) * norm(Z,inf)) for some
* appropriately scaled matrix Z.
* Let Z = S*(A*diag(x)), where x is the solution for the
* current right-hand side and S scales each row of
* A*diag(x) by a power of the radix so all absolute row
* sums of Z are approximately 1.
*
* See Lapack Working Note 165 for further details and extra
* cautions.
*
* NPARAMS (input) INTEGER
* Specifies the number of parameters set in PARAMS. If .LE. 0, the
* PARAMS array is never referenced and default values are used.
*
* PARAMS (input / output) DOUBLE PRECISION array, dimension (NPARAMS)
* Specifies algorithm parameters. If an entry is .LT. 0.0, then
* that entry will be filled with the default value used for that
* parameter. Only positions up to NPARAMS are accessed; defaults
* are used for higher-numbered parameters.
*
* PARAMS(LA_LINRX_ITREF_I = 1) : Whether to perform iterative
* refinement or not.
* Default: 1.0D+0
* = 0.0 : No refinement is performed, and no error bounds are
* computed.
* = 1.0 : Use the double-precision refinement algorithm,
* possibly with doubled-single computations if the
* compilation environment does not support DOUBLE
* PRECISION.
* (other values are reserved for future use)
*
* PARAMS(LA_LINRX_ITHRESH_I = 2) : Maximum number of residual
* computations allowed for refinement.
* Default: 10
* Aggressive: Set to 100 to permit convergence using approximate
* factorizations or factorizations other than LU. If
* the factorization uses a technique other than
* Gaussian elimination, the guarantees in
* err_bnds_norm and err_bnds_comp may no longer be
* trustworthy.
*
* PARAMS(LA_LINRX_CWISE_I = 3) : Flag determining if the code
* will attempt to find a solution with small componentwise
* relative error in the double-precision algorithm. Positive
* is true, 0.0 is false.
* Default: 1.0 (attempt componentwise convergence)
*
* WORK (workspace) DOUBLE PRECISION array, dimension (4*N)
*
* IWORK (workspace) INTEGER array, dimension (N)
*
* INFO (output) INTEGER
* = 0: Successful exit. The solution to every right-hand side is
* guaranteed.
* < 0: If INFO = -i, the i-th argument had an illegal value
* > 0 and <= N: U(INFO,INFO) is exactly zero. The factorization
* has been completed, but the factor U is exactly singular, so
* the solution and error bounds could not be computed. RCOND = 0
* is returned.
* = N+J: The solution corresponding to the Jth right-hand side is
* not guaranteed. The solutions corresponding to other right-
* hand sides K with K > J may not be guaranteed as well, but
* only the first such right-hand side is reported. If a small
* componentwise error is not requested (PARAMS(3) = 0.0) then
* the Jth right-hand side is the first with a normwise error
* bound that is not guaranteed (the smallest J such
* that ERR_BNDS_NORM(J,1) = 0.0). By default (PARAMS(3) = 1.0)
* the Jth right-hand side is the first with either a normwise or
* componentwise error bound that is not guaranteed (the smallest
* J such that either ERR_BNDS_NORM(J,1) = 0.0 or
* ERR_BNDS_COMP(J,1) = 0.0). See the definition of
* ERR_BNDS_NORM(:,1) and ERR_BNDS_COMP(:,1). To get information
* about all of the right-hand sides check ERR_BNDS_NORM or
* ERR_BNDS_COMP.
*
* ==================================================================
*
go to the page top
dgerq2
USAGE:
tau, info, a = NumRu::Lapack.dgerq2( a, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGERQ2( M, N, A, LDA, TAU, WORK, INFO )
* Purpose
* =======
*
* DGERQ2 computes an RQ factorization of a real m by n matrix A:
* A = R * Q.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the m by n matrix A.
* On exit, if m <= n, the upper triangle of the subarray
* A(1:m,n-m+1:n) contains the m by m upper triangular matrix R;
* if m >= n, the elements on and above the (m-n)-th subdiagonal
* contain the m by n upper trapezoidal matrix R; the remaining
* elements, with the array TAU, represent the orthogonal matrix
* Q as a product of elementary reflectors (see Further
* Details).
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* TAU (output) DOUBLE PRECISION array, dimension (min(M,N))
* The scalar factors of the elementary reflectors (see Further
* Details).
*
* WORK (workspace) DOUBLE PRECISION array, dimension (M)
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* Further Details
* ===============
*
* The matrix Q is represented as a product of elementary reflectors
*
* Q = H(1) H(2) . . . H(k), where k = min(m,n).
*
* Each H(i) has the form
*
* H(i) = I - tau * v * v'
*
* where tau is a real scalar, and v is a real vector with
* v(n-k+i+1:n) = 0 and v(n-k+i) = 1; v(1:n-k+i-1) is stored on exit in
* A(m-k+i,1:n-k+i-1), and tau in TAU(i).
*
* =====================================================================
*
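EXAMPLE:
A minimal sketch of driving this routine through the Ruby wrapper shown in the USAGE line. The require path and the NArray conventions (the NArray memory is handed to LAPACK column-major) are assumptions about the ruby-lapack binding, not part of the manual above.
  require 'numru/lapack'   # assumed entry point of the ruby-lapack gem
  a = NArray[[2.0, 1.0, 0.0],
             [1.0, 2.0, 1.0],
             [0.0, 1.0, 2.0]]       # symmetric 3x3, so row/column orientation is moot
  tau, info, a = NumRu::Lapack.dgerq2(a)
  # info == 0 on success; R occupies the upper triangle of a, and tau holds
  # the scalar factors of the elementary reflectors described above.
  p info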
go to the page top
dgerqf
USAGE:
tau, work, info, a = NumRu::Lapack.dgerqf( m, a, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGERQF( M, N, A, LDA, TAU, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGERQF computes an RQ factorization of a real M-by-N matrix A:
* A = R * Q.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit,
* if m <= n, the upper triangle of the subarray
* A(1:m,n-m+1:n) contains the M-by-M upper triangular matrix R;
* if m >= n, the elements on and above the (m-n)-th subdiagonal
* contain the M-by-N upper trapezoidal matrix R;
* the remaining elements, with the array TAU, represent the
* orthogonal matrix Q as a product of min(m,n) elementary
* reflectors (see Further Details).
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* TAU (output) DOUBLE PRECISION array, dimension (min(M,N))
* The scalar factors of the elementary reflectors (see Further
* Details).
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK >= max(1,M).
* For optimum performance LWORK >= M*NB, where NB is
* the optimal blocksize.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* Further Details
* ===============
*
* The matrix Q is represented as a product of elementary reflectors
*
* Q = H(1) H(2) . . . H(k), where k = min(m,n).
*
* Each H(i) has the form
*
* H(i) = I - tau * v * v'
*
* where tau is a real scalar, and v is a real vector with
* v(n-k+i+1:n) = 0 and v(n-k+i) = 1; v(1:n-k+i-1) is stored on exit in
* A(m-k+i,1:n-k+i-1), and tau in TAU(i).
*
* =====================================================================
*
* .. Local Scalars ..
LOGICAL LQUERY
INTEGER I, IB, IINFO, IWS, K, KI, KK, LDWORK, LWKOPT,
$ MU, NB, NBMIN, NU, NX
* ..
* .. External Subroutines ..
EXTERNAL DGERQ2, DLARFB, DLARFT, XERBLA
* ..
* .. Intrinsic Functions ..
INTRINSIC MAX, MIN
* ..
* .. External Functions ..
INTEGER ILAENV
EXTERNAL ILAENV
* ..
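EXAMPLE:
A hedged sketch of the blocked RQ factorization, following the USAGE line above (note that this wrapper takes m explicitly). The require path and the NArray layout are assumptions about the binding.
  require 'numru/lapack'
  m = 3
  a = NArray[[2.0, 0.0, 1.0],
             [0.0, 3.0, 0.0],
             [1.0, 0.0, 2.0]]       # symmetric, orientation-independent
  tau, work, info, a = NumRu::Lapack.dgerqf(m, a)
  p info, work[0]                   # work[0] reports the optimal LWORK (see WORK above)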
go to the page top
dgesc2
USAGE:
scale, rhs = NumRu::Lapack.dgesc2( a, rhs, ipiv, jpiv, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGESC2( N, A, LDA, RHS, IPIV, JPIV, SCALE )
* Purpose
* =======
*
* DGESC2 solves a system of linear equations
*
* A * X = scale* RHS
*
* with a general N-by-N matrix A using the LU factorization with
* complete pivoting computed by DGETC2.
*
* Arguments
* =========
*
* N (input) INTEGER
* The order of the matrix A.
*
* A (input) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the LU part of the factorization of the n-by-n
* matrix A computed by DGETC2: A = P * L * U * Q
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1, N).
*
* RHS (input/output) DOUBLE PRECISION array, dimension (N).
* On entry, the right hand side vector b.
* On exit, the solution vector X.
*
* IPIV (input) INTEGER array, dimension (N).
* The pivot indices; for 1 <= i <= N, row i of the
* matrix has been interchanged with row IPIV(i).
*
* JPIV (input) INTEGER array, dimension (N).
* The pivot indices; for 1 <= j <= N, column j of the
* matrix has been interchanged with column JPIV(j).
*
* SCALE (output) DOUBLE PRECISION
* On exit, SCALE contains the scale factor. SCALE is chosen
* 0 <= SCALE <= 1 to prevent overflow in the solution.
*
* Further Details
* ===============
*
* Based on contributions by
* Bo Kagstrom and Peter Poromaa, Department of Computing Science,
* Umea University, S-901 87 Umea, Sweden.
*
* =====================================================================
*
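EXAMPLE:
A sketch of the DGETC2/DGESC2 pair: factor with complete pivoting, then solve one right-hand side. The calls follow the USAGE lines of the two routines; the require path and NArray shapes are assumptions.
  require 'numru/lapack'
  a = NArray[[4.0, 1.0, 0.0],
             [1.0, 3.0, 1.0],
             [0.0, 1.0, 2.0]]       # symmetric, orientation-independent
  b = NArray[1.0, 2.0, 3.0]
  ipiv, jpiv, info, lu = NumRu::Lapack.dgetc2(a)
  scale, x = NumRu::Lapack.dgesc2(lu, b, ipiv, jpiv)
  p scale, x                        # x satisfies A * x = scale * b (see SCALE above)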
go to the page top
dgesdd
USAGE:
s, u, vt, work, info, a = NumRu::Lapack.dgesdd( jobz, a, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGESDD( JOBZ, M, N, A, LDA, S, U, LDU, VT, LDVT, WORK, LWORK, IWORK, INFO )
* Purpose
* =======
*
* DGESDD computes the singular value decomposition (SVD) of a real
* M-by-N matrix A, optionally computing the left and right singular
* vectors. If singular vectors are desired, it uses a
* divide-and-conquer algorithm.
*
* The SVD is written
*
* A = U * SIGMA * transpose(V)
*
* where SIGMA is an M-by-N matrix which is zero except for its
* min(m,n) diagonal elements, U is an M-by-M orthogonal matrix, and
* V is an N-by-N orthogonal matrix. The diagonal elements of SIGMA
* are the singular values of A; they are real and non-negative, and
* are returned in descending order. The first min(m,n) columns of
* U and V are the left and right singular vectors of A.
*
* Note that the routine returns VT = V**T, not V.
*
* The divide and conquer algorithm makes very mild assumptions about
* floating point arithmetic. It will work on machines with a guard
* digit in add/subtract, or on those binary machines without guard
* digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
* Cray-2. It could conceivably fail on hexadecimal or decimal machines
* without guard digits, but we know of none.
*
* Arguments
* =========
*
* JOBZ (input) CHARACTER*1
* Specifies options for computing all or part of the matrix U:
* = 'A': all M columns of U and all N rows of V**T are
* returned in the arrays U and VT;
* = 'S': the first min(M,N) columns of U and the first
* min(M,N) rows of V**T are returned in the arrays U
* and VT;
* = 'O': If M >= N, the first N columns of U are overwritten
* on the array A and all rows of V**T are returned in
* the array VT;
* otherwise, all columns of U are returned in the
* array U and the first M rows of V**T are overwritten
* in the array A;
* = 'N': no columns of U or rows of V**T are computed.
*
* M (input) INTEGER
* The number of rows of the input matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the input matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit,
* if JOBZ = 'O', A is overwritten with the first N columns
* of U (the left singular vectors, stored
* columnwise) if M >= N;
* A is overwritten with the first M rows
* of V**T (the right singular vectors, stored
* rowwise) otherwise.
* if JOBZ .ne. 'O', the contents of A are destroyed.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* S (output) DOUBLE PRECISION array, dimension (min(M,N))
* The singular values of A, sorted so that S(i) >= S(i+1).
*
* U (output) DOUBLE PRECISION array, dimension (LDU,UCOL)
* UCOL = M if JOBZ = 'A' or JOBZ = 'O' and M < N;
* UCOL = min(M,N) if JOBZ = 'S'.
* If JOBZ = 'A' or JOBZ = 'O' and M < N, U contains the M-by-M
* orthogonal matrix U;
* if JOBZ = 'S', U contains the first min(M,N) columns of U
* (the left singular vectors, stored columnwise);
* if JOBZ = 'O' and M >= N, or JOBZ = 'N', U is not referenced.
*
* LDU (input) INTEGER
* The leading dimension of the array U. LDU >= 1; if
* JOBZ = 'S' or 'A' or JOBZ = 'O' and M < N, LDU >= M.
*
* VT (output) DOUBLE PRECISION array, dimension (LDVT,N)
* If JOBZ = 'A' or JOBZ = 'O' and M >= N, VT contains the
* N-by-N orthogonal matrix V**T;
* if JOBZ = 'S', VT contains the first min(M,N) rows of
* V**T (the right singular vectors, stored rowwise);
* if JOBZ = 'O' and M < N, or JOBZ = 'N', VT is not referenced.
*
* LDVT (input) INTEGER
* The leading dimension of the array VT. LDVT >= 1; if
* JOBZ = 'A' or JOBZ = 'O' and M >= N, LDVT >= N;
* if JOBZ = 'S', LDVT >= min(M,N).
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK;
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK >= 1.
* If JOBZ = 'N',
* LWORK >= 3*min(M,N) + max(max(M,N),7*min(M,N)).
* If JOBZ = 'O',
* LWORK >= 3*min(M,N) +
* max(max(M,N),5*min(M,N)*min(M,N)+4*min(M,N)).
* If JOBZ = 'S' or 'A'
* LWORK >= 3*min(M,N) +
* max(max(M,N),4*min(M,N)*min(M,N)+4*min(M,N)).
* For good performance, LWORK should generally be larger.
* If LWORK = -1 but other input arguments are legal, WORK(1)
* returns the optimal LWORK.
*
* IWORK (workspace) INTEGER array, dimension (8*min(M,N))
*
* INFO (output) INTEGER
* = 0: successful exit.
* < 0: if INFO = -i, the i-th argument had an illegal value.
* > 0: DBDSDC did not converge, updating process failed.
*
* Further Details
* ===============
*
* Based on contributions by
* Ming Gu and Huan Ren, Computer Science Division, University of
* California at Berkeley, USA
*
* =====================================================================
*
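EXAMPLE:
A short sketch of the divide-and-conquer SVD via the wrapper in the USAGE line; the require path and NArray conventions are assumptions.
  require 'numru/lapack'
  a = NArray[[3.0, 1.0],
             [1.0, 3.0]]            # symmetric 2x2; its singular values are 4 and 2
  s, u, vt, work, info, a = NumRu::Lapack.dgesdd('A', a)
  p info, s                         # s is returned in descending order, here ~[4.0, 2.0]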
go to the page top
dgesv
USAGE:
ipiv, info, a, b = NumRu::Lapack.dgesv( a, b, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGESV( N, NRHS, A, LDA, IPIV, B, LDB, INFO )
* Purpose
* =======
*
* DGESV computes the solution to a real system of linear equations
* A * X = B,
* where A is an N-by-N matrix and X and B are N-by-NRHS matrices.
*
* The LU decomposition with partial pivoting and row interchanges is
* used to factor A as
* A = P * L * U,
* where P is a permutation matrix, L is unit lower triangular, and U is
* upper triangular. The factored form of A is then used to solve the
* system of equations A * X = B.
*
* Arguments
* =========
*
* N (input) INTEGER
* The number of linear equations, i.e., the order of the
* matrix A. N >= 0.
*
* NRHS (input) INTEGER
* The number of right hand sides, i.e., the number of columns
* of the matrix B. NRHS >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the N-by-N coefficient matrix A.
* On exit, the factors L and U from the factorization
* A = P*L*U; the unit diagonal elements of L are not stored.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* IPIV (output) INTEGER array, dimension (N)
* The pivot indices that define the permutation matrix P;
* row i of the matrix was interchanged with row IPIV(i).
*
* B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS)
* On entry, the N-by-NRHS right hand side matrix B.
* On exit, if INFO = 0, the N-by-NRHS solution matrix X.
*
* LDB (input) INTEGER
* The leading dimension of the array B. LDB >= max(1,N).
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
* > 0: if INFO = i, U(i,i) is exactly zero. The factorization
* has been completed, but the factor U is exactly
* singular, so the solution could not be computed.
*
* =====================================================================
*
* .. External Subroutines ..
EXTERNAL DGETRF, DGETRS, XERBLA
* ..
* .. Intrinsic Functions ..
INTRINSIC MAX
* ..
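EXAMPLE:
A minimal solve of A * X = B through the wrapper in the USAGE line. The require path and the NArray layout of B (one column per right-hand side) are assumptions about the binding.
  require 'numru/lapack'
  a = NArray[[2.0, 1.0],
             [1.0, 3.0]]            # symmetric, orientation-independent
  b = NArray[[3.0, 4.0]]            # a single right-hand side
  ipiv, info, a, b = NumRu::Lapack.dgesv(a, b)
  p info, b                         # on success b holds X, here ~[1.0, 1.0]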
go to the page top
dgesvd
USAGE:
s, u, vt, work, info, a = NumRu::Lapack.dgesvd( jobu, jobvt, a, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGESVD( JOBU, JOBVT, M, N, A, LDA, S, U, LDU, VT, LDVT, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGESVD computes the singular value decomposition (SVD) of a real
* M-by-N matrix A, optionally computing the left and/or right singular
* vectors. The SVD is written
*
* A = U * SIGMA * transpose(V)
*
* where SIGMA is an M-by-N matrix which is zero except for its
* min(m,n) diagonal elements, U is an M-by-M orthogonal matrix, and
* V is an N-by-N orthogonal matrix. The diagonal elements of SIGMA
* are the singular values of A; they are real and non-negative, and
* are returned in descending order. The first min(m,n) columns of
* U and V are the left and right singular vectors of A.
*
* Note that the routine returns V**T, not V.
*
* Arguments
* =========
*
* JOBU (input) CHARACTER*1
* Specifies options for computing all or part of the matrix U:
* = 'A': all M columns of U are returned in the array U;
* = 'S': the first min(m,n) columns of U (the left singular
* vectors) are returned in the array U;
* = 'O': the first min(m,n) columns of U (the left singular
* vectors) are overwritten on the array A;
* = 'N': no columns of U (no left singular vectors) are
* computed.
*
* JOBVT (input) CHARACTER*1
* Specifies options for computing all or part of the matrix
* V**T:
* = 'A': all N rows of V**T are returned in the array VT;
* = 'S': the first min(m,n) rows of V**T (the right singular
* vectors) are returned in the array VT;
* = 'O': the first min(m,n) rows of V**T (the right singular
* vectors) are overwritten on the array A;
* = 'N': no rows of V**T (no right singular vectors) are
* computed.
*
* JOBVT and JOBU cannot both be 'O'.
*
* M (input) INTEGER
* The number of rows of the input matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the input matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit,
* if JOBU = 'O', A is overwritten with the first min(m,n)
* columns of U (the left singular vectors,
* stored columnwise);
* if JOBVT = 'O', A is overwritten with the first min(m,n)
* rows of V**T (the right singular vectors,
* stored rowwise);
* if JOBU .ne. 'O' and JOBVT .ne. 'O', the contents of A
* are destroyed.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* S (output) DOUBLE PRECISION array, dimension (min(M,N))
* The singular values of A, sorted so that S(i) >= S(i+1).
*
* U (output) DOUBLE PRECISION array, dimension (LDU,UCOL)
* (LDU,M) if JOBU = 'A' or (LDU,min(M,N)) if JOBU = 'S'.
* If JOBU = 'A', U contains the M-by-M orthogonal matrix U;
* if JOBU = 'S', U contains the first min(m,n) columns of U
* (the left singular vectors, stored columnwise);
* if JOBU = 'N' or 'O', U is not referenced.
*
* LDU (input) INTEGER
* The leading dimension of the array U. LDU >= 1; if
* JOBU = 'S' or 'A', LDU >= M.
*
* VT (output) DOUBLE PRECISION array, dimension (LDVT,N)
* If JOBVT = 'A', VT contains the N-by-N orthogonal matrix
* V**T;
* if JOBVT = 'S', VT contains the first min(m,n) rows of
* V**T (the right singular vectors, stored rowwise);
* if JOBVT = 'N' or 'O', VT is not referenced.
*
* LDVT (input) INTEGER
* The leading dimension of the array VT. LDVT >= 1; if
* JOBVT = 'A', LDVT >= N; if JOBVT = 'S', LDVT >= min(M,N).
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO = 0, WORK(1) returns the optimal LWORK;
* if INFO > 0, WORK(2:MIN(M,N)) contains the unconverged
* superdiagonal elements of an upper bidiagonal matrix B
* whose diagonal is in S (not necessarily sorted). B
* satisfies A = U * B * VT, so it has the same singular values
* as A, and singular vectors related by U and VT.
*
* LWORK (input) INTEGER
* The dimension of the array WORK.
* LWORK >= MAX(1,3*MIN(M,N)+MAX(M,N),5*MIN(M,N)).
* For good performance, LWORK should generally be larger.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit.
* < 0: if INFO = -i, the i-th argument had an illegal value.
* > 0: if DBDSQR did not converge, INFO specifies how many
* superdiagonals of an intermediate bidiagonal form B
* did not converge to zero. See the description of WORK
* above for details.
*
* =====================================================================
*
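EXAMPLE:
A sketch of the standard SVD driver via the USAGE line above; the require path and NArray conventions are assumptions.
  require 'numru/lapack'
  a = NArray[[4.0, 0.0],
             [0.0, 2.0]]            # diagonal, so the singular values are 4 and 2
  s, u, vt, work, info, a = NumRu::Lapack.dgesvd('A', 'A', a)
  p info, s                         # ~[4.0, 2.0]; u and vt hold U and V**T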
go to the page top
dgesvj
USAGE:
sva, info, a, v, work = NumRu::Lapack.dgesvj( joba, jobu, jobv, m, a, mv, v, work, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGESVJ( JOBA, JOBU, JOBV, M, N, A, LDA, SVA, MV, V, LDV, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGESVJ computes the singular value decomposition (SVD) of a real
* M-by-N matrix A, where M >= N. The SVD of A is written as
*                                  [++]   [xx]   [x0]   [xx]
*       A = U * SIGMA * V^t,       [++] = [xx] * [ox] * [xx]
*                                  [++]   [xx]
* where SIGMA is an N-by-N diagonal matrix, U is an M-by-N orthonormal
* matrix, and V is an N-by-N orthogonal matrix. The diagonal elements
* of SIGMA are the singular values of A. The columns of U and V are the
* left and the right singular vectors of A, respectively.
*
* Further Details
* ~~~~~~~~~~~~~~~
* The orthogonal N-by-N matrix V is obtained as a product of Jacobi plane
* rotations. The rotations are implemented as fast scaled rotations of
* Anda and Park [1]. In the case of underflow of the Jacobi angle, a
* modified Jacobi transformation of Drmac [4] is used. Pivot strategy uses
* column interchanges of de Rijk [2]. The relative accuracy of the computed
* singular values and the accuracy of the computed singular vectors (in
* angle metric) is as guaranteed by the theory of Demmel and Veselic [3].
* The condition number that determines the accuracy in the full rank case
* is essentially min_{D=diag} kappa(A*D), where kappa(.) is the
* spectral condition number. The best performance of this Jacobi SVD
* procedure is achieved if used in an accelerated version of Drmac and
* Veselic [5,6], and it is the kernel routine in the SIGMA library [7].
* Some tuning parameters (marked with [TP]) are available for the
* implementer.
* The computational range for the nonzero singular values is the machine
* number interval ( UNDERFLOW , OVERFLOW ). In extreme cases, even
* denormalized singular values can be computed with the corresponding
* gradual loss of accurate digits.
*
* Contributors
* ~~~~~~~~~~~~
* Zlatko Drmac (Zagreb, Croatia) and Kresimir Veselic (Hagen, Germany)
*
* References
* ~~~~~~~~~~
* [1] A. A. Anda and H. Park: Fast plane rotations with dynamic scaling.
* SIAM J. Matrix Anal. Appl., Vol. 15 (1994), pp. 162-174.
* [2] P. P. M. De Rijk: A one-sided Jacobi algorithm for computing the
* singular value decomposition on a vector computer.
* SIAM J. Sci. Stat. Comp., Vol. 10 (1989), pp. 359-371.
* [3] J. Demmel and K. Veselic: Jacobi method is more accurate than QR.
* [4] Z. Drmac: Implementation of Jacobi rotations for accurate singular
* value computation in floating point arithmetic.
* SIAM J. Sci. Comp., Vol. 18 (1997), pp. 1200-1222.
* [5] Z. Drmac and K. Veselic: New fast and accurate Jacobi SVD algorithm I.
* SIAM J. Matrix Anal. Appl. Vol. 29, No. 4 (2008), pp. 1322-1342.
* LAPACK Working note 169.
* [6] Z. Drmac and K. Veselic: New fast and accurate Jacobi SVD algorithm II.
* SIAM J. Matrix Anal. Appl. Vol. 29, No. 4 (2008), pp. 1343-1362.
* LAPACK Working note 170.
* [7] Z. Drmac: SIGMA - mathematical software library for accurate SVD, PSV,
* QSVD, (H,K)-SVD computations.
* Department of Mathematics, University of Zagreb, 2008.
*
* Bugs, Examples and Comments
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Please report all bugs and send interesting test examples and comments to
* [email protected]. Thank you.
*
* Arguments
* =========
*
* JOBA (input) CHARACTER* 1
* Specifies the structure of A.
* = 'L': The input matrix A is lower triangular;
* = 'U': The input matrix A is upper triangular;
* = 'G': The input matrix A is general M-by-N matrix, M >= N.
*
* JOBU (input) CHARACTER*1
* Specifies whether to compute the left singular vectors
* (columns of U):
* = 'U': The left singular vectors corresponding to the nonzero
* singular values are computed and returned in the leading
* columns of A. See more details in the description of A.
* The default numerical orthogonality threshold is set to
* approximately TOL=CTOL*EPS, CTOL=DSQRT(M), EPS=DLAMCH('E').
* = 'C': Analogous to JOBU='U', except that user can control the
* level of numerical orthogonality of the computed left
* singular vectors. TOL can be set to TOL = CTOL*EPS, where
* CTOL is given on input in the array WORK.
* No CTOL smaller than ONE is allowed. CTOL greater
* than 1 / EPS is meaningless. The option 'C'
* can be used if M*EPS is satisfactory orthogonality
* of the computed left singular vectors, so CTOL=M could
* save few sweeps of Jacobi rotations.
* See the descriptions of A and WORK(1).
* = 'N': The matrix U is not computed. However, see the
* description of A.
*
* JOBV (input) CHARACTER*1
* Specifies whether to compute the right singular vectors, that
* is, the matrix V:
* = 'V' : the matrix V is computed and returned in the array V
* = 'A' : the Jacobi rotations are applied to the MV-by-N
* array V. In other words, the right singular vector
* matrix V is not computed explicitly, instead it is
* applied to an MV-by-N matrix initially stored in the
* first MV rows of V.
* = 'N' : the matrix V is not computed and the array V is not
* referenced
*
* M (input) INTEGER
* The number of rows of the input matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the input matrix A.
* M >= N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix A.
* On exit :
* If JOBU .EQ. 'U' .OR. JOBU .EQ. 'C' :
* If INFO .EQ. 0 :
* RANKA orthonormal columns of U are returned in the
* leading RANKA columns of the array A. Here RANKA <= N
* is the number of computed singular values of A that are
* above the underflow threshold DLAMCH('S'). The singular
* vectors corresponding to underflowed or zero singular
* values are not computed. The value of RANKA is returned
* in the array WORK as RANKA=NINT(WORK(2)). Also see the
* descriptions of SVA and WORK. The computed columns of U
* are mutually numerically orthogonal up to approximately
* TOL=DSQRT(M)*EPS (default); or TOL=CTOL*EPS (JOBU.EQ.'C'),
* see the description of JOBU.
* If INFO .GT. 0 :
* the procedure DGESVJ did not converge in the given number
* of iterations (sweeps). In that case, the computed
* columns of U may not be orthogonal up to TOL. The output
* U (stored in A), SIGMA (given by the computed singular
* values in SVA(1:N)) and V is still a decomposition of the
* input matrix A in the sense that the residual
* ||A-SCALE*U*SIGMA*V^T||_2 / ||A||_2 is small.
*
* If JOBU .EQ. 'N' :
* If INFO .EQ. 0 :
* Note that the left singular vectors are 'for free' in the
* one-sided Jacobi SVD algorithm. However, if only the
* singular values are needed, the level of numerical
* orthogonality of U is not an issue and iterations are
* stopped when the columns of the iterated matrix are
* numerically orthogonal up to approximately M*EPS. Thus,
* on exit, A contains the columns of U scaled with the
* corresponding singular values.
* If INFO .GT. 0 :
* the procedure DGESVJ did not converge in the given number
* of iterations (sweeps).
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* SVA (workspace/output) DOUBLE PRECISION array, dimension (N)
* On exit :
* If INFO .EQ. 0 :
* depending on the value SCALE = WORK(1), we have:
* If SCALE .EQ. ONE :
* SVA(1:N) contains the computed singular values of A.
* During the computation SVA contains the Euclidean column
* norms of the iterated matrices in the array A.
* If SCALE .NE. ONE :
* The singular values of A are SCALE*SVA(1:N), and this
* factored representation is due to the fact that some of the
* singular values of A might underflow or overflow.
* If INFO .GT. 0 :
* the procedure DGESVJ did not converge in the given number of
* iterations (sweeps) and SCALE*SVA(1:N) may not be accurate.
*
* MV (input) INTEGER
* If JOBV .EQ. 'A', then the product of Jacobi rotations in DGESVJ
* is applied to the first MV rows of V. See the description of JOBV.
*
* V (input/output) DOUBLE PRECISION array, dimension (LDV,N)
* If JOBV = 'V', then V contains on exit the N-by-N matrix of
* the right singular vectors;
* If JOBV = 'A', then V contains the product of the computed right
* singular vector matrix and the initial matrix in
* the array V.
* If JOBV = 'N', then V is not referenced.
*
* LDV (input) INTEGER
* The leading dimension of the array V, LDV .GE. 1.
* If JOBV .EQ. 'V', then LDV .GE. max(1,N).
* If JOBV .EQ. 'A', then LDV .GE. max(1,MV) .
*
* WORK (input/workspace/output) DOUBLE PRECISION array, dimension max(4,M+N).
* On entry :
* If JOBU .EQ. 'C' :
* WORK(1) = CTOL, where CTOL defines the threshold for convergence.
* The process stops if all columns of A are mutually
* orthogonal up to CTOL*EPS, EPS=DLAMCH('E').
* It is required that CTOL >= ONE, i.e. it is not
* allowed to force the routine to obtain orthogonality
* below EPS.
* On exit :
* WORK(1) = SCALE is the scaling factor such that SCALE*SVA(1:N)
* are the computed singular values of A.
* (See description of SVA().)
* WORK(2) = NINT(WORK(2)) is the number of the computed nonzero
* singular values.
* WORK(3) = NINT(WORK(3)) is the number of the computed singular
* values that are larger than the underflow threshold.
* WORK(4) = NINT(WORK(4)) is the number of sweeps of Jacobi
* rotations needed for numerical convergence.
* WORK(5) = max_{i.NE.j} |COS(A(:,i),A(:,j))| in the last sweep.
* This is useful information in cases when DGESVJ did
* not converge, as it can be used to estimate whether
* the output is still useful and for post festum analysis.
* WORK(6) = the largest absolute value over all sines of the
* Jacobi rotation angles in the last sweep. It can be
* useful for a post festum analysis.
*
* LWORK (input) INTEGER
* The length of the array WORK. LWORK >= MAX(6,M+N).
*
* INFO (output) INTEGER
* = 0 : successful exit.
* < 0 : if INFO = -i, then the i-th argument had an illegal value
* > 0 : DGESVJ did not converge in the maximal allowed number (30)
* of sweeps. The output may still be useful. See the
* description of WORK.
*
* =====================================================================
*
* .. Local Parameters ..
DOUBLE PRECISION ZERO, HALF, ONE, TWO
PARAMETER ( ZERO = 0.0D0, HALF = 0.5D0, ONE = 1.0D0,
+ TWO = 2.0D0 )
INTEGER NSWEEP
PARAMETER ( NSWEEP = 30 )
* ..
* .. Local Scalars ..
DOUBLE PRECISION AAPP, AAPP0, AAPQ, AAQQ, APOAQ, AQOAP, BIG,
+ BIGTHETA, CS, CTOL, EPSLN, LARGE, MXAAPQ,
+ MXSINJ, ROOTBIG, ROOTEPS, ROOTSFMIN, ROOTTOL,
+ SKL, SFMIN, SMALL, SN, T, TEMP1, THETA,
+ THSIGN, TOL
INTEGER BLSKIP, EMPTSW, i, ibr, IERR, igl, IJBLSK, ir1,
+ ISWROT, jbc, jgl, KBL, LKAHEAD, MVL, N2, N34,
+ N4, NBL, NOTROT, p, PSKIPPED, q, ROWSKIP,
+ SWBAND
LOGICAL APPLV, GOSCALE, LOWER, LSVEC, NOSCALE, ROTOK,
+ RSVEC, UCTOL, UPPER
* ..
* .. Local Arrays ..
DOUBLE PRECISION FASTR( 5 )
* ..
* .. Intrinsic Functions ..
INTRINSIC DABS, DMAX1, DMIN1, DBLE, MIN0, DSIGN, DSQRT
* ..
* .. External Functions ..
* ..
* from BLAS
DOUBLE PRECISION DDOT, DNRM2
EXTERNAL DDOT, DNRM2
INTEGER IDAMAX
EXTERNAL IDAMAX
* from LAPACK
DOUBLE PRECISION DLAMCH
EXTERNAL DLAMCH
LOGICAL LSAME
EXTERNAL LSAME
* ..
* .. External Subroutines ..
* ..
* from BLAS
EXTERNAL DAXPY, DCOPY, DROTM, DSCAL, DSWAP
* from LAPACK
EXTERNAL DLASCL, DLASET, DLASSQ, XERBLA
*
EXTERNAL DGSVJ0, DGSVJ1
* ..
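EXAMPLE:
A hedged sketch of the one-sided Jacobi SVD following the USAGE line. The shapes chosen for v and work (V is N-by-N for JOBV = 'V'; WORK of length MAX(6,M+N)) follow the argument descriptions above, but how the wrapper maps them onto NArray dimensions is an assumption.
  require 'numru/lapack'
  m = 3; n = 3
  a = NArray[[2.0, 0.0, 0.0],
             [0.0, 3.0, 0.0],
             [0.0, 0.0, 1.0]]       # diagonal test matrix
  v    = NArray.float(n, n)         # receives V when jobv = 'V'
  work = NArray.float([6, m + n].max)
  mv   = 0                          # only referenced when jobv = 'A'
  sva, info, a, v, work = NumRu::Lapack.dgesvj('G', 'U', 'V', m, a, mv, v, work)
  p info, sva                       # the singular values are work[0]*sva (see SVA and WORK)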
go to the page top
dgesvx
USAGE:
x, rcond, ferr, berr, work, info, a, af, ipiv, equed, r, c, b = NumRu::Lapack.dgesvx( fact, trans, a, b, [:af => af, :ipiv => ipiv, :equed => equed, :r => r, :c => c, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGESVX( FACT, TRANS, N, NRHS, A, LDA, AF, LDAF, IPIV, EQUED, R, C, B, LDB, X, LDX, RCOND, FERR, BERR, WORK, IWORK, INFO )
* Purpose
* =======
*
* DGESVX uses the LU factorization to compute the solution to a real
* system of linear equations
* A * X = B,
* where A is an N-by-N matrix and X and B are N-by-NRHS matrices.
*
* Error bounds on the solution and a condition estimate are also
* provided.
*
* Description
* ===========
*
* The following steps are performed:
*
* 1. If FACT = 'E', real scaling factors are computed to equilibrate
* the system:
* TRANS = 'N': diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B
* TRANS = 'T': (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
* TRANS = 'C': (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
* Whether or not the system will be equilibrated depends on the
* scaling of the matrix A, but if equilibration is used, A is
* overwritten by diag(R)*A*diag(C) and B by diag(R)*B (if TRANS='N')
* or diag(C)*B (if TRANS = 'T' or 'C').
*
* 2. If FACT = 'N' or 'E', the LU decomposition is used to factor the
* matrix A (after equilibration if FACT = 'E') as
* A = P * L * U,
* where P is a permutation matrix, L is a unit lower triangular
* matrix, and U is upper triangular.
*
* 3. If some U(i,i)=0, so that U is exactly singular, then the routine
* returns with INFO = i. Otherwise, the factored form of A is used
* to estimate the condition number of the matrix A. If the
* reciprocal of the condition number is less than machine precision,
* INFO = N+1 is returned as a warning, but the routine still goes on
* to solve for X and compute error bounds as described below.
*
* 4. The system of equations is solved for X using the factored form
* of A.
*
* 5. Iterative refinement is applied to improve the computed solution
* matrix and calculate error bounds and backward error estimates
* for it.
*
* 6. If equilibration was used, the matrix X is premultiplied by
* diag(C) (if TRANS = 'N') or diag(R) (if TRANS = 'T' or 'C') so
* that it solves the original system before equilibration.
*
* Arguments
* =========
*
* FACT (input) CHARACTER*1
* Specifies whether or not the factored form of the matrix A is
* supplied on entry, and if not, whether the matrix A should be
* equilibrated before it is factored.
* = 'F': On entry, AF and IPIV contain the factored form of A.
* If EQUED is not 'N', the matrix A has been
* equilibrated with scaling factors given by R and C.
* A, AF, and IPIV are not modified.
* = 'N': The matrix A will be copied to AF and factored.
* = 'E': The matrix A will be equilibrated if necessary, then
* copied to AF and factored.
*
* TRANS (input) CHARACTER*1
* Specifies the form of the system of equations:
* = 'N': A * X = B (No transpose)
* = 'T': A**T * X = B (Transpose)
* = 'C': A**H * X = B (Transpose)
*
* N (input) INTEGER
* The number of linear equations, i.e., the order of the
* matrix A. N >= 0.
*
* NRHS (input) INTEGER
* The number of right hand sides, i.e., the number of columns
* of the matrices B and X. NRHS >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the N-by-N matrix A. If FACT = 'F' and EQUED is
* not 'N', then A must have been equilibrated by the scaling
* factors in R and/or C. A is not modified if FACT = 'F' or
* 'N', or if FACT = 'E' and EQUED = 'N' on exit.
*
* On exit, if EQUED .ne. 'N', A is scaled as follows:
* EQUED = 'R': A := diag(R) * A
* EQUED = 'C': A := A * diag(C)
* EQUED = 'B': A := diag(R) * A * diag(C).
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* AF (input or output) DOUBLE PRECISION array, dimension (LDAF,N)
* If FACT = 'F', then AF is an input argument and on entry
* contains the factors L and U from the factorization
* A = P*L*U as computed by DGETRF. If EQUED .ne. 'N', then
* AF is the factored form of the equilibrated matrix A.
*
* If FACT = 'N', then AF is an output argument and on exit
* returns the factors L and U from the factorization A = P*L*U
* of the original matrix A.
*
* If FACT = 'E', then AF is an output argument and on exit
* returns the factors L and U from the factorization A = P*L*U
* of the equilibrated matrix A (see the description of A for
* the form of the equilibrated matrix).
*
* LDAF (input) INTEGER
* The leading dimension of the array AF. LDAF >= max(1,N).
*
* IPIV (input or output) INTEGER array, dimension (N)
* If FACT = 'F', then IPIV is an input argument and on entry
* contains the pivot indices from the factorization A = P*L*U
* as computed by DGETRF; row i of the matrix was interchanged
* with row IPIV(i).
*
* If FACT = 'N', then IPIV is an output argument and on exit
* contains the pivot indices from the factorization A = P*L*U
* of the original matrix A.
*
* If FACT = 'E', then IPIV is an output argument and on exit
* contains the pivot indices from the factorization A = P*L*U
* of the equilibrated matrix A.
*
* EQUED (input or output) CHARACTER*1
* Specifies the form of equilibration that was done.
* = 'N': No equilibration (always true if FACT = 'N').
* = 'R': Row equilibration, i.e., A has been premultiplied by
* diag(R).
* = 'C': Column equilibration, i.e., A has been postmultiplied
* by diag(C).
* = 'B': Both row and column equilibration, i.e., A has been
* replaced by diag(R) * A * diag(C).
* EQUED is an input argument if FACT = 'F'; otherwise, it is an
* output argument.
*
* R (input or output) DOUBLE PRECISION array, dimension (N)
* The row scale factors for A. If EQUED = 'R' or 'B', A is
* multiplied on the left by diag(R); if EQUED = 'N' or 'C', R
* is not accessed. R is an input argument if FACT = 'F';
* otherwise, R is an output argument. If FACT = 'F' and
* EQUED = 'R' or 'B', each element of R must be positive.
*
* C (input or output) DOUBLE PRECISION array, dimension (N)
* The column scale factors for A. If EQUED = 'C' or 'B', A is
* multiplied on the right by diag(C); if EQUED = 'N' or 'R', C
* is not accessed. C is an input argument if FACT = 'F';
* otherwise, C is an output argument. If FACT = 'F' and
* EQUED = 'C' or 'B', each element of C must be positive.
*
* B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS)
* On entry, the N-by-NRHS right hand side matrix B.
* On exit,
* if EQUED = 'N', B is not modified;
* if TRANS = 'N' and EQUED = 'R' or 'B', B is overwritten by
* diag(R)*B;
* if TRANS = 'T' or 'C' and EQUED = 'C' or 'B', B is
* overwritten by diag(C)*B.
*
* LDB (input) INTEGER
* The leading dimension of the array B. LDB >= max(1,N).
*
* X (output) DOUBLE PRECISION array, dimension (LDX,NRHS)
* If INFO = 0 or INFO = N+1, the N-by-NRHS solution matrix X
* to the original system of equations. Note that A and B are
* modified on exit if EQUED .ne. 'N', and the solution to the
* equilibrated system is inv(diag(C))*X if TRANS = 'N' and
* EQUED = 'C' or 'B', or inv(diag(R))*X if TRANS = 'T' or 'C'
* and EQUED = 'R' or 'B'.
*
* LDX (input) INTEGER
* The leading dimension of the array X. LDX >= max(1,N).
*
* RCOND (output) DOUBLE PRECISION
* The estimate of the reciprocal condition number of the matrix
* A after equilibration (if done). If RCOND is less than the
* machine precision (in particular, if RCOND = 0), the matrix
* is singular to working precision. This condition is
* indicated by a return code of INFO > 0.
*
* FERR (output) DOUBLE PRECISION array, dimension (NRHS)
* The estimated forward error bound for each solution vector
* X(j) (the j-th column of the solution matrix X).
* If XTRUE is the true solution corresponding to X(j), FERR(j)
* is an estimated upper bound for the magnitude of the largest
* element in (X(j) - XTRUE) divided by the magnitude of the
* largest element in X(j). The estimate is as reliable as
* the estimate for RCOND, and is almost always a slight
* overestimate of the true error.
*
* BERR (output) DOUBLE PRECISION array, dimension (NRHS)
* The componentwise relative backward error of each solution
* vector X(j) (i.e., the smallest relative change in
* any element of A or B that makes X(j) an exact solution).
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (4*N)
* On exit, WORK(1) contains the reciprocal pivot growth
* factor norm(A)/norm(U). The "max absolute element" norm is
* used. If WORK(1) is much less than 1, then the stability
* of the LU factorization of the (equilibrated) matrix A
* could be poor. This also means that the solution X, condition
* estimator RCOND, and forward error bound FERR could be
* unreliable. If factorization fails with 0 < INFO <= N, then
* WORK(1) contains the reciprocal pivot growth factor for the
* leading INFO columns of A.
*
* IWORK (workspace) INTEGER array, dimension (N)
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
* > 0: if INFO = i, and i is
* <= N: U(i,i) is exactly zero. The factorization has
* been completed, but the factor U is exactly
* singular, so the solution and error bounds
* could not be computed. RCOND = 0 is returned.
* = N+1: U is nonsingular, but RCOND is less than machine
* precision, meaning that the matrix is singular
* to working precision. Nevertheless, the
* solution and error bounds are computed because
* there are a number of situations where the
* computed solution can be more accurate than the
* value of RCOND would suggest.
*
* =====================================================================
*
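EXAMPLE:
A sketch of the expert driver with FACT = 'N', so AF, IPIV, EQUED, R and C are produced internally and the optional keywords from the USAGE line are left at their defaults (an assumption about the wrapper). The require path and NArray layout are also assumptions.
  require 'numru/lapack'
  a = NArray[[10.0, 1.0],
             [1.0, 10.0]]
  b = NArray[[11.0, 11.0]]          # a single right-hand side
  x, rcond, ferr, berr, work, info, a, af, ipiv, equed, r, c, b =
    NumRu::Lapack.dgesvx('N', 'N', a, b)
  p info, rcond, x                  # x ~[1.0, 1.0]; rcond estimates 1/cond(A)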
go to the page top
dgesvxx
USAGE:
x, rcond, rpvgrw, berr, err_bnds_norm, err_bnds_comp, info, a, af, ipiv, equed, r, c, b, params = NumRu::Lapack.dgesvxx( fact, trans, a, af, ipiv, equed, r, c, b, params, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGESVXX( FACT, TRANS, N, NRHS, A, LDA, AF, LDAF, IPIV, EQUED, R, C, B, LDB, X, LDX, RCOND, RPVGRW, BERR, N_ERR_BNDS, ERR_BNDS_NORM, ERR_BNDS_COMP, NPARAMS, PARAMS, WORK, IWORK, INFO )
* Purpose
* =======
*
* DGESVXX uses the LU factorization to compute the solution to a
* double precision system of linear equations A * X = B, where A is an
* N-by-N matrix and X and B are N-by-NRHS matrices.
*
* If requested, both normwise and maximum componentwise error bounds
* are returned. DGESVXX will return a solution with a tiny
* guaranteed error (O(eps) where eps is the working machine
* precision) unless the matrix is very ill-conditioned, in which
* case a warning is returned. Relevant condition numbers also are
* calculated and returned.
*
* DGESVXX accepts user-provided factorizations and equilibration
* factors; see the definitions of the FACT and EQUED options.
* Solving with refinement and using a factorization from a previous
* DGESVXX call will also produce a solution with either O(eps)
* errors or warnings, but we cannot make that claim for general
* user-provided factorizations and equilibration factors if they
* differ from what DGESVXX would itself produce.
*
* Description
* ===========
*
* The following steps are performed:
*
* 1. If FACT = 'E', double precision scaling factors are computed to equilibrate
* the system:
*
* TRANS = 'N': diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B
* TRANS = 'T': (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
* TRANS = 'C': (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
*
* Whether or not the system will be equilibrated depends on the
* scaling of the matrix A, but if equilibration is used, A is
* overwritten by diag(R)*A*diag(C) and B by diag(R)*B (if TRANS='N')
* or diag(C)*B (if TRANS = 'T' or 'C').
*
* 2. If FACT = 'N' or 'E', the LU decomposition is used to factor
* the matrix A (after equilibration if FACT = 'E') as
*
* A = P * L * U,
*
* where P is a permutation matrix, L is a unit lower triangular
* matrix, and U is upper triangular.
*
* 3. If some U(i,i)=0, so that U is exactly singular, then the
* routine returns with INFO = i. Otherwise, the factored form of A
* is used to estimate the condition number of the matrix A (see
* argument RCOND). If the reciprocal of the condition number is less
* than machine precision, the routine still goes on to solve for X
* and compute error bounds as described below.
*
* 4. The system of equations is solved for X using the factored form
* of A.
*
* 5. By default (unless PARAMS(LA_LINRX_ITREF_I) is set to zero),
* the routine will use iterative refinement to try to get a small
* error and error bounds. Refinement calculates the residual to at
* least twice the working precision.
*
* 6. If equilibration was used, the matrix X is premultiplied by
* diag(C) (if TRANS = 'N') or diag(R) (if TRANS = 'T' or 'C') so
* that it solves the original system before equilibration.
*
* Arguments
* =========
*
* Some optional parameters are bundled in the PARAMS array. These
* settings determine how refinement is performed, but often the
* defaults are acceptable. If the defaults are acceptable, users
* can pass NPARAMS = 0 which prevents the source code from accessing
* the PARAMS argument.
*
* FACT (input) CHARACTER*1
* Specifies whether or not the factored form of the matrix A is
* supplied on entry, and if not, whether the matrix A should be
* equilibrated before it is factored.
* = 'F': On entry, AF and IPIV contain the factored form of A.
* If EQUED is not 'N', the matrix A has been
* equilibrated with scaling factors given by R and C.
* A, AF, and IPIV are not modified.
* = 'N': The matrix A will be copied to AF and factored.
* = 'E': The matrix A will be equilibrated if necessary, then
* copied to AF and factored.
*
* TRANS (input) CHARACTER*1
* Specifies the form of the system of equations:
* = 'N': A * X = B (No transpose)
* = 'T': A**T * X = B (Transpose)
* = 'C': A**H * X = B (Conjugate Transpose = Transpose)
*
* N (input) INTEGER
* The number of linear equations, i.e., the order of the
* matrix A. N >= 0.
*
* NRHS (input) INTEGER
* The number of right hand sides, i.e., the number of columns
* of the matrices B and X. NRHS >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the N-by-N matrix A. If FACT = 'F' and EQUED is
* not 'N', then A must have been equilibrated by the scaling
* factors in R and/or C. A is not modified if FACT = 'F' or
* 'N', or if FACT = 'E' and EQUED = 'N' on exit.
*
* On exit, if EQUED .ne. 'N', A is scaled as follows:
* EQUED = 'R': A := diag(R) * A
* EQUED = 'C': A := A * diag(C)
* EQUED = 'B': A := diag(R) * A * diag(C).
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* AF (input or output) DOUBLE PRECISION array, dimension (LDAF,N)
* If FACT = 'F', then AF is an input argument and on entry
* contains the factors L and U from the factorization
* A = P*L*U as computed by DGETRF. If EQUED .ne. 'N', then
* AF is the factored form of the equilibrated matrix A.
*
* If FACT = 'N', then AF is an output argument and on exit
* returns the factors L and U from the factorization A = P*L*U
* of the original matrix A.
*
* If FACT = 'E', then AF is an output argument and on exit
* returns the factors L and U from the factorization A = P*L*U
* of the equilibrated matrix A (see the description of A for
* the form of the equilibrated matrix).
*
* LDAF (input) INTEGER
* The leading dimension of the array AF. LDAF >= max(1,N).
*
* IPIV (input or output) INTEGER array, dimension (N)
* If FACT = 'F', then IPIV is an input argument and on entry
* contains the pivot indices from the factorization A = P*L*U
* as computed by DGETRF; row i of the matrix was interchanged
* with row IPIV(i).
*
* If FACT = 'N', then IPIV is an output argument and on exit
* contains the pivot indices from the factorization A = P*L*U
* of the original matrix A.
*
* If FACT = 'E', then IPIV is an output argument and on exit
* contains the pivot indices from the factorization A = P*L*U
* of the equilibrated matrix A.
*
* EQUED (input or output) CHARACTER*1
* Specifies the form of equilibration that was done.
* = 'N': No equilibration (always true if FACT = 'N').
* = 'R': Row equilibration, i.e., A has been premultiplied by
* diag(R).
* = 'C': Column equilibration, i.e., A has been postmultiplied
* by diag(C).
* = 'B': Both row and column equilibration, i.e., A has been
* replaced by diag(R) * A * diag(C).
* EQUED is an input argument if FACT = 'F'; otherwise, it is an
* output argument.
*
* R (input or output) DOUBLE PRECISION array, dimension (N)
* The row scale factors for A. If EQUED = 'R' or 'B', A is
* multiplied on the left by diag(R); if EQUED = 'N' or 'C', R
* is not accessed. R is an input argument if FACT = 'F';
* otherwise, R is an output argument. If FACT = 'F' and
* EQUED = 'R' or 'B', each element of R must be positive.
* If R is output, each element of R is a power of the radix.
* If R is input, each element of R should be a power of the radix
* to ensure a reliable solution and error estimates. Scaling by
* powers of the radix does not cause rounding errors unless the
* result underflows or overflows. Rounding errors during scaling
* lead to refining with a matrix that is not equivalent to the
* input matrix, producing error estimates that may not be
* reliable.
*
* C (input or output) DOUBLE PRECISION array, dimension (N)
* The column scale factors for A. If EQUED = 'C' or 'B', A is
* multiplied on the right by diag(C); if EQUED = 'N' or 'R', C
* is not accessed. C is an input argument if FACT = 'F';
* otherwise, C is an output argument. If FACT = 'F' and
* EQUED = 'C' or 'B', each element of C must be positive.
* If C is output, each element of C is a power of the radix.
* If C is input, each element of C should be a power of the radix
* to ensure a reliable solution and error estimates. Scaling by
* powers of the radix does not cause rounding errors unless the
* result underflows or overflows. Rounding errors during scaling
* lead to refining with a matrix that is not equivalent to the
* input matrix, producing error estimates that may not be
* reliable.
*
* B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS)
* On entry, the N-by-NRHS right hand side matrix B.
* On exit,
* if EQUED = 'N', B is not modified;
* if TRANS = 'N' and EQUED = 'R' or 'B', B is overwritten by
* diag(R)*B;
* if TRANS = 'T' or 'C' and EQUED = 'C' or 'B', B is
* overwritten by diag(C)*B.
*
* LDB (input) INTEGER
* The leading dimension of the array B. LDB >= max(1,N).
*
* X (output) DOUBLE PRECISION array, dimension (LDX,NRHS)
* If INFO = 0, the N-by-NRHS solution matrix X to the original
* system of equations. Note that A and B are modified on exit
* if EQUED .ne. 'N', and the solution to the equilibrated system is
* inv(diag(C))*X if TRANS = 'N' and EQUED = 'C' or 'B', or
* inv(diag(R))*X if TRANS = 'T' or 'C' and EQUED = 'R' or 'B'.
*
* LDX (input) INTEGER
* The leading dimension of the array X. LDX >= max(1,N).
*
* RCOND (output) DOUBLE PRECISION
* Reciprocal scaled condition number. This is an estimate of the
* reciprocal Skeel condition number of the matrix A after
* equilibration (if done). If this is less than the machine
* precision (in particular, if it is zero), the matrix is singular
* to working precision. Note that the error may still be small even
* if this number is very small and the matrix appears ill-
* conditioned.
*
* RPVGRW (output) DOUBLE PRECISION
* Reciprocal pivot growth. On exit, this contains the reciprocal
* pivot growth factor norm(A)/norm(U). The "max absolute element"
* norm is used. If this is much less than 1, then the stability of
* the LU factorization of the (equilibrated) matrix A could be poor.
* This also means that the solution X, estimated condition numbers,
* and error bounds could be unreliable. If factorization fails with
* 0 < INFO <= N, then this contains the reciprocal pivot growth
* factor for the leading INFO columns of A.
*
* INFO (output) INTEGER
* = 0: Successful exit. The solution to every right-hand side is
* guaranteed.
* < 0: If INFO = -i, the i-th argument had an illegal value
* > 0 and <= N: U(INFO,INFO) is exactly zero. The factorization
* has been completed, but the factor U is exactly singular, so
* the solution and error bounds could not be computed. RCOND = 0
* is returned.
* = N+J: The solution corresponding to the Jth right-hand side is
* not guaranteed. The solutions corresponding to other right-
* hand sides K with K > J may not be guaranteed as well, but
* only the first such right-hand side is reported. If a small
* componentwise error is not requested (PARAMS(3) = 0.0) then
* the Jth right-hand side is the first with a normwise error
* bound that is not guaranteed (the smallest J such
* that ERR_BNDS_NORM(J,1) = 0.0). By default (PARAMS(3) = 1.0)
* the Jth right-hand side is the first with either a normwise or
* componentwise error bound that is not guaranteed (the smallest
* J such that either ERR_BNDS_NORM(J,1) = 0.0 or
* ERR_BNDS_COMP(J,1) = 0.0). See the definition of
* ERR_BNDS_NORM(:,1) and ERR_BNDS_COMP(:,1). To get information
* about all of the right-hand sides check ERR_BNDS_NORM or
* ERR_BNDS_COMP.
*
* ==================================================================
*
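EXAMPLE:
A heavily hedged sketch of the extra-precise expert driver. All ten positional arguments from the USAGE line are supplied; with FACT = 'N' the AF, IPIV, EQUED, R and C arguments are outputs, so the placeholder values below (and the PARAMS entries set to -1.0 to request the documented defaults) are assumptions about how the wrapper treats them.
  require 'numru/lapack'
  n = 2
  a  = NArray[[10.0, 1.0],
              [1.0, 10.0]]
  b  = NArray[[11.0, 11.0]]
  af = NArray.float(n, n)           # receives the factors when fact = 'N'
  ipiv = NArray.int(n)
  r  = NArray[1.0, 1.0]
  c  = NArray[1.0, 1.0]
  params = NArray[-1.0, -1.0, -1.0] # entries < 0 select the documented defaults
  x, rcond, rpvgrw, berr, err_bnds_norm, err_bnds_comp, info,
    a, af, ipiv, equed, r, c, b, params =
    NumRu::Lapack.dgesvxx('N', 'N', a, af, ipiv, 'N', r, c, b, params)
  p info, rcond, x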
go to the page top
dgetc2
USAGE:
ipiv, jpiv, info, a = NumRu::Lapack.dgetc2( a, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGETC2( N, A, LDA, IPIV, JPIV, INFO )
* Purpose
* =======
*
* DGETC2 computes an LU factorization with complete pivoting of the
* n-by-n matrix A. The factorization has the form A = P * L * U * Q,
* where P and Q are permutation matrices, L is lower triangular with
* unit diagonal elements and U is upper triangular.
*
* This is the Level 2 BLAS algorithm.
*
* Arguments
* =========
*
* N (input) INTEGER
* The order of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA, N)
* On entry, the n-by-n matrix A to be factored.
* On exit, the factors L and U from the factorization
* A = P*L*U*Q; the unit diagonal elements of L are not stored.
* If U(k, k) appears to be less than SMIN, U(k, k) is given the
* value of SMIN, i.e., giving a nonsingular perturbed system.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* IPIV (output) INTEGER array, dimension(N).
* The pivot indices; for 1 <= i <= N, row i of the
* matrix has been interchanged with row IPIV(i).
*
* JPIV (output) INTEGER array, dimension(N).
* The pivot indices; for 1 <= j <= N, column j of the
* matrix has been interchanged with column JPIV(j).
*
* INFO (output) INTEGER
* = 0: successful exit
* > 0: if INFO = k, U(k, k) is likely to produce overflow if
* we try to solve for x in Ax = b. So U is perturbed to
* avoid the overflow.
*
* Further Details
* ===============
*
* Based on contributions by
* Bo Kagstrom and Peter Poromaa, Department of Computing Science,
* Umea University, S-901 87 Umea, Sweden.
*
* =====================================================================
*
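EXAMPLE:
A small sketch of the complete-pivoting factorization by itself; the solve step with these factors is shown in the dgesc2 example above. The require path and NArray conventions are assumptions.
  require 'numru/lapack'
  a = NArray[[0.001, 1.0],
             [1.0, 1.0]]            # tiny leading entry makes complete pivoting useful
  ipiv, jpiv, info, lu = NumRu::Lapack.dgetc2(a)
  p info, ipiv, jpiv                # lu holds L and U of A = P*L*U*Q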
go to the page top
dgetf2
USAGE:
ipiv, info, a = NumRu::Lapack.dgetf2( m, a, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGETF2( M, N, A, LDA, IPIV, INFO )
* Purpose
* =======
*
* DGETF2 computes an LU factorization of a general m-by-n matrix A
* using partial pivoting with row interchanges.
*
* The factorization has the form
* A = P * L * U
* where P is a permutation matrix, L is lower triangular with unit
* diagonal elements (lower trapezoidal if m > n), and U is upper
* triangular (upper trapezoidal if m < n).
*
* This is the right-looking Level 2 BLAS version of the algorithm.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the m by n matrix to be factored.
* On exit, the factors L and U from the factorization
* A = P*L*U; the unit diagonal elements of L are not stored.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* IPIV (output) INTEGER array, dimension (min(M,N))
* The pivot indices; for 1 <= i <= min(M,N), row i of the
* matrix was interchanged with row IPIV(i).
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -k, the k-th argument had an illegal value
* > 0: if INFO = k, U(k,k) is exactly zero. The factorization
* has been completed, but the factor U is exactly
* singular, and division by zero will occur if it is used
* to solve a system of equations.
*
* =====================================================================
*
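EXAMPLE:
The unblocked factorization has the same calling convention as dgetrf below; a minimal call following the USAGE line (require path and NArray layout assumed).
  require 'numru/lapack'
  m = 2
  a = NArray[[4.0, 2.0],
             [2.0, 3.0]]            # symmetric, orientation-independent
  ipiv, info, a = NumRu::Lapack.dgetf2(m, a)
  p info, ipiv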
go to the page top
dgetrf
USAGE:
ipiv, info, a = NumRu::Lapack.dgetrf( m, a, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGETRF( M, N, A, LDA, IPIV, INFO )
* Purpose
* =======
*
* DGETRF computes an LU factorization of a general M-by-N matrix A
* using partial pivoting with row interchanges.
*
* The factorization has the form
* A = P * L * U
* where P is a permutation matrix, L is lower triangular with unit
* diagonal elements (lower trapezoidal if m > n), and U is upper
* triangular (upper trapezoidal if m < n).
*
* This is the right-looking Level 3 BLAS version of the algorithm.
*
* Arguments
* =========
*
* M (input) INTEGER
* The number of rows of the matrix A. M >= 0.
*
* N (input) INTEGER
* The number of columns of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the M-by-N matrix to be factored.
* On exit, the factors L and U from the factorization
* A = P*L*U; the unit diagonal elements of L are not stored.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,M).
*
* IPIV (output) INTEGER array, dimension (min(M,N))
* The pivot indices; for 1 <= i <= min(M,N), row i of the
* matrix was interchanged with row IPIV(i).
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
* > 0: if INFO = i, U(i,i) is exactly zero. The factorization
* has been completed, but the factor U is exactly
* singular, and division by zero will occur if it is used
* to solve a system of equations.
*
* =====================================================================
*
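EXAMPLE:
A minimal LU factorization via the USAGE line above; the factors can be reused by dgetrs and dgetri below. The require path and NArray layout are assumptions.
  require 'numru/lapack'
  m = 3
  a = NArray[[4.0, 1.0, 0.0],
             [1.0, 3.0, 1.0],
             [0.0, 1.0, 2.0]]       # symmetric, orientation-independent
  ipiv, info, lu = NumRu::Lapack.dgetrf(m, a)
  p info, ipiv                      # lu holds L and U of A = P*L*U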
go to the page top
dgetri
USAGE:
work, info, a = NumRu::Lapack.dgetri( a, ipiv, [:lwork => lwork, :usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGETRI( N, A, LDA, IPIV, WORK, LWORK, INFO )
* Purpose
* =======
*
* DGETRI computes the inverse of a matrix using the LU factorization
* computed by DGETRF.
*
* This method inverts U and then computes inv(A) by solving the system
* inv(A)*L = inv(U) for inv(A).
*
* Arguments
* =========
*
* N (input) INTEGER
* The order of the matrix A. N >= 0.
*
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the factors L and U from the factorization
* A = P*L*U as computed by DGETRF.
* On exit, if INFO = 0, the inverse of the original matrix A.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* IPIV (input) INTEGER array, dimension (N)
* The pivot indices from DGETRF; for 1<=i<=N, row i of the
* matrix was interchanged with row IPIV(i).
*
* WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK))
* On exit, if INFO=0, then WORK(1) returns the optimal LWORK.
*
* LWORK (input) INTEGER
* The dimension of the array WORK. LWORK >= max(1,N).
* For optimal performance LWORK >= N*NB, where NB is
* the optimal blocksize returned by ILAENV.
*
* If LWORK = -1, then a workspace query is assumed; the routine
* only calculates the optimal size of the WORK array, returns
* this value as the first entry of the WORK array, and no error
* message related to LWORK is issued by XERBLA.
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
* > 0: if INFO = i, U(i,i) is exactly zero; the matrix is
* singular and its inverse could not be computed.
*
* =====================================================================
*
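EXAMPLE:
A sketch that factors with dgetrf and then inverts, following the two USAGE lines (require path and NArray layout assumed).
  require 'numru/lapack'
  a = NArray[[2.0, 0.0],
             [0.0, 4.0]]            # diagonal, so the inverse is diag(0.5, 0.25)
  ipiv, info, lu = NumRu::Lapack.dgetrf(2, a)
  work, info, inv = NumRu::Lapack.dgetri(lu, ipiv)
  p info, inv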
go to the page top
dgetrs
USAGE:
info, b = NumRu::Lapack.dgetrs( trans, a, ipiv, b, [:usage => usage, :help => help])
FORTRAN MANUAL
SUBROUTINE DGETRS( TRANS, N, NRHS, A, LDA, IPIV, B, LDB, INFO )
* Purpose
* =======
*
* DGETRS solves a system of linear equations
* A * X = B or A' * X = B
* with a general N-by-N matrix A using the LU factorization computed
* by DGETRF.
*
* Arguments
* =========
*
* TRANS (input) CHARACTER*1
* Specifies the form of the system of equations:
* = 'N': A * X = B (No transpose)
* = 'T': A'* X = B (Transpose)
* = 'C': A'* X = B (Conjugate transpose = Transpose)
*
* N (input) INTEGER
* The order of the matrix A. N >= 0.
*
* NRHS (input) INTEGER
* The number of right hand sides, i.e., the number of columns
* of the matrix B. NRHS >= 0.
*
* A (input) DOUBLE PRECISION array, dimension (LDA,N)
* The factors L and U from the factorization A = P*L*U
* as computed by DGETRF.
*
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
*
* IPIV (input) INTEGER array, dimension (N)
* The pivot indices from DGETRF; for 1<=i<=N, row i of the
* matrix was interchanged with row IPIV(i).
*
* B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS)
* On entry, the right hand side matrix B.
* On exit, the solution matrix X.
*
* LDB (input) INTEGER
* The leading dimension of the array B. LDB >= max(1,N).
*
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
*
* =====================================================================
*
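EXAMPLE:
A sketch that reuses a dgetrf factorization to solve A * X = B, following the USAGE line (require path and NArray layout assumed).
  require 'numru/lapack'
  a = NArray[[3.0, 1.0],
             [1.0, 2.0]]            # symmetric
  b = NArray[[4.0, 3.0]]            # a single right-hand side
  ipiv, info, lu = NumRu::Lapack.dgetrf(2, a)
  info, x = NumRu::Lapack.dgetrs('N', lu, ipiv, b)
  p info, x                         # ~[1.0, 1.0]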
go to the page top
back to matrix types
back to data types